OIC Module 4-6
And once you complete these videos, don't forget to take the Module 4 skill check.
Keep going strong.
Welcome. I'll begin this series of data transformation videos with an overview of
the map editor. To revisit our basic Integration Development Workflow, after you've
configured one or more connection resources, the next step involves the mapping of
data. Mappings and connections work together since connections identify the
applications that an integration interacts with, and a mapping identifies the data
to move from one or more data sources to the target destination's fields.
In most cases, the messages you want to transfer between the applications and an
integration will have different data structures. This visual mapper enables you to
map data by dragging source element nodes onto target element nodes. You can create
mappings ranging from simple data assignments from various source data structures to more
complex expressions or computations. This is because the map editor creates a
transformation map using XSL, the Extensible Stylesheet Language, to describe the
data mappings.
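For reference, here's a minimal sketch of the kind of XSLT a transformation map contains. The element names and namespaces are invented purely for illustration; your actual map will reflect the schemas of your configured connections.

```xml
<!-- Minimal illustrative sketch; element names and namespaces are invented -->
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:src="http://example.com/source"
    xmlns:tgt="http://example.com/target">
  <xsl:template match="/">
    <tgt:Contact>
      <!-- Each drag-and-drop mapping becomes a value-of over a source XPath -->
      <tgt:FullName>
        <xsl:value-of select="/src:Customer/src:Name"/>
      </tgt:FullName>
      <tgt:Email>
        <xsl:value-of select="/src:Customer/src:EmailAddress"/>
      </tgt:Email>
    </tgt:Contact>
  </xsl:template>
</xsl:stylesheet>
```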
So how do you access the map editor? Well, it displays as a map action in the
integration design canvas, which is automatically created for you once you've
configured an invoke connection to map the input data for that connection
invocation. The map action is also generated once you have configured a trigger
connection that is synchronous as it allows you to map the response message. And it
also appears once you've configured the write file operation of a stage file
action, where you will map the data to create a new file.
Now for the trigger response map action, editing the mapper will be one of the last
things you do since the data you'll need for the response will only be available in
its mapping canvas after you have implemented all the integration flow logic. By
contrast, for those invoke or write file operation map actions, you'll likely be
doing the mapping right away, since hopefully, all the source data you'll need is
already available and visible in the flow, which leads me to discussing the layout
of the mapping canvas.
On the right side, the Target section displays the data element nodes in XML
schema format, based on how you configured the adapter connection. Once you
map those element node values, that will be the data sent to that service or
application. Now, don't be concerned if the service is expecting the data in a
different format. As I discussed in earlier lessons, while OIC presents and handles
all data internally as XML, other formats such as JSON, SQL queries, CSV files, or
even HTTP parameters will be translated automatically by the connection's adapter at
runtime.
Now on the left side is the Sources section, which displays all the data structures
that are available at this point in the flow within the integration instance. This
always includes the initial data object that was received to trigger the
integration. By the way, for schedule pattern integrations, this will only be the
start time.
Then for every invoke that has already occurred, you'll have access to both the
request data that was sent, as well as the received response message. As a
reminder, invoke data will not be visible here if invoked from within another scope
container.
You'll have access to the values of the business identifiers, as well as both the
design time and runtime metadata associated with this integration instance. And if
you've implemented integration properties or variables, those data values will be
available to you as well.
Here's an example of the mapping canvas. In this case, the target data structure is an
organization business object, with nodes for the data elements. One or more of
these will be required by this particular invocation to the Oracle Service Cloud.
Now, within the Sources section, these data structures are automatically populated
with the information available at this point in the flow. You can expand data
structure levels to view nested nodes as there is no limit on the levels of
display.
And as you can see in this example, the number of element nodes to populate can be
quite extensive. Also notice that in addition to using drag and drop from a source
data element node to a target node, there are other options available within the
map editor to populate target data elements.
But that's enough for this overview as you'll learn more about those options as
well as other features and capabilities with using the map editor in upcoming
videos. That's it for this lesson. Thanks for watching.
Well, welcome back. In this lesson, we'll dive a bit deeper into the features and
capabilities of the Map Editor. If you'll recall from the previous video, the
Mapping Canvas is the default view, which includes the Target data structure
internally represented with XML schema elements, and the Sources section, which
includes all data structures and variables that are available at this point in the
integration flow.
Additionally, there is an Expression Builder that allows you to view and edit your
XPath expressions. We'll explore this a bit later. The Mapping Canvas also provides
a Components pane on the right side that allows you to add functions, operators and
XSL constructors to your mappings.
Besides the Mapping Canvas, clicking this Code icon switches to the XSLT Code
Editor, which is useful for those use cases in which mapping is not possible in the
Graphical Mapper. Clicking this Test icon opens the Test Mapper, which allows you
to test a completed mapping by entering sample content, then viewing the target
output. To return to the Mapping Canvas, you click the Designer icon, as shown
here. And once again, this will be the default view whenever you launch the Map
Editor. Let's now shift our focus to the Expression Builder.
To view the Expression Builder, just click the Target element node. And now, an
empty box appears at the bottom in the Designer view. Then, when you do a mapping--
in this case, I drag the Account's PartyId to the orgID-- the value is shown. Now
when you click this Icon on the right side, it switches you to the Developer view
where you can now see the full expression.
The menu icons also expand. Clicking this icon will Erase the mapping contents.
After any edits, you click here to Save the mapping. You click here to Close the
Expression Editor box. And this icon is used to Toggle back to the Design view
where we started out. And of course, you can Toggle back and forth when needed.
Finally, this icon is used to toggle Set Text mode.
When there is no mapping in the Expression Editor, this Set Text button is
automatically enabled, which allows you to directly enter text in the Expression
box. In this case, a Text icon is added to the node, as shown here next to that
RequestId element. If you look in the Code Editor, you'll see it creates a value-of
expression in the XSLT with that text value.
By contrast, when you disable this Set Text mode, the Expression Builder box mode
changes, allowing you to enter a literal value, as shown here. And of course, that
will be shown directly in the XSLT as that node's value. And finally, you should
note that whether Set Text is enabled or disabled, when you choose to drag and drop
a mapping, it will always be brought in as an XSLT expression, as shown here.
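To make the distinction concrete, here's a rough sketch of how those three cases might appear in the generated XSLT. The element names follow the examples above, and the sample text value and source path are invented.

```xml
<!-- Set Text enabled: the entered text becomes a value-of with a string literal -->
<RequestId>
  <xsl:value-of select='"REQ-001"'/>
</RequestId>

<!-- Set Text disabled: the literal is written directly as the node's value -->
<RequestId>REQ-001</RequestId>

<!-- Drag-and-drop mapping: always brought in as an XSLT expression over the source
     (the source path shown here is illustrative) -->
<orgId>
  <xsl:value-of select="$GetAccountResponse/Account/PartyId"/>
</orgId>
```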
So now that we've learned about the Expression Builder, that brings us to a great
place to take a break. In the next video, we'll continue looking at the other
details and features of the Map Editor. I'll see you in Part 2.
Welcome back to using the map editor part 2. Let's now explore a bit further into
the features you'll see when working with source and target data structures on the
mapping canvas. To learn more details about a source or target node element, you
can right-click the node and select Node Info, which will show you specific schema
element details, such as the data type, whether mapping is required, the full XPath
location, as well as other information.
In addition to qualified schemas where the elements and attributes are within the
target namespace of the schema, elements and attributes with and without namespace
prefixes are also supported, which allows for what is called unqualified schemas.
Substitution groups in schemas are supported. And you can see all the substitutable
elements for a base element within the mapper, then select the one to use. The
mapper also supports data types that have been extended within the schema.
Additionally, you can extend a data type in the mapper. These become user-defined
types, creating a primitive data type or container with a supplementary name and
some additional properties as shown in this example. The manner in which source and
target element names are displayed is controlled by this developer button at the
top of the mapper where you can click to toggle between the two views.
User-friendly names are displayed by default when you open the map editor. And
notice that adapter names are also visible in this view. Also other sections of the
mapper in which source and target elements are displayed will show names based on
the mode that is selected for the mapper, user-friendly or technical. One example
of that is here when selecting filter options.
Speaking of filter options, this allows you to display only those source or target
data structures that you're interested in. For both source and target data
structures, you can filter to show just the mapped or unmapped elements. Likewise,
both source and target, you can filter to show just required or custom field
elements. For the third option on the target side, in addition to viewing all
validation details, you can limit to any combination of these options-- errors,
warnings, or just those with no issues.
On the source side, here you can limit to display only one or more data source
structures, which is very handy to remove the clutter on the canvas when you're
doing your mapping from just one or two structures. The mapper also allows you to
search for specific element nodes or attributes in either the sources section or
within the target data structure.
After clicking the Search icon, you enter a full or partial name and press Enter.
The tree is automatically expanded and scrolls to the first match. Click Next to
scroll down, as any element nodes or attributes
that contain that text string will be the next match. And when you're done
searching, you click X to dismiss the search tool.
There are some common tasks that are available from the context menu, which is
available when you right-click the node. You see in the target data structure,
initially all element nodes are grayed out. We call them ghost nodes. In order to
do a mapping, they need to become a target node. Now, if you simply drag and drop
the data from a source node, or drag and drop a function onto the ghost node, it
automatically becomes a target node.
However, there are some circumstances where you'll need to manually convert it into
a target node first to use it. Another common task is when you need to delete a
mapping associated with a target node. So this is how it's done. Or as you can see,
you can click here to revert this element back into a ghost node, which of course,
also removes the mapping. Be aware that if you delete a parent element node, all of
its child element nodes and any of their mappings will be deleted as well.
Essentially, they all revert to ghost nodes.
You click this repeat node option to create another target element in the mapper,
which will then allow you to map different sources to the same target element
array. Of course, this option is only enabled and available for elements that can
be repeated, which are indicated by this icon, meaning their maxOccurs
attribute has a value greater than 1.
And finally, of course, it is sometimes possible that your mappings may contain
errors. If so, once you click the Validate button, warning or error messages will
be displayed above the sources section of the mapper. Warnings are identified with
yellow icons as shown in this example, along with a yellow highlight in the
corresponding target's expression editor box.
In this example, you can see that errors are identified by red icons. And once you
click the target node element, the expression box is highlighted in red. At this
point, you'll need to examine the contents of the XSL expression to track down the
issue. And that brings us to the end of this two-part lesson on using the map
editor. But there are more videos coming up. We'll see you then. Thanks for
watching.
Welcome back. In this video, I'll discuss working with Map Editor functions,
operators, and XSLT statements. To access these resources, you click this icon at
the top right of the Mapping Canvas to open the Components pane.
Let's start with the functions. As you can see, they are organized into these nine
categories. Simply expand one of the categories to see the list. Then when you
select one of them, embedded online help provides a description, signature,
parameters, what the function returns, and supported properties.
For this example, the concat function is dragged to the intended target element
node. At that point, a function icon is added to the Mapping Canvas section with a
line to that node. And the function signature template is added to the Expression
Builder at the bottom of the page.
In the Sources section, you drag the source element nodes to the function within
the Expression Builder. For this example, the Employee External Number and Legal
Employer Id source elements are dragged to the two sides of the comma in the concat
function. We then click Save to save the completed mapping for this target element.
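The resulting XSLT for that target element might look something like this sketch; the target element name and source paths are assumptions for illustration.

```xml
<!-- Illustrative only: concat joins the two source values into one target value -->
<PersonNumber>
  <xsl:value-of select="concat($EmployeeExternalNumber, $LegalEmployerId)"/>
</PersonNumber>
```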
To add an operator, you just expand the Operator section to see the list. Then when
you select one of them, again, the embedded online help provides a description,
signature, parameters, what the operator returns, along with examples. Now, when
you drag the operator to the target element node for this example, the GreaterThan
operator is added to the Is Manager node.
The operator symbol is shown in the Mapping Canvas, next to the node. And the
operator is added to the Expression Builder, as shown here. You can then drag
appropriate source elements to both sides of the operator in the Expression
Builder, or manually enter values. In this case, based on the result of the
expression, the Is Manager data value will be mapped as either true or false. We
then click to save the completed mapping for this target element.
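As a sketch of what that saved mapping might produce, the comparison expression evaluates to true or false, which then becomes the value of the Is Manager node. The source element used in the comparison is invented for illustration.

```xml
<!-- Illustrative only: the comparison result (true or false) becomes the node value -->
<IsManager>
  <xsl:value-of select="$DirectReportCount &gt; 0"/>
</IsManager>
```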
To add XSLT statements, you first need to click this XSLT toggle button. Then
you'll see the XSLT Constructors header added to the Components panel. When you
expand the header, it lists the constructs under two dropdown areas-- Flow Control
and Output.
When you select one of the constructs, once again, there is embedded online help
providing a description, the signature, and examples. Unlike functions or
operators, you can only drag XSLT statements onto created elements. So if the
element on which you want to drag the statement is grayed out-- remember, we call
that a ghost node-- you first need to right-click the element and select Create
target node.
Here's another example of one that is already created-- Last Update Date. And it
has a line, which means that there is already a mapping as well. In this
screenshot, you can see that I'm hovering the if statement over the front of the
element-- notice the green icon on the left side-- which will add the statement as
a child element. And in this case, the value-of construct is added to capture that
existing mapping.
Now, from here, I can define a conditional expression for the if statement and
proceed with logic as required. As another example, if I instead drag the if icon
to the back of the element name-- notice, this time, the green icon on the right
side-- the statement is added as a parent to the node. In this example, I've edited
an expression for the if statement to check to see if the Updated Time is greater
than the Created Time. If so, we will do the mapping to the Last Update Date.
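Here's a rough sketch of the two placements in XSLT terms; the element and source names are illustrative.

```xml
<!-- if added as a child: the test wraps the value-of inside the element -->
<LastUpdateDate>
  <xsl:if test="$UpdatedTime &gt; $CreatedTime">
    <xsl:value-of select="$UpdatedTime"/>
  </xsl:if>
</LastUpdateDate>

<!-- if added as a parent: the test decides whether the element is created at all -->
<xsl:if test="$UpdatedTime &gt; $CreatedTime">
  <LastUpdateDate>
    <xsl:value-of select="$UpdatedTime"/>
  </LastUpdateDate>
</xsl:if>
```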
For looping logic, you use a for-each statement. But you can only drag it to a
repeatable array node element, which is easy to identify by this icon. However,
instead, you can automatically create for-each statements doing a drag-and-drop
mapping between repeatable source and target elements in the mapper. When you
toggle the XSLT button, you'll see that same for-each construct just above the
element.
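For reference, the generated construct is the standard XSLT for-each; in this sketch the source and target element names are invented.

```xml
<!-- One target Item is produced for each repeating source Line -->
<Items>
  <xsl:for-each select="$SourceOrder/Lines/Line">
    <Item>
      <Quantity>
        <xsl:value-of select="Quantity"/>
      </Quantity>
    </Item>
  </xsl:for-each>
</Items>
```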
You may have scenarios in which you need to set some fields to default values.
Here, the conditional when statement checks to see if the source id value is less
than 1,000. If so, the mapping occurs normally. But if not, the otherwise statement
adds a default literal value of 1001 to the incident id element.
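In XSLT, the when and otherwise branches sit inside a choose construct, so the default-value pattern just described would look roughly like this; the source path is illustrative.

```xml
<!-- Map the source id when it is under 1000, otherwise default to 1001 -->
<IncidentId>
  <xsl:choose>
    <xsl:when test="$SourceId &lt; 1000">
      <xsl:value-of select="$SourceId"/>
    </xsl:when>
    <xsl:otherwise>1001</xsl:otherwise>
  </xsl:choose>
</IncidentId>
```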
You can also add multiple value-of statements and multiple XSLT conditional
statements under a leaf node. Then, from there, you can define appropriate mapping
logic for each of those value-of statements.
And for our last example-- and this is a common use case-- instead of having to
individually map each source child element to each target child element, as in this
example, you can perform a deep copy of a source parent node to the target parent
node in the mapper if they are the same data structure type.
Here's how it works. You drag the copy of constructor to the target parent element.
For this example, it is Response Wrapper. Then, when you drag the source parent
element to the copy-of constructor, it will automatically map all the source child
elements to the target child elements, as you see in this XSLT code.
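The generated code boils down to a single copy-of over the source parent; in this sketch the exact select path is an assumption.

```xml
<!-- Deep copy: all children of the source parent are copied into the target parent -->
<ResponseWrapper>
  <xsl:copy-of select="$GetItemsResponse/ResponseWrapper/*"/>
</ResponseWrapper>
```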
So to wrap up, to implement more extended mapping use cases, you access the
Components pane. Then expand Functions, Operators, or XSLT Constructs to select and
use as needed for target node elements in the Mapping Canvas. And that's it for
this lesson. Thanks for watching.
Welcome. Let's now revisit how we can view and Edit XSLT Code in the Map Editor.
Recall from an earlier lesson when I mentioned that you can switch from the Mapping
Canvas over to the XSLT Code Editor, which allows you to further edit the code for
more advanced use cases, such as creating internal variables using the xsl:variable
construct, or correlating multiple data sources grouped by key fields using the
xsl:for-each-group construct.
You may need to dynamically create target-name value pairs based on runtime data
using the xsl:element and xsl:attribute constructs. And using the xsl:template,
xsl:call-template, and xsl:apply-templates constructs, you can implement "push style" XSLT. Now
perhaps you wish to write your own functions in XSLT using the xsl:function
construct, or you can copy node sets from source to target using xsl:copy and
xsl:copy-of.
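As one small example of these more advanced constructs, here's a sketch of grouping source rows by a key field with xsl:for-each-group; the element names are invented for illustration.

```xml
<!-- Group Order elements by CustomerId and emit one Customer per group -->
<xsl:for-each-group select="$Orders/Order" group-by="CustomerId">
  <Customer id="{current-grouping-key()}">
    <OrderCount>
      <xsl:value-of select="count(current-group())"/>
    </OrderCount>
  </Customer>
</xsl:for-each-group>
```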
The toolbar provides a series of shortcuts for navigating through and editing the
XSLT code. You can Undo or Redo your last editing change, Search for specific
entries, then use Next and Previous buttons to navigate through the code. Now this
is a typical Find and Replace tool. And this icon prompts you to enter a line
number in the code you wish to jump to.
So as an example of a Code Editor use case, you can define a counter inside a for-
each loop to track the number of iterations processed by the loop. Here's a
code snippet of a sample source purchase order with multiple items. The pseudo XSL
code snippet includes using the count XPath function, which takes the location
path to the element, and returns the number of instances for the node set. Within
the XSL for-each loop, the position function returns the iteration number. And
here's what the resulting output would be for this particular illustration.
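A minimal sketch of that counter pattern, assuming a purchase order structure like the one described, might look as follows.

```xml
<!-- count() returns the total items; position() returns the current iteration -->
<xsl:variable name="totalItems" select="count(/PurchaseOrder/Items/Item)"/>
<xsl:for-each select="/PurchaseOrder/Items/Item">
  <ItemStatus>
    <xsl:value-of select="concat('Processing item ', position(), ' of ', $totalItems)"/>
  </ItemStatus>
</xsl:for-each>
```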
Another option is to edit the code externally on your computer using the XSLT
Mapper in JDeveloper. Now some folks may prefer this approach, but only because
they are already so familiar with it using this tool. If so, real quickly, here's
how that works. Before you can export, you need to complete the basics of the
integration flow. Then, open the Map action that has the XSL file you're interested
in implementing.
Now inside that Mapping Canvas for each source data structure that you'll need
access to, map one data element over to the target so that the exported XSL file
will have the required source and target schema definitions. Once you've closed the
map action, you save the integration, and then you need to export the entire
integration archive, which is an IAR file.
Now, if you've created your integration inside of a project, you'll have to first
export the project archive file, then you can retrieve the IAR file from there. To
import the integration archive into JDeveloper, you first create an Oracle Service
Bus application with a project, then right click the project to import the
integration. And you'll see all the integration resource files, including that XSL
file you're interested in.
And at this point, you're ready to edit. Just open the file, where you can use the
Design editor to map from source to target, or click the tab to edit the source
code directly. The XSLT Mapper allows for complete XPath expression editing, as
well as access to other XSLT elements. Now even though you earlier exported the
entire integration, you import only the edited XSL file. So in the OIC
console, open the integration. Then, on the map action element, you click Import.
And so to wrap up, we discussed the use cases as to why you may need to edit XSLT
code manually. And although using JDeveloper is an option, since the Map Editor
within OIC now allows you to edit the code directly, we recommend that you get
comfortable using it instead. That's it for this lesson. Thanks for watching.
Welcome back. In this lesson, you'll learn about OIC lookups, which can be used for
many use cases to include assisting with data transformations. Different
applications sometimes store the same information by using different data values,
which makes mapping data between them a greater challenge.
A lookup associates values used by one application for a specific field to the
values used by other applications for the same field. This provides the capability
to map values across vocabularies or systems. For example, you can create a lookup
for currency codes, organization or product identifiers, or as in this example,
country codes.
These lookups will be based on static definitions where the values are preloaded at
design time. Then later, those values can be retrieved at runtime within an
integrations map action. Now, under the covers, the lookup is loaded into internal
memory, much like a read-only database table. To retrieve the data, you'll use a
special OIC function within the mapping canvas to provide the appropriate value to
the target data element.
To create a lookup for your project, you click the Add icon in the Lookups pane,
choose to create, then provide a meaningful name. Now, once you click the Create
button, you'll be on the lookups edit page. As shown here when choosing domain
names, there are logically two types of lookups, application-specific and those
that serve as a more generic list of values.
Now for application-specific use cases, you specify adapter types for the domain name
columns, as in this example. Logically, this type of lookup is intended for
integrations that need to map between two of these applications. Alternatively, you can
simply use generic names for the data columns, as shown in this example, which is
often the case when the lookup is likely to be used in a wider variety of
integration mapping use cases. Now to add columns, you click this plus icon on the
right. And to add more rows, you click the icon at the bottom of the page.
Even though the values are static, a lookup can still be updated even after
integrations that are using it have been activated, which means you can edit when
required, such as adding another row of data. Once saved, there is no need to
reactivate any integrations since it will be reloaded into memory within a couple
of minutes. You can also export the lookup to other OIC environments or projects as
a CSV file. However, this export option can also be used to update the lookup
outside of OIC.
Here is my example exported CSV file. As you can see, it only has the three
countries listed. Now I update the spreadsheet to add a whole lot more, then save
the file. Next, I'll need to click here to add the lookup. But this time, I choose
to import a CSV file. At this point, I locate the updated CSV file. And then when I
click the Import button, OIC recognizes that this lookup already exists, prompting
me to replace the country codes lookup that is already in memory. And once that is
done, I get a confirmation banner that it has been imported and replaced. So now to
confirm, I can open the lookup where I can see that there are now 249 items.
To configure access to a lookup in the mapping canvas, you'll find the lookup value
function within the Integration Cloud area, where you can view the embedded help
info on this function. However, when you drag and drop the function icon onto a
target element node as in this example, it launches this four-step map lookup value
wizard to help build the expression. First, you will select the appropriate lookup.
In this case, I've highlighted that country codes lookup. Then click the right
navigation arrow.
Next, you select the source where you indicate which type of data value will be
coming from the source. In this case, the source is providing an alpha-2 country
code. Then you select the value that will be extracted from the lookup to be mapped
to the target node. In this case, the target requires an alpha-3 country code. You
then click to advance to the next page.
Optionally, you can provide a default value to be mapped to the target node element
if a source match is not found. However, while we could provide a default country
code, in this case, I chose to leave it blank. Now we click the right arrow to
advance to the Summary page, where we can review the five parameter values added to
the function, one of which serves as a placeholder, because that source value
parameter will require editing in the expression editor. We exit the map lookup
wizard by clicking the Done button, which will show the function icon mapped to the
target element as shown here, along with the function shown down in the expression
editor.
Now we need to replace that text for the third parameter with the source value. In
this case, it is the country ISO code from that source data structure. And when
that is completed and saved, the expression editor displays the source xPath
expression in the third parameter and the visual mapper shows the line as an input
to the function.
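Once that edit is done, the completed expression follows the five-parameter pattern the wizard builds: the lookup location, the source column, the source value, the target column, and the default value. This sketch assumes a dvm prefix and a simplified location path, and the column names and source XPath are illustrative.

```xml
<!-- Illustrative only: map an alpha-2 code to its alpha-3 equivalent, with no default -->
<countryCode>
  <xsl:value-of
      select="dvm:lookupValue('tenant/resources/dvms/Country_Codes',
                              'Alpha2', $sourceCountryISOCode,
                              'Alpha3', '')"/>
</countryCode>
```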
Once you've completed and saved your integration, you can navigate over to the
lookup pane to view your lookup. Notice that it shows it as being used by one
integration. And when you click that link, it opens this pane to show the list of
integrations that are using this lookup, even those that are not currently
activated as in this example. In addition to simple data transformations, there are
other use cases to consider for OIC lookups. Lookups can be used to enrich incoming
data with additional information from reference tables. For example, you might have
a lookup table containing product codes and corresponding product names. When
processing sales data, you can use lookups to retrieve the product names based on
the product codes in the incoming data.
Lookups can help route messages to different endpoints or apply filters based on
specific criteria. For example, you might use lookups to determine the destination
system for an incoming order based on the customer's location or order type.
Lookups can be used to validate incoming data against predefined criteria and
handle errors accordingly. For instance, you could use lookups to check if a
provided customer ID has the correct format before processing an order.
Lookups can be used to cache frequently accessed data to improve performance and
reduce the load on back end systems. And this is particularly beneficial when
dealing with large static data sets or systems with limited resources. And finally,
consider using lookups to store certain configuration settings that may change.
This allows you to update configurations without modifying your integration flows.
For example, you can use lookups to store certain API endpoints or credentials or
connection parameters.
So to wrap up, while lookups use pre-loaded design time values, you can update
values when needed. And since they are loaded into memory, they provide much
quicker performance as compared to making a database query. And of course, you'll
leverage the lookup value function wizard in the mapping canvas when you need to
access a value to map to a target node element. And that's it for this lesson.
Thanks for watching.
Welcome. In this session, we'll explore these elements which can be used in your
integration design while seeking to understand their capabilities and differences.
Let's get started.
While all three of these constructs-- variables, integration properties, and
schedule parameters-- look similar when viewed as a source and used in an
expression editor or a mapper, they provide very distinct sets of capabilities to
developers of integration flows. Variables in OIC are used to dynamically store and
manage data during the execution of a specific integration instance. And there are
two types.
Global variables are accessible throughout the entire scope of the integration
instance, while scoped variables exist only within the scope of where they are
created, such as within a loop, a switch action route, or a scope action container.
Integration properties are static values defined at design time and made available
as read-only variables to all instances of that integration. These values cannot be
modified during runtime.
You create global variables by clicking this icon on the right side of the
Integration Design canvas. Then click the plus icon in the Global Variables
section. Here, you type in a meaningful name.
Then click the drop down list to select a data type. Notice that you can choose a
scalar type, such as a string or a number. Or if you choose object data type, the
panel is expanded on the left to display a sources tree, where you can select any
complex data type that already exists within the integration flow. Now, these are
based on an XML schema root or child element.
It is then up to you to decide where and how to use it. You'll need to configure a
data stitch action to assign an initial value. And then if you need to change the
value, you'll need to add and configure another data stitch action.
You can access the value of the variable from the Input Sources section when
configuring expressions associated with other integration actions anywhere within
your flow. You can add a maximum of 20 global variables. And as a
reminder, these allow you to choose either simple or complex data types. An example
use case would be a global variable using a number data type to keep a running
count of the number of records that have been processed within your integration
flow instance.
However, there is another way to create a global variable. And it's the same way in
which you create a scoped variable. That is by using an Assign action. Since this
action only allows you to choose string or fault, these variables are limited to
just those two options.
When created on the main flow path, it serves as a simple global variable available
to be used just like any other typed global variable throughout the integration
instance. When created within a segment, such as a scope container, as in this
example, it becomes a scoped or local variable and remains visible only within that
segment. And notice that when creating these variable types of using the Assign
action, you can also assign its initial value. And since these are simple type
variables, you can use either an Assign action or a Data Stitch action to modify
their values.
Shifting our focus down to just scoped or local variables, again, these exist and
are visible only within the scope in which they were created, such as within the main
section or within a fault handler of the scope action container, or within a loop,
such as the for-each action, a while action loop, or a read file in segments
operation loop of the stage file action. They can also be used within a branch
route of a switch action. In general, scoped variables are useful for storing
temporary data that is only needed within a smaller segment of the flow.
Again, these are created dynamically with the Assign action. And your data types
are limited to a string or a fault. An example use case would be a scoped variable
within a processing loop that is used to determine the total cost of all the items
in an order.
Next are integration properties. You may have already noticed that within the
Sources area under self, there is metadata already provided as seven read-only
integration properties that you can access related to the runtime and environment
data associated within your integration flow instance. Well, you can also create
user-defined properties at design time, which will then become visible under self
properties, as shown here.
However, while they are global in scope, they are distinct from global variables.
You create these by clicking this icon on the right side of the Integration Design
canvas. But instead of Global variable, you select the Integration properties
section.
Then click the plus icon. You then type in a meaningful name and an optional
description. Then define its initial value, which can be expressed as a string or
any scalar value. You can create up to 10 of these user-defined integration
properties.
So then how can that value be updated? Well, since these properties serve as read-
only variables, they simply can't be updated at runtime. However, they can be
updated externally.
If the integration is activated, you must first deactivate it. Then select
Update property values from the Actions menu. There, you can provide a new value.
Then after activation, that new value will be available to all future integration
instances.
Let's review. Integration properties are user-defined values that can be accessed
from the Sources section, which can be any scalar value such as a string or a
number. But they remain read-only. And the values stay consistent across all
instances unless you deactivate the integration and choose to update the property
value.
Now, use cases include creating an API_URL property to store a relative URI string
that could be used when configuring REST adapter invokes. Or perhaps an email
address used by a notification action in an integration can be externalized as a
sendTo integration property. This allows for integration design implementation
without the need for hard coding certain static values that rarely change.
Next, schedule parameters are exclusive to schedule pattern integrations and have a
unique lifecycle that spans across multiple runs of the integration. You create and
assign an initial default value to these parameters from the Edit menu of the
Schedule element within the integration. You can later update values to these
parameters at runtime using an Assign action or a Data Stitch action.
These values can also be changed manually when starting a new schedule or launching
an ad hoc run. An example use case would be to create a last processed parameter
that would indicate the timestamp or record number of the last record that was
processed by a schedule integration run. You then update this value dynamically,
ensuring the next run starts where the previous one left off.
And for now, I won't cover any more details here since I have a separate lesson
entitled Using Schedule Parameters. It's available in this course in a later
module. Please be sure to check that out.
And finally, this table highlights the differences for these elements. To
summarize, only global variables, when created in the Integration Canvas pane, can
be any scalar or complex object data type. Scoped variables must be created with an
Assign action and remain visible only within a particular segment or scope and are
limited to a string or fault type.
Of course, you should leverage the data construct that makes the most sense for
your implementation use case, perhaps using this table as a guide. That's it for
this lesson. Thanks for watching.
Welcome now to Module 5, Using Integration Actions. With over 20 actions available
to integration designers, this module starts with an action overview lesson, then
teaches you about the usage and configuration of many of those specific actions,
facilitating the logic within your integration flows. Other actions not covered
here will be discussed later in modules such as file processing and error handling.
And once you complete these videos, don't forget to take the Module 5 skill check.
Keep going strong.
Welcome. In this lesson, I'll provide an overview regarding the use of actions in
an integration flow. As part of integration design, in addition to adding and
configuring outbound invokes to access external services, you'll need to add and
configure other actions to perform various activities to complete the logic of your
integration flow implementation. For example, map actions.
To add an action, you have a couple of methods. One is to click the Actions icon on
the right side of the design canvas, then locate and drag the icon of the action
you want to use to the appropriate location in your flow. The other is to open the
inline menu at that appropriate flow location, then locate and select the desired
action. Each action, once selected, triggers an edit pane that pops out from the
right side of the design canvas, where you can configure as needed to include
dynamic data values available from input sources or XQuery functions.
However, in some cases, such as the stage file action, a unique workflow wizard will
walk you through various pages to expose the edit options you'll use to configure
the desired logic or data mapping assignments appropriate for the use case. To
expose the pane for accessing input data sources and functions, you simply click
the ellipsis icon, as shown here.
So now let's quickly look at the actions you have to choose from. While the inline
menu only displays actions that are valid for use at the particular location in
your flow, you'll see that the action pane exposed on the right side of the design
canvas actually lists all the integration actions but disables the icons of those
that are not valid for your specific type of integration. For example, the stop
action can't be used in an application integration that uses a trigger configured
with a synchronous request response message exchange pattern.
As you can see, the action icons are organized by categories, and action options
will depend on the type of integration you have. For example, a callback action is
only valid for delayed response integrations. Call actions provide functionality to
invoke other integrations or custom JavaScript functions available within the same
OIC instance, and, within your OCI tenancy, to call a stateless function or access
content in object storage buckets.
Logic actions include while and for-each looping constructs, a scope container used
to organize other actions, a switch action for defining conditional branching
logic, and the parallel action which allows you to process other actions in
parallel on different branches. Within your integration flow, you can explicitly
throw a new fault or rethrow a fault that was caught within a fault handler.
And all others are displayed at the top, generically categorized simply as actions.
These include notifications for sending emails, stage file for creating or managing
file contents, as well as several others. But don't worry, we'll take a closer look
at all of these integration actions in other lessons or demos. To wrap up, most
actions can be used in your integration flow regardless of which pattern you use: an
event or schedule pattern, or an application integration once you've
defined the trigger connection. This concludes this overview lesson. Thanks for
watching.
Welcome. In this lesson, we'll look at using a Data Stitch action to support
certain data manipulation use cases in an integration flow. You can incrementally
build a message payload from one or more existing payloads with the data stitch
action. While it supports both scalar and complex data types, complex data types
are not limited to message payloads. You can work with arrays, partial and full
message payloads, or any global variable. This action provides a configure stitch
panel that enables you to assign, append, and remove data values or elements as
needed.
The assign operation places the selected value or element or attribute into the
target element, overwriting any existing data in the target. For example, say, you
want to change the current address in an existing purchase order, the stitch action
enables you to change the address. You can either map fields individually or copy
the entire address object itself.
The append operation adds data at the end of a repeating unbounded target element
or after a selected element or value. For example, say, you have an existing
purchase order payload containing five lines of items, and you want to add a sixth
line. The stitch action enables you to append another item to the existing array in
the purchase order.
The remove operation completely eliminates the target element or attribute from the
variable. For example, say, you have an existing purchase order payload and want to
remove the price to enable the endpoint application to calculate a new price.
Incidentally, for repeating bounded elements, all instances are removed unless a
specific array instance is selected by index or predicates.
Of course, you don't need additional data stitch actions for each operation. You
can instead define multiple assignments or operations on variables and child
elements of variables by simply adding them, then configuring their values in one
data stitch action. You can also define the sequence of variables to update. For
example, if you want to copy an address and then override a child element such as a
street, just place those statements in the correct order of execution.
To avoid any confusion as to which use cases warrant leveraging this action, note
that there are two other data-related actions that have some overlapping
capabilities with data stitch. Compared to data stitch, the Assign action is
limited to only scalar type variables, so complex data objects or full payloads, of
course, are not supported. The mapper, meanwhile, is designed to only build a full
message payload specific to a subsequent invoke connection or other action
requiring a message payload. If you attempt to map into an existing message, a full
replacement of that payload occurs. The data stitch action, on the other hand, can
be used for both a partial or full assignment of data to a message payload.
So while there are many use cases for the data stitch action, I'll show you two
common examples. The first scenario involves making data available outside of a
scope container. You see, a common practice, of course, is to invoke service
connections within a scope, so that you can take advantage of fault handling and
mitigation logic specific to that service invocation. The problem is that the data
in the message payload response from service A, as in this example, won't be
available outside of that scope.
So the solution is to copy that message payload to a complex type global variable,
which can then be accessed elsewhere in the integration to include anywhere in any
other scope container. Now the assign action won't work since it is limited to only
scalar variables, but the data stitch action can copy the full message payload
easily.
The second scenario involves building a message payload that perhaps aggregates
multiple data items. Suppose you're invoking a fine-grained service several times
within a for each loop, then for each response, you can use the data stitch action
to append that data to a complex data type global variable as an
array of items. When the loop is complete, you can then use that variable to
provide a full message payload response containing all the data items as the reply.
Be sure to watch the next video which demonstrates the implementation of both of
those common data stitch use cases. That's it for this lesson. Thanks for watching.
Welcome. In this video, I'll demonstrate how to leverage the data stitch action in
an integration flow. I'll actually break this demonstration into two parts. The
first part involves the use of assign operations. The second also uses the append
operation to add multiple items. So you may recall this use case from the data
stitch action lesson where we need to make a message payload available outside the
scope in which it was retrieved.
Well, the data is going to come internally from calling an external API, and we're
going to achieve this by getting the data from the response of this get status
invocation to the service by assigning the data that is returned within the stitch
action. First, though, we need a global variable in which to place that data, to
make it available outside of the scope.
If I can open up the mapper, I can show you that global variable and how the
mapping was achieved. The global variable, I called it status_result. And the
structure, the JSON structure of that complex object type is exactly the same as
what is expected as the response back to our client of this service. So very simple
to create a global variable. If you haven't done it before, you simply click
there. If you indicate a type that is not a scalar type but an object type,
then this will open up to give you various data sources.
In my particular case, I simply took that get status response, and I was able to
use that as the value of the type to create this status_result variable. Well, we
don't need another one. So I'll close that. And now let's take a look at the stitch
action itself. In order to get to this stitch action, we simply dragged and dropped
the data stitch into the flow where we needed it. And then once it was there, it
opens up this configuration pane.
You'll notice that I've actually created four assign operations. Each one is going
to place into that status result complex variable. The first one is the ID. The
second one is the created by. The third one is the date. And the fourth one is the
owner. And the value that we're placing into there is the value of each of those in
turn from that get status response from that external service. So that's where that
data mapping takes place.
So before we do a test, let me show you why we use the scope action. I wanted to
handle fault handling logic within the scope. And so within the default fault
handler, where I want to handle the error, notice I can use that stitch action
again, this time to populate an appropriate error message.
So if you'll notice, what I chose to do was to take the status variable, and I want
to get the value of that-- in this case, I can't get it from the external service
that we called because that call would have failed. So in the fault handler, if the
call fails in there, we're going to provide the ID they provided when they sent the
request in the first place. In this case, this was the ID that they were curious
about.
The second one, though, under created by, I just kind of created a simple error
message to provide a value for the created by variable for the status result
response. And in this case, I simply hard coded an invalid item ID. Let's return
back to the scope, and I'll close that up. And let's go ahead and save this. And
then we will close and activate the integration.
Once the integration is activated, we can go ahead and do a test. I will simply
click on Run. To open up my test harness, first, I will go with an item that I know
exists, number 123. We'll run the request. And I get back a response accordingly.
Let's do another one that works, 789 works. And I get back the response information
for that one.
If you'll notice in the activity stream, once it renders, I can see the data
stitches that were done along the way as it built from those assignments. I can see
the payload as it was building in each step along the way for each of those assign
operations. But let's now do a test where I send an invalid item. So that means the
call to external service will fail.
But rather than returning that fault back to the client, we're going to return them
something more interesting, and that is a response that indicates that indeed they
have provided an invalid item ID. And notice that we can see the error in our
activity stream on invoking that status. But because the fault handler caught it,
we're using the stitch action to populate that error message.
Now on to the second use case where I'll use a data stitch append operation within
a for each loop to build the full message payload to be returned. Now, this use
case is a little bit more complicated. So let me first show you the trigger of this
one. This has been defined in as a REST request where a client now will send in a
payload as a POST request.
Let's take a look at the payload he'll send in. It's a simple array of IDs. What
he's looking for is the information for all these IDs of whether or not those items
exist on the backend system. So to provide that answer for our customer, our client
of this integration, the response type that we provided back for him is going to be
a result array that will echo back his ID with a value of true or false as to
whether or not it exists.
Now, that data to be returned back to him, we're going to achieve that by going and
making a call inside of a for each loop. And within that loop, we can look at its
configuration real quick. We can see that the repeating element that has been
provided is that request wrapper of data that they provided. And within there,
we're simply going to say, look, the current item, there'll be multiple items in
that array. That's the one we want to iterate on, and we'll call that new data
element current item, which is what we have here.
Now, let's go ahead and make a call. When we make a call to the external service,
we're going to get back an item status. And as you saw in the previous part of the
demo, what returns is a lot of various pieces of data as a result of making that
call. Let's take a look and remind ourselves. What they're going to return in their
response is this JSON structure that indicates all this information.
Well, we don't need that information back to the client. We just need to tell them
whether or not it exists. So the design of this scenario is to now within this for
each loop, every time we invoke it, if that item exists, we're going to use a
stitch action to build the array response. If it doesn't exist, we're going to go
into a default error handler to go ahead and use another stitch action to indicate
that it doesn't exist.
So in order to achieve this, we need now two global variables for this one. And let
me show those to you real quickly. One of the global variables is an array that
exactly matches the response that he's expecting. The array of exists true or
false. The other will be a temporary item. Now, how I built that, let's just take a
look, is you do it based upon the object that's involved.
In the case of the array, I simply took the response that we need to send back when
we define on the trigger, and we simply use that response wrapper result as what
needs to be returned. If you recall, that was the array of items, simply dragging
that in as the response. For the response item, we simply now build that with a
JSON type to define, and we'll take a look at that real quick. Let me close that.
And you can see that within the stitch action.
So let's go take a look at our stitch action. We'll first look at the one in the
fault handler. So what we did is we first define the response item. Again, there is
that global variable I created, which is simply going to build one item that's
going to then be mapped into this response array. So first, before I append to the
array, I need to build the item, and we do that with two operations.
One is we assign the ID of the current item, and then we manually populate the
value of false. In this case, I used a Boolean function and use the false operation
to be able to populate that result. Now that we have that result, we can then take
the result-- let me expand this. We can take that response item result, and we can
now append it to the response item array.
We'll do the very same thing within the main scope, outside the fault
handler. And the very same logic occurs in that stitch
action where here, again, we are going to add the value of the current item to the
response item ID. And we're going to manually populate the true function-- or the
true Boolean function to provide a value for the exists, and then use the append
operation to then paste that result into the response array.
Let's now take a look at that in action. And now that it's activated, let's do a
test run. In this case, I need to populate the request with one or more items. I've
got three items in my test here. When I execute this, for every one that exists,
it'll say true. There's my result. And the one that doesn't exist, in this case,
the 456, it's going to say false.
And if you notice really quickly over here in the activity stream, I can see that
the for each loop had three iterations. One of them failed, which is what we
expected. It went into the fault handler. In that case, it was the invocation of
the get item status that failed. The fault handler then executed the operations of
the stitch append to provide this value here in the middle.
But if we look, of course, at the other iterations, like let's open up the third
one, we can see that that one was successful. Therefore, the stitch true appends
were done to provide the values of the 123 and the 789. And that's it for the data
stitch action demo. Thanks for watching.
Welcome to this video lesson on looping actions within OIC integration flows. In
this session, we'll explore the two primary looping constructs available, the while
action and the for each action. First, what do they have in common?
Well, both allow you to place one or more other actions within their loop to
perform repetitive tasks. You can use both global or local variables as needed
within their scope. They both have the capability to handle dynamic as well as
complex logic. Finally, when you use them within a scope action container, you can
implement error management as required.
So let's dive a bit deeper to look at each one, starting with the while action,
which enables you to loop over other actions or invoke actions as long as a
specific condition is met. The configuration requires defining a loop condition
using the expression editor, typically leveraging a variable as a flag or a
counter. The condition is evaluated at the beginning of the loop.
If false, the loop is not executed. But if true, all the actions within the loop
will execute each time unless an error is encountered. In that case, the loop is
terminated. And the fault is thrown to the nearest error handler. Otherwise, in
order to discontinue the loop, it is up to you to add actions in the loop to define
the appropriate logic as necessary, which will cause the condition to be false,
such as updating the variable used in the loop expression.
As an example, here we've created a global variable called counter and used an
assign action to set its initial value to 10,000. Then when configuring the while
action, we set the condition in the expression editor. In this case, the loop will
execute as long as the variable is less than or equal to 10,000. Now, within the
logic of the while loop, we use another assign action during the execution of each
loop to decrement the value of that counter variable.
As to scenarios where you might leverage the while action, you could poll an
external system for a status update until the status changes to complete-- for
example, querying an order system for the shipping status of an order-- or any use
case where you need to perform a task repeatedly until a condition is satisfied,
such as retrying an invoke connection until it succeeds. Of course, you'd want to
define a maximum number of attempts. Another case is when you need to loop over a
range where the endpoint is determined at runtime-- for example, iterating over
pages of a paginated API response when the total number of pages is unknown.
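As a hedged sketch of that polling-with-a-limit scenario-- the variable names orderStatus and attempts are hypothetical, not from the course-- the while condition could combine both checks:

    While condition:    $orderStatus != "COMPLETE" and $attempts < 10

Inside the loop, after the invoke, an assign would update orderStatus from the response and set attempts to $attempts + 1.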
Let's now discuss the for each action, which is used to process each element or
item in a pre-defined data set, such as an array, and executes a block of
instructions for each element. Now, the number of loop iterations is based on a
repeating element that you specify when configuring the action. Now, this action is
commonly used for file-based processing, which contains a set of records, like in a
delimited CSV text file or an array of elements in a JSON or XML file.
Each iteration of the loop provides access to the current item in the data set. And
like the while action, if an error occurs when processing any item, the loop will
terminate, throwing the fault to an error handler. Another feature is that you can select either sequential mode, which processes items one at a time in the defined order, or parallel mode, which allows all items to be processed concurrently.
When configuring the for each action, once again, you select an existing repeatable
element in your integration flow. In this example, it is an array of files returned
from a previous download operation response from a stage file action. Then define a
temporary variable name to indicate the current element for that loop, then whether
or not you wish to process sequentially or in parallel. You're now free to add any
number of actions with logic needed to process each file within that for each
action scope.
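As a small illustrative sketch-- the element names filename and directory are placeholders for whatever your adapter's response schema actually uses-- references to the current element inside the loop look roughly like this:

    Current element variable name:   CurrentFile
    Example references in the expression editor or mapper:
        $CurrentFile/filename
        $CurrentFile/directory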
As for these use cases, again, the most common is when you need to process several
records in a file. Another example would be looping through an array of invoices and sending out each one for approval, such as email notifications to a list of users.
Batch processing scenarios might include transforming and pushing individual
records in a collection to a target system-- for example, extracting records from a
database query result and then loading them into an ERP system. And of course,
whenever you need to process data from a collection in parallel for better
performance-- for example, concurrently uploading multiple files to a cloud storage system-- that's another good fit. Just remember to use the parallel mode only when appropriate, considering downstream dependencies or potential processing order requirements.
And now, finally, let's see a comparison of these two looping actions side by side.
While actions are based on a condition and iterate until the condition is false; they're flexible but require careful condition handling. For each actions are data set-driven, iterating over a fixed number of items, which makes them much easier to use with arrays. And they also provide a parallel mode option.
So to wrap up, you should leverage the looping construct that makes the most sense
for your scenario. And keep in mind, you can also create additional looping actions
inside other loops as needed to meet your requirements. That's it for this lesson.
Thanks for watching.
Welcome. In this video, I'll demonstrate the use and configuration of the while
looping action. All right. For this demonstration, before I can configure and
demonstrate a while action, part of that configuration is going to be defining the
condition for going into the loop and continuing through the loop.
So a common pattern is to create a global variable for that. And it can be any
conditional expression that provides a scalar value with a number, a string, or a
boolean. I'm going to set up three demonstrations here in this one demo by creating
three different global variables for those three different use cases, I guess, you
would call it.
So for our first use case, we're going to set up a boolean here. And we'll call
this boolean variable processingComplete. All right. So processingComplete is
either going to be true or false. The second use case will be-- we'll just use a
string.
And so the string will be a variable called accountName. And we're going to have
logic that goes through the loop as long as the accountName is valid. I'll show you how the values are set in just a moment; for now, I'm just declaring them here. And for the third use case, we'll make it a number type and call it counter, a variable for counting the number of loops.
So let's go ahead and save that. And once again, review those three variables we
have-- a string accountName, a counter that's a number, and processingComplete,
which is a boolean. So now let's do our first scenario here.
Very first thing we need to do is, before the stitch action-- I mean, before the
while action, I need a stitch action to set the value of the variable that I'm
going to use. Right now, it has no value. So if I click over here and I just-- for
now, I'm just going to call this Initialize. There we go.
And so in this case, the variable I want to initialize is the boolean variable,
processingComplete. So initially, processing is not complete. So we'll assign it a
value of false as its initialized value. Make sense? So we save that. We now have
our looping variable defined. And so now I can add the while action.
And with our while action, it pops up the pane. We'll call this-- give it a
meaningful name-- WhileProcessingIsNotComplete. All right. For the name of our
while action. And again, we'll go into the loop based upon the value of this
boolean variable. The condition is that processingComplete is not equal to true. Remember, in our stitch we set processingComplete to false. So since it's not true right now, we're going to loop at least the first time-- our while loop will execute. And then we will keep doing
something. Let's go ahead and put a placeholder in to demonstrate that. And we'll
just simply say Do_Logic-- whatever that logic is.
All right. And notice what we have right now: we've got an endless loop that's going to continue around and around and around. It'll never end, because that evaluation at the top is always going to evaluate to true until we change that processingComplete value.
So let's assume, for the sake of demonstration, that we do logic along the way. On every loop, we do some logic, and we're looking for a result-- or a processing complete flag from some other downstream service. And so below that, what I need to do is have some sort of conditional logic.
And the most common way to do that is to create an if condition. We do that with a simple switch action. I would just simply check to see if processing is
complete. So let's name that IfProcessingisComplete. And for now, I don't have a
way of evaluating that. But let's just pretend we had something to check to see if
processing is complete. Maybe a local variable I've used to determine that, maybe
something else.
But the logic is, whatever I've configured for the condition of my switch statement-- which I haven't configured here-- if that evaluates to true, then, and only then, will I set the variable. To do that, let's add another data stitch action. I'll call this SetToComplete. Now the variable I want to set is that processingComplete
variable, which is being evaluated at the top of the loop. And this time, I do want
to set that to true, which is the only time that will happen. So let's review the
logic, and then the other parts of the demo will go much quicker.
So we have a global variable that we had to first declare. In this case, it's a
boolean type. Next, we initialize that variable to a value of false. We now enter our while loop right here-- the condition is as long as it's not true, which it's not. Or we could have checked that it is equal to false. That would have worked as well.
Then we're going to go and do our logic. And then at the end of each loop, if all
our processing is complete-- now that's the only thing I didn't really demonstrate
here. I couldn't set up a use case quickly enough. But let's assume whatever this is gives me a flag or something to let me know. Then, and only then, will I go into this route, which sets the value of that variable to true, which,
of course, will then break us out of our loop.
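Summarizing what we just built as a sketch-- the exact syntax will match what the expression editor generates for you:

    Initialize stitch (before the loop):          processingComplete  set to  "false"
    While condition:                              $processingComplete != "true"
    SetToComplete stitch (in the switch route):   processingComplete  set to  "true"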
So now let's really quickly try a couple of other use cases. Let's do one with a
string variable. So that string variable was accountName. So I'll just change
things up a little bit. I'll go into our stitch statement, and I'll delete that for
our use case. And we will set that accountName. The logic here is I'm going to just set it to the string valid. The idea is that we'll stay in the loop as long as the account name is one we've defined to be valid-- whatever that means-- so we initialize it to that value. That's what we want to do with our while loop. So in this case, the logic of our while loop is WhileAccountNameIsValid. I guess that could have been another
boolean. But I just want to demonstrate a string.
So let's redefine the condition. See, let's reopen that here. There we go. So we're
going to take that accountName. And as long as that's valid, then we're good to go.
And we'll keep repeating the loop. And you get the idea.
So to do that, then I would say I've got some condition to define. So maybe I'm
checking the account from the variable that's coming through from the data. I'm
checking to see that the account name that I received along the way is not valid or
whatever.
So with that logic, now inside my stitch statement, in this case, I'm not setting
it to complete. I'm going to assign invalid as the value, and that's all I really
need to do. So let's go ahead and assign that. We'll say the accountName got a
value of invalid.
And if you're following along with me, you might be thinking, well, couldn't you have used a boolean for that use case? And I say, yes, you could have. I didn't-- I just checked whether the account name was invalid, and you could have whatever logic you need there. And that completes that scenario.
Notice what I have, once again, is a global variable called accountName. I'm now setting it to an initial value. I'm using that to enter the
while loop one time at least because we were checking to see if the value is, in
this case, valid.
Now, whatever logic I'm doing, along the way, I'll continue doing on each loop.
Maybe I'm going through a whole list of accounts, and I've got some sort of a check
at the beginning of each loop to see if the accountName is indeed valid. If it's
valid, we just continue going through the accounts. But once I reach an account
that's not valid, then that's what takes me out of the loop. There's something
wrong with this data record or something like that. So that's how I would do that.
And the final example is actually the most common one. So let's quickly do that.
And that's using the counter variable. So if you recall, I got a number variable
called counter. So in this case, I'm going to eliminate this switch statement. I
don't need it. I'm going to quickly go down here. And let me refresh my screen.
There we go. And save my integration so far. So let's initialize that variable.
This is going to be the counter. And the logic for this use case is that I'm going to assign it a value. Remember, that's a number variable, so I can assign it a number value. And we're only going to do five loops-- for whatever reason, our logic only needs five loops. I can delete that one, so we just have our counter variable set to the value of 5. Let me save that.
So with a value of five, that says I can go through this while loop five
iterations, whatever the logic is. So let's go ahead and change the name of that--
WhileCounterIsAboveZero, or the counter is greater than zero. We're going to go
ahead then and use that as our expression. So let's go ahead. And I need to delete
that and create the condition. So there's our counter variable.
And if it's greater than the value of 0, that evaluates to true. Make sense? We initialized it to a value of 5, and 5 is greater than 0, so we will enter the loop. Let me go ahead and save that and rename the while loop. And now we do our logic, whatever the needed logic is, five times: 1, 2, 3, 4, 5.
Well, in order to get out of this loop, that counter needs to be decremented down.
So this is where I would add another stitch action. And I'm just going to call this
DecrementCounter. And we take that counter variable. And we'll assign it to the
value of counter. And we'll just decrement it by 1.
And there's our expression. We save that, and that's it. So this allows me now to have a scenario where I'm doing logic within this while loop-- in this case, a fixed number of loops, five. One more thing to talk about with while loops is that, while you can use any variable to get yourself out of the loop, it is, as you can see, your responsibility to break out of the while loop.
It's also your responsibility to make sure the loop can be entered in the first place: the expression, if you want to get into the loop at least once, must obviously evaluate to true. And then it's up to you to break out of the loop by defining logic that will eventually change that expression to false. One other thing to remember is that there is a maximum of 5,000 iterations. You can't have logic that runs for more than 5,000 loops at runtime; if it does, the loop will break itself out and return an error. So just be aware-- it's up to you to make sure you don't have an endless loop or one that goes beyond 5,000 iterations. That concludes this demo. Thanks for watching.
Welcome. In this video, I'll demonstrate the use and configuration of the For-Each
looping action. So before I create the For-Each action-- since a For-Each action
requires a recurring, repeating element-- let's go ahead and set up a use case. I
will invoke an FTP adapter to get a list of files. So I will call this
GetListOfFiles.
And I will continue and just do a basic configuration, obviously-- the list files
operation, the input directory that I need to use, and we'll look for all files up
to a maximum of 100. Click Continue and Finish. And since I've hard-coded those values, I don't need to map them dynamically, so I'll delete this mapper. And save the integration so far.
Now, I'm ready for our use case. So for the use case, I'm going to add a For-Each
action to iterate over the list of files. So we click under Actions, we look for
the For-Each action, and immediately it's looking for that repeating element. So here's the response I got from the GetListOfFiles invoke. Under ListResponse, here is the file element. And you'll notice the icon indicates this is indeed an array of repeating elements. And of course, for each file that gets returned, there's metadata-- the
file name, the directory it came from, its size, and so forth.
I'll use that as my repeating element. And the only other thing I need to do is
configure a variable name that we'll use for the current element. So I'm just
simply going to call it CurrentFile, which is a good name. And notice now I have the option to process these in parallel, or the default, which is to do it sequentially. I'm
going to go ahead and choose the parallel option for this demonstration.
And let's give a meaningful name for this For-Each action. I will call it
ForEachFile, which kind of makes sense. And then we'll save that. And it's as simple as that-- my For-Each looping action is configured. And notice there's a warning that says, well, there are no actions in here, so what do you want to do? Well, obviously, I would
normally do some logic along the way.
For now, I'm just going to put a little placeholder here, a little note that says
here's where I would do some file logic, for the file itself. In a moment, I'll
show you a separate use case underneath this one. So we save it, and it's done. So
beyond that, there's other things you could do, other actions I could add later, or
I could add another loop inside another loop. The important thing to point out is
that parallel behavior. Notice, once again, that was an option presented to me
here.
However, that's not available everywhere. For example-- and I'm going to switch
over to another integration-- when I go to add a For-Each action over here, in this case, since I'm not in a scheduled integration-- and only schedule pattern integrations allow you to do things in parallel-- notice that's not even an option; it's always going to be sequential.
Now, for this particular scenario, since we have a trigger from a database that's
returning one or more products, I could then use that as my repeating element and I
could call this CurrentProduct. And then of course, I would just simply say
ForEachProduct would be a good name for this For-Each action. And save that. And
now that one's configured, again, ready for me to do one or more things inside of
here. I'm just going to say, yeah, we're going to Do_Something here for each one of
those.
All right. So that ought to give you a flavor for how to configure a For-Each. To reiterate the restrictions on that parallel option I was talking about, let's go ahead and go back. You saw that I can't use parallel in an application
pattern integration like this one, but I can use it in a schedule pattern. However,
there are still limitations there. Let's go back to this one we were just looking
at. And if you'll note here, if I try to add another For-Each action inside of this
one-- let's go ahead and do that now-- you can see what that looks like.
So for this use case, let's invoke the FTP adapter. This time, we'll
just simply call it GetFile. We'll read each file into memory. And in this case,
I'm just going to leave this to be determined because we'll map this in
dynamically. I'll show you how that's done. And then I need to define the structure of the file, so I will drag and drop a CSV file for this.
All right. So let's say that the record name would be Invoice, and the record set
is Invoices. And in this case, there is no column header. And click Continue. And
we are now going to read the file in. And the only thing left to do for this
particular-- before we invoke this-- is to map where that came from. So in this
particular case, it needs the file name and the directory.
Well, from that current file-- if you recall what we did just a moment ago, every
current file that we retrieve is going to have that directory and that file name.
So that's how we're going to read that in. I'll click Validate, and close. And now
we're ready to create an embedded For-Each loop. So underneath here, I'm going to
add another For-Each action.
But I want you to notice this time, I don't have the option to process things in
parallel. That's because I'm now inside another For-Each. In fact, if you're inside
a While-Loop or inside a For-Each loop, or even inside a Scope Container, you won't
have the option to go parallel. Same thing if you're not using a scheduled
integration, you won't have that option either.
So to complete this configuration, we'd give it some sort of a meaningful name. I'm
just going to call it ForEachInvoice, where I would do some logic. And then, of course, I need the repeating element. From the GetFile invoke, the response returned the actual contents of the file, and there are all the invoices. So there's one or more invoices-- that's an array of invoices, and that's the repeating element.
And I would just simply give it a meaningful name, CurrentInvoice. There we go. So
we'll save that. And so once again, I have a loop inside of a loop. And the only
thing left to do here would be to add some logic along the way. I'm just going to
say Do_Invoice_Logic as a placeholder. And that completes my configuration.
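As a quick aside, the mapping we did for the GetFile invoke-- pulling the directory and file name from the current file of the outer loop-- would look roughly like this. The element names are illustrative, since the real ones come from the FTP adapter's list files response schema:

    GetFile filename    mapped from    $CurrentFile/filename
    GetFile directory   mapped from    $CurrentFile/directory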
So notice I now have a use case that involves a For-Each loop to iterate over each file-- and maybe do some logic-- from that list of files that was returned from the FTP adapter. Next, I invoke the same FTP adapter to actually read each file into memory, and once I do that, I have a list of invoices, since the inner loop is iterating over the contents of the file itself. So the For-Each action in this case iterates over that repeating element, where I could then do some logic along the way. All right. This completes the demo for the For-Each action.
Welcome. In this video, I'll demonstrate the use and configuration of the switch
branching action. All right then, to demonstrate the switch action, I will actually
continue on from an earlier demo I did for the for-each action. So if you recall, we have a database trigger where we're getting one or more products from the database. And then we earlier configured a for-each action to
iterate through each product.
So essentially the repeating element was that array of products. Each product has
these four values. And so each element within there has information.
So let's assume a use case where I've got different logic depending upon the
product ID. So a range of product IDs requires a different logic than another
range. So I'm just going to kind of make up the use case and show you how this
works. So I'm adding a switch action underneath whatever logic I was doing here
first. So obviously that's just a placeholder. And then we just simply go to
actions. We click on switch, and it opens up the pane.
So it immediately wants you to configure-- not so much the switch itself, but the very first route condition. So in this case, the first route
logically is going to be any time where the product ID is less than-- I'm just
going to say less than 1,000. So product IDs of 0 to 999 will go down this route,
and then we'll configure something for another route.
So in this case, from that current product that is in this particular loop-- remember, we're in a loop-- I'll just simply take the product ID. And when that is less than the value of 1,000, then we have
our conditional statement, and we click save. And that takes care of our first
route. And so inside of here I can now do a specific logic for how to handle the
product when the ID is less than 1,000. Maybe a different kind of processing logic
is needed downstream.
So how do we add another condition? Well, we simply go to the switch itself-- the
header-- and I can either add another route or add an otherwise statement. So when
I add another route I get the same configuration. I can give it a name. I'll call
this one greater than-- greater than 1,000. Actually, it's going to be greater than
or equal to. But that's OK. So I take that same current product, product ID, and
I'll make it greater than or equal to the value of 1,000. And that's it.
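Written out as expressions, the two route conditions would look roughly like this-- the element name PRODUCT_ID is illustrative, standing in for whatever the database trigger's payload actually calls it:

    Route 1 condition:   $CurrentProduct/PRODUCT_ID < 1000
    Route 2 condition:   $CurrentProduct/PRODUCT_ID >= 1000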
So now I have two branches. When you have more than one branch, of course, there's
potential. Remember the rule is, with a switch statement, it only takes one route.
It will take the route that evaluates to true first. So this could evaluate to
true. If so, we'll do all the actions in this route. Or if that's false, then it
will look to see, well, is it greater than 1,000? Well chances are it is.
But what about the weird cases-- I mean, maybe error handling, or the scenarios where the product ID is not less than 1,000 and not greater than or equal to 1,000? Maybe it doesn't exist, or it can't be evaluated, or-- I don't know-- whatever the use case would be. I could, of course, add another route up here with yet another condition. But instead, when I add an otherwise route, there's nothing to configure. It just simply provides a placeholder for me to do logic, or do nothing, as I see fit.
So let's kind of review. When you have a switch statement and there is only one
branch and you don't have any other branches or routes, then it sort of logically
turns into an if statement. If that is true, we will do this. Otherwise, we just
kind of keep right on going down to the next action below.
By the way, one more thing. You can have nested switch actions if needed. So if
I needed logic to say, well, if it's greater than 1,000, I'm going to evaluate one
more thing. I could add a switch statement. And in this switch statement I could
have logic to say, well, let's go ahead and take the current product here-- if the product name is equal to some value. I'll just make a value up here. Yeah, let's go ahead and use that. So when the product name is equal to that value, then I'm going to do logic in this branch. Otherwise I might want to do some other
logic if it's not equal to that.
And so once it's expanded, you can now see there's the primary route or the first
route to check if it's less than 1,000. If it's greater than 1,000, then we first
check to see if the product name is equal to some value, where we'll do the logic
in there. Otherwise we're going to do logic in here. And if it doesn't meet either
of those two criteria, we're going to go down this path and do logic in here.
Remember, only one path will be taken when you're using switch statements. That
concludes the demo. Thanks for watching.
Welcome. In this lesson, we'll learn more about using the scope action in your
integration flow. The scope action essentially serves as a container for a collection of child actions and invokes, providing the behavior context for those actions. Anything that you can do in the integration's main flow, such as
configuring invokes, mappings, and actions, can all be done within a scope
container. The elements defined within the container have local visibility inside
that scope action.
There are two primary reasons for using a scope action. One is that it allows you
to organize connection invokes and other actions associated with accessing an
external system. But more importantly, and the main reason why you use these, is to
provide for scope level fault handling logic. Since I'll be covering scope fault
handlers in another lesson, I'll save those details for later.
For now, let's focus on the basic implementation. Simply use the inline menu to add
the scope action to the design canvas anywhere on the main flow. Then inside the
scope container, you add one or more actions and invokes as needed. Here's an
example where I've added three placeholder note actions just to illustrate. But one of the benefits of using scope containers is that you can make the design canvas look less cluttered once you have many more actions and other scope containers: you simply collapse the scope container by clicking here. And then when you need to look inside or do more editing, you expand the container again by clicking here.
Additionally, just like most other actions, you can cut and paste to reposition it
to another location in the flow if needed.
However, there are some context considerations to keep in mind, since your design logic needs to take into account that any returned data or message received from invoking a service, or any new variable created within a scope, will not be visible outside the scope-- or anywhere else in the main flow, for that matter. So the most common solution is to create one or more global variables, which are scoped to the entire integration instance, then configure a data stitch or an assign action to copy those local variable values or message data values to the global variables.
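As a minimal sketch of that pattern-- all names here are hypothetical-- you might declare a global variable on the main flow and then, inside the scope, use a data stitch to copy a value into it:

    Global variable (main flow):   orderStatusGlobal
    Data stitch inside the scope:  orderStatusGlobal  set to  the status element of the
        invoke response (the real path would be picked from the Sources tree)

Anything downstream of the scope then reads orderStatusGlobal rather than the scope-local response.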
Now to provide even more capabilities, you can also nest additional scope actions or containers. Consider them as child containers within a parent scope, which
allows you to become more sophisticated with separating actions into subsections of
your integration. Each of these child scope containers can have their own local
variables, but more importantly, you can also implement specific fault handling
logic for each one.
And indeed, you can have as many levels of nested child scopes as you wish.
However, I would recommend that you avoid making your integration too complex. It
might be better to create one or more separate integrations that could be invoked
from the parent integration to implement reusable logic that could be used by other
integrations.
So to wrap up, the scope action is used for better organizing the actions and
invokes within your integration flow. But in another video, I'll explore the
details about those scope level fault handlers. That's it for this lesson. Thanks
for watching.
OK. This recording is to show the OIC 3 feature for the OCI function invocation. So
I'll walk you through what exactly is happening here and then also show where
specifically we've added feature functionality for OCI functions.
So the parameters here are actually coming from the documentation of the OCI function. This is a prebuilt function. This function takes files from a given bucket, zips them up, and puts the archive into another folder or another bucket in OCI Object Storage.
So these are the parameters that we need to specify-- the compartment ID, the
region where this is executing, the source bucket, and the files, the source files.
We also specify what is the target bucket, and then whether we want to allow
overwrites or not. So these are the parameters that we will accept as part of the
invocation to the OIC orchestration. And you will see this later when we actually
execute this.
So once we have added all these request parameters, what we are going to do next
is, of course, introspect this particular function. But before we do that, we also
need to provide a response type. And that response type is going to be the output
from the OCI function. So this is, again, well-documented, and we are just going to
copy paste the target data or the data that we got from the documentation.
So we continue. This is our trigger that is done now. So we are now ready to do the
actual invocation of the object store. So we add a new action. This is available on
the palette. We pick it up from the palette itself, and we just provide it a name.
That's what we want to call this particular endpoint.
Now, the OCI function itself requires input and output. So this is the input.
Again, this is part of the documentation. So we are just copy-pasting the input
that is provided or that is going to be used by the OCI function. So we just copy-
paste that.
And then similarly, the output is, again, from documentation. Once the function
executes, this will contain the response JSON object from the OCI function. So we
put that in. So what we saw over here is OIC 3 has the object-- sorry, the
functions capability, where you can provide the function or introspect the
function, provide input and output.
The next thing that we do is we map the source and the target. Now, this is being
done for the parameters that the client application will send to OIC integration.
So whatever data we are collecting from the client application is being mapped over
here to the same elements in the functions parameter.
So once it is done, we will next do the mapping for the response, and that is again
going to be very similar. Whatever response comes from the OCI function, we are
just going to map it back to the response that the OIC integration, this particular
integration, will send back to the client. So we open up the appropriate response
data structures or payloads and then do the mapping appropriately.
So again, this is dependent on what the function has been coded for, what is the
input it takes, which is what we saw earlier, and what's the output that it sends
back. Once we've done that, we validate, and we go back to our integration. So this
is now ready. All we need to do now is to specify or add the business identifier.
So in this case, what we are going to do is we are going to open up the query
parameters, and we'll take the source bucket and target bucket as the identifier.
This can be anything that-- there is no hard-and-fast rule, as you're probably
aware. And once we've done that, we can now go ahead and activate this particular
integration.
So we'll set a debug, since this is just for demo, and then this gets deployed. We
can just check the status. So it's active now. We go ahead and click on Run.
And the OIC test console will come up here. As you can see here, we've already
filled all the input parameters that we saw earlier. And this is the OCW23 demo
bucket. That's the source. This is the target bucket. The target bucket doesn't
have any files at this moment. And we go back to our integration, where we've
already entered all this data.
Source files is forward slash, which will mean take all files from that folder. We
are now running this particular integration. It takes a little bit of time, but this is now successful. We can see the output that has come from the function. And if we refresh here, we will see that our integration has completed successfully.
We go back to target bucket, which was empty. And now, when we refresh it, we will
see that it contains a new zip file, which actually contains all the source files
from the source bucket. So that's about it. Thanks for watching and listening.
Welcome. Within your integration flow, you can delay processing for a specified
time with a wait action. Let's take a closer look. Let's first assume that you have
an integration flow that has several actions and invokes that need to be executed
in sequence, but due to business requirements or other downstream dependencies, you
wish to delay the execution of the next step for a specified period of time. Here
is where you can insert a wait action, then configure the amount of time the
integration should pause before continuing on.
You have two options when configuring the wait time. The first is to enter a static
literal value in seconds, and the other is to provide an XPath expression, which
must return a value for the number of seconds, thus allowing for a dynamic time
period. As shown in this example, a common approach is to define an integration property and use its value in the expression, which then allows you to update the value later on-- though that, of course, requires a reactivation of the integration for it to take effect.
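As a sketch only-- the exact reference depends on how the property appears in your Sources tree, and waitTimeSeconds is a hypothetical name-- the two options might look like:

    Static value:         120
    Dynamic expression:   number($waitTimeSeconds)

where number() is the standard XPath function that ensures the result is numeric.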
However, either way, you need to consider that the wait time cannot exceed the
total running time of the integration instance. For example, consider that
synchronous application integrations will time out after five minutes, so it must
be less than that. But even with schedule pattern and asynchronous application
integrations, you certainly don't want to set a wait time that would cause your
maximum design runtime for that integration instance to be exceeded.
Here, the use case is to temporarily pause the integration flow, delaying the next
action execution anywhere from one second to several minutes or even a couple of
hours as appropriate for the use case.
While you can insert a wait action almost anywhere in your flow, a typical use case
for this action is to invoke another external service with a defined delay from the
previous activity, or more commonly, it's used within a looping construct, such as
a while loop or for each action as shown here to delay processing to an external
service within each iteration of the loop.
And during runtime, you can also track the status of the wait action on the
tracking page through the tracking diagram and the activity stream for an
integration instance that is in progress or even return later to a completed
instance to view the same information.
So any time you need to insert a pause in your integration flow, simply add the
wait action and configure the wait time. That's it for this lesson. Thanks for
watching.
Welcome. In this lesson, we'll look at using the note action in an integration
flow. Let's assume a use case where you're working on an integration that invokes a
couple of external services. However, during your development and testing, you
discover that you're going to need to retrieve some additional info from a database
prior to invoking that second service.
The issue is that while you know it is an instance of an Autonomous Data Warehouse,
for now, you don't have the required schema and credentials information in order to
actually configure the connection for that activity at this time. So this is where
you, perhaps, would insert a note action with some basic information to serve as a
placeholder to remind you or perhaps someone else on your team to implement this
logic later on.
Once you're ready to continue with editing the integration, you simply delete the note action and configure the connection and invoke action accordingly to retrieve
and use the data as needed downstream in the integration flow. Of course, that is
just one use case. You can also use this action for other comments similar to using
sticky notes anywhere within the integration flow.
The note action is a design time feature that has no impact at runtime, so assuming
you want to provide design comments to others to view later on, you can actually
leave them in the integration, even in production.
Let's now take a quick look at using and configuring the note action. To configure,
after placing the note action somewhere in your flow path, you just then provide a
short name for the action. I'll call this invoke database. And then, of course, you
can provide any description, comments, or information. I'm going to paste in this
information about accessing the ADW instance, and then I simply click to save it.
Now, later on, someone can go to this action to read it by simply hovering over it,
as you see here. Or they can use this menu to move it to another location. Let's
say it needs to go here instead. And it moves to the new location. Or you can use
this menu also to delete the action. So any time you need to add comments or create
a placeholder in your integration flow design, use the note action as needed.
That's it for this lesson. Thanks for watching.
Welcome now to module 6, file processing. These lessons cover all the many aspects of receiving, processing, and sending files in integration flows, from defining FTP and file adapter connections to configuring all the available connection operations. You'll also learn about the stage file action operations, as well as how to transform native file formats. And to interact with the OIC embedded file server, you'll learn about the new file server action. There's plenty to cover. And once
again, after you've completed the videos, don't forget to take the module 6 skill
check. Keep going strong.
Welcome. In this lesson, we'll cover the basic options you have for handling files
as you learn about some fundamental capabilities. When files arrive to your
integration flow instance, they are temporarily stored into a virtual file system.
And in some circumstances, they are also parsed and brought into memory, but more
on that later.
The virtual file system stores all received files as well as new ones you may
create within the integration, but only for that instance. Other instances that may
be running at the same time or later on will have their own VFS. So while you use it to store files that you create or have downloaded, this storage is essentially ephemeral, and your files are not persisted beyond the life of that instance.
Now for access, the stage file action has eight different file operations that you
can use to work with files located in the VFS, and we'll look at those in just a
moment.
So let's now pivot to talk about how we can receive files from external sources.
There are several options. The most common pattern for retrieving files is to
define a schedule pattern integration with a configured FTP adapter connection used
to access an external FTP server or even the embedded file server. So based on a
schedule, a new run will kick off and the FTP invoke connection will be used to
retrieve one or more files to be further processed. Or instead of a schedule to
trigger the flow, another pattern is to implement a REST adapter connection as the
trigger with the interface designed to expect one or more file attachments. So
then, when invoked by a REST client, those attachments are received into the
integration and can be processed as required.
On the other hand, if the file is located in a shared file system along with an
installed connectivity agent, you can configure a file adapter connection to be the
trigger. The agent will poll the file system. And when a new file arrives, it will
be delivered to OIC, triggering a new integration instance. However, regardless of
how your integration is designed to start, once it does, you can retrieve files at
any time. We just looked at the FTP adapter. And likewise, the file adapter can be
used in the invoke role to fetch a file whenever you want to retrieve it.
Or perhaps there is a RESTful web service that allows you to request files, which
can then be sent as an attachment in the response, or using the SOAP adapter to
invoke a SOAP-based web service that returns a file as an attachment, as a
response. Additionally, there are some SaaS applications such as Oracle Commerce
Cloud and Oracle Logistics Cloud that can respond with a file attachment when
invoked from your integration. Another scenario would be to use the new object
storage action to retrieve a file that has been staged in an OCI Object Storage
bucket.
But what about those use cases where you need to send processed files to external
systems? And it doesn't really matter how your integration flow was triggered or
how it started. The same options apply. Again, the most common use case is
leveraging the FTP adapter to write the file to an external FTP server or the
embedded file server. Or if it needs to be sent to a shared file system serviced by
a connectivity agent, you use the file adapter, or perhaps there is a RESTful web
service that is looking to receive the file as a multi-part attachment, or if it is
SOAP-based, send it as an attachment using the SOAP adapter. Again, certain
applications can receive file attachments when invoked, such as Oracle Taleo
Enterprise Cloud, receiving a resume or a candidate picture. Another use case is to
send the file to be stored in an OCI object storage bucket using the object storage
action.
And finally, throughout our documentation and in these trainings, we call out the
difference between structured files as opposed to opaque files. The reason is that
when sending or receiving files by way of the various adapters, there are limits
and different behaviors. For opaque files, you can download or send files of up to
one gigabyte in size. Structured files have a limit of 100 megabytes, or, if the adapter is being used in conjunction with a connectivity agent, the limit is 50 megabytes.
And while both types of files will be staged in the virtual file system, structured
files will also be parsed into the integration instance memory, so their contents
can be exposed. Structured files are those that have had their contents defined by an XML schema, while opaque files can be unstructured-- such as PDFs, image files, or a zip archive-- or they could be larger structured files whose structure has not yet been defined. And because opaque files are never parsed into memory, they can be downloaded or sent at sizes of up to 1 gigabyte.
And that brings us to the end of this lesson on file handling fundamentals. But
there is more to learn, so be sure to take a look at the additional videos that are
in this module. Thanks for watching.
Welcome. In this lesson, we'll look at the options you have for defining
connections using either the FTP or file technology adapters. Let's get started.
File adapters are configured to access a shared file system, while FTP adapters can
be used to connect to an FTP server. The main difference is that since the shared
file system needs a local client to access the files, a connectivity agent must
first be installed. FTP adapters can connect with the remote FTP server directly
and would only require a connectivity agent if the FTP server was located in a
private network.
As to file operations, these are the six operations available for both adapter
types when used in the invoke role. File adapters can also be configured as a
trigger for an integration flow, since the connectivity agent serving as a polling
client will deliver new files to OIC, thus launching a new integration instance for
each new file. Unfortunately, some folks are concerned to learn that the FTP adapter does not support the trigger role as the file adapter does, and so it is not able to automatically poll for and process new files that have been recently delivered.
However, you can achieve that same capability with the FTP adapter by leveraging a
schedule pattern integration. The integration is triggered on a schedule, followed by an immediate list files operation call on the invoke role FTP adapter connection. Next comes an action to see if there are any new files. If there are, then the connection is used again to download one or more files, followed by any downstream processing logic that needs to be implemented.
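A sketch of that new-files check-- the path shown is illustrative, since the real element names come from the FTP adapter's list files response:

    Switch route condition after the list files call:
        $ListFiles_Response/ListResponse/ItemCount > 0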
The FTP adapter has various connection property options. Of course, you'll need to
provide the server's host address and port. Then under optional properties, you'll
see several fields you can edit, as well as the ability to upload a host key or SSL
certificate, depending on the type of connection you need to use.
For FTP protocol connections, you can only use the FTP service access policy, which
uses the username and password for authentication. Then under Optional Security, if
you uploaded an SSL certificate, you'll need to provide the password for that P12
format certificate. This policy also allows for the configuration of PGP values.
For secure FTP protocol connections, you can choose to use any of these three
security policies. So instead of just username and password, the public key
authentication policy connects to the secure FTP server using the public host key
you uploaded in the optional connection properties section. You still provide a
username. But instead of a password, you upload the private key file. Then in the
optional security section, you'll need to provide the PassPhrase if the private key
is encrypted.
This last policy option uses multiple independent credentials to log in to the
server, which creates an extra layer of defense against unauthorized users. With
this policy, you provide a username, user password, and the private key, but also
the private key PassPhrase since the key must be encrypted. You will also configure
the first authentication sequence to be either the password or the host key.
And as a reminder, all three policies provide options for specifying PGP encryption
and decryption and signing verification details. However, you will first need
someone with the OIC service administrator role to upload those PGP keys into the
OIC instance for you so you can locate them here on this page.
Finally, you'll need to indicate how the connection will be accessing the FTP
server. In most cases, it will be a publicly available IP address on the internet,
so you'll use the default public gateway. However, if the FTP server is running in
your private OCI Cloud subnet, you can connect directly, but only if your OIC
service administrator has set that subnet as a private endpoint for your OIC
instance.
Otherwise, the only other way to access a private FTP server either in the Cloud or
on-premises is via an installed connectivity agent. In this case, you'd select that
option, then indicate which agent group will be used. As to creating a file adapter connection, it is easy: just indicate the role in which it will be used-- trigger, invoke, or both. Then, since there are no connection or security properties, you just need to associate the connection with the appropriate agent group, and that's it.
Since both the file and FTP adapters have many overlapping capabilities, let's do a
quick side-by-side comparison. The file adapter only works with an installed
connectivity agent that has access to the file system, while the FTP adapter connects directly, unless the FTP server is in a private network. The file
adapter itself has no security configuration at all. Instead, the file system must
be protected natively, while the FTP adapter has different connection security
policies that utilize encryption protocols and supports key-based authentication
for enhanced security.
Only the file adapter can be used to trigger an integration flow, and while the
supported invoke role file operations are the same, again, polling with the
connectivity agent is limited to the file adapter. However, only the FTP adapter
connection can be configured to support encrypting and decrypting files
automatically during file write, read, or download, leveraging PGP cryptography
without having to handle that within the integration flow. And one more consideration: if you wish to access the embedded file server within OIC, since it uses the secure FTP protocol, you can only use the FTP adapter.
And so to wrap up, you use the file adapter to access shared file systems and the
FTP adapter for FTP servers, both allowing for these six file operations, or you
can use the file adapter as an integration trigger or leverage a schedule
integration pattern to accomplish similar logic with the FTP adapter. That's it for
this lesson. Thanks for watching.
Welcome. In this two-part lesson, we'll take a closer look at the operations that
can be configured on both the file and FTP adapter connections. Let's get started.
We'll start with the list files operation, because this one is handy to use when
you don't know the specific names of the files that you're looking to download or
read into memory. Once you select this operation in the file or FTP Adapter
Configuration wizard, you indicate which directory to list as well as the file
pattern of the files to be listed. However, later, when implementing the map action
for this invoke connection, you have the opportunity to dynamically assign the
value for the input directory or the file name pattern. If you don't, the invoke
will use the values you specified in these fields.
You also indicate the maximum number of files to be listed. However, this cannot be
more than 1,000. Now, if you don't want to list any files that have been just
recently created or modified, you can indicate the number of seconds to compare
against the current timestamp. Otherwise, all files are listed. Now checking this
box will list all files recursively from the input directory. And for the FTP
adapter, you can check this box to force the listing of all files regardless of the
file permissions.
When examining the response data structure in the sources section of the data
mapper-- of course, this will be available in the integration flow after the
invoke-- keep in mind this is not the files themselves or a list of file
references. Instead, you're retrieving a list of file names along with some file
metadata. However, to access this list for file processing, a common pattern is to
then configure a for each action using this item count value to define the number
of loops. Then within the loop, you'll see this additional data structure that will
provide the name of the current file along with its metadata, allowing you to
perhaps configure a download or read file operation next.
When you select the move file operation, both adapter types will be looking for the
directory and the name of the file to be moved along with the target directory and
target file name. You can also indicate if it's OK to overwrite the file if it
already exists in that target directory. However, optionally, when configuring the
map action for the invoke, you can dynamically provide values for any of those four
fields. Otherwise, the default placeholder values are used. As to the results of
this call, the response from the invoke will provide a Boolean true if the move was
successful.
The configuration for the delete operation is very similar, but here you only need
to provide the directory and the name of the file to be deleted. And once again,
optionally when configuring the map action for the invoke, you can dynamically
provide values for either of those two fields. As to the results of this call, once
again, the response from the invoke will provide a Boolean true if the delete was
successful.
The download file operation is used when you need to retrieve a larger file up to a
gigabyte in size. For both the file and FTP adapters, once you specify the
directory location and the name of the file to download, you then indicate where in
the logical virtual file system you want to stage that file. Now, typically, this
will be in preparation for the next file processing action. However, for the file adapter, you also specify a transfer mode: binary, which only transfers raw bytes, or ASCII, which transfers special control characters for data formatting.
Notice that the FTP adapter can unzip an archive file for you automatically if
required. The file adapter does not have that capability, so you would need to do
that with a separate flow action later if needed. The FTP adapter can automatically
decrypt PGP encrypted files or can also be used to perform a signature verification
on the file. These require that additional private/public key configuration when
defining the FTP adapter connection. Again, if using the file adapter, you'll need
to perform separate flow actions if PGP decryption or signature verification is
required.
Now when editing the map action, both adapters provide for dynamically assigning
values for the file name, directory, or download VFS directory. Of course, only the
file adapter will include these Boolean processing options.
Subsequently, in your flow, the mapping canvas will add this response in the
sources section where you will not only have access to the downloaded file
reference in the VFS, but also have its file name, file type, and other metadata.
Now a common next-step use case for larger downloaded files is to add a stage file action, configure a read file in segments operation, define the schema, then process those file records one chunk at a time as they are brought into memory.
Another common scenario is when there are a bunch of files to be processed in a zip
archive. After downloading and unzipping the archive, you add a for each action to
loop through each file to process as needed.
That brings us to the end of part 1. I'll see you in the next video where we'll
cover the read file and write file operations.
Welcome. In this lesson, we'll learn how to interact with files located in the
embedded file server using the new file server action. But first, as we learned in
an earlier lesson, you can still define and configure an FTP adapter connection to
interact with the OIC file server, just as with any other external secure FTP
server.
But when you use the file server action, under the covers OIC uses internal APIs to directly access any location within the file server, unlike the FTP adapter, which has been configured with specific credentials. While the file server action functions similarly to an FTP adapter connection, there are still some tasks where you would choose to use an FTP adapter instead.
For example, when you need to bring a file directly into the integration's virtual file system so it can be made available in memory for processing, if you need to encrypt or decrypt the file, or if you need to sign or verify a signed file-- these are all use cases for the FTP adapter. So what can you do with the file server action?
Well, these are the five operations that are currently supported. We'll look at
each of these in a moment.
However, first, it's important to note that this action won't be available to you on the design canvas of your integration if the embedded file server has not been enabled; it will just be grayed out, as shown here. And so once your OIC service
administrator has enabled the file server, of course, it will then be visible and
available for your use to configure one of these five operations.
The list directory operation is handy to use when you don't know the specific names
of the files you're looking for. Once you select directory as the resource, you'll
see the only choice for operation is list directory. And while you can specify the
directory here, typically you'll be mapping that directory value in the newly added
map action such as a value from an input request parameter, or an integration
property, or from a scheduled parameter.
You type an asterisk to list all files or choose a file name pattern to limit the
type of files you want to list. The default is 100, but you can specify up to 1,000
for the max number of files to be listed. And if you don't want to list any files
that have been just recently created or modified, you can indicate the number of
seconds to compare against the current timestamp. A minimum age of zero will list
all files.
And finally, checking this box will list all files that may exist in subdirectories
of the indicated input directory. A common pattern would then be to configure a for
each loop action where you use this file array from the list directory response as
the repeating element for the loop. Now, once you name the current element, as in
this example, you'll have it available in the sources section of subsequent
expression editors or mappers, where you'll have the file metadata, including each
file name.
For this next operation, you select File as the resource, then choose Get File
Reference for the operation. Next, you'll most likely use placeholder values
because, from the newly added preceding map action, you'll have the opportunity to
assign those directory and file name values dynamically. And finally, once
configured, you'll notice that the response from this operation returns not only
the file reference, but also echoes that directory and file name.
When you need to upload a file to the file server, after you select File as the
resource, you choose Write File for the operation. Now, for these fields, you can
use placeholder values unless, for example, you know the directory will always be
the same. But typically, in the mapper, you will assign the directory and file name
values dynamically.
Notice that you will also need to provide a reference to a file that has been
stored previously into your integration's virtual file system, such as from an
external FTP server download or a file that was created earlier within a stage file
action, as in this example. Now, this is a common pattern when you need to
transform or create a new file and then wish to upload it to the embedded file
server.
Notice that in addition to the file reference, the name of the file created is used
as the value for the output file name. And in this example, the output directory
location was provided by an integration property that could have been modified
prior to activation. Finally, once configured, you'll notice the response from this
operation returns a Boolean value of true if successful, as well as the directory
and the file name that was uploaded.
To move a file to a different location in the file server, after you select File as
the resource, you choose Move File for the operation. Next, there are placeholders
for the source directory and file name, as well as the target directory and a
target file name. But typically, once again in the mapper, you will assign these
values dynamically. Notice that you also have the option to overwrite the target
file if it already exists. And once configured, you'll notice the response from
this operation simply returns a Boolean value of true if the move was successful.
And finally, to delete a file on the file server, after you select File as the
resource, you choose Delete File for the operation, then specify which file in which
directory needs to be removed. But typically, once again in the mapper, you will
assign those values dynamically. And once configured, just the same as the Move
File response, this operation simply returns a Boolean value of true if the delete
was successful. And that brings us to the end of this lesson where we covered how
to configure and use these five operations of the file server action. Thanks for
watching.
Welcome to this short demo on the native file server action that's available with
OIC 32404 release. And first of all, let's look at a very simple use case I'm going
to implement using that. Here, I have my directory structure on my OIC file server.
And as you can see at the top level here, we have five files--
three orders and two JPEGs. Now, what I want to do is move the orders to
the sales directory. And I want to move the JPEGs to the marketing directory. And
to do that, I'm going to use the new file server native action in OIC.
So here, we are in our project. And let's go into the integration that's actually
going to do that for us. And once you have activated file server for your OIC
instance, you will be able to use this action here. So we'll just drag and drop
this across to see what's actually available here.
Now, we will be dealing with two resources here. One resource is directory and the
other resource is file. So for the directory, for example, I can have the operation
list directory. So let's go back to our situation here. So I could say I'm going to
start off at this directory level and list the contents of this directory. And I
can also do a recursive list that will go through all of the subdirectories and
list the files there as well.
So let's go back in here. That's directory. And now, let's look at the file
resource. So with the file, what can we do here? We can move a file, which is what
I'm going to be doing in this example. But we can also read a file by getting
the file reference. And we can use that reference, for example, to delete a file or
write a file to another location. So these are the operations that are
available at file level.
So let's go in here and look at the integration. Let's get rid of this guy here. We
don't need it. And as you see, I'm starting off here with a list directory. And
within here, as you can see, in the configuration that I've done, there's
my top level directory. I can put in the file name pattern.
And you can see the max files setting. I can use minimum age as well. This is
useful for the timely processing of large files. You want to make sure the large
file has been fully written before you actually start processing it. And, of
course, the ability to list files recursively, which I don't need in this case
because all of the files I'm interested in are in this top level directory.
So if we go down here, we can see we've got a for each loop over the files listed.
And as you can see here, here is the structure returned by list
directory. So within here, you will have a file list, including the directory, file
name, et cetera, et cetera. And, of course, this is what I'm actually using as a
repeating element here.
So for the move, let's see how this guy is configured. You will see here, this
time we're using the file resource with the Move File operation. There's the home
path, a directory, the source directory. I'm just putting in here a dummy.json
because this will change every time, since it's in a for each loop. And then there's
the target directory and, again, the target file name. And, as you can see, you have
the ability here as well to overwrite files if required.
So let's go on to try this out. So just do Save. Now, I'm closing the integration.
Now, I've just remembered that I should have done the test using the tester within
the OIC canvas itself. But it's too late. Let's go. I'll use that for the next
demo. Here we go. This guy is active. Let's go in and let's run this guy now. So
run. We see here, it's processing away there through the iterations. And you see
all five have completed successfully.
So we will see here for iteration one, for example, we have all of these guys going
through moving it from media to marketing, et cetera, et cetera. This guy is moving
the order to the sales directory, et cetera, et cetera. So we can actually go back
here. And we can actually do a refresh at this level. And we see the files are gone
from that level. Let's go into sales. Let's do a refresh at sales level. The orders
are there. Let's go to marketing. Let's do a refresh there. The JPEGs are there. So
it's a very, very simple demo. Thank you very much.
Welcome. In this lesson, we'll look to leverage the stage file action operations
for processing files within your integration. So if you recall from an earlier
lesson, once a file or file reference has arrived in your integration flow,
regardless of which pattern is being used to trigger the integration or how the
file was received, you can configure one or more stage file actions as needed using
one of eight operations.
Now, five of these operations are used on files that have been received, with the
two read operations bringing the contents into memory so they can be parsed. The
other operations can be used just prior to sending a file out to an external system,
where the most common use cases are to leverage the Write File operation to create
one or more new files, or to append content to create one new larger file.
While you will most often retrieve files or file references from an FTP server or
remote file system, you can also work with any file attachments or embedded content
responses from SOAP or REST web services. So once you add the stage file action to
your integration flow, the first thing you do is to select the operation, then
you'll be presented with the configuration options for that operation.
You can specify the virtual file system directory by name or in many cases, as in
this example, you will drag and drop a directory value that was provided by a
previous response, such as an FTP download operation that also unzipped an archive.
Specifying a file pattern also allows you to limit the type of files you're
interested in. And checking this box will also list files that may exist in one or
more subdirectories.
Once configured, the typical use case is to follow up with a For Each loop action
so that you can iterate over each file to do some processing. Now, in order to
configure the For Each action, since the list files operation returns this response,
you can specify the repeating element for the loop to be this object array called
ICSFile, which contains the file reference and the metadata for each file.
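To make that a little more concrete, here's a rough sketch of the shape of a list
files response as it might appear in the mapper. The ICSFile array is what you select
as the repeating element; the surrounding element and metadata field names shown here
are just illustrative placeholders rather than the exact schema:

    <ListFilesResponse>
        <ICSFile>                                    <!-- repeating element for the For Each loop -->
            <FileName>orders_01.csv</FileName>       <!-- illustrative metadata fields -->
            <Directory>/download/unzipped</Directory>
            <FileReference>vfs-reference-token</FileReference>
        </ICSFile>
        <ICSFile> ... </ICSFile>                     <!-- one entry per file found -->
    </ListFilesResponse>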
While the FTP adapter can automatically unzip an archived file when it gets
downloaded, there are times when the file is still archived, especially if it has
been received by some other adapter such as File, SOAP, or REST. And so with this
operation, you can use the Stage File action to do it explicitly when needed within
your integration flow.
On the Configure Operation page, there are two options. If you have just a file
reference to the archive, you provide that here. Otherwise, provide the name of the
archive file and the VFS directory in which it is located. In both cases, you will
then indicate where in the virtual file system to unzip the files. Later, when
looking at the response in the data mapper, just like the list files response,
you'll see that array of objects called ICSFile, which contains the file reference
and the metadata for each file.
Once again, the FTP adapter has the capability to automatically create and send a
zip archive of files located in a VFS directory. However, there are plenty of other
use cases where you will need to create that archive ahead of time within the
integration flow using the zip file operation.
The configuration for this one is pretty straightforward. You identify which
virtual file system directory is to be zipped, then specify a name for the newly
created zip file, as well as another VFS directory where it will be placed. Later
when looking in the map editor, you'll see the response from this operation
provides easy access to the zip file reference along with its metadata, if you need
it.
I'll cover these next two operations together. Once again, if configured
appropriately, the FTP adapter can automatically decrypt a downloaded file and also
encrypt or decrypt a file being written to the FTP server. But there will be other
use cases, especially when you're not using an FTP adapter, where you'll need to
explicitly decrypt a received attachment or perhaps encrypt a file before sending
it out somewhere.
But prior to you configuring either of these operations, someone with the OIC
service administrator role must first upload the appropriate public or private PGP
keys on the certificates page in the OIC Service Console. To decrypt a file, of
course, you'll need to select the private key. And to encrypt, you'll simply choose
the appropriate public key certificate.
Next, you will provide the reference to the file that is already within the virtual
file system, then create a name for the new file along with the name of a VFS
directory where you want it to be staged. Now, later, when you need access, the new
file reference is available within the operation's response, visible in the map
editor. And we'll conclude part 1 of this lesson here, as I will cover the three
remaining operations in part 2. See you in the next video.
Well, welcome back. Let's continue with part 2 of configuring stage file action
operations. We'll pick up where I left off in part 1. As I mentioned before, the
write file operation is used to either create a new file or to create new content
to be appended to an already existing file.
Again, since this is not an adapter, the scope of this operation is to create file
content that will be stored in the integration's virtual file system, which means
you'll still need to configure an invoke adapter connection to actually send the
file to some external system from your integration. And just another reminder
that all files in the VFS are automatically deleted once the integration flow
instance has been completed.
So once you select this operation, you will first specify a name for the file to be
created, along with a location in the VFS for it to be staged. If this is a file
that already exists, checking here will append the contents; otherwise, the file
will be overwritten. Once again, just like the FTP and file adapter write
operations, an append only works for delimited file types. You also have the option
to encrypt the file here as well, choosing the appropriate PGP public key.
Next, on the Configure Schema Options page, you'll notice that you must specify the
structure for the contents of the file. This is required. And then you choose how
to define that structure. Notice that these are the same options we learned about
already in an earlier lesson on FTP and file adapter operations. There is one
additional option here, though: you can also create an electronic data interchange
document. We'll revisit this one in just a moment. And as a reminder, the file size
limit for a write file operation is 10 megabytes.
Meanwhile, depending on which option you choose, just as we saw before, you simply
drag and drop the document, then select the schema element. Now, for the CSV
option, as I alluded to before, you'll learn how to do this in the transforming
native file formats lesson later in this module. And if you select the EDI document
option, you only need to indicate the character encoding the file is expected to
use.
And finally, unlike all other stage file action operations, when you complete this
configuration, a new map action will be added to the integration flow so that you
can create the mapping to define the contents of the file you wish to write. In
this example, I called the action Create File, and I provided a purchase order XML
schema of items for the file structure. Here's an example of what you'd see when
you choose the EDI document option. In either case, it's up to you to implement the
mapping to create the file contents as required for your use case.
However, there is another use case to discuss. While most of the time you are
creating a defined structured file, there is the scenario where you need to create
a file using an opaque schema element, providing a file reference, even for
unstructured files such as a PDF or an image file. Now, to handle this, you first
create a simple schema document with just the opaque element. Then select that as
your schema file and schema element as shown here.
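Just to give you an idea of what that simple schema document looks like, here is a
minimal sketch of the kind of opaque schema conventionally used with Oracle adapters;
the target namespace shown is the usual one, but do verify it against your own
environment before relying on it:

    <schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/opaque/"
            xmlns="http://www.w3.org/2001/XMLSchema"
            elementFormDefault="qualified">
        <!-- a single base64Binary element carries the raw, unparsed file content -->
        <element name="opaqueElement" type="base64Binary"/>
    </schema>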
Then when you open the newly created map action, you'll see the required opaque
element that needs to be mapped. Expanding advanced functions, you drag and drop
the encode reference to Base64 function, which is now added to the expression
editor for that element. Now, just locate the file reference and provide that as
the function's argument.
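Under the covers, the resulting mapping in the generated XSLT would look something
like this sketch; the oraext function prefix and the source path used for the file
reference are assumptions here and will differ based on your own flow:

    <!-- map the opaque element by encoding the staged file reference as Base64 -->
    <!-- $DownloadFileReference is an illustrative placeholder for wherever your file reference lives -->
    <ns0:opaqueElement>
        <xsl:value-of select="oraext:encodeReferenceToBase64($DownloadFileReference)"/>
    </ns0:opaqueElement>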
Moving on now to the read entire file operation. This one is almost identical to
the FTP and file adapters' read file operation, where you're essentially bringing
the file into memory, in this case from the virtual file system, using either a file
reference or a file staged in a VFS directory. And you will be defining the
structure of the file so that those schema elements can be made available as a
source for mapping to other target data structures.
So when you select the read entire file operation, you can either select here to
then specify the file reference, or click No to specify the file name and the VFS
directory where the file is located. Now, for delimited file types, here you have
the option to remove the last row of data, or even multiple rows. And if the file is
encrypted, the content can be decrypted as it's read into memory by using the
corresponding private PGP key.
Finally, you now must specify the structure of the file, where you have the same
five choices available that we just learned about a moment ago with the Write File
operation. Now, as a reminder, this operation can only be used on files up to 10
megabytes in size. However, for larger files, up to 1 gigabyte in size, this next
operation enables you to read the file in segments.
Essentially, the operation creates a loop scope structure that delivers each file
segment separately. Chunking files enables large files to be processed one logical
chunk at a time, which enables the file segment to stay within those memory
constraints. While this operation is most often used with delimited files with lots
of rows, you can also read large EDI documents or XML files containing repeating
elements and multiple namespaces.
Here's a look at a common pattern where you first use an FTP adapter connection to
download a file as an opaque file, without defining the structure, so it can be up
to 1 gigabyte in size. Then you add the stage file action and configure the Read
File in Segments operation to retrieve that file from the virtual file system,
now defining the structure and indicating the segment size for each chunk. Next,
you define a for each loop action to iterate over each record within the current
segment, where you can then add one or more actions to implement the logic needed to
process each row of data.
Remember that this operation scope is actually a looping construct itself, which
will iterate until all the file segments have been brought into memory to be
processed. So for this implementation pattern, you've sort of created a loop within
a loop. Let's now look to see how this is configured.
So when you select the Read File in Segments operation, just like read entire
file, you can either select here to then specify the file reference, or click No to
specify the file name and the VFS directory where the file is located. Next, you
need to indicate the number of rows or records to process in each segment, anywhere
from 200 to 2,000.
Now, since this is a loop, you have the option to process each segment in parallel,
which is the default, or checking this box will ensure that each segment executes
in sequence, which may be slower, but may also be necessary if your business use
case requires that each record gets processed in order.
And for delimited file types, just like read entire file, here you have the option
to remove the last row of data, or even multiple rows, where you specify how many
rows at the end are to be ignored. Finally, once again on the Configure
Schema Options page, you must specify the structure of the file, which is
configured exactly the same as we learned earlier with the write file and read
entire file operations. However, notice that there is a missing option, since JSON
files cannot be processed in segments.
Now that the action is configured, returning to our example use case to configure
the for each action: in this example, the file schema is a purchase order with many
items, so each segment is already a subset of those items, anywhere from 200 to
2,000 depending on your configuration. We will then loop over every item in the for
each loop, where we'll label each element as current item. Notice that when you
need to access that item, it will be visible in the map editor sources section as a
separate data object, where you can then extract any specific field value you may
need to use for downstream processing.
Finally, to wrap up this two-part lesson on stage file operations, remember that
all files that have been read in, stored, or created in the virtual file system
will automatically be deleted once the integration instance has completed. That's
it for this lesson. Thanks for watching.
Welcome. In this lesson, we'll explore how to handle the transformation of native
file formats. Whether we're working with file data formats to be read into the
memory of an integration instance, or creating a file within an integration flow to
be sent out, we need to know the file's structure.
As I mentioned in an earlier lesson, when performing data mapping, all data
structures are represented internally in an XML schema format so that we can use
XSLT constructs to handle various data assignments. So you'll need to define the
data structure when you're configuring FTP or file adapter connections for read or
write file operations, or when there are structured inbound or outbound payloads
used with SOAP or REST adapter connections.
Also, within the integration flow, the read and write operations of the stage file
action will require that you specify the file content structure. Here is an FTP
invoke configuration example. But both the file adapter and the stage file action
provide a similar configure schema wizard. For the adapters, you must first
indicate if you want to specify the structure. This is because sometimes, your use
case only requires receiving or sending the file as opaque, where you don't need to
parse the data.
But if you need to see the data in the map editor, you must define the schema. So
when it comes to data that is already defined by an XML schema, if you have that
schema, you simply select this option and upload the XSD so OIC knows how to parse
the data. Otherwise, if you have a sample of that XML document, choosing this
option allows you to use it, and OIC will generate the schema. Likewise, if it's
JSON, a sample JSON file can be used to generate the XML schema that will be
needed.
But what about native file formats? Well, if it is a delimited format, such as CSV,
fortunately, you can use the file definition wizard to generate a native XML
schema. With that option on the Configure file contents page, you just select the
CSV file from which to create the schema. Immediately, you'll see the content of
the file displayed at the bottom of the page. You now provide a record set name,
which will become the root element of the created schema file, and a record name,
which becomes the parent element in the schema for the field names selected as
column headers from the CSV file.
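So, using purely hypothetical names, a record set name of PurchaseOrders and a record
name of PurchaseOrder would give you a translated structure in the mapper shaped
roughly like this, with one record element per CSV row and one child element per
column header:

    <PurchaseOrders>                     <!-- record set name becomes the root element -->
        <PurchaseOrder>                  <!-- record name repeats once per CSV row -->
            <OrderId>1001</OrderId>      <!-- column headers become the field elements -->
            <Amount>250.00</Amount>
        </PurchaseOrder>
        <PurchaseOrder> ... </PurchaseOrder>
    </PurchaseOrders>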
Now, while the default field delimiter is, of course, a comma, depending on the
file format, you can indicate that it is instead a single space, semicolon, tab, or
pipe symbol. The character set field is used for character encoding during file
transfer. If the data sent to the adapter is in a specific encoding format, then
select that same encoding format in the adapter; otherwise, there may be some
character loss in the final written file.
Now, this optionally enclosed by value causes occurrences of the selected delimiter
inside enclosed values to be ignored during processing. For example, typically you
would indicate the double quotes or single quotes that might surround some string
values in a column. And here, you indicate how the end of each line in the file is
terminated.
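For example, with a comma delimiter and a double quote as the optionally enclosed by
character I just mentioned, a hypothetical row like this one is parsed as three
fields, because the comma inside the quoted company name is ignored:

    1001,"Acme, Inc.",250.00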
To further edit the CSV file, you click here to expand the pane into full screen
mode, which now allows you to see all the columns detected in the file along with
the row values. If the column headers are in the first row of the file, you click
this checkbox, and they'll be picked up. Otherwise, the wizard creates these
generic names of C1, C2, et cetera. And then you can manually edit those column
names to be something else.
By default, it assumes that all columns are mandatory, but checking this box will
mark them all as optional, or you can separately indicate each one to be optional
from here.
Finally, here is where you designate the data type of each column, if you need to
specify something other than the default string type. When you're done, a summary
page is provided. In this example, the configured FTP adapter connection will be
reading in a file with that expected CSV file format. Then later, when using the
map editor, you'll be able to see the retrieved file as a data source with that
translated XML schema format, making it easy to do data mapping.
This works the same way in reverse when doing a file write to create a new file to
be sent out. However, what about other native files that don't use a strict
delimited format? These use cases include flat files, fixed length files, or even
more complex file types that can't be handled inside OIC using the file definition
wizard.
To better understand the challenge, look more closely at this flat file example,
where each block includes one customer record with one or more associated account
records, each with one or more transactions. Well, the solution is for someone to
create a native XML schema file based on that sample file while applying the native
file content definitions.
This NXSD format extends the standard XSD format through the use of NXSD
attributes. And since the XSD standard allows for extra attributes with their own
namespaces, an NXSD file is still a valid XML schema, which you'll be able to use in
OIC. But more on that later.
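Just to give you a feel for what those attributes look like, here is a minimal
delimited-style sketch; the namespaces follow the usual Oracle conventions, but treat
the element names and attribute values as illustrative rather than as the actual
schema for the flat file example shown here:

    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                targetNamespace="http://example.com/customers"
                elementFormDefault="qualified"
                nxsd:version="NXSD" nxsd:stream="chars">
        <xsd:element name="Customers">
            <xsd:complexType>
                <xsd:sequence>
                    <xsd:element name="Customer" maxOccurs="unbounded">
                        <xsd:complexType>
                            <xsd:sequence>
                                <!-- each field is terminated by a delimiter; the last one ends at the end of the line -->
                                <xsd:element name="Name" type="xsd:string"
                                             nxsd:style="terminated" nxsd:terminatedBy=","/>
                                <xsd:element name="City" type="xsd:string"
                                             nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                            </xsd:sequence>
                        </xsd:complexType>
                    </xsd:element>
                </xsd:sequence>
            </xsd:complexType>
        </xsd:element>
    </xsd:schema>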
So while you could learn how to create this file manually with an XML schema
editor, a better option is to use a tool to help generate this file, which brings
me to the Native Format Builder, which is included in JDeveloper when you download
the on-premises Oracle SOA Suite installation. Now, this is free to use for
development, so you don't have to be concerned about any licensing costs.
The Native Format Builder Wizard guides you through the creation of a native schema
file from almost any content format-- delimited, fixed length, as well as more
complex files that have records with multiple delimiter types.
If you're going to use this tool, of course, you'll need to access the online
documentation to dive deeper, since there are a whole lot of different use cases.
And while it's beyond the scope of this training course to teach you how to use
this tool, I will give you a quick idea of how it works.
You start by indicating the type of file. Since you can handle delimited files in
OIC, you'll likely be choosing fixed length or complex. Then, you'll bring in a
sample file to be parsed by the wizard.
If it is a fixed length type, you'll be able to identify the position for each
field location along with the names for each field. Complex types present both
fixed length and delimited options so that you can build and design the schema to
correspond with the native data. The wizard also provides look ahead attributes
that enable you to specify regular expression patterns for filtering both fixed
length and variable length records.
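For instance, a fixed length record might be described with a fragment along these
lines, which would slot inside a record definition like the earlier sketch; the field
names and lengths here are purely hypothetical:

    <!-- each field is read as a fixed number of characters from the record -->
    <xsd:element name="CustomerId" type="xsd:string"
                 nxsd:style="fixedLength" nxsd:length="8"/>
    <xsd:element name="CustomerName" type="xsd:string"
                 nxsd:style="fixedLength" nxsd:length="30"/>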
However, there are some cases where certain native files, such as master detail
records, will require additional manual editing of the NXSD file. When you're
finished, you can leverage the built-in testing tool, providing some native files
to see how they are then converted against that schema to an XML file.
You can also use XML source files to see how they are converted back into native.
Now, remember, in addition to reading native files, some use cases will involve
creating new native file format files inside your OIC integration to be sent out.
And with enough sample files testing in both directions, you can have confidence in
your final NXSD file.
So let's get back to how this will be used in your integration flow. When you're
looking to define the schema, since this is a valid XML schema, you select the XSD
document option. And when you click Continue on the Configure file contents page,
just upload the NXSD file you created, then select the appropriate schema element,
and you're all set.
Notice, on the Summary page, it simply indicates the file content type as XSD. And
that's it. You have the same options for transforming native file formats for
either of these adapter connections, as well as for the internal stage file
action's read file or write file operations.