Administering SAP Datasphere
Users with the DW Administrator role can configure, manage, and monitor the SAP Datasphere tenant to
support the work of acquiring, preparing, and modeling data for analytics. They manage users and roles, create
spaces, and allocate storage to them. They prepare and monitor connectivity for data integration and perform
ongoing monitoring and maintenance of the tenant.
Tip
The English version of this guide is open for contributions and feedback using GitHub. This allows you
to get in contact with responsible authors of SAP Help Portal pages and the development team to
discuss documentation-related issues. To contribute to this guide, or to provide feedback, choose the
corresponding option on SAP Help Portal:
• Feedback Edit page : Contribute to a documentation page. This option opens a pull request on
GitHub.
• Feedback Create issue : Provide feedback about a documentation page. This option opens an
issue on GitHub.
More information:
• Contribution Guidelines
• Introduction Video: Open Documentation Initiative
• Blog Post: Introducing the Open Documentation Initiative
Either SAP will provision your tenant or you can create an instance in SAP BTP (see Creating and Configuring
Your SAP Datasphere Tenant [page 17]).
• We recommend that you link your tenant to an SAP Analytics Cloud tenant (see Enable the Product Switch
to Access an SAP Analytics Cloud Tenant [page 30]).
• You can enable SAP SQL data warehousing on your tenant to exchange data between your HDI containers
and your SAP Datasphere spaces without the need for data movement (see Enable SAP SQL Data
Warehousing on Your SAP Datasphere Tenant [page 31]).
An administrator creates SAP Datasphere users manually, from a *.csv file, or via an identity provider (see
Managing SAP Datasphere Users [page 63]).
You must assign one or more roles to each of your users via scoped roles and global roles (see Managing Roles
and Privileges [page 70]). You can create your own custom roles or use the following standard roles delivered
with SAP Datasphere:
Note
Users who are space administrators primarily need scoped permissions to work with spaces,
but they also need some global permissions (such as Lifecycle when transporting content
packages). To provide such users with the full set of permissions they need, they must be
assigned to a scoped role (such as the DW Scoped Space Administrator) to receive the
necessary scoped privileges, but they also need to be assigned directly to the DW Space
Administrator role (or a custom role that is based on the DW Space Administrator role) in order
to receive the additional global privileges.
• DW Integrator (template) - Can integrate data via connections and can manage and monitor data
integration in a space.
• DW Scoped Integrator - This predefined scoped role is based on the DW Integrator role and inherits
its privileges and permissions.
• DW Modeler (template) - Can create and edit objects in the Data Builder and Business Builder and view
data in objects.
• DW Scoped Modeler - This predefined scoped role is based on the DW Modeler role and inherits its
privileges and permissions.
• DW Viewer (template) - Can view objects and view data output by views that are exposed for
consumption in spaces.
• DW Scoped Viewer - This predefined scoped role is based on the DW Viewer role and inherits its
privileges and permissions.
• Additional roles provide privileges to consume the data exposed by SAP Datasphere spaces.
All data acquisition, preparation, and modeling in SAP Datasphere happens inside spaces. A space is a secure
area - space data cannot be accessed outside the space unless it is shared to another space or exposed for
consumption.
An administrator must create one or more spaces. They allocate disk and memory storage to the space, set
its priority, and can limit how much memory and how many threads its statements can consume. See Creating
Spaces and Allocating Storage [page 130].
Prepare Connectivity
Administrators prepare SAP Datasphere for creating connections to source systems in spaces (see Preparing
Connectivity for Connections [page 142]).
Administrators have access to various monitoring logs and views and can, if necessary, create database
analysis users to help troubleshoot issues (see Monitoring SAP Datasphere [page 232]).
You administer SAP Datasphere using apps and tools in the side navigation area.
(Space Management)
In the Space Management, you can set up, configure, and monitor your spaces, including assigning users to
them. For more information, see Preparing Your Space and Integrating Data.
(System Monitor)
In the System Monitor, you can monitor the performance of your system and identify storage, task, out-of-
memory, and other issues. For more information, see Monitoring SAP Datasphere [page 232].
Security
• Users - Create, modify, and manage users in SAP Datasphere. See Managing SAP Datasphere Users [page 63].
• Roles - Assign pre-defined standard roles or custom roles that you have created to users. See Managing Roles and Privileges [page 70].
• Activities - Track the activities that users perform on objects such as spaces, tables, views, data flows, and others, track changes to users and roles, and more. See Monitor Object Changes with Activities [page 252].
Data Integration
• Live Data Connections (Tunnel) - For SAP BW∕4HANA and SAP S/4HANA model import, you need Cloud Connector. This requires a live data connection of type tunnel. See Create Live Data Connection of Type Tunnel [page 191] (SAP BW∕4HANA) and Create SAP S/4HANA Live Data Connection of Type Tunnel [page 205] (SAP S/4HANA).
• On-Premise Agents - Manage Data Provisioning Agents, which are required to act as gateways to SAP Datasphere and enable connections to on-premise sources for remote tables and building views. See Connect and Configure the Data Provisioning Agent [page 150], Register Adapters with SAP Datasphere [page 153], and Monitoring Data Provisioning Agent in SAP Datasphere [page 207].
• Third-Party Drivers - Upload driver files that are required for certain third-party cloud connections to use them for data flows. See Upload Third-Party ODBC Drivers (Required for Data Flows) [page 167].
Tenant Links
• Link My Tenants - Link an SAP Analytics Cloud tenant to your SAP Datasphere tenant to enable the product switch in the top right of the shell bar and easily navigate between them. See Enable the Product Switch to Access an SAP Analytics Cloud Tenant [page 30].
IP Allowlist
• Trusted IPs - Control the range of external public IPv4 addresses that get access to the database of your SAP Datasphere tenant by adding them to an allowlist. See Add IP address to IP Allowlist [page 163].
Tasks
• Clean up task logs to reduce storage consumption in your SAP Datasphere tenant. Also view a list of users whose authorization consent will expire within a given timeframe (by default, four weeks). See Deleting Task Logs to Reduce Storage Consumption and Check Consent Expirations [page 256].
Database Access
• Database Analysis Users - Create a database analysis user to connect to your SAP HANA Cloud database to analyze, diagnose, and solve database issues. Only create this user for a specific task and delete it right after the task has been completed. See Monitoring SAP Datasphere [page 232].
• Database User Groups - Create an isolated environment with corresponding administrators where you can work more freely with SQL in your SAP HANA Cloud database. See Creating a Database User Group [page 224].
Tenant Configuration
• Allocate the capacity units to storage and compute resources for your tenant. See Configure the Size of Your SAP Datasphere Tenant [page 23].
Unified Customer Landscape
• Prepare a connection generated from an SAP BTP Unified Customer Landscape formation to allow its use in selected spaces. See Defining Allowed Spaces for Unified Customer Landscape Connections [page 171].
System Information
• Add a visual tenant type indicator to show all users which system they are using, for example a test or production system. See Display Your System Information [page 48].
System Configuration
• Session timeout - Set the amount of time before a user session expires if the user doesn't interact with the system. By default, the session timeout is set to 3600 seconds (1 hour). The minimum value is 300 seconds, and the maximum value is 43200 seconds.
• Allow SAP support user creation - Let SAP create support users based on incidents. See Request Help from SAP Support [page 15].
Data Source Configuration
• SAP Cloud Platform (SAP CP) Account - Get subaccount information for SAP Datasphere. You need the information to configure the Cloud Connector that SAP Datasphere uses to connect to sources for data flows and model import. See Set Up Cloud Connector in SAP Datasphere [page 161].
Security
• Authentication Method - Select the authentication method used by SAP Datasphere. See Enabling a Custom SAML Identity Provider [page 54].
App Integration
• OAuth Clients - Use the Open Authorization (OAuth) protocol to allow third-party applications access. See Create OAuth2.0 Clients to Authenticate Against SAP Datasphere [page 33].
Notifications
• Make sure that users are notified appropriately about issues in the tenant. See Configure Notifications [page 255].
System About
Every user can view information about the software components and versions of your system.
Users with the DW Administrator role can open a More section to find more details. They can find outbound and
database IP addresses that might be required for allowlists in source systems or databases of SAP Datasphere
for example (see Finding SAP Datasphere IP addresses [page 164]). Administrators can also upgrade their SAP
HANA database patch version. For details, see Apply a Patch Upgrade to Your SAP HANA Database [page 49].
SAP Datasphere is a fully web-based offering. You will need an internet connection and a system that meets
certain requirements.
Desktop browser
• Google Chrome, latest version - Google releases continuous updates to their Chrome browser. We make every effort to fully test and support the latest versions as they are released. However, if defects are introduced with OEM-specific browser software, we cannot guarantee fixes in all cases.
• Microsoft Edge based on the Chromium engine, latest version - Microsoft makes continuous updates to their Chromium-based Edge browser available for download. We make every effort to fully test and support the latest versions as they are released.
Network bandwidth
• Minimum 500-800 kbit/s per user - In general, SAP Datasphere requires no more bandwidth than is required to browse the internet. All application modules are designed for speed and responsiveness with minimal use of large graphic files.
JavaScript
• Enable
Power Option Recommendation
• High Performance mode for improved JavaScript performance (for Microsoft-based operating systems)
Languages
• Menus, buttons, messages, and other elements of the user interface are available in: Bulgarian (bgBG); Catalan (caES); Chinese (zhTW); Chinese (Simplified) (zhCN); Croatian (hrHR); Czech (csCZ); Danish (daDK); Dutch (nlNL); English (enGB); English (enUS); Estonian (etEE); French (frCA); French (frFR); Finnish (fiFI); German (deDE); German (deCH); Greek (elGR); Hindi (hiIN); Hungarian (huHU); Indonesian (idID); Italian (itIT); Japanese (jaJP); Korean (koKR); Latvian (lvLV); Lithuanian (ltLT); Malay (msMY); Norwegian (noNO); Polish (plPL); Portuguese (Brazil) (ptBR); Portuguese (Portugal) (ptPT); Romanian (roRO); Russian (ruRU); Serbian (srRS); Slovakian (skSK); Slovenian (slSL); Spanish (esES); Spanish (esMX); Swedish (svSE); Thai (thTH); Turkish (trTR); Ukrainian (ukUA); Vietnamese (viVN); and Welsh (cyGB).
Data Connectivity
For more information, including information on minimum requirements for source systems and databases, see:
You can request help from SAP Product Support by creating a support incident. In many cases, a support user
is required to allow an SAP support engineer to log into and troubleshoot your system.
You can create an SAP support incident on the SAP Support Portal (S-user login required). For detailed
information about what to include in an incident, see SAP Note 2854764 .
An administrator can make sure that a support user is created in your tenant. Two options are available:
• An administrator generally allows SAP Product Support to create support users based on incidents.
Proceed as follows:
1. In the side navigation area, click (System) (Administration) System Configuration .
If your tenant was provisioned prior to version 2021.03, click (Product Switch)
Analytics System Administration System Configuration .
2. Choose Edit.
3. Set the Allow SAP support user creation setting to ON.
4. Click Save.
In case of an incident, the assigned support engineer from SAP Product Support can request and
generate a personalized support user for the affected tenant. This user is enabled for multi-factor
authentication.
Support engineers can request the support user with one of the following roles:
• the global extended role DW Support User along with the scoped role DW Scoped Support User
DW Support User gives support users read-only access privileges to all functionalities of SAP
Datasphere, enabling them to analyze the incident.
When support engineers request the DW Scoped Support User role, they can specify the spaces
that need to be added as scopes to this role. This gives the support user read-only access to these
spaces.
• the global DW Administrator role, if the customer confirms this in the incident
The support user does not consume a user license, and it will be automatically deleted after two days
or after the incident has been closed.
• An administrator creates the support user.
Before creating an incident with SAP, proceed as follows:
1. In the shell bar, click (Support).
2. In the Support dialog, click Create Support User and then choose OK to confirm the support user
creation.
An email is automatically sent to SAP Support to notify them of the newly created support user, and it
is listed with your other users at Security Users .
The support user has minimum privileges and does not consume a user license.
You can assign an appropriate role to the support user (DW Administrator role) and add it to the
required space.
3. Delete the support user when your issue is resolved.
For more information about creating a support user, see SAP Note 2891554 .
You can create your own tenant in the SAP BTP Cockpit. The procedure is the same for both subscription-
based and consumption-based contracts. Some details may vary depending on the chosen service plan (free
or standard). For more information about limitations for a free plan, see SAP Note 3227267.
When the tenant is configured, a data center region is selected. The main role of a data center is to guarantee
the uninterrupted operation of computer systems. It also provides secure storage, processing, and networking
capabilities for your data. A data center refers to the physical location, which could be a building or a group of
buildings, housing computer systems and their components.
Each data center has multiple availability zones. Your workloads are deployed in these various zones. By
distributing workloads across different zones, we ensure our services remain available, even if a specific zone
experiences issues. By keeping backup data within the same data center, the latency for data transfers and
access is minimized. This infrastructure strategy balances the workload and enhances performance. The zone
deployment contributes to a more robust and reliable infrastructure, ensuring near-zero downtime for your
critical processing needs.
Provisioning (standard and free plans)
• For information about region availability, see the SAP Discovery Center.
• The SAP BTP subaccount administrator must trigger the SAP Datasphere instance creation. Tenant creation will not be triggered by SAP.
• You must create and configure the to-be-provisioned SAP Datasphere service instance (tenant) in SAP BTP. See Create Your SAP Datasphere Service Instance in SAP BTP [page 21].
• The system owner of SAP Datasphere, who has been specified during the provisioning, is notified via email when the tenant is provisioned.
Size Configuration
• Standard plan: Tenants are initially created with a minimal configuration that includes 128 GB of storage and 32 GB of memory (2 compute blocks). Once logged in to your tenant, upscaling can be done at any time. See Configure the Size of Your SAP Datasphere Tenant [page 23]. For maximum size configuration options, see the tables below.
• Free plan: Tenants are created with 128 GB of storage and 32 GB of memory (2 compute blocks). You cannot upscale free plan tenants. You need to update your plan from free to standard if any sizing configuration is required.
Note
After finalizing the configuration, you can only change the size of your SAP BW Bridge storage later if you don’t have any SAP BW Bridge instances.
The maximum configuration size of your tenant depends on regional availability and your server type. Each entry below lists, per region: Memory | Storage | BW Bridge | Data Lake | Data Integration | Catalog | Catalog Storage | vCPU (performance class).
• Australia: 5970 GB | 16000 GB | 4096 GB | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 440 (Memory Performance Class)
• Brazil (São Paulo): 5970 GB | 16000 GB | 4096 GB | Data lake not supported | 7200 h/month | 20.5 GB/h | 2100 h/month | 440 (Memory Performance Class)
• Canada (Montreal): 5970 GB | 16000 GB | 4096 GB | Data lake not supported | 7200 h/month | 20.5 GB/h | 2100 h/month | 440 (Memory Performance Class)
• Europe (Frankfurt): 5970 GB | 16000 GB | 4096 GB | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 440 (Memory Performance Class)
• EU Access (Frankfurt): 5970 GB | 16000 GB | 4096 GB | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 440 (Memory Performance Class)
• Japan (Tokyo): 5970 GB | 16000 GB | 4096 GB | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 440 (Memory Performance Class)
• Singapore: 5970 GB | 16000 GB | 4096 GB | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 440 (Memory Performance Class)
• South Korea: 5970 GB | 16000 GB | 4096 GB | Data lake not supported | 7200 h/month | 20.5 GB/h | 2100 h/month | 440 (Memory Performance Class)
• US East: 5970 GB | 16000 GB | 4096 GB | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 440 (Memory Performance Class)
Microsoft Azure
• Europe (Amsterdam): 5600 GB | 27840 GB | Supported | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 412 (Memory Performance Class)
• Europe (Switzerland): 5600 GB | 27840 GB | Supported | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 412 (Memory Performance Class)
• US West: 5600 GB | 27840 GB | Supported | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 412 (Memory Performance Class)
• Europe (Frankfurt): 5750 GB | 28928 GB | Supported | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 204 (High Memory Performance Class)
• India (Mumbai): 5750 GB | 28928 GB | Supported | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 204 (High Memory Performance Class)
• US Central: 5750 GB | 28928 GB | Supported | 90 TB | 7200 h/month | 20.5 GB/h | 2100 h/month | 204 (High Memory Performance Class)
Create your SAP Datasphere service instance in SAP Business Technology Platform.
Note
Creating an SAP Datasphere service instance in SAP Business Technology Platform (SAP BTP) results in
provisioning an SAP Datasphere tenant.
For both subscription-based contracts (initiated as of November 2023) and consumption-based contracts, you
can access the SAP BTP cockpit and view all currently available services in a global account. You need to
structure this global account into subaccounts and other related artifacts, such as directories and/or spaces.
Prerequisites
To create your SAP Datasphere service instance in SAP BTP, you need the following prerequisites:
• Your global account has a commercial entitlement either via cloud credits (in case of a consumption-based
contract) or via a subscription-based contract.
• A Cloud Foundry subaccount which is entitled for SAP Datasphere. For more information, see Configure
Entitlements and Quotas for Subaccounts.
• You have SAP BTP administration authorization on the subaccount that is entitled to SAP Datasphere.
• You are using Google Chrome to properly view popups in SAP BTP.
Service Plans
Standard The standard plan provides an SAP Datasphere tenant for productive and non-productive use,
which is represented by a service instance.
Free The free plan provides an SAP Datasphere tenant for a limited time for trial use, which is
represented by a service instance.
Note
For information about region availability, see the SAP Discovery Center .
Create a Tenant
The following procedure uses the SAP BTP cockpit to create the service instance.
Note
You can create only one free tenant under the global account. If your SAP BTP service causes issues, you
can open an incident ticket via ServiceNow.
1. In the SAP BTP cockpit, navigate to the space in which you want to create the service instance, and click
Services Service Marketplace in the left navigation area.
For more information, see Navigate to Orgs and Spaces.
2. Search for "Datasphere", and click the SAP Datasphere service to open it.
3. Click Create in the top-right corner.
A wizard opens, in which you can select or specify the following parameters:
Parameter: Space
Description: [No selection needed if you're creating the instance from the space area.] Select the SAP BTP space in which you want to create the service instance.
Note
Not all runtime environments are available for free.
4. Click Next and enter the following information about the SAP Datasphere system owner, who will be
notified when the service instance is created: First Name, Last Name, Email, and Host Name.
Note
Alternatively, you can use a JSON file to provide the information above.
5. Click Next to go to the final page of the wizard where you can review your selections, and then click Create
to exit the wizard.
An information message is displayed to confirm that the service instance creation is in progress.
6. Click View Instance to go to your space Service Instances page, where the new instance is listed and you
can view the progress of its creation.
7. When the service instance is created, the SAP Datasphere system owner receives an email confirming
its availability, and providing a link to navigate to the SAP Datasphere tenant, which the service instance
represents.
If the creation of the service instance fails (the "failed" status is displayed), you must first delete the
failed instance and then create a new SAP Datasphere service instance. If you need support, you can
open an incident via ServiceNow with the component DS-PROV.
Configure the size of your tenant by specifying resource sizes based on your business needs. Capacity Units
(CU) are allocated to obtain storage and compute resources for your tenant.
You can configure the size of a subscription-based tenant and a consumption-based tenant with a standard
plan.
In the Tenant Configuration page of the Configuration area, you can increase the sizes for the various resources,
within the permitted size combinations, to obtain a configuration that fits your exact needs, and then click
Save.
Caution
Once you save the size configuration of your tenant, be aware that some resources cannot be resized later.
Storage cannot be downsized. If you require a storage downsize, you must recreate the tenant. Exception: If
you need to decrease the memory, see SAP note 3224686 .
• The whole process may take more than 90 minutes. The configuration process is not long, but the
operational process in the background can take a while.
• In case an error occurs, you are notified that the configuration cannot be completed and that you need
to try again later by clicking the Retry button (which replaces the Save button in such a case). The delay
depends on the error (for example, if there is an error on the SAP HANA Cloud database side, you may
need to retry after 60 minutes).
• You can only make changes to SAP HANA Compute and SAP HANA Storage once every 24 hours.
• If you try to change your SAP HANA configuration, SAP HANA Cloud functionalities (Spaces, DPServer,
Serving of Queries) will not be available for around 10 minutes. If you run into issues after the
configuration, use the Retry button.
To view all supported size combinations for compute and storage resources and the number of capacity units
consumed, go to the SAP Datasphere Capacity Unit Estimator.
Base Configuration
Property Description
Performance Class Select a performance class:
• Memory
• Compute
• High-Memory
• High-Compute
Note
The performance class you select determines the number of vCPUs allocated to
your tenant. For a detailed list, see the vCPU Allocation table below.
Memory You can reduce the amount of memory, but the lower limit depends on how much space
you have assigned to space management.
vCPU Displays the number of vCPUs allocated to your tenant. The number is calculated based
on the selected performance class and the memory used by your tenant.
Enable the SAP HANA Cloud Script Server Enable this to access the SAP HANA Automated Predictive Library (APL) and SAP
HANA Predictive Analysis Library (PAL) machine learning libraries.
Data Lake Storage [optional] Select the size of data lake disk storage.
To reduce the size of your data lake storage, you must first delete your data lake instance
and re-create it in the size that you want.
Note
Deletion cannot be reversed and all data stored in the data lake instance will be
deleted.
You cannot delete your data lake storage if it's connected to a space. You must first
disconnect the space:
Data lake is not available in all regions. See SAP note 3144215 .
SAP BW Bridge Storage [optional] Select the size of SAP BW bridge using the dropdown menu, starting from 0
GB (minimum).
SAP BW Bridge includes SAP BTP, ABAP environment, runtime and compute.
Caution
You can only change the SAP BW Bridge storage allocation, if no SAP BW Bridge
instances are created.
The process for allocating capacity units to SAP BW Bridge is not part of the configuration process.
Note
• First finalize the size configuration of your tenant, then open the incident as a
next step. Once the incident has been processed, you can create the SAP BW
bridge instance in the dedicated page SAP BW Bridge of the Configuration area
with the size you’ve allocated (see Provisioning the SAP BW Bridge Tenant).
• SAP BW Bridge is not available in all regions. See SAP note 3144215 .
• When using a test tenant with the minimal configuration (128 GB of storage
and 1 compute block), you cannot add SAP BW Bridge storage unless you add
more compute blocks.
• As soon as you click Save, the allocated capacity units will be assigned to SAP
BW Bridge.
Performance Class [optional] Select a performance class for your elastic compute node block-hours:
• Memory
• Compute
• High-Compute
Note
The performance class you select determines the number of vCPUs and the RAM
allocated to your tenant.
Note
If you try to modify the settings and discover that elastic compute node functionality
has not been enabled on your SAP HANA Cloud database, follow the steps
described in SAP Note 3432666 . This functionality is not enabled by default on
older tenants.
You can only use one performance class at a time. To use a different performance class,
you must re-configure your Tenant Configuration settings.
Block Specifications Displays the number of vCPUs and the amount of RAM allocated to your tenant.
Block-Hours [optional] Set the number of block-hours scheduled for elastic compute node
consumption. Each block-hour is an additional block of vCPU and RAM for your tenant to
use in one hour. The maximum number of block-hours you can consume in one hour is
four.
Elastic Compute Node Usage: Allocated Block-Hours Displays the number of blocks currently scheduled for elastic compute node consumption.
Elastic Compute Node Usage: Used Block-Hours Displays the total number of blocks consumed by elastic compute nodes. The total is
independent of which performance class is selected.
Elastic Compute Node Usage: Exceeded Block-Hours Displays the block-hours you have used that exceed the amount allocated by your
tenant configuration.
Note
This only appears if you have used more block-hours than allocated.
Data Integration [optional] Select the number of compute blocks to allocate to Data Integration applications,
which provide enhanced data integration capabilities. This comprises the replication
flow, which enables data integration with delta processing and is the recommended
integration solution for all supported source and target systems.
You can increase the number of blocks to assign a maximum of 7200 node hours. You
can decrease the number of blocks until you reach the minimum number of node hours
included in your plan.
Note
Data flows are not part of this commercial package.
Execution Hours The number of node hours available for Data Integration applications per month is
calculated from the number of allocated compute blocks.
Every month you're entitled to run up to 200 hours of jobs. Using more than 200
hours has no impact on other jobs. The consumption of data integration is not limited, to
avoid interrupting critical integration scenarios, but you will be billed for the overage
when it happens.
Maximum Parallel Jobs The maximum number of parallel jobs is calculated from the number of execution hours
assigned. For every 100 execution hours, you are given one extra parallel job, up to a
maximum of 10.
Each parallel job means that roughly 5 datasets, from one or several replication flows,
can be run in parallel. If more replication flows are running, processing will be queued
and replication will occur less frequently.
Data Integration: Allocated Execution Hours Displays the number of hours allocated to Data Integration applications.
Data Integration: Used Execution Hours Displays the number of hours used by Data Integration applications.
Data Integration: Exceeded Execution Hours Displays the execution hours you have used that exceed the amount allocated by your
tenant configuration.
Note
This only appears if you have used more hours than allocated.
Outbound Blocks You can increase or decrease the number of storage blocks allocated for premium
outbound integration. Each block provides 20 GB of storage.
Outbound Volume The outbound volume is calculated from the number of allocated blocks.
Premium Outbound Usage: Allocated Data Volume Displays the number of GB allocated to premium outbound integration.
Premium Outbound Usage: Used Data Volume Displays the number of GB used by premium outbound integration.
Premium Outbound Usage: Exceeded Data Volume Displays the data volume you have used that exceeds the amount allocated by your
tenant configuration.
Note
This only appears if you have used more data than allocated.
Catalog
Property Description
Catalog Storage Included by default. You can increase or decrease the number of storage blocks
allocated for the catalog.
Storage The amount of storage available for the catalog is calculated from the number of
allocated blocks.
Catalog Usage: Allocated Storage Displays the number of GB allocated to the catalog.
Catalog Usage: Used Storage Displays the number of GB used by the catalog.
Catalog Usage: Exceeded Storage Displays the amount of storage you have used that exceeds the amount allocated by
your tenant configuration.
Note
This only appears if you have used more storage space than allocated.
Capacity Units
Property Description
Units in Use per Month Displays the estimated number of capacity units consumed per month by the storage
and compute resources you've specified.
Units in Use per Hour Displays the estimated number of capacity units consumed per hour by the storage and
compute resources you've specified.
vCPU Allocation
When you set your base configuration, the performance class you select and the hyperscaler you are using
determine the ratio of memory to vCPUs allocated to your tenant. The list below gives the memory-to-vCPU
ratio (GB of memory per vCPU) for each hyperscaler and memory range:
• AWS, 32-960 GB: 16
• AWS, 1024-1792 GB: 16
• AWS, 1800 GB: 15
• GCP, 32-960 GB: 16
• GCP, 1024-1344 GB: 16
• Azure, 32-1024 GB: 16
• Azure, 1088-1920 GB: 16
• Azure, 3744 GB: 16
• AWS, 3600 GB: 30
• AWS, 32-912 GB: 8
• GCP, 32-608 GB: 8
• Azure, 32-480 GB: 8
• AWS, 32-352 GB: 4
• GCP, 32-288 GB: 4
• Azure, 32-352 GB: 4
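As a rough illustration of how the ratio is applied, the following sketch divides the configured memory by the ratio from the list above to estimate the vCPU count. The rounding behavior and the example values are assumptions for illustration only; the list above and the Tenant Configuration page remain the authoritative sources.

# Illustrative sketch only: estimate vCPUs from configured memory and the memory-to-vCPU ratio.
# The use of integer division (rounding down) is an assumption, not documented behavior.
def estimated_vcpus(memory_gb: int, ratio_gb_per_vcpu: int) -> int:
    return memory_gb // ratio_gb_per_vcpu

print(estimated_vcpus(512, 16))   # e.g. AWS at 512 GB with a ratio of 16 -> 32 vCPUs
print(estimated_vcpus(3600, 30))  # e.g. AWS at 3600 GB with a ratio of 30 -> 120 vCPUs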
In SAP Business Technology Platform (SAP BTP), if you have an SAP Datasphere service instance with a free
plan, which you can use for 90 days, you can update it to a standard plan (no time limitation) for productive
purposes. The number of days before the expiration is displayed in the top panel of SAP Datasphere.
If you do not update to a standard plan within 90 days, your SAP Datasphere tenant will be suspended.
While the tenant is suspended, you can still upgrade your service instance from the free to standard plan,
but after 5 days of suspension, your tenant will be deleted and there is no way to recover it.
If your tenant is deleted, the service instance will still be shown in your Global Account, but it is not
functional. You can delete it and create a new SAP Datasphere service instance with a free plan.
To do so, you must have SAP BTP administration authorization on the subaccount that is entitled to SAP
Datasphere.
1. In SAP BTP, select the subaccount and the space where the service instance with a free plan was created.
2. Navigate to Instances and Subscriptions.
3. In the Service Instances page, find the SAP Datasphere service instance with the free plan, click the button
at the end of the row and select Update.
Note
After updating your free plan to standard plan, you must wait at least 24 hours before changing the
tenant settings on the Tenant Configuration page.
4. In the Update Instance dialog, select standard and click Update Instance.
You can view the progress of the update. The status of the instance becomes green when the update is
completed.
Note
The update process takes around 30 minutes, and during this time some features might not work as
expected.
You can link your SAP Datasphere tenant to an SAP Analytics Cloud tenant and enable the product switch in the
top right of the shell bar, to help your users easily navigate between them.
Procedure
An SAP Analytics Cloud user must create a live connection before they can consume data from
SAP Datasphere (see Live Data Connections to SAP Datasphere in the SAP Analytics Cloud
documentation).
Multiple SAP Analytics Cloud tenants can create live connections to your SAP Datasphere tenant, but
only one SAP Analytics Cloud tenant can be accessed via the product switch.
For more information about consuming data in this way, see Consume Data in SAP Analytics Cloud via
a Live Connection.
Use SAP SQL Data Warehousing to build calculation views and other SAP HANA Cloud HDI objects directly
in your SAP Datasphere run-time database and then exchange data between your HDI containers and your
SAP Datasphere spaces. SAP SQL Data Warehousing can be used to bring existing HDI objects into your SAP
Datasphere environment, and to allow users familiar with the HDI tools to leverage advanced SAP HANA Cloud
features.
Context
To enable SAP SQL Data Warehousing on your SAP Datasphere tenant, an S-user must create an SAP ticket to
connect your SAP BTP account.
Note
The SAP Datasphere tenant and SAP Business Technology Platform organization and space must be in the
same data center (for example, eu10, us10). This feature is not available for Free Tier plan tenants (see SAP
Note 3227267 ).
For information about working with SAP Datasphere and HDI containers, see Exchanging Data with SAP SQL
Data Warehousing HDI Containers.
Procedure
1. In the side navigation area, click (Space Management), locate your space tile, and click Edit to open it.
2. In the HDI Containers section, click Enable Access and then click Open Ticket to create an SAP ticket for the
DWC-SM component, requesting that SAP map your SAP Datasphere tenant to your SAP Business Technology
Platform account.
Item Description
SAP Datasphere Tenant ID In the side navigation area, click (System) (About).
Note
You need the Tenant ID for the ticket, and the Database ID when building
your containers in the SAP Datasphere run-time database.
SAP Business Technology Platform Your SAP Business Technology Platform organization ID.
Org GUID
You can use the Cloud Foundry CLI to find your organization GUID:
See https://cli.cloudfoundry.org/en-US/v6/org.html .
SAP Business Technology Platform The SAP Business Technology Platform space inside the organization.
Space GUID
You can use the Cloud Foundry CLI to find your space GUID:
See https://cli.cloudfoundry.org/en-US/v6/space.html .
4. Build one or more new HDI containers in the SAP Datasphere run-time database (identified by the
Database ID on the SAP Datasphere About dialog).
For information about setting up your build, see Set Up an HDI Container .
5. When one or more containers are available in the run-time database, the Enable Access button is replaced
by the + button in the HDI Containers section for all your SAP Datasphere spaces (see Add an HDI
Container and Access its Objects in Your Space).
2.6 Enable the SAP HANA Cloud Script Server on Your SAP
Datasphere Tenant
You can enable the SAP HANA Cloud script server on your SAP Datasphere tenant to access the SAP
HANA Automated Predictive Library (APL) and SAP HANA Predictive Analysis Library (PAL) machine learning
libraries.
To enable the SAP HANA Cloud script server, go to the Tenant Configuration page and select the checkbox in
the Base Configuration section. For more information, see Configure the Size of Your SAP Datasphere Tenant
[page 23].
Note
The script server cannot be enabled in an SAP Datasphere consumption-based tenant with a free plan.
For detailed information about using the machine learning libraries, see:
Users with the DW Administrator role can create OAuth2.0 clients and provide the client parameters to users
who need to connect clients, tools, or apps to SAP Datasphere.
Context
Note
Consuming exposed data in third-party clients, tools, and apps via an OData service requires a three-
legged OAuth2.0 flow with type authorization_code.
Procedure
Property Description
Authorization Grant Depending on the purpose selected, the following authorization grants are available:
• For the purpose Interactive Usage, only Authorization Code is selected and cannot be
changed.
• For the purpose API Access, select one of these options:
• Client Credentials - Select this option when the client application is accessing its
own resources or when the permission to access resources has been granted by the
resource owner via another mechanism. To use the SCIM 2.0 API, select this option
(see Create Users and Assign Them to Roles via the SCIM 2.0 API [page 83]).
• SAML2.0 Bearer - Select this option when the user context is passed using SAML
or to control access based on user permissions using SAML. This option requires
specific client-side infrastructure to support SAML.
Secret [read-only] Allows the secret to be copied immediately after the client is created.
Note
Once you close the dialog, the secret is no longer available.
Note
Clients created before v2024.08 have a Show Secret button, which allows you to display
and copy the secret at any time after the client is created.
Redirect URI Enter a URI to indicate to where the user will be redirected after authorization. If the URI has
dynamic parameters, use a wildcard pattern (for example, https://redirect_host/
**).
The client, tool, or app that you want to connect is responsible for providing the redirect URI:
• When working with the datasphere command line interface (see Accessing SAP Data-
sphere via the Command Line), set this value to http://localhost:8080/.
• When connecting SAP Analytics Cloud to SAP Datasphere via an OData services connec-
tion (see Consume SAP Datasphere Data in SAP Analytics Cloud via an OData Service),
use the Redirect URl provided in the SAP Analytics Cloud connection dialog.
Note
Redirect URI is not available if you've selected API Access as the purpose.
Token Lifetime Enter a lifetime for the access token from a minimum of 60 seconds to a maximum of one day.
Default: 60 minutes
Refresh Token Lifetime Enter a lifetime for the refresh token from a minimum of 60 seconds to a maximum of 180
days.
Default: 30 days
4. Click Add to create the client and generate the ID and secret.
5. Copy the secret, save it securely, and then close the dialog.
Note
You won't be able to copy the secret again. If you lose it, you will need to create a new client.
6. Provide the following information to users who will use the client:
• For clients using the Authorization Code grant: Client ID, Secret, Authorization URL, and Token URL. Users
must manually authenticate against the IdP in order to generate the authorization code before continuing with
the remaining OAuth2.0 steps.
• For clients using the SAML2.0 Bearer grant: Client ID, Secret, OAuth2SAML Token URL, and OAuth2SAML
Audience. Users authenticate with their third-party app, which has a trusted relationship with the IdP, and do
not need to re-authenticate (see Add a Trusted Identity Provider [page 35]).
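For clients using the Authorization Code grant, the final step of the flow is exchanging the authorization code for tokens at the Token URL. The following is a generic OAuth 2.0 sketch, not SAP-provided sample code; <TokenURL>, <ClientID>, <Secret>, <RedirectURI>, and <AuthorizationCode> are placeholders for the values listed above and the code returned by the interactive login.

# Generic OAuth 2.0 authorization-code exchange (illustrative sketch; placeholders in angle brackets).
import requests

response = requests.post(
    "<TokenURL>",
    data={
        "grant_type": "authorization_code",
        "code": "<AuthorizationCode>",
        "redirect_uri": "<RedirectURI>",
    },
    auth=("<ClientID>", "<Secret>"),  # HTTP Basic authentication with the client ID and secret
    timeout=30,
)
response.raise_for_status()
tokens = response.json()
access_token = tokens["access_token"]
refresh_token = tokens.get("refresh_token")  # refresh before the Token Lifetime expires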
If you use the OAuth 2.0 SAML Bearer Assertion workflow, you must add a trusted identity provider to SAP
Datasphere.
Context
The OAuth 2.0 SAML Bearer Assertion workflow allows third-party applications to access protected resources
without prompting users to log into SAP Datasphere when there is an existing SAML assertion from the
third-party application identity provider.
Note
Both SAP Datasphere and the third-party application must be configured with the same identity provider.
The identity provider must have a user attribute Groups set to the static value sac. See also the blog
Integrating with SAP Datasphere Consumption APIs using SAML Bearer Assertion (published March
2024).
Property Description
Name Enter a unique name, which will appear in the list of trusted identity providers.
Provider Name Enter a unique name for the provider. This name can contain only alphabetic characters (a-z &
A-Z), numbers (0-9), underscore (_), dot (.), and hyphen (-), and cannot exceed 36 characters.
Signing Certificate Enter the signing certificate information for the third-party application server in X.509 Base64
encoded format.
4. Click Add.
The identity provider is added to the list. Hover over it and select Edit to update it or Delete to delete it.
You may need to use the Authorization URL and Token URL listed here to complete setup on your OAuth
clients.
To do so, you must have SAP BTP administration authorization on the subaccount that is entitled to SAP
Datasphere.
Note
If you delete your service instance by accident, it can be recovered within seven days. After seven days have
passed, the tenant and all its data will be deleted and cannot be recovered.
1. In SAP BTP, select the subaccount and the space where the service instance was created.
2. Navigate to Instances and Subscriptions.
3. In the Service Instances page, find the SAP Datasphere service instance that you want to delete, click the
button at the end of the row and select Delete, then click Delete in the confirmation dialog.
You can view the progress of the deletion.
Context
If you accidentally delete your SAP Datasphere service instance in SAP BTP, you can restore it within seven
days. For more information, see SAP Note 3455188 .
Note
Restoring your service instance is only supported for standard service plans.
Procedure
1. Create a customer incident through ServiceNow using the component DS-PROV. Set the priority to High,
and ask SAP support to restore the impacted SAP Datasphere tenant. You must provide the tenant URL.
Once completed, SAP Support will inform you that the impacted tenant has been restored and
unlocked successfully.
2. Get the OAuth Client ID and OAuth Client Secret:
a. Log on to the impacted SAP Datasphere tenant.
b. From the side navigation, choose System Administration .
c. Choose the App Integration tab.
d. Select Add a New OAuth Client.
e. From the Purpose list, select API Access.
f. Choose at least one API option from the Access list.
g. Set Authorization Grant to Client Credentials.
h. Select Save.
i. Copy and save the OAuth Client ID and OAuth Client Secret for Step 3.
3. Fetch your access token via an HTTP POST request to the OAuth Client Token URL, as sketched after this step.
The Token URL is displayed on the App Integration tab, above the list of Configured Clients.
Replace <TokenURL> with your OAuth Client Token URL. Replace <OAuthClientSecret> with the
OAuth Client Secret. The secret must be Base64 encoded.
b. Save the access token returned by the POST request.
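The POST request itself is not reproduced above; the following is a hedged sketch of such a request using the client-credentials grant with the requests library. <TokenURL>, <OAuthClientID>, and <OAuthClientSecret> are the values gathered in Step 2; the exact headers and body expected by the endpoint may differ slightly, so treat this as an illustration rather than the official command.

# Hedged sketch of Step 3: fetch an access token with the client-credentials grant.
# Placeholders in angle brackets come from Step 2; verify the endpoint's exact requirements.
import base64
import requests

token_url = "<TokenURL>"
credentials = "<OAuthClientID>:<OAuthClientSecret>"
basic = base64.b64encode(credentials.encode()).decode()  # Base64-encode "id:secret" for Basic auth

response = requests.post(
    token_url,
    headers={"Authorization": f"Basic {basic}"},
    data={"grant_type": "client_credentials"},
    timeout=30,
)
response.raise_for_status()
access_token = response.json()["access_token"]  # save this token for the later steps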
4. Get the UUID for your tenant.
a. Log on to the impacted tenant.
b. Go to System About .
{
"tenantUuid": "<TenantUUID>",
"access_token": "<AccessToken>"
}
Replace <TenantUUID> with the ID you retrieved in Step 4c. Replace <AccessToken> with the token
you fetched in Step 3b.
l. Click Next.
m. In the review dialog click Create.
Results
A new service instance is created and linked to the SAP Datasphere tenant that was accidentally deleted. All
tenant data is restored.
If certain views and SAP HANA multi-dimensional services (MDS) requests regularly require more resources
than are available on your SAP Datasphere tenant, you can now purchase additional on-demand compute and
processing memory. You can then create elastic compute nodes, allocate the additional resources to them and
schedule them to spin up to handle read peak loads.
The elastic compute nodes will take over the read peak loads and support the SAP HANA Cloud database.
Note
Users of SAP Datasphere can consume data via elastic compute nodes only in SAP Analytics Cloud (via a
live connection) and Microsoft Excel (via an SAP add-in).
To identify peak loads, you can look at the following areas in the System Monitor: out-of-memory widgets
in the Dashboard tab, key figures in Logs Statements , views used in MDS statements in Logs
Statements . See Monitoring SAP Datasphere [page 232].
You can purchase a number of compute blocks to allocate to elastic compute nodes.
Depending on the resources allocated to your tenant in the Tenant Configuration page, the administrator
decides how many compute blocks they will allocate to elastic compute nodes. See Configure the Size of Your
SAP Datasphere Tenant [page 23].
Once you've purchased additional resources, you can create an elastic compute node to take over peak loads.
Once an administrator has purchased additional resources dedicated to elastic compute nodes, they can
create and manage elastic compute nodes in the Space Management. You can create an elastic compute node
and allocate resources to it, assign spaces and objects to it to specify the data that will be replicated to the
node, and start the node (manually or via a schedule) to replicate the data to be consumed.
Users of SAP Analytics Cloud and Microsoft Excel (with the SAP add-in) will then automatically benefit from the
improved performance of the elastic compute nodes when consuming data exposed by SAP Datasphere. See
Consuming Data Exposed by SAP Datasphere.
To create and manage elastic compute nodes, you must have the following privileges:
The DW Administrator global role, for example, grants these privileges (see Roles and Privileges by App and
Feature [page 107]).
1. In the side navigation area, click (Space Management), then click Create in the Elastic Compute Nodes
area.
2. In the Create Elastic Compute Node dialog, enter the following properties, and then click Create:
Property Description
Business Name Enter the business name of the elastic compute node. Can contain a maximum of 30 characters,
and can contain spaces and special characters.
Technical Name Enter the technical name of the elastic compute node. The technical name must be unique.
It can only contain lowercase letters (a-z) and numbers (0-9). It must contain the prefix: ds
(which helps to identify elastic compute nodes in monitoring tools). The minimum length is 3
and the maximum length is 9 characters. See Rules for Technical Names [page 138].
Note
As the technical name will be displayed in monitoring tools, including SAP internal tools,
we recommend that you do not mention sensitive information in the name.
Performance Class The performance class, which has been selected beforehand for all elastic compute nodes, is
displayed and you cannot modify it for a particular elastic compute node.
Note
The performance class is selected when purchasing additional resources in the Tenant
Configuration page (see Configure the Size of Your SAP Datasphere Tenant [page 23]) and
applies to all elastic compute nodes. The default performance class is High Compute and
you may want to change it in specific cases. For example, if you notice that the memory
usage is high and the CPU usage is low during the runtime and you want to save resources,
you can select another performance class, which will change the memory/CPU ratio.
If the performance class is changed in the Tenant Configuration page and you want to
edit your elastic compute node by selecting it and clicking Configure, you will be asked to
select the changed performance class.
Compute Blocks Select the number of compute blocks. You can choose 4, 8, 12, or 16 blocks. The amount of
memory and vCPU depends on the performance class you choose:
• Memory: 1 vCPU and 16 GB RAM per block
• Compute: 2 vCPUs and 16 GB RAM per block
• High Compute: 4 vCPUs and 16 GB RAM per block
Default: 4
The number of GB for memory and storage and the number of CPU are calculated based on
the compute blocks and you cannot modify them.
Note
You can modify the number of compute blocks later on by selecting the elastic compute
node and clicking Configure.
The price you pay for additional resources depends on the compute blocks and the performance
class. If a node that includes 4 compute blocks runs for 30 minutes, you pay for 2
block-hours.
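To make the per-block figures and the billing example above concrete, here is a small sketch that restates the arithmetic; it only re-applies the numbers given in this table and is not an official sizing tool.

# Sketch of the compute-block arithmetic described above (illustrative only).
PER_BLOCK = {                # (vCPUs, GB RAM) per compute block, by performance class
    "Memory": (1, 16),
    "Compute": (2, 16),
    "High Compute": (4, 16),
}

def node_resources(performance_class: str, blocks: int) -> tuple[int, int]:
    vcpus, ram_gb = PER_BLOCK[performance_class]
    return blocks * vcpus, blocks * ram_gb

def block_hours(blocks: int, runtime_hours: float) -> float:
    return blocks * runtime_hours  # e.g. 4 blocks running for 0.5 h = 2 block-hours

print(node_resources("High Compute", 4))  # (16, 64): 16 vCPUs and 64 GB RAM
print(block_hours(4, 0.5))                # 2.0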
Select the spaces and objects whose data you want to make available in an elastic compute node. The data of
the objects you've selected, which is stored in local tables and persisted views, will be replicated to the node
and available for consumption when the elastic compute node is run.
1. In the side navigation area, click (Space Management), then select the elastic compute node.
2. Click Add Spaces, then in the dialog box, select the spaces that contain objects whose data you want to
make available in an elastic compute node and click Add Spaces.
The number of spaces added to the elastic compute node is displayed in the list of nodes on the left part of
the screen.
By default, all current and future exposed objects of the selected spaces are automatically assigned to the
elastic compute node and All Exposed Objects is displayed in the space tile.
You can deactivate the automatic assignment and manually select the objects.
There are 3 types of exposed objects: analytic models, perspectives, and views (of type analytical dataset
that are exposed for consumption). See Consuming Data Exposed by SAP Datasphere.
3. To manually select the objects of a space, select the space and click Add Objects. Uncheck Add All Objects
Automatically, then select the objects you want and click Add Objects.
All the objects added across all the added spaces, are displayed in the Exposed Objects tab, whether they've
been added manually or automatically via the option All Exposed Objects.
Note
Remote Tables - Data that is replicated from remote tables in the main instance cannot be replicated to
an elastic compute node. If you want to make data from a replicated remote table available in an elastic
compute node, you should build a view on top of the remote table and persist its data in the view (see
Shared Table Example - Making data from a shared table available in an elastic compute node:
• The IT space shares the Products table with the Sales space.
• The analytical model in the Sales space uses the shared Products table as a source.
• If you want the Products table to be replicated to an elastic compute node, you need to add to the node
both the Sales space and the IT space. The shared Products table will not be replicated to the node if you
only add the Sales space.
1. In the side navigation area, click (Space Management), then select the elastic compute node.
2. Select one or more spaces and click Remove Spaces.
All spaces and their objects are removed from the elastic compute node.
3. To remove one or more objects that you've manually added, in the Exposed Objects tab, select one or more
objects and click Remove Objects.
1. In the side navigation area, click (Space Management), then select the elastic compute node.
2. Click Delete then in the confirmation dialog click Delete again.
The Delete button is disabled if the status of the elastic compute node is Running.
Once you've created an elastic compute node and added spaces and objects to it, you can run it and make data
available for consumption.
When you start an elastic compute node, it will pass through the following phases:
• Not Ready - The node cannot be run because no spaces or objects are assigned to it.
• Ready - Spaces or objects are assigned to the node, which can be run, either by starting the run manually
or scheduling it.
Note
The Running status displayed in red indicates that the elastic compute node contains issues. We
recommend that you stop and restart the node, or, alternatively that you stop and delete the node and
create a new one.
• Stopping - You’ve stopped the elastic compute node manually by clicking the Stop button or it has been
stopped via a schedule: persisted view replicas, local table replicas and routing are being deleted from the
node.
• Stopping Failed (displayed in red) - You’ve stopped the elastic compute node manually by clicking the Stop
button or it has been stopped via a schedule, but issues have occurred. You can stop the elastic compute
node again.
Note
Updates of local tables or persisted views while an elastic compute node is running - An elastic compute
node is in its running phase, which means that its local tables and persisted views have been replicated. Here is
the behavior if these objects are updated while the node is running:
• If a local table's data is updated, it is updated on the main instance and the local table replica is also
updated in parallel on the elastic compute node. The runtime may take longer and more memory may be
consumed.
• If a persisted view's data is updated, it is first updated on the main instance, then as a second step the
persisted view replica is updated on the elastic compute node. The runtime will take longer, and more
memory and compute will be consumed.
• If local table or persisted view metadata is changed (a new column is added, for example) or the object is
deleted from the main instance, the local table replica or the persisted view replica is deleted from the
elastic compute node. The data of these objects is therefore read from the main instance and not from the
elastic compute node.
To create and manage elastic compute nodes, you must have the following privileges:
The DW Administrator global role, for example, grants these privileges (see Roles and Privileges by App and
Feature [page 107]).
If the status of an elastic compute node is Ready, you can start it.
1. In the side navigation area, click (Space Management), then select the elastic compute node.
2. Click Start.
The status of the elastic compute node changes to Starting.
If the status of an elastic compute node is Starting or Running, you can stop it.
1. In the side navigation area, click (Space Management), then select the elastic compute node.
2. Click Stop.
The status of the elastic compute node changes to Stopping.
You can schedule an elastic compute node to run periodically at a specified date or time. You can also pause
and then later resume the schedule. You create and manage a schedule to run an elastic compute node as
any other data integration task (see Scheduling Data Integration Tasks) and, in addition, you can specify the
duration time frame as follows.
1. In the side navigation area, click (Space Management), then select the elastic compute node.
2. Click Schedule, then Create Schedule.
3. In the Create Schedule dialog, specify the options of the schedule, just like for any other integration task.
See Schedule a Data Integration Task (Simple Schedule) and Schedule a Data Integration Task (with Cron
Expression).
4. In addition, specify in the Duration area the total number of hours and minutes of an elastic compute node
run, from the starting to the stopping stages.
Example - The elastic compute node is scheduled to run on the first day of every month for a duration of 72
hours (uptime of 3 days).
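If you use the cron-based option in step 3, the example above corresponds to a cron expression such as the one sketched below. This is a generic illustration of standard cron syntax only; the exact dialect and fields accepted by the scheduling dialog are described in Schedule a Data Integration Task (with Cron Expression).

# Illustrative values only; verify the supported cron dialect in the linked scheduling topics.
MONTHLY_SCHEDULE_CRON = "0 0 1 * *"  # minute hour day-of-month month day-of-week: 00:00 on day 1 of every month
DURATION_HOURS = 72                  # node uptime per run (3 days), entered in the Duration area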
You can then perform the following actions by clicking Schedule: edit, pause, resume, or delete the schedule, or take over its ownership (see Scheduling Data Integration Tasks).
In some cases, you can partially re-run an elastic compute node to replicate tables or persisted views that were not replicated: for example, if changes have been made to a local table or a persisted view assigned to the node while or after the node was running, or if a table or view failed to be replicated. In such cases, the Update button is available.
1. In the side navigation area, click (Space Management), then select the elastic compute node.
2. Click Update.
The status of the elastic compute node changes to Starting.
Monitor an elastic compute node to see, for example, all its start and stop runs or whether all local tables and persisted views have been replicated.
Note
To monitor the start and stop runs for all elastic compute nodes, you can click View Logs in the left-hand
area of the Space Management.
You can monitor key figures related to an elastic compute node (such as the start and end times of the last run, or the amount of memory used for data replication) in the Elastic Compute Nodes tab of the System Monitor (see Monitoring SAP Datasphere [page 232]).
Context
You can add a tenant type indicator to show all users which system they are using, for example, to allow users to differentiate between a test and a production system. When enabled, a colored information bar is visible to all users of the tenant, and the browser favicon is updated with the matching color.
Procedure
4. Select a color.
Results
The tenant information that you set is displayed to all users above the shell bar. For example:
Context
Note
This task is limited to patch upgrades. For example, if your current database version is 2024.28.3, and the
next patch version of 2024.28.4 is available, you can upgrade. You cannot go from version 2024.28.4 to
2024.29.0, because that is a larger upgrade, not a patch.
Create and manage users, manage secure access to SAP Datasphere using roles, and set up authentication for
your users if you are using your own identity provider.
By default, SAP Cloud Identity Authentication is used by SAP Datasphere. We also support single sign-on
(SSO), using your identity provider (IdP).
Related Information
Enable IdP-Initiated Single Sign On (SAP Data Center Only) [page 51]
Renewing the SAP Analytics Cloud SAML Signing Certificate [page 53]
Enabling a Custom SAML Identity Provider [page 54]
By default, IdP-initiated SSO is not supported if SAP Datasphere is running on an SAP Data Center. To support IdP-initiated SSO on an SAP Data Center, you must add a new assertion consumer service endpoint to your identity provider.
Prerequisites
SAP Datasphere can be hosted either on SAP data centers or on non-SAP data centers. Determine which
environment SAP Datasphere is hosted in by inspecting your URL:
• A single-digit number, for example us1 or jp1, indicates an SAP data center.
• A two-digit number, for example eu10 or us30, indicates a non-SAP data center.
1. Navigate to your IdP and find the page where you configure SAML 2.0 Single Sign On.
2. Find and copy your FQDN.
https://<FQDN>/
https://<tenant_ID>.accounts.ondemand.com/saml2/idp/sso?sp=<sp_name>&index=<index_number>
Note
The pattern will vary depending on the identity provider you use.
The following table lists the URL parameters you can use for IdP-initiated SSO.
Results
Users will be able to use SAML SSO to log onto SAP Datasphere through their identity provider.
To continue using SAML SSO, an administrator must renew the certificate before it expires.
Context
An email with details on how to renew the SAML X509 certificate is sent to administrators before the certificate
expiry date. If the certificate expiry is less than 30 days away, a warning message appears when you log on to
SAP Datasphere.
Note
If you click the Renew link on the warning message, you're taken to the Security tab on the
(Administration) page.
Procedure
A confirmation dialog appears. When you confirm the renewal, a new metadata file is automatically
downloaded.
3. If you use a custom identity provider, upload the SAP Datasphere metadata file to your SAML Identity
Provider (IdP).
Note
This step is not required if you use SAP Cloud ID for authentication.
4. If you have live data connections to SAP HANA systems that use SAML SSO, you must also upload the new
metadata file to your SAP HANA systems.
5. Log on to SAP Datasphere after five minutes have passed.
Results
If you are able to log on, the certificate renewal was successful. If you cannot log on, try one of the following troubleshooting tips.
1. Ensure the new metadata file has been uploaded to your IdP. For more information, see Enabling a Custom
SAML Identity Provider [page 54].
2. Clear the browser cache.
3. Allow up to five minutes for your IdP to switch to the new certificate with the newly uploaded metadata.
By default, SAP Cloud Identity Authentication is used by SAP Datasphere. SAP Datasphere also supports single
sign-on (SSO), using your identity provider (IdP).
Prerequisites
Note
A custom identity provider, such as Azure AD, is a separate solution and is not part of SAP Analytics Cloud or SAP Datasphere. Therefore, the configuration change must be applied directly in that solution, not within SAP Analytics Cloud or SAP Datasphere. No access to SAP Analytics Cloud or SAP Datasphere is required to make the change, only access to the identity provider, for example, Azure AD.
Note
Be aware that the SAML attributes for SAP Datasphere roles do not cover user assignment to spaces. A user who logs into an SAP Datasphere tenant through SSO must be assigned to a space in order to access it. If you do not assign a user to a space, the user will not have access to any space.
Procedure
If you've provisioned SAP Datasphere prior to version 2021.03, you'll see a different UI and need to go to
(Product Switch) → (Analytics) → (System) → (Administration) → Security.
2. Select (Edit).
3. In the Authentication Method area, select SAML Single Sign-On (SSO) if it is not already selected.
Note
If SAP Datasphere is running on an SAP data center, you must submit an SAP Product Support Incident
using the component LOD-ANA-ADM. In the support ticket, indicate that you want to set up user profiles
and role assignment based on custom SAML attributes, and include your SAP Datasphere tenant URL.
If SAP Datasphere is running on an SAP data center, and you want to continue using User Profiles and
Role assignment using SAML attributes, you will need to open a support ticket each time you switch to
a different custom IdP.
If SAP Datasphere is running on a non-SAP data center, you must configure your SAML IdP to map user
attributes to the following case-sensitive allowlisted assertion attributes:
displayName Optional.
functionalArea Optional.
preferredLanguage Optional.
<AttributeStatement>
  <Attribute Name="email">
    <AttributeValue>abc.def@mycompany.com</AttributeValue>
  </Attribute>
  <Attribute Name="givenName">
    <AttributeValue>Abc</AttributeValue>
  </Attribute>
  <Attribute Name="familyName">
    <AttributeValue>Def</AttributeValue>
  </Attribute>
  <Attribute Name="displayName">
    <AttributeValue>Abc Def</AttributeValue>
  </Attribute>
  <Attribute Name="Groups">
    <AttributeValue>sac</AttributeValue>
  </Attribute>
</AttributeStatement>
Note
If you are using the SAP Cloud Identity Authentication service as your IdP, map the Groups attribute
under Default Attributes for your SAP Datasphere application. The remaining attributes should be
mapped under Assertion Attributes for your application.
The attribute will be used to map users from your existing SAML user list to SAP Datasphere. The NameID is used in your custom SAML assertion:
<NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"><Your Unique Identifier></NameID>
Determine what your NameID maps to in your SAP Datasphere system. It should map to the user attribute you select, which must match the User ID, Email, or a custom attribute. You can view your SAP Datasphere user attributes in Security Users.
Note
NameID is case-sensitive. The User ID, Email, or Custom SAML User Mapping must match the values in your SAML IdP exactly. For example, if the NameID returned by your SAML IdP is user@company.com and the email you used in SAP Datasphere is User@company.com, the mapping will fail.
Note
If your NameID email is not case-sensitive and contains mixed case, for example, User@COMPANY.com, consider choosing Custom SAML User Mapping instead.
Note
If you select this option, there will be a new column named SAML User Mapping in Security Users. After switching to your SAML IdP, you must manually update this column for all existing users.
If you are using a live connection to SAP S/4HANA Cloud Edition with OAuth 2.0 SAML Bearer
Assertion, NameId must be identical to the user name of the business user on your SAP S/4HANA
system.
For example, if you want to map an SAP Datasphere user with the user ID SACUSER to your SAP
S/4HANA Cloud user with the user name S4HANAUSER, you must select Custom SAML User Mapping
and use S4HANAUSER as the Login Credential in Step 10.
If you are using SAP Cloud Identity as your SAML IdP, you can choose Login Name as the NameID
attribute for SAP Datasphere, then you can set the login name of your SAP Datasphere user as
S4HANAUSER.
When dynamic user creation is enabled, new users will be automatically created using the default role and
will be able to use SAML SSO to log onto SAP Datasphere. After users are created, you can set roles using
SAML attributes.
Note
Automatic user deletion is not supported. If a user in SAP Datasphere is removed from your SAML
IdP, you must go to Security Users and manually delete users. For more information, see Delete
Users [page 69].
If this option is enabled, dynamic user creation still occurs even when SAML user attributes have not
been set for all IdP users. To prevent a user from being automatically created, your SAML IdP must
deny the user access to SAP Datasphere.
Note
The Login Credential depends on the User Attribute you selected under Step 3.
13. Test the SAML IdP setup by logging in with your IdP and then clicking Verify Account to open a dialog for validation.
In another browser, log on to the URL provided in the Verify Your Account dialog, using your SAML IdP
credentials. You can copy the URL by selecting (Copy).
You must use a private session to log onto the URL; for example, guest mode in Chrome. This ensures that
when you log on to the dialog and select SAP Datasphere, you are prompted to log in and do not reuse an
existing browser session.
Note
If SAP Datasphere is running on a non-SAP data center, upon starting the verification step, you will see a new screen when logging into SAP Datasphere. Two links will be displayed on this page: one links to your current IdP and the other links to the new IdP you will switch to. To perform the Verify Account step, use the link for the new IdP. Other SAP Datasphere users can continue logging on with the current IdP.
The values in this column should be a case-sensitive match with the NameId sent by your IdP's SAML assertion.
Note
If you selected Custom SAML User Mapping as User Attribute, you must manually update all fields in
the SAML User Mapping column.
Results
Users will be able to use SAML SSO to log onto SAP Datasphere.
Note
You can also set up your IdP with your Public Key Infrastructure (PKI) so that you can automatically log in your users with a client-side X.509 certificate.
Next Steps
You can revert your system to the default identity provider (SAP Cloud Identity) and disable your custom SAML
IdP.
Procedure
If you've provisioned SAP Datasphere prior to version 2021.03, you'll see a different UI and need to go to
(My Products) → (Analytics) → (System) → (Administration) → Security.
2. Select (Edit) .
3. In the Authentication Method area, select SAP Cloud Identity (default).
4. Select (Save) .
Results
When conversion is complete, you will be logged out and directed to the SAP Cloud Identity logon page.
You can update the SAML identity provider (IdP) signing certificate.
Prerequisites
• You must have the metadata file that contains the new certificate from your custom IdP, and you must be
logged into SAP Datasphere before your IdP switches over to using the new certificate.
• You must be the System Owner in SAP Datasphere.
Procedure
If you've provisioned SAP Datasphere prior to version 2020.03, you'll see a different UI and need to go to
(My Products) → (Analytics) → (System) → (Administration) → Security.
2. Select (Edit)
3. Under Step 2, select Update and provide the new metadata file.
4. Select (Save) and confirm the change to complete the update.
The update will take effect within two minutes.
Results
Note
The Identity Provider Administration tool allows system owners to manage the custom identity provider
configured with SAP Datasphere. Through the tool, the system owner can choose to upload new metadata
for the current custom identity provider, or revert to using the default identity provider.
Prerequisites
1. Access the Identity Provider Administration tool using the following URL pattern:
https://console.<data center>.sapanalytics.cloud/idp-admin/
For example, if your SAP Datasphere system is on eu10, then the URL is:
https://console.eu10.sapanalytics.cloud/idp-admin/
https://console.cn1.sapanalyticscloud.cn/idp-admin/
https://console-eudp.eu1.sapanalytics.cloud/idp-admin/
https://console-eudp.eu2.sapanalytics.cloud/idp-admin/
2. Log in with an S-user that has the same email address as the system owner of your system. If you don't yet
have such an S-user, you can click the “Register” button and create a P-user.
If you create a new P-user, you'll receive an email with an activation link that will let you set your password.
3. Once you're logged in, you'll see a list of SAP Datasphere systems for which you are the system owner.
Once you're in the settings page for your system, you can see information about your current custom
identity provider. If you need to reacquire your system's metadata, you can click the “Service Provider
Metadata Download” link.
If you don't want to manage your custom identity provider through Identity Provider Administration, you
can disconnect your system by clicking “Disconnect IdP Admin from your system”.
4. To proceed with either reverting to the default identity provider or updating the current custom identity
provider, select the corresponding radio button and then click “Step 2”.
Note
Your SAP Datasphere system is connected to the Identity Provider Administration tool by default. The
connection status for your system is displayed under the “Status” column of the systems list page. If
you'd like to disconnect your system from the console, you can do so in either of two places:
• In SAP Datasphere, navigate to System Administration Security Optional: Configure
Identity Provider Administration Tool , click the Connected switch, and then save the changes.
• Click “Disconnect IdP Admin from your system” after selecting your system in Identity Provider
Administration.
You can create and modify users in SAP Datasphere in several different ways.
Creating Users
• Create individual users in the Users list - see Create a User [page 64].
• Import multiple users from a CSV file - see Import or Modify Users from a File [page 65].
Modifying Users
• Export user data to a CSV file, to synchronize with other systems - see Export Users [page 67].
• Update the email address a user logs on with - see Update User Email Addresses [page 69].
Prerequisites
You can select one or more roles while you're creating the user. Before you start creating users, you might want to become familiar with global roles and scoped roles. You can still assign roles after you've created the users.
• Global Roles - A role that enables users assigned to it to perform actions that are not space-related, typically a role that enables them to administer the tenant. A standard or custom role is considered global when it includes global privileges. See Managing Roles and Privileges [page 70].
• Scoped Roles - A role that inherits a set of scoped privileges from a standard or custom role and grants these privileges to users for use in the assigned spaces. See Create a Scoped Role to Assign Privileges to Users in Spaces [page 75].
Context
The method described here assumes that SAP Datasphere is using its default authentication provider. If you
are using a custom SAML Identity Provider, you must provide slightly different information, depending upon
how your SAML authentication is configured.
Procedure
Note
6. Select the icon and choose one or more roles from the list.
7. Select (Save).
Results
• A welcome email including an account activation URL will be sent to the user, so that the user can set an
initial password and access the system. Optionally, you can disable the welcome email notification (see
Configure Notifications [page 255]).
• When you create a user, it is activated by default. You may want to deactivate a user in specific cases, for example when a user is on long-term leave. To deactivate a user, select the relevant check box in the leftmost column of the table, click the (Deactivate Users) icon, and optionally select Email users to notify them that their accounts have been deactivated. Deactivated users cannot log in to SAP Datasphere until they are activated again.
Note
In addition to the standard workflows, you can also create users via the command line (see Manage Users
via the Command Line).
You can create users or batch-update existing users by importing user data that you have saved in a CSV file.
Prerequisites
The user data you want to import must be stored in a CSV file. At minimum, your CSV file needs columns for
UserID, LastName, and Email, but it is recommended that you also include FirstName and DisplayName.
If you want to assign new users different roles, include a Roles column in the CSV file. The role IDs used for
role assignment are outlined in Standard Roles Delivered with SAP Datasphere [page 72].
For existing users that you want to modify, you can create the CSV file by first exporting a CSV file from SAP
Datasphere. For more information, see Export Users [page 67].
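As an illustration of the minimum file structure described above, the following sketch uses Python's standard csv module to write a small import file. The column headers follow the requirements listed here; the role value is a placeholder, and the exact role IDs available on your tenant are listed in Standard Roles Delivered with SAP Datasphere [page 72].

import csv

# Hypothetical example row; replace the role value with a role ID valid on your tenant.
users = [
    {"UserID": "JDOE", "FirstName": "Jane", "LastName": "Doe",
     "DisplayName": "Jane Doe", "Email": "jane.doe@company.com",
     "Roles": "<role_ID>"},
]

with open("users_import.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["UserID", "FirstName", "LastName", "DisplayName", "Email", "Roles"])
    writer.writeheader()
    writer.writerows(users)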
Note
The first name, last name, and display name are linked to the identity provider, and can't be changed in the
User list page, or when importing a CSV file. (In the User list page, those columns are grayed out.)
Edit the downloaded CSV file to remove columns whose values you don't want to modify, and to remove rows
for users whose values you don't want to modify. Do not modify the USERID column. This ensures that entries
can be matched to existing users when you re-import the CSV.
These are the available mapping parameters when importing CSV user data:
• User ID
• First Name
• Last Name
• Display Name
• Manager
• Roles
• Mobile
• Phone
• Office Location
• Function Area - Can be used to refer to a user's team or area within their organization.
• Job Title
• Clean up notifications older than - Set in user settings: when to automatically delete notifications.
• Welcome message - Message that is shown to the user on the home screen.
• Closed Page Tips - Closed page tips are tracked so that they are not shown again.
• Closed Item Picker Tips - Closed tooltips are tracked so that they won't be reopened again (for first-time users).
• Last Maintenance Banner Version - The version when the last maintenance banner was shown.
• Homescreen content is initialized - If default tiles have been set for the home screen.
Procedure
If you want to synchronize SAP Datasphere user data with other systems, you can export the data to a CSV file.
Procedure
The system exports all user data into a CSV file that is automatically downloaded to your browser's default
download folder.
Column Description
USER_NAME
FIRST_NAME
LAST_NAME
DISPLAY_NAME
MANAGER
DEFAULT_APP The application that will launch when you access your SAP
Datasphere URL. The default application can be set in
System Administration System Configuration or in
the user settings.
OVERRIDE_BACKGROUND_OPTION
OVERRIDE_LOGO_OPTION
OVERRIDE_WELCOME_MESSAGE_FLAG
OVERRIDE_HOME_SEARCH_TO_INSIGHT_FLAG
OVERRIDE_GET_STARTED_FLAG
OVERRIDE_RECENT_FILES_FLAG
OVERRIDE_RECENT_STORIES_FLAG
OVERRIDE_RECENT_PRESENTATIONS_FLAG
OVERRIDE_RECENT_APPLICATIONS_FLAG
OVERRIDE_CALENDAR_FLAG
OVERRIDE_FEATURED_FILES_FLAG
You can update the user email addresses used for logon.
When you create a user, you must add an email address. The email address is used to send logon information.
To edit a user's email address, go to the Users page of the Security area, and select the email address you want
to modify. Add a new email address and press Enter, or select another cell to set the new address.
If the email address is already assigned to another user, a warning will appear and you must enter a new
address. Every user must be assigned a unique email address.
As long as a user has not logged on to the system with the new email address, the email address will appear in
a pending state on the Users list.
Related Information
Procedure
1. In the Users management table, select the user ID you want to delete by clicking the user number in the
leftmost column of the table.
The whole row is selected.
2. Choose (Delete) from the toolbar.
3. Select OK to continue and remove the user from the system.
Users with the DW Administrator role (administrators) can set a password policy to cause database user
passwords to expire after a specified number of days.
Context
Users with the DW Space Administrator role (space administrators) can create database users in their spaces
to allow the connection of ETL tools to write to and read from Open SQL schemas attached to the space
schema (see Integrating Data via Database Users/Open SQL Schemas).
Procedure
After this period, the user will be prompted to set a new password.
Note
The password policy applies only to database users where the Enable Password Policy property is
selected, for both existing and new users. If a user does not log on with their initial password during this
period, they will be deactivated until their password is reset.
Assigning roles to your users maintains access rights and secures your information in SAP Datasphere.
SAP Datasphere delivers a set of standard roles and you can create your own custom roles:
Each standard or custom role is either a global role or a template for scoped roles:
• Global role - A role that enables users assigned to it to perform actions that are not space-related, typically a role that enables them to administer the tenant. A standard or custom role is considered global when it includes global privileges. A tenant administrator can assign a global role to the relevant users. See Assign Users to a Role [page 81].
• Scoped role - A role that inherits a set of privileges from a standard or custom role and assigns them to one
or more users for one or more spaces. Users assigned to a scoped role can perform actions in the assigned
spaces. A tenant administrator can create a scoped role. See Create a Scoped Role to Assign Privileges to
Users in Spaces [page 75].
For more information on global and scoped privileges, see Privileges and Permissions [page 95].
A DW Administrator can use standard roles as templates for creating custom roles with a different set
of privileges (see Create a Custom Role [page 74]). You can also use the standard roles that include
scoped privileges as templates for creating scoped roles (see Create a Scoped Role to Assign Privileges to
Users in Spaces [page 75]). You can assign the standard roles that contain global privileges (such as DW
Administrator, Catalog Administrator and Catalog User) directly to users.
Note
In the side navigation area, click (Security) (Roles). The following standard roles are available:
Note
Users who are space administrators primarily need scoped permissions to work with spaces,
but they also need some global permissions (such as Lifecycle when transporting content
packages). To provide such users with the full set of permissions they need, they must be
assigned to a scoped role (such as the DW Scoped Space Administrator) to receive the
necessary scoped privileges, but they also need to be assigned directly to the DW Space
Administrator role (or a custom role that is based on the DW Space Administrator role) in order
to receive the additional global privileges.
• DW Integrator (template) - Can integrate data via connections and can manage and monitor data
integration in a space.
Note
Please do not use the roles DW Support User and DW Scoped Support User as they are reserved for SAP
Support.
Users are assigned roles in particular spaces via scoped roles. One user may have different roles in different
spaces depending on the scoped role they're assigned to. See Create a Scoped Role to Assign Privileges to
Users in Spaces [page 75].
Note
Please note for SAP Datasphere tenants that were initially provisioned prior to version 2021.03, you need
the following additional roles to work with stories:
The standard roles are grouped by the license type they consume, and each user's license consumption is determined solely by the roles that they've been assigned. For example, a user who has been assigned only the DW Administrator standard role consumes only an SAP Datasphere license.
Planning Professional, Planning Standard, and Analytics Hub are SAP Analytics Cloud-specific license types. For more information, see Understand Licenses, Roles, and Permissions in the SAP Analytics Cloud documentation.
You can create a custom role using either a blank template or a standard role template and choosing privileges
and permissions as needed.
Prerequisites
Context
You can create a custom role to enable users to do either global actions on the tenant or actions that are
specific to spaces.
• If you create a custom role for global purposes, you should include only global privileges and permissions.
You can then assign the role to the relevant users.
• If you create a custom role for space-related purposes, you should include only scoped privileges and
permissions. As a second step, you need to create a scoped role based on this custom role to assign users
and spaces to the set of privileges included. See Create a Scoped Role to Assign Privileges to Users in
Spaces [page 75].
You should not mix global and scoped privileges in a custom role.
• If you include a scoped privilege in a custom role that you create for global purposes, the privilege is ignored.
• If you include a global privilege in a custom role that you want to use as a template for a scoped role, the privilege is ignored.
Note
Some users, such as space administrators, primarily need scoped permissions to work with spaces, but they also need some global permissions (such as Lifecycle when transporting content packages). To provide such users with the full set of permissions they need, you can include both the relevant global privileges and scoped privileges in the custom role you will use as a template for the scoped role.
For more details about global and scoped privileges, see Privileges and Permissions [page 95].
Procedure
Note
You can assign the role to a user from the Users page or - only if you've created a custom role for global
purposes (and not for space-related purposes) - from the Roles page. Whether you create users first or
roles first does not matter. See Assign Users to a Role [page 81].
A scoped role inherits a set of scoped privileges from a standard or custom role and grants these privileges to
users for use in the assigned spaces.
A DW Administrator can assign a role to multiple users in multiple spaces in a single scoped role. As a consequence, a user can have different roles in different spaces: for example, a user can be a modeler in the spaces Sales Germany and Sales France and a viewer in the space Europe Sales.
You can create a scoped role based on a standard role or on a custom role. In both cases, the scoped role
inherits the privileges from the standard or custom role. You cannot edit the privileges of a scoped role or of
a standard role. You can edit the privileges of a custom role. To create a scoped role with a different set of
privileges, create a custom role with the set of privileges wanted and then create the scoped role from the
custom role. You can then change the privileges of the custom role as needed, which will also change the
privileges of all the scoped roles that are based on the custom role.
Users who are granted the DW Space Administrator role via a scoped role can add or remove users to or from
their spaces and the changes are reflected in the scoped roles. See Control User Access to Your Space.
In the following example, the DW administrator begins assigning users to the three Sales spaces by creating the
appropriate scoped roles:
• Scoped role Sales Modeler - assigns Bob, among other users, to the space Sales US.
• Scoped role Senior Sales Modeler - based on the custom role “Senior Modeler” (itself based on the DW Modeler standard role plus additional privileges and permissions) - assigns Jim to the space Sales Europe.
If Bob no longer needs to work in the space Sales US, the DW administrator can unassign Bob from Sales US in
the scoped role Sales Modeler.
As Joan has the role of space administrator for the space Sales US, she can also unassign Bob from Sales US
directly in the space page (in the Space Management). The user assignment change is automatically reflected
in the Sales Modeler scoped role.
Later on, Bob needs the space administration privileges for the space Sales Asia. From the page of the space
Sales Asia, Joan assigns Bob to the space with the Sales Space Admin scoped role.
For more information on scoped roles, see the blog Preliminary Information SAP Datasphere – Scoped Roles (published in September 2023).
Note
In addition to the standard workflows, you can also create scoped roles and assign scopes and users to
them via the command line (see Manage Scoped Roles via the Command Line).
1. In the side navigation area, click (Security) (Roles).
2. Click (Add Role) and select Create a Scoped Role.
Note
As an alternative to creating a scoped role, you can use one of the predefined scoped roles that are
delivered with SAP Datasphere in the Roles page and directly assign spaces and users to them.
3. Enter a unique name for the role and select the license type SAP Datasphere.
You can then assign spaces and users to the new scoped role. The spaces and users must be created
beforehand and you must assign spaces before assigning users to them.
Note
If you’re creating a scoped role to assign space administration privileges to certain users in certain spaces, you can do either of the following:
• Create a scoped role based on the standard role template DW Space Administrator and, to allow user assignment, select the Scoped Role User Assignment (Manage) privilege, which is the only privilege you can select, as the rest of the privileges are inherited from the template. Then, assign one or more spaces and one or more users to the spaces.
• Open the predefined scoped role DW Scoped Space Administrator and assign one or more spaces and one or more users to the spaces. Scoped Role User Assignment (Manage) is selected by default.
1. In the side navigation area, click (Security) (Roles) and click your scoped role to open it.
2. Click [number] Scopes, select one or more spaces in the dialog Scopes and click Save.
Note
By default, all users of the scoped role are automatically assigned to the spaces you've just added. You
can change this and assign only certain members to certain spaces in the Users page of the scoped
role.
1. In the side navigation area, click (Security) (Roles) and click your scoped role to open it.
2. Click [number] Scopes.
3. In the Selected Scopes area of the dialog Scopes, click the cross icon for each space that you want to
remove from the role, then click Save.
All users that were assigned to the spaces you've just removed are automatically removed from the scoped
role.
1. In the side navigation area, click (Security) (Roles) and click your scoped role to open it.
2. Click Users. All user assignments are displayed in the Users page.
• To individually select users and assign them to spaces, click (Add Users to Scopes), then Add New
Users to Scopes. Select one or more users in the wizard Add Users to Scopes and click Next Step.
Note
By default, the added users are automatically assigned to all the spaces included in the scoped
role. If you want to modify this, select the one or more spaces to which you want to assign the
users.
Note
You can also add a user to a scoped role from the (Users) area. In such a case, the user is
automatically assigned to all the spaces included in the scoped role. See Assign Users to a Role
[page 81].
• To assign all users included in the scoped role to one or more spaces, click (Add Users to Scopes), then Add All Current Users to Scopes. Select one or more spaces in the wizard Add Users to Scopes, then click Next Step and Save.
• To assign all users of the tenant to one or more spaces, click (Add Users to Scopes), then Add All
Users to Scopes. Select one or more spaces in the wizard Add Users to Scopes and click Next Step and
Save.
Restriction
A user can be assigned to a maximum of 100 spaces across all scoped roles.
Note
In the Users page, you can filter users and spaces to see, for example, to which spaces and roles a user is assigned.
Once you've assigned a user to a space with the DW Space Administrator role via a scoped role, this user can manage the users of that space directly in the space's page (in the Space Management). See Control User Access to Your Space.
1. In the side navigation area, click (Security) (Roles) and click your scoped role to open it.
2. Click Users. All user assignments are displayed in the Users page.
3. Select the relevant rows (each row corresponds to a combination of one user and one space) and click the garbage icon. The users can no longer access the spaces they were previously assigned to in the scoped role.
Prerequisites
Note
If you assign a user to a scoped role, be aware that the user is automatically assigned to all the spaces
included in the scoped role. You can change the user assignment in the scoped role. See Create a Scoped
Role to Assign Privileges to Users in Spaces [page 75].
Note
This is not relevant for scoped roles. For information about how to assign users to spaces in a scoped role,
see Create a Scoped Role to Assign Privileges to Users in Spaces [page 75].
You can create a SAML role mapping to automatically assign users to a specific role based on their SAML
attributes.
For example, you want to give a specific role to all employees that are assigned to a specific cost center. Once
you've done the role mapping, if new users are assigned to the cost center in the SAML identity provider
(IdP), the users will be automatically assigned to the role when logging onto SAP Datasphere via SAML
authentication.
Prerequisites
Your custom SAML Identity Provider (IdP) must be configured and the authentication method selected must be SAML Single Sign-On (SSO) in (System) → (Administration) → Security. See Enabling a Custom SAML Identity Provider [page 54].
Procedure
Note
If a user is assigned to a scoped role via SAML attributes, the user is automatically assigned to all the
spaces included in the scoped role.
In the Roles page, a dedicated icon in the role tile is displayed, indicating that the users are assigned to the
role via SAML attributes. When you hover over the icon, the conditions defined for the role are displayed.
You can create, read, modify and delete users and add them to roles via the SCIM 2.0 API.
Introduction
This API allows you to programmatically manage users using a SCIM 2.0 compliant endpoint.
SAP Datasphere exposes a REST API based on the System for Cross-domain Identity Management (SCIM
2.0) specification. This API allows you to keep your SAP Datasphere system synchronized with your preferred
identity management solution.
Note
• List users.
• Get information on the identity provider, available schemas, and resource types.
This API uses SCIM 2.0. For more information, see SCIM Core Schema.
To access the API specification and try out the functionality in SAP Analytics Cloud, see the SAP Business
Accelerator Hub.
Before you can log in with an OAuth client, a user with the administrator role must create an OAuth2.0 client in your SAP Datasphere tenant and provide you with the OAuth client ID and secret parameters.
Note
See Create OAuth2.0 Clients to Authenticate Against SAP Datasphere [page 33]
To log in with the OAuth client, send a GET (or POST) request with the following elements:
https://<token_url>/oauth/token?grant_type=client_credentials
Note
You can find the token URL in (System) (Administration) App Integration OAuth Clients
Token URL .
The response body returns the access token, which you'll then use as the bearer token to obtain the CSRF token.
To obtain a CSRF token, send a GET request with the following elements:
<tenant_url>/api/v1/csrf
The CSRF token is returned in the x-csrf-token response header. This token can then be included in POST, PUT, PATCH, or DELETE requests in the x-csrf-token:<token> header.
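As a sketch of this flow in Python (using the requests library), the following snippet fetches an access token with the client-credentials grant and then retrieves the CSRF token. The token URL, tenant URL, client ID, and secret are placeholders for the values from your own OAuth client, and passing the client ID and secret via HTTP basic authentication is an assumption based on the usual client-credentials pattern.

import requests

TOKEN_URL = "https://<token_url>/oauth/token"  # see Administration > App Integration > OAuth Clients > Token URL
TENANT_URL = "https://<tenant_url>"            # your SAP Datasphere tenant URL
CLIENT_ID = "<oauth_client_id>"
CLIENT_SECRET = "<oauth_client_secret>"

# 1. Obtain an access token using the client-credentials grant.
token_response = requests.post(
    f"{TOKEN_URL}?grant_type=client_credentials",
    auth=(CLIENT_ID, CLIENT_SECRET),
)
access_token = token_response.json()["access_token"]

# 2. Obtain a CSRF token; it is returned in the x-csrf-token response header.
csrf_response = requests.get(
    f"{TENANT_URL}/api/v1/csrf",
    headers={"Authorization": f"Bearer {access_token}"},
)
csrf_token = csrf_response.headers["x-csrf-token"]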
List Users
To retrieve users, use the GET request with the /api/v1/scim2/Users endpoint and the following elements:
https://<tenant_url>/api/v1/scim2/Users
You can control the list of users to retrieve by using one or more of the following optional URL parameters:
• sortBy - Specifies the attribute to sort by. For example:
sortBy=userName
• sortOrder - Specifies the order in which items are returned, either ascending or descending. By default, an ascending sort order is used. For example:
sortOrder=descending
• startIndex - Specifies the index of the first user to retrieve. For example, so that the tenth user is the first user retrieved:
startIndex=10
• count - Specifies the number of users to retrieve. For example:
count=8
• filter - Filters the users to retrieve. See the user schema for available attributes. All operators are supported. For example, to display the users whose user name includes the letter K:
filter=userName co "K"
For example, to combine several parameters:
https://<tenant_url>/api/v1/scim2/Users/?filter=emails.value co "a"&sortOrder=descending&startIndex=3&count=2&sortBy=emails.value
Caution
GET requests send personally identifiable information as part of the URL, such as the user name in this case. Consider using the POST request with the /api/v1/scim2/Users/.search endpoint instead for enhanced privacy of personal information. Syntax of the POST request:
https://<tenant_url>/api/v1/scim2/Users/.search
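The request body for the .search endpoint is not shown here; the following Python sketch assumes the standard SCIM 2.0 SearchRequest format, which wraps the same filter, sorting, and paging parameters in a JSON body (verify the exact format against the API specification on the SAP Business Accelerator Hub). The tenant URL and tokens are placeholders.

import requests

TENANT_URL = "https://<tenant_url>"
headers = {
    "Authorization": "Bearer <access_token>",
    "x-csrf-token": "<csrf_token>",
}

# Assumed standard SCIM 2.0 SearchRequest body: same parameters as the GET query
# string, but kept out of the URL for privacy.
search_body = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:SearchRequest"],
    "filter": 'userName co "K"',
    "sortBy": "userName",
    "sortOrder": "ascending",
    "startIndex": 1,
    "count": 10,
}

response = requests.post(
    f"{TENANT_URL}/api/v1/scim2/Users/.search", json=search_body, headers=headers)
print(response.json().get("totalResults"))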
Note
In the response body, if the users listed are assigned to roles, you can identify the roles as they are prefixed
with PROFILE.
To retrieve a specific user based on its ID, use the GET request with the /api/v1/scim2/Users/<user ID> endpoint:
https://<tenant_url>/api/v1/scim2/Users/<user ID>
The user ID must be the UUID (universally unique identifier), which you can get by sending the GET request:
https://<tenant_url>/api/v1/scim2/Users
In the response body, if the user is assigned to roles, you can identify them by their PROFILE prefix.
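As a sketch, the following Python snippet looks up a user's UUID by user name and then retrieves the full user record, printing any assigned roles (recognizable by their PROFILE prefix). The URL, token, and user name are placeholders, and the Resources key is assumed from the standard SCIM 2.0 list response structure.

import requests

TENANT_URL = "https://<tenant_url>"
headers = {"Authorization": "Bearer <access_token>"}

# Look up the UUID of a user by filtering on the userName attribute.
resources = requests.get(
    f"{TENANT_URL}/api/v1/scim2/Users",
    params={"filter": 'userName eq "LGARCIA"'},
    headers=headers,
).json()["Resources"]
user_id = resources[0]["id"]

# Retrieve the full user record by UUID and list its roles.
user = requests.get(f"{TENANT_URL}/api/v1/scim2/Users/{user_id}", headers=headers).json()
for role in user.get("roles", []):
    print(role["value"])  # role values are prefixed with PROFILE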
Create a User
To create a user, use the POST request with the /api/v1/scim2/Users/ endpoint and the following elements:
Note
The following information is required: userName, name, and emails. Other information that is not provided will be either left empty or set to its default value.
If you are using SAML authentication, idpUserId should be set to the property you are using for your
SAML mapping. For example, the user's USER ID, EMAIL, or CUSTOM SAML MAPPING. If your SAML
mapping is set to EMAIL, the email address you add to idpUserId must match the email address you use
for email.
The userName attribute can only contain alphanumeric and underscore characters. The maximum length
is 20 characters.
Note
When creating or modifying a user, you can add optional properties to the user.
{
  "schemas": [
    "urn:ietf:params:scim:schemas:core:2.0:User",
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
  ],
  "userName": "LGARCIA",
  "name": {
    "familyName": "Garcia",
    "givenName": "Lisa",
    "formatted": "Lisa Garcia"
  },
  "displayName": "Lisa Garcia",
  "preferredLanguage": "en",
  "active": true,
  "emails": [
    {
      "value": "lisa.garcia@company.com",
      "type": "work",
      "primary": true
    }
  ],
  "urn:sap:params:scim:schemas:extension:sac:2.0:user-custom-parameters": {
    "idpUserId": "lisa.garcia@company.com"
  }
}
The following example shows how to create a new user and assign it to a role:
{
"schemas": [
"urn:ietf:params:scim:schemas:core:2.0:User",
"urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
],
"userName": "LGARCIA",
"name": {
"familyName": "Garcia",
"givenName": "Lisa",
"formatted": "Lisa Garcia"
},
"displayName": "Lisa Garcia",
"preferredLanguage": "en",
"active": true,
"emails": [
{
"value": "lisa.garcia@company.com",
"type": "work",
"primary": true
}
],
"roles": [
{
"value": "PROFILE:t.V:Sales_Modeler",
"display": "Sales_Modeler",
"primary": true
} ],
"urn:sap:params:scim:schemas:extension:sac:2.0:user-custom-parameters": {
"idpUserId": "lisa.garcia@company.com"
}
}
The response body returns the ID of the user created, which is the user UUID (universally unique identifier).
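To send such a payload programmatically, a minimal Python sketch (reusing the access token and CSRF token obtained as described above, with placeholder values) could look as follows; the payload shown is an abbreviated version of the JSON bodies above.

import requests

TENANT_URL = "https://<tenant_url>"
headers = {
    "Authorization": "Bearer <access_token>",
    "x-csrf-token": "<csrf_token>",
}

# Abbreviated create-user payload; see the full JSON examples above for all properties.
payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "LGARCIA",
    "name": {"familyName": "Garcia", "givenName": "Lisa", "formatted": "Lisa Garcia"},
    "emails": [{"value": "lisa.garcia@company.com", "type": "work", "primary": True}],
}

response = requests.post(f"{TENANT_URL}/api/v1/scim2/Users/", json=payload, headers=headers)
response.raise_for_status()
print(response.json()["id"])  # the UUID of the newly created user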
Note
When creating or modifying a user via the API, you can also assign the user to one or more roles - either
global or scoped, provided that the roles already exist in the tenant:
• Before you can add one or more users to a scoped role, one space at least must be assigned to the
scoped role.
• When a user is added to a scoped role, the user is given access to all the spaces included in the scoped
role.
• All roles are prefixed with PROFILE. Custom and scoped roles have IDs in the following format:
PROFILE:<t.#>:<role_name>.
• To override all information related to a specific user, use a PUT request. The user properties are updated
with the properties you provide and all the properties that you do not provide are either left empty or set to
their default value.
• To update only some information related to a specific user, use a PATCH request. The user properties are
updated with the changes you provide and all properties that you do not provide remain unchanged.
You can use either the PUT (override) or PATCH (update) request with the /api/v1/scim2/Users/<user ID> endpoint and the following elements:
https://<tenant_url>/api/v1/scim2/Users/<user ID>
Note
If you are using SAML authentication, and you are using USER ID as your SAML mapping, you cannot
change the userName using this API. The userName you use in the request body must match the user
<ID>.
Note
When creating or modifying a user, you can add optional properties to the user.
{
  "schemas": [
    "urn:sap:params:scim:schemas:extension:sac:2.0:user-custom-parameters",
    "urn:ietf:params:scim:schemas:core:2.0:User"
  ],
  "id": "userID-00001",
  "meta": {
    "resourceType": "User",
    "location": "/api/v1/scim2/Users/userID-00001"
  },
  "userName": "LGARCIA",
  "name": {
    "familyName": "Garcia",
    "givenName": "Lisa",
    "formatted": "Lisa Garcia"
  },
  "displayName": "Lisa Garcia",
  "emails": [
    {
      "value": "lisa.garcia@company.com",
      "type": "work",
      "primary": true
    }
  ]
}
Note
When creating or modifying a user via the API, you can also assign the user to one or more roles - either
global or scoped, provided that the roles already exist in the tenant:
• Before you can add one or more users to a scoped role, one space at least must be assigned to the
scoped role.
• When a user is added to a scoped role, the user is given access to all the spaces included in the scoped
role.
• All roles are prefixed with PROFILE. Custom and scoped roles have IDs in the following format:
PROFILE:<t.#>:<role_name>.
Delete a User
To delete a user, use the DELETE request with the /api/v1/scim2/Users/<user ID> endpoint:
https://<tenant_url>/api/v1/scim2/Users/<user ID>
The user ID must be the UUID (universally unique identifier), which you can get by sending the GET request:
https://<tenant_url>/api/v1/scim2/Users
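A minimal Python sketch of this flow, with placeholder URL and tokens: look up the UUID via the list endpoint (as in the earlier sketch, assuming the standard SCIM list response), then issue the DELETE request with the CSRF token header.

import requests

TENANT_URL = "https://<tenant_url>"
headers = {
    "Authorization": "Bearer <access_token>",
    "x-csrf-token": "<csrf_token>",
}

# Find the UUID of the user to delete by filtering on the userName attribute.
resources = requests.get(
    f"{TENANT_URL}/api/v1/scim2/Users",
    params={"filter": 'userName eq "LGARCIA"'},
    headers=headers,
).json()["Resources"]
user_id = resources[0]["id"]

# Delete the user by UUID.
response = requests.delete(f"{TENANT_URL}/api/v1/scim2/Users/{user_id}", headers=headers)
response.raise_for_status()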
You can add optional parameters to the user when creating or modifying a user, in addition to the required
properties (userName, name, and emails).
Parameter Description
preferredLanguage Specifies the language in which to view the SAP Datasphere interface.
Allowed values:
Default value: en
Example
"preferredLanguage": "en",
dataAccessLanguage Specifies the default language in which to display text data in SAP Analytics Cloud.
Allowed values:
Default value: en
dateFormatting Specifies the date format. Allowed values:
• MMM d, yyyy
• MMM dd, yyyy
• yyyy.MM.dd
• dd.MM.yyyy
• MM.dd.yyyy
• yyyy/MM/dd
• dd/MM/yyyy
• MM/dd/yyyy
timeFormatting Specifies the time format. Allowed values: H:mm:ss, h:mm:ss a, h:mm:ss A
Note
• H:mm:ss corresponds to the 24-hour format. For example, 16:05:10.
• h:mm:ss a corresponds to the 12-hour format. For example, 4:05:10 p.m.
• h:mm:ss A corresponds to the 12-hour format. For example, 4:05:10 PM.
Example:
"urn:ietf:params:scim:schemas:extension:sap:user-custom-parameters:1.0": {
"dataAccessLanguage": "en",
"dateFormatting": "MMM d, yyyy",
"timeFormatting": "H:mm:ss",
"numberFormatting": "1,234.56",
"cleanUpNotificationsNumberOfDays": 0,
"systemNotificationsEmailOptIn": true,
"marketingEmailOptIn": false
},
Bulk Operations
To create, modify or delete users in bulk, use the POST request with the /api/v1/scim2/Users/ endpoint
and the following elements:
The following example shows how to create a user and assign it to a role as part of a bulk request (additional operations can be added to the Operations array in the same way):
{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
  "Operations": [
    {
      "method": "POST",
      "path": "/Users",
      "bulkId": "bulkId2",
      "data": {
        "schemas": [
          "urn:sap:params:scim:schemas:extension:sac:2.0:user-custom-parameters",
          "urn:ietf:params:scim:schemas:core:2.0:User"
        ],
        "userName": "JOWEN",
        "name": {
          "familyName": "Owen",
          "givenName": "Joe"
        },
        "displayName": "Joe Owen",
        "emails": [
          {
            "value": "joe.owen@company.com"
          }
        ],
        "roles": [
          {
            "value": "PROFILE:t.V:Sales_Modeler",
            "display": "Sales_Modeler",
            "primary": true
          }
        ]
      }
    }
  ]
}
The following example shows how to delete two users using their IDs:
{
"schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
"Operations": [
{
"method": "DELETE",
"path": "/Users/<userID_User1>"
},
{
"method": "DELETE",
"path": "/Users/<userID_User2>"
}
]
}
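A Python sketch of sending such a bulk request, under the same placeholder assumptions as the earlier snippets and using the endpoint exactly as documented above:

import requests

TENANT_URL = "https://<tenant_url>"
headers = {
    "Authorization": "Bearer <access_token>",
    "x-csrf-token": "<csrf_token>",
}

# Bulk request deleting two users by UUID, mirroring the JSON example above.
bulk_request = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
    "Operations": [
        {"method": "DELETE", "path": "/Users/<userID_User1>"},
        {"method": "DELETE", "path": "/Users/<userID_User2>"},
    ],
}

response = requests.post(f"{TENANT_URL}/api/v1/scim2/Users/", json=bulk_request, headers=headers)
response.raise_for_status()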
Using the GET request, you can obtain the following information about the SCIM API:
• /scim2/ServiceProviderConfig - Gets information about the identity provider being used with your
SAP Datasphere tenant.
• /scim2/Schemas - Gets information on the schemas used for user management.
• /scim2/ResourceTypes - Gets information on all available resource types.
• /scim2/ResourceTypes/<Type> - Gets information on a specific resource type.
See all the users, roles, and spaces in the tenant and how they relate to each other.
In (Security) (Authorization Overview), a user with the DW Administrator global role can see all the
users, roles, and spaces in the tenant and how they relate to each other. You can filter by user, role, or space to
see:
To display information related to one or more terms, enter one or more characters in the Search field and press Enter (or click Search).
As you type, the field will begin proposing objects and search strings. Click on a string to trigger a search on it.
For example, to display all roles that are assigned to the user Lucia, enter "Lucia" in the Search.
Filter by Criteria
You can filter the list by any of the categories listed in the Filter By area of the left panel: user (in User Name),
space (in Scope Name) and role (in Role Name).
You can select one or more values in each filter category in the Filter By section:
• Each value selected in a category acts as an OR condition. For example, to display all roles that are assigned
to the users Lucia and Ahmed, select Lucia and Ahmed in the User Name category.
• Values selected in separate categories act together as AND conditions. For example, to display all the scoped roles that enable Lucia to access the Sales Asia space, select Lucia in the User Name category and Sales Asia in the Scope Name category.
A privilege represents a task or an area in SAP Datasphere and can be assigned to a specific role. The actions
that can be performed in the area are determined by the permissions assigned to a privilege.
Overview
A role represents the main tasks that a user performs in SAP Datasphere. Each role has a set of privileges with
appropriate levels of permissions. The privileges represent areas of the application like the Space Management
or the Business Builder and the files or objects created in those areas.
The standard roles provide sets of privileges and permissions that are appropriate for that role. For example,
the DW Administrator role has all the Spaces permissions, while the DW Viewer role has none.
Global versus scoped privileges - Global privileges are privileges that are used at the tenant level and are
not space-related, and can therefore be included in a global role, typically a tenant administrator role. Scoped
privileges are privileges that are space-related and can therefore be included in a scoped role.
The following table lists the privileges and their permissions that can be included in a global role.
Caution
The permission Manage should be granted only to
tenant administrators.
Note
The permissions Read, Update and Delete are scoped
permissions and are described in the scoped privileges
and permissions table (see Scoped Privileges and Per-
missions [page 100]).
Space Files -------M Allows access to all objects inside a space, such as views
and tables.
Note
To perform actions on spaces, you need a combination
of permissions for the privilege Spaces and for other
privileges. See Roles and Privileges by App and Feature
[page 107].
Caution
The permission Manage should be granted only to ten-
ant administrators.
Note
The permissions Create, Read, Update and Delete are
scoped permissions and are described in the scoped
privileges and permissions table (see Scoped Privileges
and Permissions [page 100]).
Data Warehouse General -R------ Allows users to log into SAP Datasphere. Included in all
standard roles except for DW Consumer.
Data Warehouse Runtime -R--E--- • Read - Allows users of the View Analyzer to download
the generated SQL analyzer plan file.
See Exploring Views with View Analyzer
• Execute - not in use
Other Datasources ----E--- Some connection types require this privilege. For more in-
formation, see Permissions in the SAP Analytics Cloud Help.
See Managing Roles and Privileges [page 70] and View Au-
thorizations (Users, Roles and Spaces) [page 94]
Security Users .
Note
The permissions are included in the DW Adminis-
trator role. When you create a custom role based
on the DW Administrator role, the permissions are
automatically included and you cannot edit them.
Activity Log -R-D---- Allows access to the Activities page in the Security tool.
Lifecycle -R---MS- Allows importing content from the Content Network and importing and exporting content via the Transport tool.
Note
The permissions -R---MS- are included in the DW
Administrator role. When you create a custom role
based on the DW Administrator role, the permissions
are automatically included and you cannot edit them.
System Information -RU----- • Read: To access the About area in the System tool.
• Update: To access the Administration, Configuration
and About areas in the System tool.
Catalog Glossary CRUD---- • Create: Use with the Update privilege to create a glos-
sary.
• Read: Use with Catalog Glossary Object to:
• View the term details and glossary list.
• Create a category.
• Search for terms, favorites, and assets.
• Update: Edit a glossary.
• Delete: Delete a glossary.
Catalog KPI Object CRUD---M • Create: Use with Catalog KPI Template with Read per-
mission to create a KPI.
• Read: Use with Catalog KPI Template with Read permis-
sion to:
• View KPI details.
• Search for KPIs, favorites, and recent.
• Filter KPIs on linked terms.
• Update: Use with Catalog KPI Template with Read per-
mission to update a KPI.
• Delete: Delete a KPI.
• Manage:
• View published and unpublished KPIs.
• Publish or unpublish KPIs.
Catalog KPI Template -RU----- • Read: Use with Catalog KPI Object with Read permis-
sion to:
• View KPI details.
• Search for KPIs, favorites, and recent.
• Filter KPIs on linked terms.
• Update: Edit the KPI template.
Catalog Log -R------ • Read: View and search extraction logs for assets and
batch job details.
The following table lists the privileges and their permissions that can be included in a scoped role.
Note
Some permissions require others and may automatically set them. For example, setting the Delete
permission for the Data Warehouse Data Builder privilege automatically sets the Read permission as well.
Note
The permissions Create and
Manage are global permissions and
are described in the global privi-
leges and permissions table (see
Global Privileges and Permissions
[page 96]).
Note
The permission Manage is a global
permission and is described in the
global privileges and permissions
table (see Global Privileges and
Permissions [page 96]).
Data Warehouse Data Builder CRUD--S- Allows access to all objects in the Data
Builder app.
Data Warehouse Remote Connection CRUD---- Allows access to remote and run-time
objects:
Note
The following feature needs an ad-
ditional permission:
Select a location ID -
Connection.Read
Data Warehouse Data Integration -RU-E--- Allows access to the Data Integration
Monitor app:
Note
In addition to these permissions,
the following Data Integration
Monitor actions require the Data
Warehouse Data Builder (Read)
privilege:
Data Warehouse Data Access Control CRUD---- Allows access to data access controls in
the Data Builder app:
Data Warehouse Business Builder -R------ Allows access to the Business Builder
app.
Data Warehouse Business Entity CRUD---- Allows access to business objects (di-
mensions and facts) defined in the
Business Builder.
Data Warehouse Fact Model CRUD---- Allows access to fact models defined in
the Business Builder. Fact models are
shaped like consumption models but
offer re-useability in other consumption
models.
M (Manage):
Note
This privilege is displayed and avail-
able for selection only in a scoped
role and is selected by default in
the predefined scoped role DW
Scoped Space Administrator.
Note
Custom roles cannot be assigned
this privilege.
Permissions
The following table displays the available permissions and their definitions.
Create Permits creating new objects of this item type. Users need this permission to create spaces, views
or tables, upload data into a story, or upload other local files.
Update Permits editing and updating existing items. Compare this permission with the Maintain permis-
sion, which doesn't allow changes to the data structure. Note: some object types need the
Maintain permission to update data. See the Maintain entry.
Maintain Permits the maintenance of data values, for example adding records to a model, without allowing
changes to the actual data structure. Compare this permission with the Update permission, which
does allow changes to the data structure.
When granted on the Lifecycle privilege, permits importing and exporting objects.
Manage When granted on Spaces and Space Files, permits viewing all spaces and their content (including data), regardless of whether the user is assigned to the space or not.
To perform actions on spaces, you need the Manage permission in combination with other permis-
sions for Spaces and other privileges. See Roles and Privileges by App and Feature [page 107].
Caution
This permission should be granted only to tenant administrators.
Review the standard roles and the privileges needed to access apps, tools, and other features of SAP
Datasphere.
A user is granted a set of global privileges for the tenant via a global role. The global role can be:
• A standard global role that is delivered with SAP Datasphere (such as DW Administrator).
• A custom role that you create from a template (a standard global role or another custom role containing global privileges).
To assign a user to a global role, see Assign Users to a Role [page 81].
A user is granted a set of scoped privileges for one or more spaces via a scoped role. The scoped role inherits a role template, which can be:
• A standard scoped role template that is delivered with SAP Datasphere (such as DW Space Administrator).
• A custom role template that you create from another template (a standard scoped role or another custom role).
To assign a user to a scoped role, see Create a Scoped Role to Assign Privileges to Users in Spaces [page 75].
Note
To access an app, tool, or editor, a user must have a global or scoped role inheriting from a role template which
contains the listed privileges:
• Tag Hierarchies
• Monitoring
(Business Builder) Start page, Dimension editor, Fact editor, Fact model editor, Consumption model editor, Authorization scenario editor
See Modeling Data in the Business Builder
Each page or editor requires a separate permission:
• Start page: Data Warehouse Business Builder (-R------)
• Dimension editor: Data Warehouse Business Entity (CRUD----)
• Fact editor: Data Warehouse Business Entity (CRUD----)
• Fact model editor: Data Warehouse Fact Model (CRUD----)
• Consumption model editor: Data Warehouse Consumption Model (CRUD----)
• Authorization scenario editor: Data Warehouse Authorization Scenario (CRUD----)
Roles: DW Space Administrator, DW Modeler, DW Viewer (read-only access)
Note
The DW Viewer role includes Data Warehouse Consumption.Read, which allows these users to preview only data from fact models and consumption models.
(Data Builder) All pages and editors share a single permission: DW Space Administrator
Start Page • Data Warehouse Data Builder (CRUD--S-) DW Modeler
Table editor The following features need additional permissions (which DW Viewer (read-only ac-
are included in the DW Modeler role): cess)
Graphical view editor
Note
The DW Modeler role includes Data Warehouse Data
Access Control.Read, which allows them to apply an
existing data access control to a view.
(Data Integration Monitor) Data Warehouse Data Integration (-RU-E---) DW Space Administrator
See Managing and Monitor- DW Integrator
ing Data Integration
Note
DW Modeler (manual tasks
Data Warehouse Data Integration.Update allows you to
only)
do only manual integration tasks. The DW Integrator
role includes Data Warehouse Data Integration.Execute, DW Viewer (read-only ac-
which also allows scheduling automated integration cess)
tasks.
Administration Tools
To access an app, tool, or editor, a user must have a global or scoped role inheriting from a role template which
contains the listed privileges:
Users with different roles have different levels of access to the Space Management tool:
Various privileges and permissions are required to see and edit different parts of the Space Management tool:
Note
The global privilege Spaces (-------M) enables users to perform the following actions in all the spaces of
the tenant: read, update and delete.
Note
A DW Administrator cannot see the HDI
Containers area in a space.
Note
A DW Administrator cannot see the Time
Data area in a space.
Modify General Settings (except for Storage Assignment)
See Create a Space [page 130]
Privileges: Global privilege Spaces (-------M) or scoped privilege Spaces (-RU-----)
Roles: DW Administrator and DW Space Administrator
Modify Users
Privileges: Global privileges Spaces (-------M) and Role (-------M)
Roles: DW Administrator and DW Space Administrator
Modify HDI Containers
See Prepare Your HDI Project for Exchanging Data with Your Space
Privileges: Scoped privileges Spaces (--U-----) and Remote Connection (--U-----)
Roles: DW Space Administrator
Note
A DW Administrator cannot access the HDI Containers area in a space.
Modify Auditing
See Enable Audit Logging
Privileges: Global privilege Spaces (-------M) or scoped privilege Spaces (-RU-----)
Roles: DW Administrator and DW Space Administrator
Delete a Space
See Delete Your Space
Privileges: Global privileges Spaces (-------M) and User (-------M), or scoped privileges Spaces (---D----) and Scoped Role User Assignment (-------M)
Roles: DW Administrator and DW Space Administrator
Note
A user with a space administrator role can delete only the spaces they're assigned to via a scoped role.
When creating a custom role for using or administering the catalog, you must set the permissions for the
privileges in certain ways so that you can complete various tasks. Review the following table of tasks to see
which permissions and privilege combinations you need.
To be able to access the Catalog app from the side navigation, all custom catalog roles need the Read
permission on Catalog Asset.
Note
All custom catalog roles need the SAP Datasphere read permission on Space Files to allow users to mark
assets, terms, and KPIs as their favorite.
Assets: Search for an asset and view the detailed information for it.
See Evaluating and Accessing Catalog Assets
• Catalog Asset: (-R------)
• Catalog Glossary Object: (-R------)
• Tag Hierarchy: (-R------)
Assets: Add a catalog description for the asset.
See Editing and Enriching Catalog Assets
• Catalog Asset: Read, Update
• Catalog Tag Hierarchy: (-R------)
Assets: Add a term, tag, or KPI relationship to the asset from the asset's detailed information page.
See Editing and Enriching Catalog Assets
• Catalog Asset: Read, Update
• Catalog Tag Hierarchy: (-R------)
• Catalog Glossary Object: (-R------)
See Finding and Accessing Data in the Catalog and Evaluating and Installing Marketplace Data Products.
• Data Warehouse Remote Connection: (CRUD----)
• Data Warehouse Data Integration: (-RU-----)
Users can consume data exposed by SAP Datasphere if they are assigned to a space via a scoped role and have
the Space Files.Read permission.
Consume data in SAP Analytics Cloud, Microsoft Excel, and other clients, tools, and apps
See Consuming Data Exposed by SAP Datasphere
Space Files (-R------)
Roles: All roles
Note
If a user does not need to access SAP Datasphere itself, and only wants to consume data exposed by it, they should be granted the DW Consumer role.
To use the command line interface (see Manage Spaces via the Command Line), a user must have the following
standard role or a custom role containing the listed privileges:
datasphere objects
Role: DW Modeler
Privileges:
• Data Builder (CRUD----)
• Data Warehouse Business Entity (CRUD----)
• Data Warehouse Fact Model (CRUD----)
• Data Warehouse Consumption Model (CRUD----)
• Data Warehouse Authorization Scenario (CRUD----)
For SAP Datasphere tenants that were created before version 2023.21, the roles and user assignment to
spaces have been converted so that users can continue to perform the same actions as before in their spaces.
The way a DW Administrator gives privileges to users to do certain actions in spaces has changed.
Before conversion: A DW Administrator assigned a role to a user and assigned the user as a member of a space. As a consequence:
• A user had the same one or more roles in all the spaces they were a member of.
• A DW Administrator assigned users space by space by going into each space page.
After conversion: A DW Administrator assigns a role to one or more users and one or more spaces within a new role: a scoped role. As a consequence:
• A user can have different roles in different spaces: for example, be a modeler in the spaces Sales Germany and Sales France and a viewer in the space Europe Sales.
• A DW Administrator can give a role to many users in many spaces, all in one place, in a scoped role. See Create a Scoped Role to Assign Privileges to Users in Spaces [page 75]. A DW Space Administrator can then manage users in their spaces and the changes are reflected in the scoped roles. See Control User Access to Your Space.
You can now use global roles for tenant-wide actions and scoped roles for space-related actions.
• DW Administrator, Catalog Administrator and Catalog User: these standard roles are considered global roles. They now include only privileges that are global, which means privileges that apply to the tenant and are not space-related. For example, the DW Administrator role no longer grants access to any of the modeling apps of SAP Datasphere (such as the Data Builder).
Users who previously had these roles are still assigned to them after conversion.
Users who previously had the DW Administrator role and were members of certain spaces are assigned to
the new DW Scoped Space Administrator role for those spaces they previously had access to.
The user who previously had the System Owner role and was member of certain spaces is assigned to the
new DW Scoped Space Administrator role for those spaces the user previously had access to.
• A single scoped role is created for each standard role (other than DW Administrator, Catalog Administrator, and Catalog User) and each custom role, and all the users who previously had that standard or custom role are assigned to the new scoped role, but only for those spaces they previously had access to.
Note
All the spaces of the tenant are included in each scoped role created, but not all users are assigned to
all spaces. See the example of scoped role below.
For each standard or custom role, two roles are available after the conversion: the initial standard or
custom role (which acts as a template for the scoped role) and the scoped role created.
Each scoped role includes privileges which are now considered as scoped privileges.
• Users who previously had the DW Space Administrator role are assigned to these 2 roles: the standard
role DW Space Administrator and the new scoped role DW Scoped Space Administrator. Users who
manage spaces primarily need scoped permissions to work with spaces, but they also need some global
permissions (such as Lifecycle when transporting content packages). To provide such users with the full
set of permissions they need, each space administrator is assigned to the scoped role DW Scoped Space
Administrator to receive the necessary scoped privileges, and they are also assigned directly to the DW
Space Administrator role in order to receive the additional global privileges.
Note
• Specific case - no role assigned to a user: Before conversion, a DW Administrator assigned a user
to certain spaces but did not assign a role to the user. As no role was assigned to the user, the
user-to-spaces assignment is not kept after conversion.
• Privileges and permissions are now either global or scoped. See Privileges and Permissions [page 95].
In this example, users assigned to a custom role called "Senior Modeler" were members of certain spaces before the conversion, as shown below.
The scoped roles that are automatically created during the conversion ensure that users can continue to
perform the same actions as before the conversion. However, we recommend that you do not use the
automatically created scoped roles and that you create your own scoped roles by logical groups as soon
as possible.
In this example, the following scoped roles have been automatically created during conversion:
There are 4 spaces: Sales US, Sales Europe, Finance US and Finance Europe, which can be logically organized
in one Sales group and one Finance group.
You should create a set of scoped roles for each logical group of spaces, add the relevant spaces and the
relevant users and assign the users to the spaces in the scoped roles. The users will have access to the spaces
with the appropriate privileges.
For more information about creating a scoped role, see Create a Scoped Role to Assign Privileges to Users in
Spaces [page 75].
Note
In addition to the standard workflows, you can also create scoped roles and assign scopes and users to
them via the command line (see Manage Scoped Roles via the Command Line).
The individual who purchases SAP Datasphere is automatically designated as the system owner. If you, as the
purchaser, are not the right person to administer the system, you can transfer the system owner role to the
appropriate person in your organization.
Prerequisites
You must be logged on as a user with the System Information Update privilege.
Note
Transferring the system owner role is not possible if you only have one license for SAP Datasphere.
Context
1. On the Users page of the Security area, select the user you want to assign the system owner role to.
2. Select (Assign as System Owner).
The Transfer System Owner Role dialog appears.
3. Under New Role, enter a new role for the previous system owner, or select to open a list of available
roles.
All data acquisition, preparation, and modeling in SAP Datasphere happens inside spaces. A space is a secure
area - space data cannot be accessed outside the space unless it is shared to another space or exposed for
consumption.
An administrator must create one or more spaces. They allocate disk and memory storage to the space, set its
priority, and can limit how much memory and how many threads its statements can consume.
If the administrator assigns one or more space administrators via a scoped role, they can then manage users,
create connections to source systems, secure data with data access controls, and manage other aspects of the
space (see Managing Your Space).
Create a space, allocate storage, and set the space priority and statement limits.
Context
Note
Only administrators can create spaces, allocate storage, and set the space priority and statement limits.
The remaining space properties can be managed by the space administrators that the administrator
assigns to the space via a scoped role.
Procedure
1. In the side navigation area, click (Space Management), and click Create.
2. In the Create Space dialog, enter the following properties, and then click Create:
Space Name: Enter the business name of the space. Can contain a maximum of 30 characters, and can contain spaces and special characters.
Space ID: Enter the technical name of the space. Can contain a maximum of 20 uppercase letters or numbers and must not contain spaces or special characters other than _ (underscore). Unless advised to do so, must not contain the prefix _SYS and should not contain the prefixes DWC_ or SAP_ (see Rules for Technical Names [page 138]). As the technical name will be displayed in the Open SQL Schema and in monitoring tools, including SAP internal tools, we recommend that you do not include sensitive business or personal data in the name.
Space ID: Enter the technical name of the space. Can contain a maximum of 20 uppercase letters or numbers and must not contain spaces or special characters other than _ (underscore). Unless advised to do so, must not contain the prefix _SYS and should not contain the prefixes DWC_ or SAP_ (see Rules for Technical Names [page 138]).
Space Name: Enter the business name of the space. Can contain a maximum of 30 characters, and can contain spaces and special characters.
Space Status: [read-only] Displays the status of the space. Newly-created spaces are always active.
Space Type: [read-only] Displays the type of the space. You can only create spaces of type SAP Datasphere.
Created On: [read-only] Displays the date and time when the space was created.
Deployment Status: [read-only] Displays the deployment status of the space. Newly-created spaces are deployed, but when you make changes, you need to save and re-deploy them before they are available to space users.
Deployed On: [read-only] Displays the date and time when the space was last deployed.
Description: [optional] Enter a description for the space. Can contain a maximum of 4 000 characters.
Note
Once the space is created, users with space administrator privileges can use the Translation area to
choose the language from which business textual information will be translated. For more information,
see Translating Metadata for SAP Analytics Cloud.
4. [optional] Use the Space Storage properties to allocate disk and memory storage to the space and to
choose whether it will have access to the SAP HANA data lake.
Expose for Consumption by Default: Choose the default setting for the Expose for Consumption property for views created in this space.
• Data Access/Database Users - Use the list in the Database Users section to create users who can
connect external tools and read from and write to the space. See Create a Database User.
• Data Access/HDI Containers - Use the list in the HDI Containers section to associate HDI containers to
the space. See Prepare Your HDI Project for Exchanging Data with Your Space.
Note
A user who has only the DW Administrator role cannot see the HDI Containers area.
• Time Data/Time Tables and Dimensions - Click the button in the Time Tables and Dimensions section to
generate time data in the space. See Create Time Data and Dimensions.
Note
A user who has only the DW Administrator role cannot see the Time Tables and Dimensions area.
• Auditing/Space Audit Settings - Use the properties in the Space Audit Settings section to enable audit
logging for the space. See Enable Audit Logging.
• Add your space to an existing scoped role (see Add Spaces to a Scoped Role [page 79]).
• Create a scoped role and add your space and at least one user to the scoped role (see Create a Scoped
Role [page 78]).
For more information, see Create a Scoped Role to Assign Privileges to Users in Spaces [page 75].
All users assigned to the space via the scoped roles are automatically displayed in the Users area of the
space page. In this area, you can add or remove users to/from scoped roles for your space (see Control
User Access to Your Space). Either an administrator or a user with space administrator privileges can do
so.
8. [optional] The properties in the Workload Management section are set to their default values. To
change them, go to the side navigation area and click (System) (Configuration) Workload
Management .
For more information, see Set Priorities and Statement Limits for Spaces [page 134].
Use the Space Storage properties to allocate disk and memory storage to the space and to choose whether it
will have access to the SAP HANA data lake.
Context
SAP Datasphere supports data tiering using the features of SAP HANA Cloud:
• Memory Storage (hot data) - Keep your most recent, frequently-accessed, and mission-critical data loaded
constantly in memory to maximize real-time processing and analytics speeds.
When you persist a view, the persisted data is stored in memory (see Persist View Data).
• Disk (warm data) - Store master data and less recent transactional data on disk to reduce storage costs.
When you load data to a local table or replicate data to a remote table in SAP Datasphere, the data is
stored on disk by default, but you can load it in memory by activating the Store Table Data in Memory
switch (see Accelerate Table Data Access with In-Memory Storage).
• Data Lake (cold data) - Store historical data that is infrequently accessed in the data lake. With its low
cost and high scalability, the data lake is also suitable for storing vast quantities of raw structured and
unstructured data, including IoT data. For more information, see Integrating Data to and From SAP HANA
Cloud Data Lake.
You can allocate specific amounts of memory and disk storage to a space or disable the Enable Space Quota
option, and allow the space to consume all the storage it needs, up to the total amount available in your tenant.
Procedure
1. In the side navigation area, click (Space Management), locate your space tile, and click Edit to open it.
2. Use the Space Storage properties to allocate disk and memory storage to the space and to choose whether
it will have access to the SAP HANA data lake.
Enable Space Quota: Disable this option to allow the space to consume any amount of disk and memory storage up to the total amounts available in your tenant. If this option was disabled and then subsequently re-enabled, the Disk and Memory properties are initialized to the minimum values required by the current contents of the space. Default: Enabled
Disk (GB): Enter the amount of disk storage allocated in GB. You can use the buttons to change the amount by whole GBs or enter fractional values in increments of 100 MB by hand. Default: 2 GB
Memory (GB): Enter the amount of memory storage allocated in GB. This value cannot exceed the amount of disk storage allocated. You can use the buttons to change the amount by whole GBs or enter fractional values in increments of 100 MB by hand. Default: 1 GB
Note
The memory allocated is used to store data and is not related to processing memory. For more information on limiting processing memory in a space, see Set Priorities and Statement Limits for Spaces [page 134].
Use This Space to Access the Data Lake: Enable access to the SAP HANA Cloud data lake. Enabling this option is only possible if no other space already has access to the data lake. Default: Disabled
Note
If a space exceeds its allocations of memory or disk storage, it will be locked until a user of the space
deletes the excess data or an administrator assigns additional storage. See Lock or Unlock Your Space.
3. Click Save to save your changes to the space, or Deploy to save and immediately make the changes
available to users assigned to the space.
Results
To view the total storage available and the amount assigned to and used by all spaces, see Monitoring SAP
Datasphere [page 232].
Procedure
1. In the side navigation area, click (System) (Configuration) Workload Management , then
click on the row of the space for which you want to edit the properties.
Note
You can search for a space based on its ID by entering one or more characters in the Search field. Only
the spaces whose space ID includes the entered characters are displayed in the table.
• Default. The default configuration provides generous resource limits, while preventing any single space
from overloading the system. The default configuration is applied by default to new spaces.
These statement limit and admission control parameters are taken into account in the default
configuration and cannot be changed:
• Custom. These statement limit and admission control parameters are taken into account in the custom configuration. You can specify only the statement limit values, which set the maximum total thread and memory limits that statements running concurrently in the space can consume:
Caution
Be aware that changing the statement limits may cause performance issues.
Statement Limits
TOTAL STATEMENT THREAD LIMIT: In the Total Statement Thread Limit area, enter the maximum number (or percentage) of threads that statements running concurrently in the space can consume. You can enter a percentage between 1% and 100% (or the equivalent number) of the total number of threads available in your tenant.
Note
100% represents the maximum of 80% of CPU resources reserved for workload generated by spaces, user group users, and agent users. The remaining 20% of CPU resources are reserved to ensure that the system can respond under heavy load.
Setting this limit prevents the space from consuming too many threads, and can help with balancing resource consumption between competing spaces.
Caution
Be aware that setting this limit too low may impact statement performance, while excessively high values may impact the performance of statements in other spaces.
Default: 70%
TOTAL STATEMENT MEMORY LIMIT: In the Total Statement Memory Limit area, enter the maximum number (or percentage) of GBs of memory that statements running concurrently in the space can consume. You can enter any value or percentage between 0 (no limit) and the total amount of memory available in your tenant.
Setting this limit prevents the space from consuming all available memory, and can help with balancing resource consumption between competing spaces.
Caution
Be aware that setting this limit too low may cause out-of-memory issues, while excessively high values or 0 may allow the space to consume all available system memory.
Default: 80%
Note
Admission control is designed to avoid overloading the system under peak load by denying any
further SQL requests when the load on the system is equal to or exceeds a given threshold.
A statement which exceeds a reject threshold is rejected with the SQL error 616: 'rejected by workload class configuration'. A statement which exceeds a queue threshold is queued for up to 10 minutes; after this time, the statement is rejected with the SQL error 616: 'queue wait timeout exceeded'. For more information, see Properties for Workload Classes and Mappings in the SAP HANA Cloud, SAP HANA Database Administration Guide.
Note
If too many statements are rejected, we recommend that you perform these two actions:
• Decrease the total statement thread limit for the spaces which consume a large amount of CPU time.
First, identify the spaces which consume a large amount of CPU time: as a database analysis user, analyze the M_WORKLOAD_CLASS_STATISTICS view in the Database Explorer, like in this example:
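A minimal sketch of such a query is shown below. The view and the TOTAL_STATEMENT_CPU_TIME column are named in this guide; the WORKLOAD_CLASS_NAME column and the exact mapping of workload classes to spaces are assumptions that you may need to adapt on your tenant.
-- List workload classes (one per space) by total CPU time consumed, highest first.
SELECT WORKLOAD_CLASS_NAME,
       SUM(TOTAL_STATEMENT_CPU_TIME) AS TOTAL_STATEMENT_CPU_TIME
  FROM SYS.M_WORKLOAD_CLASS_STATISTICS
 GROUP BY WORKLOAD_CLASS_NAME
 ORDER BY TOTAL_STATEMENT_CPU_TIME DESC;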
Using this sample code, all the spaces can be listed in descending order of TOTAL_STATEMENT_CPU_TIME, which enables you to identify the spaces that consumed the most CPU time. As a second step, go to the Workload Configuration area of each identified space, select the Custom configuration, and decrease the total statement thread limit. Some statements will take longer to run but will not be rejected.
• Avoid running tasks which consume a high CPU load at the same time. You can adjust the task schedules in the Data Integration Monitor. See Scheduling Data Integration Tasks.
4. Click Save. The changes are reflected in read-only mode on the space details page.
Note
You can use the SAP Datasphere command line interface, datasphere, to set space priorities and
statement limits for spaces. See Manage Space Priorities and Statement Limits via the Command Line.
Rules and restrictions apply to the technical names of objects that you create in SAP Datasphere. The technical
name by default is synchronized with the business name by using rules to automatically replace invalid
characters.
When specifying the technical name of an object, bear in mind the following rules and restrictions:
Space: The space ID can only contain uppercase letters, numbers, and underscores (_). Reserved keywords, such as SYS, CREATE, or SYSTEM, must not be used. Unless advised to do so, the ID must not contain the prefix _SYS and should not contain the prefixes DWC_ or SAP_. Also, the keywords that are reserved for the SAP HANA database cannot be used in a space ID. See Reserved Words in the SAP HANA SQL Reference Guide for SAP HANA Platform. The maximum length is 20 characters.
Elastic Compute Node: The elastic compute node technical name can only contain lowercase letters (a-z) and numbers (0-9). It must contain the prefix ds. The minimum length is 3 and the maximum length is 9 characters.
SAP BW bridge instance; remote table generated during the import of analysis authorizations from an SAP BW or SAP BW/4HANA system: The technical name can contain any characters except for the asterisk (*), colon (:), and hash sign (#). Also, tab, carriage return, and newline must not be used, and space must not be used at the start of the name. The maximum length is 50 characters.
Object created in the Data Builder (for example, a table, view, or E/R model): The technical name can only contain alphanumeric characters and underscores (_). The maximum length is 50 characters.
Element in the Data Builder (for example, a column, or a join, projection, or aggregation node): The technical name can only contain alphanumeric characters and underscores (_). The maximum length is 30 characters.
Object created in the Business Builder (for example, a fact, dimension, fact model, consumption model, or authorization scenario): The technical name can only contain alphanumeric characters and underscores (_). The maximum length is 30 characters.
Association: The technical name can only contain alphanumeric characters, underscores (_), and dots (.). The maximum length is 10 characters.
Input parameter: The technical name can only contain uppercase letters, numbers, and underscores (_). The maximum length is 30 characters.
Database analysis user: The user name suffix can only contain uppercase letters, numbers, and underscores (_). The maximum length is 41 characters. This suffix is added to the default prefix DWCDBUSER# to create your full user name. Note that you cannot change the prefix as it is a reserved prefix. Maximum suffix length: 31 (40 minus prefix).
Database user group user: The user name suffix can only contain uppercase letters, numbers, and underscores (_). The maximum length is 41 characters. This suffix is added to the default prefix DWCDBGROUP# to create your full user name. Note that you cannot change the prefix as it is a reserved prefix. Maximum suffix length: 30 (40 minus prefix).
Database user (Open SQL schema): The user name suffix can only contain uppercase letters, numbers, and underscores (_). The maximum length is 41 characters. This suffix is added to the default prefix <space ID># to create your full user name. Note that you cannot change the prefix. Maximum suffix length: 40 minus the space name (or 41 minus the prefix).
Connection: The technical name can only contain alphanumeric characters and underscores (_). Underscore (_) must not be used at the start or end of the name. The maximum length is 40 characters.
Data access control: The technical name can only contain alphanumeric characters and underscores (_). The maximum length is 50 characters.
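As an illustration of the space ID rules above, the following shell sketch pre-checks a candidate ID before you enter it in the Create Space dialog. It is not an SAP tool, and it does not check the reserved SAP HANA keywords mentioned above.
# Sketch: pre-check a candidate space ID against the documented rules
# (uppercase letters, digits, underscores only; maximum 20 characters;
# no _SYS prefix; DWC_ and SAP_ prefixes discouraged).
check_space_id() {
  local id="$1"
  [[ "$id" =~ ^[A-Z0-9_]{1,20}$ ]] || { echo "invalid: only A-Z, 0-9 and _ allowed, max 20 characters"; return 1; }
  [[ "$id" == _SYS* ]] && { echo "invalid: must not start with _SYS"; return 1; }
  [[ "$id" == DWC_* || "$id" == SAP_* ]] && echo "warning: the prefixes DWC_ and SAP_ should be avoided"
  echo "ok: $id"
}
check_space_id SALES_EUROPE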
The technical name by default is synchronized with the business name. While entering the business name,
invalid characters are replaced in the technical name as follows:
Rule Example
You can use the SAP Datasphere command line interface, datasphere, to create, read, update, and delete
spaces. You can set space properties, assign (or remove) users, create database users, create or update
objects (tables, views, and data access controls), and associate HDI containers to a space.
To use datasphere to create spaces, you must have an SAP Datasphere user with the DW Administrator role
or equivalent permissions (see Roles and Privileges by App and Feature [page 107]).
For more information, see Manage Spaces via the Command Line.
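For illustration only, a possible command sequence is sketched below. The subcommand and option names are assumptions based on typical datasphere CLI usage and may differ in your version; check datasphere --help and Manage Spaces via the Command Line for the exact syntax.
# Log in to the tenant (authentication options depend on your CLI configuration).
datasphere login
# Read the definition of an existing space (option names are assumptions).
datasphere spaces read --space SALES_EUROPE
# Create or update a space from a local definition file (option name is an assumption).
datasphere spaces create --file-path sales_europe.json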
To recover the disk storage used by the data in deleted spaces, you must delete those spaces from the Recycle Bin area.
Once a space has been deleted and moved to the recycle bin (see Delete Your Space), you can permanently
delete it from the database.
You need to perform some preparatory steps to be able to create and use connections in SAP Datasphere. The
steps depend on the source you want to connect to and on the features you want to use with the connection.
The following overview lists the most common prerequisites per connection type and points to further
information about what needs to be prepared to connect and use a connection.
For each connection type, the overview indicates whether the following prerequisites apply: a Data Provisioning Agent for remote tables, the installation of a third-party JDBC library for remote tables, Cloud Connector for on-premise sources used with data flows and replication flows, a third-party driver upload for data flows, the SAP Datasphere IP in the source allowlist, and the source IP in the SAP Datasphere IP allowlist.
Microsoft Azure Blob Storage Connections: Data Provisioning Agent: no; third-party JDBC library: no; Cloud Connector: no; third-party driver upload: no; SAP Datasphere IP in source allowlist: no; source IP in SAP Datasphere IP allowlist: no. Additional information and prerequisites: n/a.
Microsoft Azure Data Lake Store Gen1 Connections (Deprecated): Data Provisioning Agent: no; third-party JDBC library: no; Cloud Connector: no; third-party driver upload: no; SAP Datasphere IP in source allowlist: no; source IP in SAP Datasphere IP allowlist: no. Additional information and prerequisites: n/a.
Oracle Connections: Data Provisioning Agent: yes; third-party JDBC library: yes; Cloud Connector: yes (for data flows); third-party driver upload: yes; SAP Datasphere IP in source allowlist: no; source IP in SAP Datasphere IP allowlist: no. Additional information and prerequisites: Prepare Connectivity to Oracle [page 185].
SAP ABAP Connections: Data Provisioning Agent: yes (for on-premise); third-party JDBC library: no; Cloud Connector: yes (for on-premise: for data flows and replication flows); third-party driver upload: no; SAP Datasphere IP in source allowlist: no; source IP in SAP Datasphere IP allowlist: no. Additional information and prerequisites: Prepare Connectivity to SAP ABAP Systems [page 186].
SAP HANA Connections: Data Provisioning Agent: yes (for on-premise); third-party JDBC library: no; Cloud Connector: yes (for on-premise: for data flows and replication flows, or when using Cloud Connector for the remote tables feature); third-party driver upload: no; SAP Datasphere IP in source allowlist: no; source IP in SAP Datasphere IP allowlist: Cloud Connector IP (for on-premise when using Cloud Connector for the remote tables feature). Additional information and prerequisites: Prepare Connectivity to SAP HANA [page 195].
SAP HANA Cloud, Data Lake Files Connections: Data Provisioning Agent: no; third-party JDBC library: no; Cloud Connector: no; third-party driver upload: no; SAP Datasphere IP in source allowlist: no; source IP in SAP Datasphere IP allowlist: no. Additional information and prerequisites: none.
Note
For information about supported versions of sources that are connected via SAP HANA smart data
integration and its Data Provisioning Agent, see the SAP HANA smart data integration and all its patches
Product Availability Matrix (PAM) for SAP HANA SDI 2.0 .
For information about necessary JDBC libraries for connecting to sources from third-party vendors, see:
• SAP HANA smart data integration and all its patches Product Availability Matrix (PAM) for SAP HANA
SDI 2.0
• Register Adapters with SAP Datasphere [page 153]
Context
The Data Provisioning Agent is a lightweight component running outside the SAP Datasphere environment. It
hosts data provisioning adapters for connectivity to remote sources, enabling data federation and replication.
Through the Data Provisioning Agent, the preinstalled data provisioning adapters communicate with the
Data Provisioning Server for connectivity, metadata browsing, and data access. The Data Provisioning Agent
connects to SAP Datasphere using JDBC. It needs to be installed on a local host in your network and needs to
be configured for use with SAP Datasphere.
Note
A given Data Provisioning Agent can only be connected to one SAP Datasphere tenant (see SAP Note
2445282 ).
For an overview of connection types that require a Data Provisioning Agent setup, see Preparing Connectivity
for Connections [page 142].
See also the guide Best Practices and Sizing Guide for Smart Data Integration (When used in SAP
Datasphere) (published June 10, 2022) for information to consider when creating and using connections
that are based on SDI and Data Provisioning Agent.
Procedure
To prepare connectivity via Data Provisioning Agent, perform the following steps:
1. Download and install the latest Data Provisioning Agent version on a host in your local network.
Note
• We recommend that you always use the latest released version of the Data Provisioning Agent. For
information on supported and available versions for the Data Provisioning Agent, see the SAP
HANA Smart Data Integration Product Availability Matrix (PAM) .
• Make sure that all agents that you want to connect to SAP Datasphere have the same latest
version.
For more information, see Install the Data Provisioning Agent [page 149].
2. Add the external IPv4 address of the server on which your Data Provisioning Agent is running to the IP
allowlist in SAP Datasphere. When using a proxy, the proxy's address needs to be included in the IP allowlist as
well.
Note
For security reasons, all external connections to your SAP Datasphere instance are blocked by default.
By adding external IPv4 addresses or address ranges to the allowlist you can manage external client
connections.
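If you are unsure which external IPv4 address the agent host (or its proxy) presents to the internet, one way to check from that host is to query a public IP echo service. The service used below is only an example and assumes outbound HTTPS access is allowed; your network team's records are the authoritative source.
# Run on the Data Provisioning Agent host (or on the proxy host if traffic is proxied).
# Prints the external (public) IPv4 address as seen by the echo service.
curl https://ifconfig.me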
3. Connect the Data Provisioning Agent to the SAP HANA database of SAP Datasphere. This includes configuring the agent and setting the user credentials in the agent.
For more information, see Connect and Configure the Data Provisioning Agent [page 150].
4. Register the adapters with SAP Datasphere.
Note
For third-party adapters, you need to download and install any necessary JDBC libraries before
registering the adapters.
For more information, see Register Adapters with SAP Datasphere [page 153].
The registered adapters are available for creating connections to the supported remote sources and enabling
these connections for creating views and accessing or replicating data via remote tables.
Download the latest Data Provisioning Agent 2.0 version from SAP Software Download Center and install it
as a standalone installation on a Windows or Linux machine. If you have already installed an agent, check if
you need to update to the latest version. If you have more than one agent that you want to connect to SAP
Datasphere, make sure to have the same latest version for all agents.
Context
Procedure
You can install the agent on any host system that has access to the sources you want to access, meets
the minimum system requirements, and has any middleware required for source access installed. The
agent should be installed on a host that you have full control over to view logs and restart, if necessary.
Note
• We recommend that you always use the latest released version of the Data Provisioning Agent.
• Make sure that all agents that you want to connect to SAP Datasphere have the same latest
version.
• Select your operating system before downloading the agent.
For more information, see Install from the Command Line in the SAP HANA Smart Data Integration and
SAP HANA Smart Data Quality documentation.
Note
If you have upgraded your Data Provisioning Agent to version 2.5.1 and want to create an Amazon
Redshift connection, apply SAP note 2985825 .
Related Information
Connect the Data Provisioning Agent to the SAP HANA database of SAP Datasphere. This includes configuring
the agent and setting the user credentials in the agent.
Procedure
Note
d. Select Create.
The Agent Settings dialog opens and provides you with information required to configure the Data
Provisioning Agent on your local host:
• Agent name
• HANA server (host name)
• HANA port
• HANA user name for agent messaging
• HANA user password for agent messaging
Either keep the Agent Settings dialog open, or note down the information before closing the dialog.
2. At the command line, connect the agent to SAP HANA using JDBC. Perform the following steps:
a. Navigate to <DPAgent_root>/bin/. <DPAgent_root> is the Data Provisioning Agent installation
root location. By default, on Windows, this is C:\usr\sap\dataprovagent, and on Linux it
is /usr/sap/dataprovagent.
b. Start the agent using the following command:
On Windows: dpagent_servicedaemon_start.bat
c. Start the command-line agent configuration tool using the following command:
On Linux:
<DPAgent_root>/bin/agentcli.sh --configAgent
On Windows:
<DPAgent_root>/bin/agentcli.bat --configAgent
Tip
h. Enter the host name (HANA server) and port number (HANA port) for the SAP Datasphere instance.
For example:
• Host name: <instance_name>.hanacloud.ondemand.com
• Port number: 443
i. If HTTPS traffic from your agent host is routed through a proxy, enter true and specify any required
proxy information as prompted.
1. Enter true to specify that the proxy is an HTTP proxy.
2. Enter the proxy host and port.
3. If you use proxy authentication, enter true and provide a proxy user name and password.
j. Enter the credentials for the HANA user for agent messaging.
The HANA user for agent messaging is used only for messaging between the agent and SAP
Datasphere.
k. Confirm that you want to save the connection settings you have made by entering true.
Note
On Linux:
<DPAgent_root>/bin/agentcli.sh --configAgent
On Windows:
<DPAgent_root>/bin/agentcli.bat --configAgent
1. To stop the agent, choose Start or Stop Agent, and then choose Stop Agent.
2. Choose Start Agent to restart the agent.
3. Choose Agent Status to check the connection status. If the connection succeeded, you should see
Agent connected to HANA: Yes.
Note
For agent version 2.7.4 and higher, if in the agent status the message No connection established
yet is shown, this can be ignored.
For more information about the agent/SAP HANA connection status in agent version 2.7.4 and
higher, see SAP Note 3487646 .
If the tile of the registered Data Provisioning Agent doesn’t display the updated connection status, select
Refresh Agents.
Related Information
Troubleshooting the Data Provisioning Agent (SAP HANA Smart Data Integration) [page 213]
After configuring the Data Provisioning Agent, in SAP Datasphere, register the Data Provisioning adapters that
are needed to connect to on-premise sources.
Prerequisites
For third-party adapters, ensure that you have downloaded and installed any necessary JDBC libraries. Place
the files in the <DPAgent_root>/lib folder before registering the adapters with SAP Datasphere. For
connection types Amazon Redshift and Generic JDBC, place the file in the <DPAgent_root>/camel/lib
folder.
For information about the proper JDBC library for your source, see the SAP HANA smart data integration and
all its patches Product Availability Matrix (PAM) for SAP HANA SDI 2.0 . Search for the library on the internet
and download it from an appropriate web page.
Procedure
Note
Next Steps
To use new functionality of an already registered adapter or to update the adapter in case of issues that have
been fixed in a new agent version, you can refresh the adapter by clicking the (menu) button and then
choosing Refresh.
If you want to stream ABAP tables to load large amounts of data without running into memory issues, the following requirements must be met.
• You need to create an RFC destination in the ABAP source system. With the RFC destination you register
the Data Provisioning agent as server program in the source system.
Using transaction SM59, you create a TCP/IP connection with a user-defined name. The connection should
be created with “Registered Server Program” as “Activation Type”. Specify “IM_HANA_ABAPADAPTER_*”
as a filter for the “Program ID” field, or leave it empty.
• Successful registration on an SAP Gateway requires that suitable security privileges are configured. For
example:
• Set up an Access Control List (ACL) that controls which host can connect to the gateway. That file should contain something similar to the following syntax: <permit> <ip-address[/mask]> [tracelevel] [# comment]. <ip-address> here is the IP of the server on which the Data Provisioning Agent has been installed (see the example after this list).
For more information, see the Gateway documentation in the SAP help for your source system version, for example Configuring Network-Based Access Control Lists (ACL) in the SAP NetWeaver 7.5 documentation.
• You may also want to configure a reginfo file to control permissions to register external programs.
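Following the syntax given above, a gateway ACL entry for the Data Provisioning Agent host could look like the line below; the IP address and comment are placeholders for the address of your agent server.
permit 10.20.30.40 # Data Provisioning Agent host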
Connections to on-premise sources used for data flows, replication flows, and other use cases require Cloud
Connector to act as link between SAP Datasphere and the source. Before creating the connection, the Cloud
Connector requires an appropriate setup.
Context
Cloud Connector serves as a link between SAP Datasphere and your on-premise sources and is required for
connections that you want to use for:
• Data flows
• Replication flows
• Model import from:
• SAP BW/4HANA Model Transfer connections (Cloud Connector is required for the live data connection
of type tunnel that you need to create the model import connection)
• SAP S/4HANA On-Premise connections (Cloud Connector is required for the live data connection of
type tunnel that you need to search for the entities in the SAP S/4HANA system)
• Remote tables (only for SAP HANA on-premise via SAP HANA Smart Data Access)
For an overview of connection types that require a Cloud Connector setup to be able to use any of these
features, see Preparing Connectivity for Connections [page 142].
Procedure
The Cloud Connector instance or instances are then available for creating connections and enabling these connections for the supported features.
Related Links
Frequently Asked Questions (about the Cloud Connector) in the SAP BTP Connectivity documentation
Configure Cloud Connector before connecting to on-premise sources and using them in various use cases. In
the Cloud Connector administration, connect the SAP Datasphere subaccount to your Cloud Connector, add
a mapping to each relevant source system in your network, and specify accessible resources for each source
system.
Prerequisites
Before configuring the Cloud Connector, the following prerequisites must be fulfilled:
Note
If you have an account but cannot see the account information here, enter the SAP BTP user ID. This ID
is typically the email address you used to create your SAP BTP account. After you have entered the ID,
you can see the Account Information for SAP Datasphere.
For more information about the supported use cases depending on the connection type, see Preparing Cloud
Connector Connectivity [page 155].
Procedure
<hostname> refers to the machine on which the Cloud Connector is installed. If installed on your machine,
you can simply enter localhost.
2. To connect the SAP Datasphere subaccount to your Cloud Connector, perform the following steps:
a. In the side navigation area of the Cloud Connector Administration, click Connector to open the
Connector page and click Add Subaccount to open the Add Subaccount dialog.
b. Enter or select the following information to add the SAP Datasphere subaccount to the Cloud
Connector.
Note
You can find the subaccount, region, and subaccount user information in SAP Datasphere
under System Administration Data Source Configuration SAP BTP Core Account Account
Information .
Property Description
Note
• Using location IDs, you can connect multiple Cloud Connector instances to your subaccount. If you don't specify any value, the default is used. For more information, see Managing Subaccounts in the SAP BTP Connectivity documentation.
• Each Cloud Connector instance must use a different location, and an error will appear if you choose a location that has already been used.
• We recommend that you leave the Location ID empty if you don't plan to set up multiple Cloud Connectors in your system landscape.
c. Click Save.
In the Subaccount Dashboard section of the Connector page, you can see all subaccounts added to the
Cloud Connector at a glance. After you have added your subaccount, you can check the status to verify that
the Cloud Connector is connected to the subaccount.
3. To allow SAP Datasphere to access systems (on-premise) in your network, you must specify the systems
and the accessible resources in the Cloud Connector (URL paths or function module names depending on
the used protocol). Perform the following steps for each system that you want to be made available by the
Cloud Connector:
a. In the side navigation area, under your subaccount menu, click Cloud To On-Premise and then
(Add) in the Mapping Virtual To Internal System section of the Access Control tab to open the Add
System Mapping dialog.
Note
The side navigation area shows the display name of your subaccount. If the area shows
another subaccount, select your subaccount from the Subaccount field of the Cloud Connector
Administration.
b. Add your system mapping information to configure access control and save your configuration.
Confluent - Confluent Platform on-premise only (replication flows):
• For the Kafka broker: TCP
• For the Schema Registry: HTTPS
For more information, see Configure Access Control (HTTP) and Configure Access Control (RFC) in the
SAP BTP Connectivity documentation.
Note
When adding the system mapping information, you enter internal and virtual system information.
The internal host and port specify the actual host and port under which the backend system can
be reached within the intranet. It must be an existing network address that can be resolved on the
intranet and has network visibility for the Cloud Connector. The Cloud Connector tries to forward
the request to the network address specified by the internal host and port, so this address needs
to be real. The virtual host name and port represent the fully qualified domain name of the related
system in the cloud.
We recommend using a virtual (cloud-side) name that is different from the internal name.
c. To grant access only to the resources needed by SAP Datasphere, select the system host you just added from the Mapping Virtual To Internal System list, and for each resource that you want to allow to be invoked on that host, click (Add) in the Resources Of section to open the Add Resource dialog.
d. Depending on the connection type, protocol, and use case, add the required resources:
SAP ABAP, SAP S/4HANA On-Premise
Function Name (name of the function module for RFC)
For accessing data using CDS view extraction:
• DHAMB_ – Prefix
• DHAPE_ – Prefix
• RFC_FUNCTION_SEARCH
• LTAMB_ – Prefix
• LTAPE_ – Prefix
• RFC_FUNCTION_SEARCH
SAP BW, SAP ECC
Function Name (name of the function module for RFC)
For accessing data using ODP connectivity (for legacy systems that do not have the ABAP Pipeline Engine extension or DMIS Addon installed):
• /SAPDS/ – Prefix
• RFC_FUNCTION_SEARCH
• RODPS_REPL_ – Prefix
SAP Datasphere, SAP BW bridge (connectivity for ODP source systems in SAP BW bridge)
Function Name (name of the function module for RFC)
See Add Resources to Source System.
For more information, see Configure Access Control (HTTP) and Configure Access Control (RFC) in the
SAP BTP Connectivity documentation.
4. [optional] To enable secure network communication (SNC) for data flows, configure SNC in the Cloud
Connector.
For more information, see Initial Configuration (RFC) in the SAP BTP Connectivity documentation.
Next Steps
1. If you have defined a location ID in the Cloud Connector configuration and want to use it when creating
connections, you need to add the location ID in (System) (Administration) Data Source
Configuration .
For more information, see Set Up Cloud Connector in SAP Datasphere [page 161].
2. If you want to create SAP BW/4HANA Model Transfer connections or SAP S/4HANA On-Premise
connections for model import, you need to switch on Allow live data to securely leave my network in
(System) (Administration) Data Source Configuration
For more information, see Set Up Cloud Connector in SAP Datasphere [page 161].
Related Information
For answers to the most common questions about the Cloud Connector, see Frequently Asked Questions in the
SAP BTP Connectivity documentation.
Receive SAP Datasphere subaccount information required for Cloud Connector configuration and complete
Cloud Connector setup for creating SAP BW/4HANA Model Transfer connections and for using multiple Cloud
Connector instances.
Context
The Cloud Connector allows you to connect to on-premise data sources and use them in various use cases
depending on the connection type.
For more information, see Preparing Cloud Connector Connectivity [page 155].
1. In the side navigation area, click (System) (Administration) Data Source Configuration .
Note
If your tenant was provisioned prior to version 2021.03, click (Product Switch) Analytics
System Administration Data Source Configuration .
Note
If you don't have an SAP Business Technology Platform (SAP BTP) user account yet, create an
account in the SAP BTP cockpit by clicking Register in the cockpit.
• To be able to use the Cloud Connector for SAP BW/4HANA Model Transfer connections to import
analytic queries with the Model Transfer Wizard and for SAP S/4HANA On-Premise connections to
import ABAP CDS Views with the Import Entities wizard, switch on Allow live data to securely leave my
network in the Live Data Sources section.
Note
The Allow live data to securely leave my network switch is audited, so that administrators can see
who switched this feature on and off. To see the changes in the switch state, go to (Security)
(Activities), and search for ALLOW_LIVE_DATA_MOVEMENT.
• If you have connected multiple Cloud Connector instances to your subaccount with different location
IDs and you want to offer them for selection when creating connections using a Cloud Connector, in
the On-premise data sources section, add the appropriate location IDs. If you don't add any location IDs
here, the default location will be used.
Cloud Connector location IDs identify Cloud Connector instances that are deployed in various
locations of a customer's premises and connected to the same subaccount. Starting with Cloud
Connector 2.9.0, it is possible to connect multiple Cloud Connectors to a subaccount as long as their
location ID is different.
Clients in your local network need an entry in the appropriate IP allowlist in SAP Datasphere. Cloud Connectors
in your local network only require an entry if you want to use them for federation and replication with remote
tables from on-premise systems.
Context
To secure your environment, you can control the range of IPv4 addresses that get access to the database of
your SAP Datasphere by adding them to an allowlist.
You need to provide the external (public) IPv4 address (range) of the client directly connecting to the
database of SAP Datasphere. This client might be an SAP HANA smart data integration Data Provisioning
Agent on a server, a 3rd party ETL or analytics tool, or any other JDBC-client. If you're using a network firewall
with a proxy, you need to provide the public IPv4 address of your proxy.
Internet Protocol version 4 addresses (IPv4 addresses) have a size of 32 bits and are represented in dot-
decimal notation, 192.168.100.1 for example. The external IPv4 address is the address that the internet and
computers outside your local network can use to identify your system.
The address can either be a single IPv4 address or a range specified with a Classless Inter-Domain Routing
suffix (CIDR suffix). An example for a CIDR suffix is /24 which represents 256 addresses and is typically used
for a large local area network (LAN). The CIDR notation for the IPv4 address above would be: 192.168.100.1/24
to denote the IP addresses between 192.168.100.0 and 192.168.100.255 (the leftmost 24 bits of the address in
binary notation are fixed). The external (public) IP address (range) to enter into the allowlist will be outside of
the range 192.168.0.0/16. You can find more information on Classless Inter-Domain Routing on Wikipedia .
Note
The number of entries in the allowlist is limited. Once the limit has been reached, you won't be able to
add entries. Therefore, please consider which IP addresses should be added and whether the number of
allowlist entries can be reduced by using ranges to request as few allowlist entries as possible.
Procedure
• Trusted IPs: For clients such as a Data Provisioning Agent on a server, 3rd party ETL or analytics tools,
or any other JDBC-client
• Trusted Cloud Connector IPs: For Cloud Connectors that you want to use for federation and replication
with remote tables from on-premise systems such as SAP HANA
The selected list shows all IP addresses that are allowed to connect to the SAP Datasphere database.
3. Click Add to open the Allow IP Addresses dialog.
Once the number of entries in the allowlist has reached its limit, the Add button will be disabled.
4. In the CIDR field of the dialog, either provide a single IPv4 address or a range specified with a CIDR suffix.
Note
Make sure that you provide the external IPv4 address of your client, or of your proxy if you are using a network firewall. The IP you enter needs to be your public internet IP.
5. [optional] You can add a description of up to 120 characters to better understand your IP entries.
6. In the dialog, click Add to return to the list.
7. To save your newly added IP to the allowlist on the database, click Save in the pushbutton bar of your list.
Note
Updating the allowlist in the database requires some time. To check if your changes have been applied,
click Refresh.
Next Steps
You can also select and edit an entry from the list if an IP address has changed, or you can delete IPs if they are
not required anymore to prevent them from accessing the database of SAP Datasphere. To update the allowlist
in the database with any change you made, click Save. Note that the update in the database might take some time.
Find externally facing IP addresses and IDs that must be added to allowlists in particular remote applications
before you can use connections to these remote applications.
Remote applications might restrict access to their instances. Whether an external client such as SAP
Datasphere is allowed to access the remote application is often decided by the remote application based
on allowlisted IPs. Any external client trying to access the remote application has to be made known to the
remote application, before first trying to access it, by adding the external client's IP address(es)
to an allowlist in the remote application. As an SAP Datasphere administrator or a user with the System
Information = Read privilege, you can find the necessary information in the About dialog.
Particular remote applications or sources that you might want to access with SAP Datasphere restrict access
to their instances and require external SAP Datasphere IP address information to be added to an allowlist in the
remote application before first trying to access the application.
Users with the DW Administrator role can open a More section to find more details.
To allow SAP Datasphere to access a protected remote application and to use the corresponding connection
with data flows or replication flows, add the Replication/Data Flow NAT IP (egress) to the remote application
allowlist.
Administrators can find the Replication/Data Flow NAT IP (egress) from the side navigation area by clicking
(System) (About) More Replication/Data Flow NAT IP (egress).
Examples
The network for Amazon Redshift, Microsoft Azure SQL Database, or SAP SuccessFactors instances, for
example, is protected by a firewall that controls incoming traffic. To be able to use connections with
these connection types for data flows or replication flows, the connected sources require the relevant SAP
Datasphere network address translation (NAT) IP address to be added to an allowlist.
For Amazon Redshift and Microsoft Azure SQL Database, find the Replication/Data Flow NAT IP (egress) in the
last step of the connection creation wizard.
(IP address of the SAP Datasphere's SAP HANA Cloud database instance)
If connecting a REST remote source to the HANA Cloud instance through SDI (for example, OData Adapter),
then the REST remote source is accessed using one of the NAT / egress IPs.
If connecting a remote source using SDA to the HANA Cloud instance, then the connection uses the NAT /
egress IP if the Cloud Connector is not used in the scenario.
Administrators can find the NAT IPs from the side navigation area by clicking (System) (About)
More SAP HANA Cloud NAT IP (egress).
For more information, see Domains and IP Ranges in the SAP HANA Cloud documentation.
For more information about adding the IP addresses in SAP SuccessFactors, see Adding an IP Restriction in the
SAP SuccessFactors platform documentation.
If you're using SAP Datasphere on Microsoft Azure and want to connect to an Azure storage service in a firewall-protected Microsoft Azure storage account within the same Azure region, an administrator must allow SAP Datasphere's Virtual Network Subnet ID in the Microsoft Azure storage account. This is required for connections to Azure storage services such as Microsoft Azure Data Lake Store Gen2.
Administrators can find the ID from the side navigation area by clicking (System) (About)
More Virtual Network Subnet ID (Microsoft Azure).
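A minimal sketch, assuming the Azure CLI is used to manage the storage account (the resource group and account names are placeholders; the subnet ID is the Virtual Network Subnet ID (Microsoft Azure) value from the About dialog):
# Add a network rule that allows the SAP Datasphere subnet to reach the storage account
az storage account network-rule add --resource-group <resource-group> --account-name <storage-account> --subnet <virtual-network-subnet-id>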
Related Links
To import a certificate into the SAP Datasphere trust chain, obtain the certificate from the target endpoint and
upload it to SAP Datasphere.
Prerequisites
You have downloaded the required SSL/TLS certificate from an appropriate website. As one option for
downloading, common browsers provide functionality to export these certificates.
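As an alternative to exporting the certificate from a browser, the server certificate can usually be retrieved from the command line. A minimal sketch, assuming OpenSSL is available and the endpoint is reachable on port 443 (host and file names are placeholders):
# Retrieve the server certificate in PEM format from the target endpoint
openssl s_client -connect <host>:443 -servername <host> -showcerts </dev/null | openssl x509 -outform PEM > server_cert.pem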
Note
• Only X.509 Base64-encoded certificates enclosed between "-----BEGIN CERTIFICATE-----" and "-----
END CERTIFICATE-----" are supported. The common filename extension for the certificates is .pem
(privacy-enhanced mail). We also support filename extensions .crt and .cer.
• A certificate used in one region might differ from those used in other regions. Also, some sources, such
as Amazon Athena, might require more than one certificate.
• Remember that all certificates can expire.
• If you have a problem with a certificate, please contact your cloud provider for assistance.
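To check whether a downloaded file is an X.509 Base64-encoded certificate and when it expires, you can, for example, inspect it with OpenSSL (a sketch; the file name is a placeholder):
# Show subject, issuer, and expiry date of a PEM-encoded certificate
openssl x509 -in server_cert.pem -noout -subject -issuer -enddate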
Context
For connections secured by leveraging HTTPS as the underlying transport protocol (using SSL/TLS transport
encryption), the server certificate must be trusted.
Note
You can create connections to remote systems that require a certificate upload without having uploaded the necessary certificate. However, validation of such a connection will fail without a valid server certificate, and you won't be able to use the connection.
Results
In the overview, you can see the certificate with its creation and expiry date. From the overview, you can delete
certificates if required.
To enable access to a non-SAP database via ODBC to use it as a source for data flows, you need to upload the
required ODBC driver files to SAP Datasphere.
Prerequisites
• Search for the required driver files on the internet, make sure that you have selected the correct driver files (identified by their SHA256-formatted fingerprint), and download them from an appropriate web page (see below).
• Ensure you have a valid license for the driver files.
Drivers are required for the following connection types (if several driver versions are supported, we recommend using the newest supported version mentioned below):
• Amazon Redshift - driver file: AmazonRedshiftODBC-64-bit-1.4.65.1000-1.x86_64.rpm, SHA256 fingerprint: ee79a8d41760a90b6fa2e1a074e33b0518e3393afd305f0bee843b5393e10df0
When you upload the drivers, they are identified by their SHA256-formatted fingerprint. You can verify the fingerprint of a downloaded driver file with the following command:
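A minimal sketch using standard Linux tooling (the file name is the Amazon Redshift example from the list above):
# Calculate the SHA256 fingerprint of the downloaded driver file and compare it with the documented value
sha256sum AmazonRedshiftODBC-64-bit-1.4.65.1000-1.x86_64.rpm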
Upload a Driver
Perform the following steps before creating the first Amazon Redshift, Oracle, or Google BigQuery connection
that you want to use for data flows.
Note
The fingerprint of the driver file to be uploaded must match the fingerprint mentioned above.
4. Choose Upload.
5. Choose sync to synchronize the driver with the underlying component. Wait about 5 to 10 minutes for the synchronization to finish before you start creating connections or using data flows with the connection.
You might need to remove a driver when you want to upload a new version of the driver or when your license agreement has ended.
Troubleshooting
If a data flow fails with an error message saying that the driver could not be found, check that the drivers are uploaded and start the synchronization.
For remote systems that are integrated with SAP Datasphere in the Unified Customer Landscape, connections
are generated in SAP Datasphere. These generated connections are not assigned to any spaces yet. In the
Configuration tool, users with an administrator role can control which spaces users with an integrator role can
add the connection to. Once added to a space, space users can use the connection to replicate data with
replication flows from the remote systems.
Prerequisites
In SAP BTP Unified Customer Landscape, a formation with type Integration with SAP Datasphere has been
created integrating your SAP Datasphere tenant with one or more remote systems.
Context
Formations of type Integration with SAP Datasphere, which include an SAP Datasphere tenant and one or more remote systems, generate one or more connections (with connection type SAP HANA Cloud, Data Lake Files) in the same SAP Datasphere tenant. The generated connections are not available for use in any space yet. Before such a connection can be used in a space:
1. In the Configuration tool, an administrator must decide to which spaces users with an integrator role are
allowed to add the connection (see below).
2. In the Connections app, a space user with an integrator role must add the connection to the space (see
Add a Connection to a System in Unified Customer Landscape to the Space).
For more information about creating formations, see Including Systems in a Formation in the SAP Business
Technology Platform documentation.
1. In the side navigation area, click (System) (Configuration) Unified Customer Landscape .
2. [optional] Select a generated connection and choose Edit Business Name to provide a more meaningful business name.
3. To allow adding a connection to selected spaces, click (Details) to open the side panel.
4. In the side panel, use the following options to define the list of spaces in which you allow space users to add
and use the connection:
• Choose Add to add one or more spaces to the list and confirm your selection.
• Enable the Allow Connection in All Spaces option to add all available spaces to the list.
• Select one or more spaces and choose Remove to remove the selected spaces from the list.
Note
• You can only remove a space if the connection hasn't been added to the space in the Connections app (see the Connections Added column in the list of spaces).
• You cannot remove spaces when the Allow Connection in All Spaces option is enabled.
To be able to successfully validate and use a connection to Adverity for view building, certain preparations have to be made.
• In an Adverity workspace, you have prepared a datastream that connects to the data source for which you
want to create the connection.
• In SAP Datasphere, you have added the necessary Adverity IP addresses to the IP allowlist. For more
information, see Add IP address to IP Allowlist [page 163].
Note
To get the relevant IP addresses, please contact your Adverity Account Manager or the Adverity
Support team.
Adverity Connections
To be able to successfully validate and use a connection to Amazon Athena for remote tables, certain preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• A DW administrator has uploaded the server certificates to SAP Datasphere. Two certificates are required,
one for Amazon Athena and one for Amazon S3. Region-specific certificates might be required for Amazon
Athena. Alternatively, if the common root CA certificate contains trust for both endpoints, Amazon Athena
and Amazon Simple Storage Service (API/Athena and the Data/S3), you can upload the root certificate.
For more information, see Manage Certificates for Connections [page 166].
Related Information
To be able to successfully validate and use a connection to Apache Kafka (on-premise) for replication flows,
certain preparations have to be made.
Replication Flows
Before you can use the connection for replication flows, the following is required:
• An administrator has installed and configured Cloud Connector to connect to the Apache Kafka on-
premise implementation.
To be able to successfully validate and use a connection to Confluent Platform (on-premise) for replication
flows, certain preparations have to be made.
Replication Flows
Before you can use the connection for replication flows, the following is required:
• An administrator has installed and configured Cloud Connector to connect to Confluent Platform (Kafka
brokers) and to the Schema Registry.
Note
Separate Cloud Connector instances might be used for the two endpoints. The Schema Registry might be reached via one Cloud Connector location while the connection to the Kafka brokers is made via another location.
Related Information
Confluent Connections
To be able to successfully validate and use a connection to an Amazon Redshift database for remote tables or data flows, certain preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the CamelJdbcAdapter.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
• An administrator has downloaded and installed the required JDBC library in the <DPAgent_root>/camel/lib folder and restarted the Data Provisioning Agent before registering the adapter with SAP Datasphere.
Data Flows
Before you can use the connection for data flows, the following is required:
Related Information
To be able to successfully validate and use a Cloud Data Integration connection for remote tables or data flows, certain preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the CloudDataIntegrationAdapter.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
• For ABAP-based cloud SAP systems such as SAP S/4HANA Cloud or SAP Marketing Cloud: A
communication arrangement has been created for communication scenario SAP_COM_0531 in the source
system.
For more information, see Integrating CDI in the SAP S/4HANA Cloud documentation.
Data Flows
Before you can use the connection for data flows, the following is required:
• An administrator has installed and configured Cloud Connector to connect to your on-premise source.
For more information, see Configure Cloud Connector [page 156].
• For ABAP-based cloud SAP systems such as SAP S/4HANA Cloud or SAP Marketing Cloud: A
communication arrangement has been created for communication scenario SAP_COM_0531 in the source
system.
For more information, see Integrating CDI in the SAP S/4HANA Cloud documentation.
Related Information
To be able to successfully validate and use a Generic JDBC connection for remote tables, certain preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the CamelJdbcAdapter.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
• It has been checked that the data source is supported by the CamelJdbcAdapter.
For latest information about supported data sources and versions, see the SAP HANA Smart Data
Integration Product Availability Matrix (PAM) .
Note
For information about unsupported data sources, see SAP Note 3130999 .
• An administrator has downloaded and installed the required JDBC library in the <DPAgent_root>/camel/lib folder and restarted the Data Provisioning Agent before registering the adapter with SAP Datasphere (see the sketch after this list).
For more information, see Set up the Camel JDBC Adapter in the SAP HANA Smart Data Integration and
SAP HANA Smart Data Quality Installation and Configuration Guide.
For information about the proper JDBC library for your source, see the SAP HANA smart data integration
Product Availability Matrix (PAM).
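A minimal sketch of the installation step on a Linux-based agent host, assuming the JDBC driver has already been downloaded (the driver file name is a placeholder; use the library listed in the PAM for your source):
# Copy the JDBC driver into the Camel adapter library folder of the Data Provisioning Agent
cp <jdbc-driver>.jar <DPAgent_root>/camel/lib/
# Then restart the Data Provisioning Agent before registering the CamelJdbcAdapter with SAP Datasphere
# (see the SAP HANA Smart Data Integration documentation for the restart procedure of your installation)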
Related Information
To be able to successfully validate and use a connection to an OData service for remote tables or data flows, certain preparations have to be made.
General
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
Data Flows
Before you can use the connection for data flows, the following is required:
• An administrator has installed and configured Cloud Connector to connect to your on-premise source.
For more information, see Configure Cloud Connector [page 156].
Related Information
To create a Generic SFTP connection, the host's public key is required. Additionally, to successfully validate and
use a Generic SFTP connection to an on-premise SFTP server, Cloud Connector is required.
Data Flows
Before you can use the connection for data flows, the following is required:
Example
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEMXBFYDfcYMW0dccgbJ/TfhpTQhc5oR06jKIg+WCarr myuser@myhost
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDRqWbaMxSetrsAtTHFaxym4rVqV1yb4umqhDJbJ0H63T+wn8lm+Ev/i/u+8BZT9nvzXZqbn1rezWZvXK234SkfDFzTIb37vqlgPagrZlUc9DGAey6F4irQcgEQjiSAczsjNzYun2yrpsL/9QBahFdeCKUPNQIXYU8ctbEOxqiOzvsNH4EsobiAS+leteRA0Pe2hiOaTODj4o3e5Pug4hugr8p/tJPFVC5z7MBX9XPs6qpSAs81oZ0hZYdF4bjfHmaTNJTjrJCfg4RHTBVPsBKOLFBxwPhjcQlccNQ33voYF59bM37IyqV6h+Mz8up/GrMVA7ka6np3fAyJhGhRPsLEZZY8h6KK633HLDqglkisQP87ewz8SRrcIHnhrP3hTBClx484XxCBMWl4pUElQ+p32322v+KbwCEHpYj5pitnieekiXpsMNXOCZdyA/llToPqzlGkbcI3z8ScOLvoX2qsrjOWMJlKOpwIcA/NzwU/9LlFsecQvFzGowYYFHMnDypAnhCcwQz9BvqmjRRJGbMONmzq39HTBMd0rfyoui8KCOGkN/d89aZERzH6jZa9ft6qzaBuhKc1TND/m1+IBEUoWZUX3XurYaJu/0awACjVeyB0dhGafSRGhskBy2oOlX97ZOoErkIoc5BQRCLpa3OjHywzd6BLnTKKJRS6pvfG9w==
Use the resulting file host_key.pub.txt, created in the directory where you run the specified command, as the Host Key for your connection. The specified commands are as follows:
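The exact commands are not reproduced here; a minimal sketch of one common way to obtain the host key, assuming ssh-keyscan is available on a machine with a trusted channel to the SFTP server (host, port, and key types are placeholders):
# Write the SFTP server's public host key(s) to host_key.pub.txt
ssh-keyscan -p <port> -t rsa,ed25519 <host> > host_key.pub.txt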
If your machine doesn't have a trusted channel, we recommend asking your administrator for the public
host key to avoid man-in-the-middle attacks.
Related Information
To be able to successfully validate and use a connection to a Google BigQuery data source for remote tables,
certain preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
Note
The root certificate GTS Root R1, which is valid until 2036, is required. In your browser, open https://cloud.google.com/ to export it (see SAP Note 3424000 ).
For more information, see Manage Certificates for Connections [page 166].
Before you can use the connection for data flows, the following is required:
• A DW administrator has uploaded the required ODBC driver file to SAP Datasphere.
For more information, see Upload Third-Party ODBC Drivers (Required for Data Flows) [page 167].
To be able to successfully validate and use a connection to Microsoft Azure SQL database for remote tables or data flows, certain preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the MssqlLogReaderAdapter.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
• An administrator has downloaded and installed the required JDBC library in the <DPAgent_root>/lib folder before registering the adapter with SAP Datasphere.
• To use Microsoft SQL Server trigger-based replication, the user entered in the connection credentials
needs to have the required privileges and permissions. For more information, see Required Permissions for
SQL Server Trigger-Based Replication in the Installation and Configuration Guide for SAP HANA Smart Data
Integration and SAP HANA Smart Data Quality
Before you can use the connection for data flows and replication flows, the following is required:
Related Information
To be able to successfully validate and use a connection to Microsoft Azure Data Lake Store Gen2, certain preparations have to be made.
Before you can use the connection for data flows and replication flows, the following is required:
• If you're using SAP Datasphere on Microsoft Azure and want to connect to Microsoft Azure Data Lake
Store Gen2 in a firewall-protected Microsoft Azure storage account within the same Azure region: An Azure
administrator must grant SAP Datasphere access to the Microsoft Azure storage account.
For more information, see Finding SAP Datasphere IP addresses [page 164]
Related Information
To be able to successfully validate and use a connection to a Microsoft SQL Server for remote tables or data
flows, certain preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the MssqlLogReaderAdapter.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
• An administrator has downloaded and installed the required JDBC library in the <DPAgent_root>/lib folder before registering the adapter with SAP Datasphere.
• The user entered in the connection credentials has the required privileges and permissions for trigger-based replication. For more information, see Required Permissions for SQL Server Trigger-Based Replication in the SAP HANA Smart Data Integration and SAP HANA Smart Data Quality Installation and Configuration Guide.
Before you can use the connection for data flows, the following is required:
• An administrator has installed and configured Cloud Connector to connect to your on-premise source.
For more information, see Configure Cloud Connector [page 156].
Note
Cloud Connector is not required if your Microsoft SQL Server database is available on the public
internet.
Related Information
Integrate SAP Open Connectors with SAP Datasphere to be able to connect to third-party data sources powered by SAP Open Connectors.
1. Set up an SAP BTP account and enable the SAP Integration Suite service with the SAP Open Connectors
capability.
Note
You need to know your SAP BTP subaccount information (provider, region, environment, trial - yes/no)
later to select the appropriate SAP BTP subaccount region in SAP Datasphere when integrating the
SAP Open Connectors account in your space.
For information about setting up an SAP BTP trial version with the SAP Integration Suite service, see Set
Up Integration Suite Trial . To enable SAP Open Connectors, you need to activate the Extend Non-SAP
Connectivity capability in the Integration Suite.
For information about setting up SAP Integration Suite from a production SAP BTP account, see Initial
Setup in the SAP Integration Suite documentation.
For information about SAP Open Connectors availability in data centers, see SAP Note 2903776 .
2. In your SAP Open Connectors account, create connector instances for the sources that you want to
connect to SAP Datasphere.
1. In the side navigation area, click (Connections), select a space if necessary, click the SAP Open
Connectors tab, and then click Integrate your SAP Open Connectors Account to open the Integrate your SAP
Open Connectors Account dialog.
2. In the dialog, provide the following data:
1. In the SAP BTP Sub Account Region field, select the appropriate entry according to your SAP BTP
subaccount information (provider, region, environment, trial - yes/no).
2. Enter your SAP Open Connectors organization and user secret.
3. Click OK to integrate your SAP Open Connectors account with SAP Datasphere.
Results
With connection type Open Connectors you can now create connections to the third-party data sources
available as connector instances with your SAP Open Connectors account.
Related Information
To be able to successfully validate and use a connection to an Oracle database for remote tables or data flows,
certain preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the OracleLogReaderAdapter.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
• An administrator has downloaded and installed the required JDBC library in the <DPAgent_root>/lib folder before registering the adapter with SAP Datasphere.
• The user entered in the connection credentials has the required privileges and permissions for trigger-based replication. For more information, see Required Permissions for Oracle Trigger-Based Replication in the SAP HANA Smart Data Integration and SAP HANA Smart Data Quality Installation and Configuration Guide.
• If encrypted communication is used (connection is configured to use SSL), the server certificate must be
uploaded to the Data Provisioning Agent.
To retrieve the certificate, you can, for example, use the following command: openssl s_client -showcerts -servername <host> -connect <host>:<port>
For more information about uploading the certificate to the Data Provisioning Agent, see Configure the
Adapter Truststore and Keystore in the SAP HANA Smart Data Integration and SAP HANA Smart Data
Quality documentation.
Data Flows
Before you can use the connection for data flows, the following is required:
• An administrator has installed and configured Cloud Connector to connect to your on-premise source.
For more information, see Configure Cloud Connector [page 156].
Note
Cloud Connector is not required if your Oracle database is available on the public internet.
• A DW administrator has uploaded the required ODBC driver file to SAP Datasphere.
To use encrypted communication (the connection is configured to use SSL), additional files must be uploaded.
For more information, see Upload Third-Party ODBC Drivers (Required for Data Flows) [page 167].
• A DW administrator has uploaded the server certificate to SAP Datasphere.
To retrieve the certificate, you can, for example, use the following command: openssl s_client -showcerts -servername <host> -connect <host>:<port>
For more information, see Manage Certificates for Connections [page 166].
Oracle Connections
To be able to successfully validate and use a connection to Precog for view building, certain preparations have to be made.
• In Precog, you have added the source for which you want to create the connection.
• In SAP Datasphere, you have added the necessary Precog IP addresses to the IP allowlist. For more
information, see Add IP address to IP Allowlist [page 163].
Note
You can find and copy the relevant IP addresses in the final step of the connection creation wizard.
Related Information
Precog Connections
To be able to successfully validate and use a connection to an SAP ABAP system for remote tables or data
flows, certain preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the ABAPAdapter.
For the Language setting in the connection properties to have an effect on the language shown in the Data
Builder, Data Provisioning Agent version 2.0 SP 05 Patch 10 (2.5.1) or higher is required.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
Data Flows
Note
The availability of the data flow feature depends on the used version and Support Package level of the ABAP-based SAP system (SAP S/4HANA or the DMIS add-on in the source). Make sure your source systems meet the required minimum versions. We recommend using the latest available version of SAP S/4HANA and the DMIS add-on where possible and having the latest SAP Notes and TCI notes implemented in your systems.
For more information about required versions, recommended system landscape, considerations for the supported source objects, and more, see SAP Note 2890171 .
Before you can use the connection for data flows, the following is required:
• If the connected system is an on-premise source, an administrator has installed and configured Cloud Connector.
In the Cloud Connector configuration, an administrator has made sure that access to the required
resources is granted.
For more information, see Configure Cloud Connector [page 156].
See also: SAP Note 2835207 (ABAP connection type for SAP Data Intelligence)
• If you want to connect to SAP S/4HANA Cloud to replicate extraction-enabled, C1-released CDS views:
Consider the information about preparing an SAP S/4HANA Cloud connection for data flows.
For more information, see Prepare Connectivity to SAP S/4HANA Cloud [page 198].
Replication Flows
Note
The availability of the replication flow feature depends on the used version and Support Package level of the ABAP-based SAP system (SAP S/4HANA or the DMIS add-on in the source). Make sure your source systems meet the required minimum versions. We recommend using the latest available version of SAP S/4HANA and the DMIS add-on where possible and having the latest SAP Notes and TCI notes implemented in your systems.
For more information about required versions, recommended system landscape, considerations for the supported source objects, and more, see SAP Note 2890171 .
Before you can use the connection for replication flows, the following is required:
• If the connected system is an on-premise source, an administrator has installed and configured Cloud Connector.
In the Cloud Connector configuration, an administrator has made sure that access to the required
resources is granted.
For more information, see Configure Cloud Connector [page 156].
See also: SAP Note 2835207 (ABAP connection type for SAP Data Intelligence)
• If you want to connect to SAP S/4HANA Cloud to replicate extraction-enabled, C1-released CDS views or
you want to replicate CDS view entities using the SQL service exposure: Consider the information about
preparing an SAP S/4HANA Cloud connection for replication flows.
For more information, see Prepare Connectivity to SAP S/4HANA Cloud [page 198].
Related Information
To be able to successfully validate and use a connection to SAP BW for remote tables or data flows, certain
preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the ABAPAdapter.
Data Flows
Before you can use the connection for data flows, the following is required:
• An administrator has installed and configured Cloud Connector to connect to your on-premise source.
In the Cloud Connector configuration, an administrator has made sure that access to the required
resources is granted.
For more information, see Configure Cloud Connector [page 156].
Related Information
SAP BW Connections
Accessing SAP BW/4HANA metadata and importing models into SAP Datasphere with an SAP BW/4HANA Model Transfer connection requires two protocols (or endpoints): HTTP and SAP HANA smart data integration based on the SAP HANA adapter.
For accessing SAP BW∕4HANA, HTTP is used to securely connect to the SAP BW∕4HANA system via Cloud Connector, and SAP HANA SQL is used to connect to the SAP HANA database of SAP BW∕4HANA via the Data Provisioning Agent.
For information on supported SAP BW/4HANA source versions, see Supported Source Versions for SAP
BW∕4HANA Model Transfer Connections [page 193].
Before creating a connection for SAP BW/4HANA Model Transfer in SAP Datasphere, you need to prepare the
following:
1. In SAP BW∕4HANA, make sure that the following services are active in transaction code SICF:
• BW InA - BW Information Access Services:
• /sap/bw/ina/GetCatalog
• /sap/bw/ina/GetResponse
• /sap/bw/ina/GetServerInfo
• /sap/bw/ina/ValueHelp
• /sap/bw/ina/BatchProcessing
• /sap/bw/ina/Logoff
• /sap/bw4
2. In SAP BW∕4HANA, activate OData service ESH_SEARCH_SRV in Customizing (transaction SPRO)
under SAP NetWeaver Gateway OData Channel Administration General Settings Activate and
Maintain Services .
3. Install and configure Cloud Connector. For more information, see Configure Cloud Connector [page 156].
4. In the side navigation area of SAP Datasphere, click System Administration Data Source
Configuration Live Data Sources and switch on Allow live data to leave my network.
Note
If your SAP Datasphere tenant was provisioned prior to version 2021.03, click (Product Switch)
Analytics System Administration Data Source Configuration Live Data Sources .
For more information, see Set Up Cloud Connector in SAP Datasphere [page 161].
5. In the side navigation area of SAP Datasphere, click System Administration Data Source
Configuration On-premise data sources and add the location ID of your Cloud Connector instance.
Note
If your SAP Datasphere tenant was provisioned prior to version 2021.03, click (Product Switch)
Analytics System Administration Data Source Configuration On-premise data
sources .
For more information, see Set Up Cloud Connector in SAP Datasphere [page 161].
6. In the side navigation area of SAP Datasphere, open System Configuration Data Integration Live
Data Connections (Tunnel) and create a live data connection of type tunnel to SAP BW∕4HANA.
Note
If your SAP Datasphere tenant was provisioned prior to version 2021.03, click (Product Switch)
Analytics (Connections).
Related Information
To securely connect and make http requests to SAP BW∕4HANA, you need to connect via Cloud Connector.
This requires that you create a live data connection of type tunnel to the SAP BW∕4HANA system.
Prerequisites
See the prerequisites 1 to 5 in Preparing SAP BW/4HANA Model Transfer Connectivity [page 189].
Procedure
Note
If your SAP Datasphere tenant was provisioned prior to version 2021.03, click (Product Switch)
Analytics (Connections) and continue with step 3.
2. In the Live Data Connections (Tunnel) section, click Manage Live Data Connections.
By enabling tunneling, data from the connected source will always be transferred through the Cloud
Connector.
7. Select the Location ID.
Note
In the next step, you will need to specify the virtual host that is mapped to your on-premise system.
This depends on the settings in your selected Cloud Connector location.
8. Add your SAP BW∕4HANA host name, HTTPS port, and client.
Use the virtual host name and virtual port that were configured in the Cloud Connector.
9. Optional: Choose a Default Language from the list.
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your SAP BW∕4HANA system before adding a
language code. If the language code you enter is invalid, SAP Datasphere will default to the language
specified by your system metadata.
Note
While saving the connection, the system checks if it can access /sap/bc/ina/ services in SAP
BW∕4HANA.
The connection is saved and now available for selection in the SAP Datasphere connection creation wizard for
the SAP BW∕4HANA Model Transfer connection.
In order to create a connection of type SAP BW/4HANA Model Transfer, the SAP BW∕4HANA system needs to have a specific version.
To be able to successfully validate and use a connection to SAP ECC for remote tables or data flows, certain
preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the ABAPAdapter.
For the Language setting in the connection properties to have an effect on the language shown in the Data
Builder, Data Provisioning Agent version 2.0 SP 05 Patch 10 (2.5.1) or higher is required.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
• The ABAP user specified in the credentials of the SAP ABAP connection needs to have a specific set of
authorizations in the SAP ABAP system. For more information, see: Authorizations in the SAP HANA Smart
Data Integration and SAP HANA Smart Data Quality documentation.
• If you want to stream ABAP tables for loading large amounts of data without running into memory issues,
you need to configure suitable security privileges for successful registration on an SAP Gateway and you
need to create an RFC destination of type TCP/IP in the ABAP source system. With the RFC destination you
register the Data Provisioning Agent as server program in the source system. For more information, see
Prerequisites for ABAP RFC Streaming [page 154].
Data Flows
Before you can use the connection for data flows, the following is required:
• An administrator has installed and configured Cloud Connector to connect to your on-premise source.
In the Cloud Connector configuration, an administrator has made sure that access to the required
resources is granted.
For more information, see Configure Cloud Connector [page 156].
Related Information
To be able to successfully validate and use a connection to SAP Fieldglass for remote tables or data flows,
certain preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the CloudDataIntegrationAdapter.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
Related Information
To be able to successfully validate and use a connection to SAP HANA Cloud or SAP HANA (on-premise) for remote tables or data flows, certain preparations have to be made.
A DW administrator has uploaded the TLS server certificate DigiCert Global Root CA
(DigiCertGlobalRootCA.crt.pem).
For more information, see Manage Certificates for Connections [page 166].
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has installed and configured Cloud Connector to connect to your on-premise source.
For more information, see Configure Cloud Connector [page 156].
Related Information
To be able to successfully validate and use a connection to SAP Marketing Cloud for remote tables or data
flows, certain preparations have to be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the CloudDataIntegrationAdapter.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
• A communication arrangement has been created for communication scenario SAP_COM_0531 in the
source system.
For more information, see Integrating CDI in the SAP Marketing Cloud documentation.
Data Flows
Before you can use the connection for data flows, the following is required:
• A communication arrangement has been created for communication scenario SAP_COM_0531 in the
source system.
For more information, see Integrating CDI in the SAP Marketing Cloud documentation.
Related Information
To be able to successfully validate and use a connection to SAP SuccessFactors for remote tables or data flows, certain preparations have to be made.
Related Information
To be able to successfully validate and use a connection to SAP S/4HANA Cloud, certain preparations have to
be made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the CloudDataIntegrationAdapter.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
• A communication arrangement has been created for communication scenario SAP_COM_0531 in the
source system.
For more information, see Integrating CDI in the SAP S/4HANA Cloud documentation.
If you want to use federated access to CDS view entities using the ABAP SQL service exposure from SAP
S/4HANA Cloud, see Using ABAP SQL Services for Accessing Data from SAP S/4HANA Cloud [page 200].
Before you can use the connection for data flows, the following is required:
Replication Flows
Before you can use the connection for replication flows, the following is required:
• A communication arrangement has been created for communication scenario SAP_COM_0532 in the SAP
S/4HANA Cloud system.
For more information, see Integrating CDS Views Using SAP Datasphere in the SAP S/4HANA Cloud
documentation.
If you want to replicate CDS view entities using the ABAP SQL service exposure from SAP S/4HANA Cloud, see
Using ABAP SQL Services for Accessing Data from SAP S/4HANA Cloud [page 200].
Model Import
Before you can use the connection for model import, the following is required:
• A connection to an SAP HANA Smart Data Integration (SDI) Data Provisioning Agent with a registered
CloudDataIntegrationAdapter.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
• In the SAP S/4HANA Cloud system, communication arrangements have been created for the following
communication scenarios:
• SAP_COM_0532
For more information, see Integrating CDS Views Using SAP Datasphere in the SAP S/4HANA Cloud
documentation.
• SAP_COM_0531
For more information, see Integrating CDI in the SAP S/4HANA Cloud documentation.
• SAP_COM_0722
For more information, see Integrating SAP Data Warehouse Cloud in the SAP S/4HANA Cloud
documentation.
Related Information
The ABAP SQL service provides SQL-level access to published CDS view entities for SAP Datasphere. You can
use the service to replicate data with replication flows or to federate data with remote tables.
Note
This feature requires developer extensibility in SAP S/4HANA Cloud (including ABAP development tools),
which is only available in a 3-system landscape. For more information, see the SAP S/4HANA Cloud
documentation:
• Developer Extensibility
• System Landscapes in SAP S/4HANA Cloud
For both consumption scenarios using the SQL service, data federation and data replication, privileged data
access needs to be enabled for communication users in SAP S/4HANA Cloud.
For more information about the consumption scenarios and privileged access, see Data Integration Patterns in
the SAP S/4HANA Cloud documentation.
In SAP S/4HANA Cloud, a business user and administrator must perform the following steps to prepare data
federation with remote tables:
1. There are some prerequisites and constraints that must be considered before using the SQL service.
For more information, see Prerequisites and Constraints in the SAP S/4HANA Cloud documentation. Note
that the information about the ODBC driver is not relevant for SAP Datasphere as a consumer of an
exposed SQL service.
2. To expose CDS view entities using the SQL service, an SAP S/4HANA Cloud business user has created a
service definition and a corresponding service binding of type SQL1 in the ABAP Development Tools. The
service definition lists the set of CDS view entities that shall be exposed, and a service binding of type SQL
for that service definition enables their exposure via the ABAP SQL Service.
In the Enabled Operations area of the service binding, the business user must select access type SELECT to
enable federated access.
For more information, see Creating a Service Definition and an SQL-Typed Service Binding in the SAP
S/4HANA Cloud documentation.
3. To expose the SQL service to get privileged access to the CDS view entities with a communication user, a
communication arrangement is required. This involves the following steps:
1. An SAP S/4HANA Cloud business user has created a custom communication scenario in the ABAP
Development Tools.
When filling out the authorizations for authorization object S_SQL_VIEW in the communication
scenario, note the following:
• On the Sources tab of the Data Builder view editors in SAP Datasphere, the service binding name
from the SQL_SCHEMA authorization field is visible as (virtual) schema.
• In the SQL_VIEWOP authorization field, select the option SELECT to grant federated access.
In SAP S/4HANA Cloud, a business user and administrator must perform the following steps to prepare data
replication with replication flows:
1. There are some prerequisites and constraints that must be considered before using the SQL service.
For more information, see Prerequisites and Constraints in the SAP S/4HANA Cloud documentation. Note
that the information about the ODBC driver is not relevant for SAP Datasphere as a consumer of an
exposed SQL service.
2. To expose CDS view entities using the SQL service, an SAP S/4HANA Cloud business user has created a
service definition and a corresponding service binding of type SQL1 in the ABAP Development Tools. The
service definition lists the set of CDS view entities that shall be exposed, and a service binding of type SQL
for that service definition enables their exposure via the ABAP SQL Service.
In the Enabled Operations area of the service binding, the business user must select access type
REPLICATE to enable data replication.
For more information, see Creating a Service Definition and an SQL-Typed Service Binding in the SAP
S/4HANA Cloud documentation.
3. To expose the SQL service to get privileged access to the CDS view entities with a communication user, a
communication arrangement is required. This involves the following steps:
1. An SAP S/4HANA Cloud business user has created a custom communication scenario in the ABAP
Development Tools.
When filling out the authorizations for authorization object S_SQL_VIEW in the communication
scenario, note the following:
• In the SQL_VIEWOP authorization field, select the option REPLICATE to allow replication on the
specified views.
2. An administrator has created a communication system and user in the SAP Fiori launchpad of the
ABAP environment.
3. An administrator has created a communication arrangement for exposing the SQL service in the SAP
Fiori launchpad of the ABAP environment.
For more information about the above steps, see Exposing the SQL Service for Data Federation and
Replication with Privileged Access in the SAP S/4HANA Cloud documentation.
4. An administrator has created a communication arrangement for communication scenario SAP_COM_0532
in the SAP Fiori launchpad of the ABAP environment.
For more information, see Creating a Communication Arrangement to Enable Replication Flows in SAP
Datasphere in the SAP S/4HANA Cloud documentation.
Note
The same communication user must be added to all communication arrangements.
To be able to successfully validate and use a connection to SAP S/4HANA, certain preparations have to be
made.
Remote Tables
Before you can use the connection for creating views and accessing data via remote tables, the following is
required:
• An administrator has connected an SAP HANA smart data integration Data Provisioning Agent to SAP
Datasphere and registered the ABAPAdapter.
For the Language setting in the connection properties to have an effect on the language shown in the Data
Builder, Data Provisioning Agent version 2.0 SP 05 Patch 10 (2.5.1) or higher is required.
For more information, see Preparing Data Provisioning Agent Connectivity [page 146].
• The ABAP user specified in the credentials of the SAP ABAP connection needs to have a specific set of
authorizations in the SAP ABAP system. For more information, see: Authorizations in the SAP HANA Smart
Data Integration and SAP HANA Smart Data Quality documentation.
• If you want to stream ABAP tables for loading large amounts of data without running into memory issues,
you need to configure suitable security privileges for successful registration on an SAP Gateway and you
need to create an RFC destination of type TCP/IP in the ABAP source system. With the RFC destination you
register the Data Provisioning Agent as server program in the source system. For more information, see
Prerequisites for ABAP RFC Streaming [page 154].
Data Flows
Note
The availability of the data flow feature depends on the used version and Support Package level of SAP S/4HANA or the DMIS add-on in the source. Make sure your source systems meet the required minimum versions. We recommend using the latest available version of SAP S/4HANA and the DMIS add-on where possible and having the latest SAP Notes and TCI notes implemented in your systems.
Before you can use the connection for data flows, the following is required:
• An administrator has installed and configured Cloud Connector to connect to your on-premise source.
In the Cloud Connector configuration, an administrator has made sure that access to the required
resources is granted.
For more information, see Configure Cloud Connector [page 156].
See also: SAP Note 2835207 (ABAP connection type for SAP Data Intelligence)
Replication Flows
Note
The availability of the replication flow feature depends on the used version and Support Package level of SAP S/4HANA or the DMIS add-on in the source. Make sure your source systems meet the required minimum versions. We recommend using the latest available version of SAP S/4HANA and the DMIS add-on where possible and having the latest SAP Notes and TCI notes implemented in your systems.
For more information about required versions, recommended system landscape, considerations for the supported source objects, and more, see SAP Note 2890171 .
Before you can use the connection for replication flows, the following is required:
• An administrator has installed and configured Cloud Connector to connect to your on-premise source.
In the Cloud Connector configuration, an administrator has made sure that access to the required
resources is granted.
For more information, see Configure Cloud Connector [page 156].
See also: SAP Note 2835207 (ABAP connection type for SAP Data Intelligence)
Supported source versions: SAP S/4HANA 1809 or higher (SAP_BASIS 753 and higher)
Before you can use the connection to import entities with data access Remote Tables, the following is required:
In SAP S/4HANA
• An administrator has followed the instructions from SAP Note 3081998 to properly set up the SAP
S/4HANA system, which includes:
1. SAP Note 3283282 has been implemented to provide the required infrastructure in the SAP S/
4HANA system.
2. The required corrections have been implemented and checks have been performed to make sure that
SAP Note 3283282 and subsequent corrections have been applied properly and all required objects
to provide the infrastructure are available and activated.
3. Report ESH_CSN_CDS_TO_CSN has been run to prepare the CDS Views for the import.
Field: SRV_NAME, Value: EF608938F3EB18256CE851763C2952
Field: SRV_TYPE, Value: HT
• Authorization object SDDLVIEW - Search access authorization for the search view CSN_EXPOSURE_CDS:
Field: DDLSRCNAME, Value: CSN_EXPOSURE_CDS
Field: ACTVT, Value: 03
• An administrator has checked that the required InA services are active in transaction code SICF:
• /sap/bw/ina/GetCatalog
• /sap/bw/ina/GetResponse
• /sap/bw/ina/GetServerInfo
• /sap/bw/ina/ValueHelp
• /sap/bw/ina/BatchProcessing
• /sap/bw/ina/Logoff
• An administrator has activated OData service ESH_SEARCH_SRV in Customizing (transaction SPRO)
under SAP NetWeaver Gateway OData Channel Administration General Settings Activate and
Maintain Services .
Cloud Connector
• An administrator has installed and configured Cloud Connector to connect to your on-premise source.
For more information, see Configure Cloud Connector [page 156].
In SAP Datasphere
• In System Administration Data Source Configuration Live Data Sources , you have switched on
Allow live data to securely leave my network.
For more information, see Set Up Cloud Connector in SAP Datasphere [page 161].
• In the side navigation area of SAP Datasphere, in System Administration Data Source Configuration On-premise data sources , you have added the location ID of your Cloud Connector instance.
For more information, see Set Up Cloud Connector in SAP Datasphere [page 161].
Supported source versions: SAP S/4HANA 2021 or higher (SAP_BASIS 756 and higher)
Before you can use the connection to import entities with data access Replication Flow to Local Tables, the
following is required:
1. You have met all prerequisites mentioned in section Model Import (Data Access: Remote Tables) [page
203].
2. You have met all prerequisites mentioned in SAP Note 3463326 .
Related Information
To securely connect to SAP S/4HANA on-premise when searching for ABAP CDS Views to be imported with
the Import Entities wizard, you need to connect via Cloud Connector. This requires that you create a live data
connection of type tunnel to the SAP S/4HANA system.
Prerequisites
Procedure
Note
If your SAP Datasphere tenant was provisioned prior to version 2021.03, click (Product Switch)
Analytics (Connections) and continue with step 3.
Note
In the next step, you will need to specify the virtual host that is mapped to your on-premise system.
This depends on the settings in your selected Cloud Connector location.
8. Add your SAP S/4HANA host name, HTTPS port, and client.
Use the virtual host name and virtual port that were configured in the Cloud Connector.
9. Optional: Choose a Default Language from the list.
This language will always be used for this connection and cannot be changed by users without
administrator privileges.
Note
You must know which languages are installed on your SAP S/4HANA system before adding a language
code. If the language code you enter is invalid, SAP Datasphere will default to the language specified by
your system metadata.
Results
The connection is saved and now available for selection in the SAP Datasphere connection creation wizard for
the SAP S/4HANA On-Premise connection.
Monitor Data Provisioning Agent connectivity in SAP Datasphere, manage the impacts of agent changes in SAP
Datasphere, and troubleshoot Data Provisioning Agent or Cloud Connector connectivity.
For connected Data Provisioning Agents, you can proactively become aware of resource shortages on the agent
instance and find more useful information.
In Configuration Data Integration On-Premise Agents , choose the Monitor button to display the agents with the following information:
• Information about free and used physical memory and swap memory on the Data Provisioning Agent
server.
• Information about when the agent was last connected.
• Information about the overall number of connections that use the agent and the number of connections that actively use real-time replication. Active real-time replication means that the connection type supports real-time replication and at least one table of the connection is replicated via real-time replication.
You can change to the Connections view to see the agents with a list of all connections they use and their real-time replication status. You can pause real-time replication for the connections of an agent while applying changes to the agent. For more information, see Pause Real-Time Replication for an Agent [page 211].
Access the Data Provisioning Agent adapter framework log and the adapter framework trace log directly in SAP
Datasphere.
With the integrated log access, you don’t need to leave SAP Datasphere to monitor the agent and analyze agent
issues. Accessing the log data happens via the Data Provisioning Agent File adapter which reads the log files
and saves them into the database of SAP Datasphere.
• <DPAgent_root>/log/framework_alert.trc - Data Provisioning Agent adapter framework log. Use this file to monitor Data Provisioning Agent statistics.
You can review the logs in SAP Datasphere after log access has been enabled for the agent in question.
The current log files are displayed as well as up to ten archived log files that follow the naming conventions framework.trc.<x> and framework_alert.trc.<x>, with <x> being a number between one and ten.
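On the agent host itself, the same files can be listed directly; a minimal sketch for a Linux-based installation:
# List the current and archived adapter framework log and trace files of the Data Provisioning Agent
ls -l <DPAgent_root>/log/framework_alert.trc* <DPAgent_root>/log/framework.trc*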
Related Information
Enable accessing an agent’s log files before you can view them in SAP Datasphere.
Prerequisites
A Data Provisioning Agent administrator has provided the necessary File adapter configuration with an access
token that you need for enabling the log access in SAP Datasphere.
To define the access token in the agent's secure storage, the administrator has performed the following steps
in the agent configuration tool in command-line interactive mode:
For more information about the File adapter configuration, see File in the Installation and Configuration Guide of
the SAP HANA Smart Data Integration and SAP HANA Smart Data Quality documentation.
Results
The Review Logs entry in the menu of the agent’s tile is enabled and the framework_alert.trc and
framework.trc logs are written to the database of SAP Datasphere. You can now review the current and
archived log files from the agent's tile.
Use the logs to monitor the agent and analyze issues with the agent.
Prerequisites
The logs are written to the database of SAP Datasphere. For more information, see Enable Access to Data
Provisioning Agent Logs [page 208].
Procedure
The Review Agent Logs dialog initially shows 50 log entries. To load further chunks of 50 entries each, scroll down to the bottom of the dialog and use the More button.
3. To show the complete message for a log entry, click More in the Message column.
4. You have the following options to restrict the results in the display of the logs:
• Search: In the <agent name> field, enter a search string and click (Search) to search in the
messages of the logs.
• Filters: You can filter based on time, message type and log file name. When you’ve made your selection,
click Apply Filters.
If your local time zone differs from the time zone used in the Data Provisioning Agent logs and you apply a time-based filter, you might get different filter results than expected.
5. [optional] Export the logs as a CSV file to your local system. Note that filters and search restrictions will be considered for the exported file.
For a selected SAP HANA Smart Data Integration Data Provisioning Agent, you can configure notifications to be sent when the agent's status changes from connected to disconnected or vice versa.
Prerequisites
To run recurring scheduled tasks on your behalf, you need to authorize the job scheduling component of SAP
Datasphere. In your profile settings under Schedule Consent Settings, you can give and revoke your consent
to SAP Datasphere to run your scheduled tasks in the future. Note that when you don't give your consent or
revoke your consent, tasks that you own won't be executed but will fail.
Context
A recurring task will check for any status changes according to the configured frequency and send the
notifications to the user who is the owner of the configuration. The initial owner is the user who created
the configuration. Any user with the appropriate administration privileges can take over the ownership for this
task if required, for example in case of vacation replacement or when the previous owner left the department or
company.
Procedure
2. Go to the On-Premise Agents section and click (menu) Configure Sending Notifications.
3. If you haven't authorized SAP Datasphere yet to run your scheduled tasks for you, you will see a message
at the top of the Configure Sending Notifications dialog asking for your consent. Give your consent.
4. Switch on the Send Notifications toggle.
This will start the first status check. After the first check, the status check will be performed according to
the defined frequency.
Results
If the status check finds any status change for the agent, a notification will be sent that you can find by clicking
(Notifications) on the shell bar.
When you click on the notification, you’ll get to the On-Premise Agents section in (System)
(Configuration) Data Integration where you can start searching for the root cause in case the agent is
disconnected.
Next Steps
If you need to take over the ownership and receive the notifications for an agent's status changes, go to the Configure Sending Notifications dialog as described above, click Assign to Me, and save the configuration. From now on you will receive the notifications about any status changes for the agent. If you haven't done so yet, you need to provide your consent before you can take over the ownership.
For a selected SAP HANA Smart Data Integration Data Provisioning Agent, you can pause real-time replication
for the connections that use the agent while applying changes to it, such as configuration changes or applying
patches. After you have finished your agent changes, you can restart real-time replication.
Context
If you need to perform maintenance activities in a source system, you can pause real-time replication for the
corresponding connection. For more information, see Pause Real-Time Replication for a Connection.
1. In SAP Datasphere, from the main menu, open Configuration Data Integration On-Premise
Agents .
2. To show the Data Provisioning Agent tiles with a list of all connections they use, click the Connections
button.
3. To pause the agent's connections with replication status Active or Inactive, on the tile of the agent choose
(menu) and then Pause All Connections.
In the list of connections shown on the tile, the status for affected connections changes to Paused. You can
also see the status change for the connections in the Connections application.
In the Remote Table Monitor the status for affected tables changes to Paused and actions related to
real-time replication are not available for these tables. Also, you cannot start real-time replication for any
table of a paused connection.
4. You can now apply the changes to your Data Provisioning Agent.
5. Once you're finished with the changes, restart real-time replication for the agent. Choose (menu) and
then Restart All Connections.
The status in the list of connections shown on the tile, in the Connections application, and in the Remote Table Monitor changes accordingly, and you can again perform real-time-related actions for the tables or start real-time replication.
If you encounter problems with the Data Provisioning Agent, you can perform various checks and take actions
to troubleshoot the problems.
The following sections provide information about checks, logs, and actions that you can take to troubleshoot problems with the Data Provisioning Agent:
Note
In the following sections, file paths and screenshots are based on a Linux-based installation of the agent. If you have installed the agent on a Microsoft Windows server, replace the slashes "/" with backslashes "\".
Initial Checks
• Firewall
For a successful connection, make sure that outbound connections from the Data Provisioning Agent to
the target host and port, which is provided in the Data Provisioning Agent registration information in SAP
Datasphere, are not blocked by your firewall.
• Agent version
Make sure to always use the latest released version of the Data Provisioning Agent. For information
on supported and available versions for the Data Provisioning Agent, see the SAP HANA Smart Data
Integration Product Availability Matrix (PAM) .
Make sure that all agents that you want to connect to SAP Datasphere have the same latest version.
• Java Installation
Check whether a Java installation is available by running the command java -version. If you receive
a response like java: command not found, use the Java installation which is part of the agent
installation. The Java executable can be found in folder <DPAgent_root>/sapjvm/bin.
The agent configuration is stored in the <DPAgent_root>/dpagentconfig.ini file in the agent installation
root location (<DPAgent_root>). A Data Provisioning Agent administrator can double-check for the correct
values (please do not maintain the parameters directly in the configuration file; the values are set with the
command-line agent configuration tool):
agent.name=<Agent Name> Agent Name (the name defined by the user who registered the agent in SAP Datasphere; the name is case sensitive)
hana.onCloud=false n/a
jdbc.encrypt=true n/a
If you use a proxy server in your landscape, additionally check for the following parameters:
dpagentconfig.ini file
proxyType=http
jdbc.useProxy=true
For more information, see Agent Configuration Parameters in the SAP HANA Smart Data Integration and SAP
HANA Smart Data Quality documentation.
To troubleshoot connection issues, a Data Provisioning Agent administrator can enable logging and JDBC
tracing for the Data Provisioning Agent.
• Agent Logs
Change the logging level to INFO (default), ALL, DEBUG, or TRACE according to your needs. For more information, see SAP Note 2496051 - How to change "Logging Level" (Trace level) of a Data Provisioning Agent - SAP HANA Smart Data Integration.
The parameters for the logging level in the <DPAgent_root>/dpagentconfig.ini file are:
• framework.log.level
• service.log.level
Note
Changing the level to DEBUG or ALL will generate a large amount of data. We therefore recommend changing the logging level to these values only for a short period of time while you are actively debugging, and changing it back to a lower level after you have finished debugging.
See also SAP Note 2461391 - Where to find Data Provisioning Agent Log Files
• JDBC Trace
For information about activating JDBC tracing, see Trace a JDBC Connection in the SAP HANA Service for
SAP BTP in AWS and Google Cloud Regions documentation.
To set the trace level, execute the JDBC driver *.jar file from the <DPAgent_root>/plugins directory.
Performance
If you experience performance issues when replicating data via the Data Provisioning Agent, a Data
Provisioning Agent administrator can consider increasing the agent memory as described in SAP Note
2737656 - How to increase DP Agent memory.
For general memory sizing recommendations for SAP HANA Smart Data Integration, see
• Data Provisioning Agent - Best Practices and Sizing Guide in the SAP HANA Smart Data Integration and
SAP HANA Smart Data Quality documentation.
• SAP Note 2688382 - SAP HANA Smart Data Integration Memory Sizing Guideline
Validating the Connection from the Server Where the Agent Is Running to SAP Datasphere
In SAP Datasphere
Note
When you connect a new agent, it might take several minutes until it is connected.
Via Data Provisioning Agent Configuration Tool (for agent versions lower than 2.7.4)
1. Navigate to the command line and run <DPAgent_root>/bin/agentcli.bat --configAgent.
2. Choose Agent Status to check the connection status.
3. Make sure the output shows Agent connected to HANA: Yes.
4. If the output doesn't show that the agent is connected, it may show an error message. Resolve the error,
and then select option Start or Stop Agent, and then option Start Agent to start the agent.
Note
For agent version 2.7.4 and higher, if in the agent status the message No connection established yet is
shown, this can be ignored. You can check the connection status in SAP Datasphere instead. For more
information about the agent/SAP HANA connection status in agent version 2.7.4 and higher, see SAP Note
3487646 .
To validate the connection, you can directly use the JDBC driver jar file from the command line interface. You
must ensure that you’re using the same JDBC driver as used by the Data Provisioning Agent. The JDBC driver
jar file (com.sap.db.jdbc_*.jar) is located in the <DPAgent_root>/plugins directory.
Navigate to the <DPAgent_root>/plugins/ directory and run one of the following commands by replacing
the variables as needed and depending on your landscape:
• Without proxy:
• With proxy:
If the connection works properly, the statement should look like this:
If you are unable to connect your Data Provisioning Agent to SAP Datasphere and have already validated the
connection as described in the previous section, open the agent framework trace file framework.trc in the
<DPAgent_root>/log/ folder and check whether the output matches any of the following issues.
If you see this kind of error, it is most likely related to a missing entry in the IP Allowlist in SAP Datasphere. Verify that the external (public) IPv4 address of the server where the agent is installed is in the IP allowlist. When using a proxy, the proxy's address needs to be included in the IP allowlist as well.
Authentication fails because of invalid HANA User for Agent Messaging credentials in the agent secure storage.
To update the credentials, use the agent configuration tool and then restart the agent.
For more information, see Manage the HANA User for Agent Messaging Credentials in the SAP HANA Smart
Data Integration and SAP HANA Smart Data Quality documentation.
Firewall/Proxy Issues
This issue typically indicates that the JDBC driver is not able to resolve the SAP HANA server URL to connect to the SAP Datasphere tenant and/or to establish a correct outbound call. Please check your firewall/proxy settings and make sure to enable outbound connections accordingly.
In case of missing encryption, the log contains the following statement: "only secure connections are allowed". When testing the connectivity directly with the JDBC driver, add the parameter -o encrypt=true.
The logs are located in the <DPAgent_root>/log directory. For more information on the available log files,
see SAP Note 2461391 .
If the agent is connected, you can review the framework log (framework_alert.trc) and the framework
trace log (framework.trc) directly in SAP Datasphere. For more information, see Monitoring Data
Provisioning Agent Logs [page 207].
SAP Notes
SAP Note 2511196 - What ports are used by Smart Data Integration
SAP Note 2091095 - SAP HANA Smart Data Integration and SAP HANA Smart Data Quality
SAP Note 2400022 - FAQ: SAP HANA Smart Data Integration (SDI)
SAP Note 2688382 - SAP HANA Smart Data Integration Memory Sizing Guideline
Support Information
For information about troubleshooting Cloud Connector-related issues when creating or using a connection in SAP Datasphere, see SAP Note 3369433 .
These are some of the most common issues that can occur when you use the Cloud Connector to connect to
on-premise remote sources via SAP HANA Smart Data Access.
The following error occurs if you try to connect to a remote source using the Cloud Connector, but the
connectivity proxy hasn’t been enabled:
SAP Datasphere takes care of enabling the connectivity proxy. This might take a while.
2. The connectivity proxy is enabled but not fully ready to serve requests
The following error occurs if the connectivity proxy has been enabled but is not yet ready to be used:
SAP Datasphere takes care of enabling the connectivity proxy. This might take a while.
The following error occurs if you’ve used a virtual host name with an underscore, for example, hana_01:
The following error occurs if the specified virtual host cannot be reached:
The following error occurs if an invalid location ID was specified in the Data Source Configuration of the SAP
Datasphere Administration:
The following error occurs when the Cloud Connector's IP is not included in the allowlist:
The following error occurs when the subaccount certificate used in the Cloud Connector has expired:
You can find the related logs in the ljs_trace.log file in the Cloud Connector. For example:
2021-07-29 04:50:42,131
+0200#ERROR#com.sap.core.connectivity.tunnel.client.notification.NotificationClie
nt#notification-client-277-3#
#Unable to handshake with notification server
connectivitynotification.cf.sap.hana.ondemand.com/<virtual_host>:<virtual_port>
javax.net.ssl.SSLException: Received fatal alert: certificate_expired
For information about renewing a subaccount certificate, see Update the Certificate for a Subaccount in the
SAP BTP Connectivity documentation.
The following error occurs if the on-premise backend system requires TCP SSL:
Related Information
Troubleshooting Connection Issues with the Cloud Connector (SAP HANA Cloud, SAP HANA Database
documentation)
Users with the DW Administrator role can create database user groups in SAP Datasphere to allow users to
work in a sandboxed area in the underlying SAP HANA Cloud database, unattached to any space. These users
can transfer an existing data warehouse implementation into the SAP Datasphere database or do any other
work in SAP HANA Cloud and then make it available to one or more spaces as appropriate.
Context
When creating a database user group, an administrator is also created. This administrator can create other
users, schemas, and roles using SAP Datasphere stored procedures. The administrator and their users
can create data entities (DDL) and ingest data (DML) directly into their schemas and prepare them for
consumption by spaces.
For detailed information about user groups, see User Groups in the SAP HANA Cloud documentation.
Note
Users with the DW Space Administrator role can create database users, which are associated with their
space (see Integrating Data via Database Users/Open SQL Schemas).
Procedure
1. In the side navigation area, click (System) (Configuration) Database Access Database
User Groups .
2. On the Database User Group page, click Create.
3. Enter a suffix for your database user group and click Create.
The group is created and the connection details and administrator credentials are displayed.
If you want to work with the SAP HANA database explorer, you will need to enter your password to grant
the explorer access to the database user group schema. When connecting to SAP HANA Cloud with other
tools, users will need the following properties:
• Database Group Administrator (name and password)
• Host Name
• Port
4. Click Close to close the dialog.
A database user group administrator can create users, schemas, and roles to organize and staff their group. Creating schemas and roles, and granting, revoking, and dropping roles, require the use of SAP Datasphere stored procedures.
To connect to SAP HANA Cloud as the administrator, select your newly created user group in the list, click Open Database Explorer, enter the password when requested, and click OK.
The SAP HANA database explorer opens with your database user group at the top level. You can now use the
SQL editor to create users, roles and schemas.
Create a User
You can create a user in your user group with the following statement:
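The following is a minimal sketch only, assuming the standard SAP HANA Cloud CREATE USER ... SET USERGROUP syntax; the user name, the placeholder password, and the user group technical name ("DWCDBGROUP#DWMIGRATE") are illustrative assumptions, not values prescribed by SAP Datasphere:
CREATE USER "DWCDBGROUP#DWMIGRATE#BOB"
    PASSWORD "<initial_password>"          -- replace with a password that meets your password policy
    NO FORCE_FIRST_PASSWORD_CHANGE         -- optional: skip the forced password change at first logon
    SET USERGROUP "DWCDBGROUP#DWMIGRATE";  -- assumed technical name of your database user group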
Note
To avoid possible conflicts, we recommend that you use the prefix DWCDBGROUP#<DBgroup_name># when
naming users, schemas, and roles in your group (see Rules for Technical Names [page 138]).
Create a Schema
You can create a schema in your database user group by using the following SAP Datasphere stored procedure:
CALL "DWC_GLOBAL"."CREATE_USERGROUP_SCHEMA"
(
SCHEMA_NAME => '<schema_name>',
OWNER_NAME => '<user_name>'
);
Note
To avoid possible conflicts, we recommend that you use the prefix DWCDBGROUP#<DBgroup_name># when
naming users, schemas, and roles in your group (see Rules for Technical Names [page 138]).
The owner of the new schema must be a user of the database user group. If the owner name is set to null, then
the database user group administrator is set as the owner.
In our example, we create a new schema, DWCDBGROUP#DWMIGRATE#STAGING, and set BOB as the owner:
CALL "DWC_GLOBAL"."CREATE_USERGROUP_SCHEMA"
(
SCHEMA_NAME => 'DWCDBGROUP#DWMIGRATE#STAGING',
OWNER_NAME => 'DWCDBGROUP#DWMIGRATE#BOB'
);
Create a Role
You can create a role in your database user group by using the following SAP Datasphere stored procedure:
CALL "DWC_GLOBAL"."CREATE_USERGROUP_ROLE"
(
ROLE_SCHEMA_NAME => '<schema_name>',
ROLE_NAME => '<role_name>'
);
Note
To avoid possible conflicts, we recommend that you use the prefix DWCDBGROUP#<DBgroup_name># when
naming users, schemas, and roles in your group (see Rules for Technical Names [page 138]).
Once the role is created, you can grant it to a user or to another role, revoke it, and drop it.
CALL "DWC_GLOBAL"."CREATE_USERGROUP_ROLE"
(
ROLE_SCHEMA_NAME => 'DWCDBGROUP#DWMIGRATE#STAGING',
ROLE_NAME => 'DWCDBGROUP#DWMIGRATE#DWINTEGRATOR'
);
Grant a Role
You can grant a role to a user or to another role in your database user group by using the following SAP Datasphere stored procedure:
CALL "DWC_GLOBAL"."GRANT_USERGROUP_ROLE"
(
ROLE_SCHEMA_NAME => '<schema_name>',
ROLE_NAME => '<role_name>',
GRANTEE => '<user_name>',
GRANTEE_ROLE_NAME => NULL,
WITH_ADMIN_OPTION => FALSE
);
The role schema, grantee, and grantee role must all be in the same database user group.
CALL "DWC_GLOBAL"."GRANT_USERGROUP_ROLE"
(
ROLE_SCHEMA_NAME => 'DWCDBGROUP#DWMIGRATE#STAGING',
ROLE_NAME => 'DWCDBGROUP#DWMIGRATE#DWINTEGRATOR',
GRANTEE => 'DWCDBGROUP#DWMIGRATE#BOB',
GRANTEE_ROLE_NAME => NULL,
WITH_ADMIN_OPTION => FALSE
);
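To grant the role to another role rather than to a user, the grantee role is presumably passed in GRANTEE_ROLE_NAME instead. The following is a sketch only: passing NULL for GRANTEE and the grantee role name 'DWCDBGROUP#DWMIGRATE#DWMODELER' are assumptions for illustration, not confirmed parameter semantics:
CALL "DWC_GLOBAL"."GRANT_USERGROUP_ROLE"
(
    ROLE_SCHEMA_NAME => 'DWCDBGROUP#DWMIGRATE#STAGING',
    ROLE_NAME => 'DWCDBGROUP#DWMIGRATE#DWINTEGRATOR',
    GRANTEE => NULL,                                     -- assumption: no user grantee when granting to a role
    GRANTEE_ROLE_NAME => 'DWCDBGROUP#DWMIGRATE#DWMODELER', -- hypothetical grantee role
    WITH_ADMIN_OPTION => FALSE
);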
Revoke a Role
You can revoke a role from a user by using the following SAP Datasphere stored procedure:
CALL "DWC_GLOBAL"."REVOKE_USERGROUP_ROLE"
(
ROLE_SCHEMA_NAME => '<schema_name>',
ROLE_NAME => '<role_name>',
GRANTEE => '<user_name>',
GRANTEE_ROLE_NAME => NULL
);
CALL "DWC_GLOBAL"."REVOKE_USERGROUP_ROLE"
(
ROLE_SCHEMA_NAME => 'DWCDBGROUP#DWMIGRATE#STAGING',
ROLE_NAME => 'DWCDBGROUP#DWMIGRATE#DWINTEGRATOR',
GRANTEE => 'DWCDBGROUP#DWMIGRATE#BOB',
GRANTEE_ROLE_NAME => NULL
);
Drop a Role
You can drop a role by using the following SAP Datasphere stored procedure:
CALL "DWC_GLOBAL"."DROP_USERGROUP_ROLE"
(
ROLE_SCHEMA_NAME => '<schema_name>',
ROLE_NAME => '<role_name>'
);
CALL "DWC_GLOBAL"."DROP_USERGROUP_ROLE"
(
ROLE_SCHEMA_NAME => 'DWCDBGROUP#DWMIGRATE#STAGING',
ROLE_NAME => 'DWCDBGROUP#DWMIGRATE#DWINTEGRATOR'
);
By default, no SAP Datasphere space can access the database user group schema. To grant a space read
privileges from the database user group schema, use the GRANT_PRIVILEGE_TO_SPACE stored procedure.
Prerequisites
Only the administrator of a database user group has the privilege to run the stored procedure
"DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE".
Context
You can grant read privileges by running an SAP Datasphere specific stored procedure in the SQL console in
the SAP HANA Database Explorer.
Procedure
1. From the side navigation area, go to (System) → (Configuration) → Database Access → Database
User Groups.
2. Select the database user group and click Open Database Explorer.
3. In the SQL console in SAP HANA Database Explorer, call the stored procedure to grant the 'SELECT' privilege to a space using the following syntax:
CALL "DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE" (
OPERATION => <operation>,
PRIVILEGE => <privilege>,
SCHEMA_NAME => <schema name>,
OBJECT_NAME => <object name>,
SPACE_ID => <space ID>);
privilege 'SELECT' [required] Enter the read privilege that you want to grant (or revoke) to the space.
schema_name '[name of database user group schema]' [required] Enter the name of the schema you want the space to be able to read from.
object_name '', null, or '[name of the object]' [required] You can grant the read privileges either at the schema level or at the object level. At the schema level (all objects in the schema), enter null or ''. At the object level, enter a valid table name.
space_id '[ID of the space]' [required] Enter the ID of the space you are granting the read privileges to.
CALL "DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE" (
OPERATION => 'GRANT',
PRIVILEGE => 'SELECT',
SCHEMA_NAME => 'SALE#ETL',
OBJECT_NAME => '',
SPACE_ID => 'SALES');
CALL "DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE" (
OPERATION => 'GRANT',
PRIVILEGE => 'SELECT',
SCHEMA_NAME => 'SALE#ETL',
OBJECT_NAME => 'MY_TABLE',
SPACE_ID => 'SALES');
If the run is successful, you receive a confirmation message in the Result pane. You can then open the Data
Builder, create a data flow, and select the tables as sources.
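To remove a previously granted read privilege, the same stored procedure can presumably be called with the REVOKE operation. This is a sketch based on the "grant (or revoke)" wording above; the operation value 'REVOKE' is an assumption, and the schema and space names are the same hypothetical values as in the examples:
CALL "DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE" (
    OPERATION => 'REVOKE',
    PRIVILEGE => 'SELECT',
    SCHEMA_NAME => 'SALE#ETL',
    OBJECT_NAME => 'MY_TABLE',
    SPACE_ID => 'SALES');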
To grant a space write privileges in the database user group schema, use the GRANT_PRIVILEGE_TO_SPACE
stored procedure. Once this is done, data flows running in the space can select tables in the schema as targets
and write data to them.
Prerequisites
Only the administrator of a database user group has the privilege to run the stored procedure
"DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE".
Context
You can grant write privileges by running an SAP Datasphere specific stored procedure in the SQL console in
the SAP HANA Database Explorer.
Procedure
1. From the side navigation area, go to (System) → (Configuration) → Database Access → Database
User Groups.
2. Select the database user group and click Open Database Explorer.
3. In the SQL console in SAP HANA Database Explorer, call the stored procedure to grant the 'INSERT',
'UPDATE', or 'DELETE' privilege to a space using the following syntax:
CALL "DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE" (
OPERATION => <operation>,
PRIVILEGE => <privilege>,
SCHEMA_NAME => <schema name>,
OBJECT_NAME => <object name>,
SPACE_ID => <space ID>);
privilege 'INSERT', 'UPDATE', or 'DELETE' [required] Enter the write privilege that you want to grant (or revoke) to the space.
Note
You can grant one privilege at a time.
schema_name '[name of database user group schema]' [required] Enter the name of the schema you want the space to be able to write to.
object_name '', null, or '[name of the object]' [required] You can grant the write privileges either at the schema level or at the object level. At the schema level (all objects in the schema), enter null or ''. At the object level, enter a valid table name.
space_id '[ID of the space]' [required] Enter the ID of the space you are granting the write privileges to.
CALL "DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE" (
OPERATION => 'GRANT',
PRIVILEGE => 'UPDATE',
SCHEMA_NAME => 'SALE#ETL',
OBJECT_NAME => '',
SPACE_ID => 'SALES');
CALL "DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE" (
OPERATION => 'GRANT',
PRIVILEGE => 'UPDATE',
SCHEMA_NAME => 'SALE#ETL',
OBJECT_NAME => 'MY_TABLE',
SPACE_ID => 'SALES');
Results
If the run is successful, you receive a confirmation message in the Result pane. You can then open the Data
Builder, create a data flow, and select the tables as targets.
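Because only one privilege can be granted per call, granting full write access to a table means calling the procedure once per privilege. The following sketch reuses the hypothetical schema, table, and space names from the examples above:
CALL "DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE" (
    OPERATION => 'GRANT', PRIVILEGE => 'INSERT',
    SCHEMA_NAME => 'SALE#ETL', OBJECT_NAME => 'MY_TABLE', SPACE_ID => 'SALES');
CALL "DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE" (
    OPERATION => 'GRANT', PRIVILEGE => 'UPDATE',
    SCHEMA_NAME => 'SALE#ETL', OBJECT_NAME => 'MY_TABLE', SPACE_ID => 'SALES');
CALL "DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE" (
    OPERATION => 'GRANT', PRIVILEGE => 'DELETE',
    SCHEMA_NAME => 'SALE#ETL', OBJECT_NAME => 'MY_TABLE', SPACE_ID => 'SALES');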
Administrators have access to various monitoring logs and views, and can create database analysis users, if
necessary, to help troubleshoot issues.
Click (System Monitor) to access the main monitoring tool. The System Monitor allows you to monitor the performance of your system and identify storage, task, out-of-memory, and other issues across all spaces. For example, you can see all the errors (such as failed tasks and out-of-memory errors) that occurred yesterday or the top five statements with the highest peak memory consumption.
Note
For optimal performance, we recommend that you stagger the scheduled run times of tasks such as data flows and task chains that may contain these tasks. There is a limit on how many tasks can be started at the same time. If you come close to this limit, scheduled task runs may be delayed and, if you go beyond the limit, some scheduled task runs might even be skipped.
Card Description
Disk Storage Used Shows the total amount of disk storage used in all spaces, broken down between:
• Data in Spaces: All data that is stored in spaces.
• Audit Log Data: Data related to audit logs (see Audit Logging).
Note
Audit logs can grow quickly and consume a great deal of disk storage (see
Delete Audit Logs [page 251]).
• Other Data: Includes data stored in database user group schemas (see Creating
a Database User Group [page 224]) and SAP HANA data (such as statistics sche-
mas).
• Administrative Data: Data used to administer the tenant and all spaces (such as
space quota, space version). Includes all information stored in the central schemas
(DWC_GLOBAL, DWC_GLOBAL_LOG, DWC_TENANT_OWNER).
Disk Used by Spaces for Storage Shows the total amount of disk storage used in all spaces, out of the total amount of disk storage assigned to all spaces. You can see a breakdown of this amount in the card Disk Storage Used.
Memory Used by Spaces for Storage Shows the total amount of memory storage used in all spaces, out of the total amount of memory storage assigned to all spaces.
Disk Assigned to Spaces for Storage Shows the total amount of disk storage assigned to all spaces.
Memory Assigned to Spaces for Storage Shows the total amount of memory storage assigned to all spaces.
Monitor Tasks
For example, you can find out if tasks have to be scheduled at another time so that high-memory-consuming tasks do not run at the same time. If single tasks consume too much memory, additional views may need to be persisted or view partitioning may need to be used to lower the memory consumption.
To investigate issues:
2. Click View Logs in a card to go to the Logs tab, then Tasks sub-tab, which displays information filtered on
the card criteria. For more information on the Tasks tab, see Tasks Logs Tab [page 238].
3. Click the links in the following columns:
• Activity column - For the spaces you have access to (via scoped roles), a link opens the run in the Data
Integration Monitor (see Managing and Monitoring Data Integration).
• Object Name column - For the spaces you have access to (via scoped roles), a link opens the editor of
the object.
Monitor Statements
Note
If expensive statement tracing is not enabled, then statement information and errors are not traced and
you cannot see them in the System Monitor. For more information on enabling and configuring expensive
statement tracing, see Configure Monitoring [page 244].
Card Description
Top 5 MDS Requests by Processing Memory Consumption Shows the 5 SAP HANA multi-dimensional services (MDS) requests (used for example for SAP Analytics Cloud consumption) whose processing memory consumption is the highest.
Out-of-Memory Errors (MDS Requests) Shows the out-of-memory errors that are related to SAP HANA multi-dimensional services (MDS) requests, which are used for example for SAP Analytics Cloud consumption.
Top 5 Out-of-Memory Errors (Workload Class) by Space Shows the schemas in which out-of-memory errors have occurred in the last 7 days because the statement limits have been exceeded. To set the statement limits for spaces, see Set Priorities and Statement Limits for Spaces [page 134].
2. Click View Logs in a card to go to the Logs tab, then Statements sub-tab, which displays information filtered
on the card criteria. For more information on the Statements tab, see Statements Logs Tab [page 240].
3. Click the links in the Statement Details column.
Card Description
Top 5 Admission Control Rejection Events by Space Shows the 5 spaces with the highest number of rejected statements in the last 7 days.
Note
A space that has been deleted is prefixed with an asterisk character.
Top 5 Admission Control Queuing Events by Space Shows the 5 spaces with the highest number of queued statements in the last 7 days.
Note
A space that has been deleted is prefixed with an asterisk character.
For more information about admission control thresholds, see Set Priorities and Statement Limits for Spaces
[page 134].
Once you’ve created an elastic compute node in the Space Management app (see Create an Elastic Compute
Node [page 39]), you can monitor its key figures, such as the start and end time of the last run or the amount
of memory used for data replication.
1. In the side navigation area, click (System Monitor), then click the Elastic Compute Nodes tab.
2. From the dropdown list, select the elastic compute node that you want to monitor.
Note
If one elastic compute node exists, related monitoring information is automatically displayed in the
tab. If several elastic compute nodes exist, you must select a node from the dropdown list to display
monitoring information in the tab.
You can view elastic compute node key figures or identify issues with the following cards:
Card Description
Configuration Shows the following information about the current elastic compute node:
• Technical name.
• Status, such as Ready or Running (see Run an Elastic Compute Node [page 44]).
• The performance class and the resources allocated to the node: number of com-
pute blocks, memory, disk storage and number of vCPUs.
Run Details Shows the following information about the latest or the previous run of the current
elastic compute node:
• The date and time at which the elastic compute node has started and stopped.
• The total run duration (uptime) from the starting to the stopping phase.
• The number of block-hours is the numbers of hours that have been consumed by
the run. The number of block-hours is the result of the run duration in numbers
of hours multiplied by the number of compute blocks. If a node that includes 4
compute blocks runs for 5 hours, 20 block-hours have been consumed. In such a
case, the uptime equals the block-hours. If a node that includes 8 compute blocks
runs for 5 hours, 40 block-hours have been consumed.
Monthly Uptime Shows the following information about the elastic compute node runs for the current
month or the last month:
• The total duration (uptime) of all runs in the current or last month.
• The total number of block-hours consumed by all the runs in the current or last
month.
Average CPU Shows the average percentage of the number of vCPUs consumed during the latest or previous run of the elastic compute node.
The trend icon (up or down arrow) indicates if the percentage is higher or lower than in the previous run.
To see the real-time average CPU utilization in percentage for the elastic compute node, click Performance Monitor (SAP HANA Cockpit), which opens the Performance Monitor page in the SAP HANA Cockpit (see The Performance Monitor in the SAP HANA Cloud Database Administration with SAP HANA Cockpit).
Average Memory Shows the average amount of memory consumed (in GiB) during the latest or previous run of the elastic compute node.
The trend icon (up or down arrow) indicates if the amount is higher or lower than in the previous run.
To see the real-time average memory utilization in GB for the elastic compute node, click Performance Monitor (SAP HANA Cockpit), which opens the Performance Monitor page in the SAP HANA Cockpit (see The Performance Monitor in the SAP HANA Cloud Database Administration with SAP HANA Cockpit).
Total Uptime Shows the total duration in hours of all runs of the elastic compute node.
Top 5 Statements by Processing Memory Consumption Shows the 5 statements whose memory consumption was the highest during the last run of the elastic compute node.
To see detailed information about the statements, you can click View Logs, which takes you to the Logs Statements tab. See Monitoring SAP Datasphere [page 232].
Out-of-Memory Errors Shows the number of out-of-memory errors that have occurred in tasks and statements
related to the elastic compute node, during the last run.
To see detailed information about the errors, you can click View Logs, which takes you to
the Logs Statements tab. See Monitoring SAP Datasphere [page 232].
Memory Distribution Shows the amount of memory allocated to the elastic compute node, if in a running
state, broken down between:
• Unused Memory - Shows the amount of memory available for the elastic compute
node.
• Memory Used for Data Replication - Shows the amount of memory used to store
replicated data for the elastic compute node.
• Memory Used for Processing - Shows the amount of memory used by the processes
that are currently running for the elastic compute node. For example: consumption
of the queries running on the elastic compute node.
Note
If the elastic compute node is not in a running state, no data is displayed.
Property Description
Start Time Shows at what time (date and hour) the task has started to run.
Duration (sec) Shows how many seconds the task has run.
Object Type Shows the type of object that was run in the task. For example: view, remote table, data flow.
Activity Shows the action that was performed on the object. For example: persist, replicate, execute.
You can click on the activity name, which takes you to the Data Integration Monitor.
Space Name Shows the name of the space in which the task is run.
Object Name Shows the name of the object. You can click on the object name, which opens the object in
the Data Builder.
SAP HANA Peak Memory Shows the maximum amount of memory (in MiB) the task has used during the runtime in
SAP HANA.
Note
You can see this information:
• If the option Enable Expensive Statement Tracing is enabled and if the task exceeds
the thresholds specified in (Configuration) → Monitoring.
• And if the task is run for these objects (and activities): views (persist, remove_persisted_data), remote tables (replicate, enable_realtime), data flows (execute), and intelligent lookup (execute, delete_data).
SAP HANA CPU Time Shows the maximum amount of CPU time (in ms) the task has used in SAP HANA.
Note
You can see this information:
• If the option Enable Expensive Statement Tracing is enabled and if the task exceeds
the thresholds specified in (Configuration) → Monitoring. See Configure Moni-
toring [page 244].
• And if the task is run for these objects (and activities): views (persist, remove_persisted_data), remote tables (replicate, enable_realtime), data flows (execute), and intelligent lookup (execute, delete_data).
Note
The CPU time indicates how much time is used for all threads. It means that if the CPU
time is significantly higher than the duration of the statement, then many threads are
used. If many threads are used for a long time, no other tasks should be scheduled at
that point in time, or resource bottlenecks may occur and tasks may even be canceled.
Records Shows the number of records of the target table after the task has finished running.
Note
You can see this information only if the task is run for these objects (and activities):
views (persist), remote tables (replicate, enable_realtime), data flows (execute) and
intelligent lookup (execute, delete_data). Otherwise, no number is displayed.
SAP HANA Used Memory Shows the amount of memory (in MiB) that is used by the target table in SAP HANA after
the task has finished running.
SAP HANA Used Disk Shows the amount of disk space (in MiB) that is used by the target table in SAP HANA after
the task has finished running.
Substatus For tasks with the status “failed”, shows the substatus and a message describing the cause
of failure. For more information about failed task substatuses, see Understanding Statuses
and Substatuses.
Target Table Shows the SAP HANA database technical name of the target table.
Statements Shows a link you can click to view all the statements of the task in the Statements tab, if the
information is available.
Note
• You can see this information if the option Enable Expensive Statement Tracing is
enabled in (Configuration) → Monitoring. See Configure Monitoring [page 244].
• However, as statements are traced for a limited period, you may not be able to see
the statements used in the task.
Out-of-Memory Shows if the task has an out-of-memory error ("Yes" is then displayed) or not ("No" is then
displayed).
Start Date Shows at which date the task has started to run.
Note
Data on tasks is kept for the time specified in (Configuration) → Tasks.
In Logs Statements , the table shows the following information, depending on what you've specified in
(Configuration) → Monitoring:
• If the option Enable Expensive Statement Tracing is disabled, then the Statements tab is disabled.
• If the option Enable Expensive Statement Tracing is enabled, you can see all the database statements that
exceed the specified thresholds.
Property Description
Start Time Shows at what time (date and hour) the statement has started to run.
Duration (ms) Shows how many milliseconds the statement has run.
Object Type • Shows the type of object that was run in the statement (for example: view, remote
table, data flow).
• Or shows the area where the statement was run:
• MDS - this is an SAP HANA multi-dimensional services (MDS) statement, which is
caused for example by stories when SAP Analytics Cloud queries SAP Datasphere.
• Data Flow - the statement was run by a data flow.
• Analysis - the statement was run by a database analysis user.
• Space SQL - the statement was run by a database user of a space.
• Business Layer Modeling - the statement was run in the Business Builder.
• Data Layer Modeling - the statement was run in the data preview of the view editor
in the Data Builder.
• DWC Space Management - the statement was run in the Space Management, for
example, when deploying an object.
• DB Usergroup - the statement was run by a user of a database user group.
• DWC Administration - the statement was run for an administration task such as
writing a task framework status.
• System - any other SAP HANA system statement.
Activity Shows the action that was performed. For example: update, compile, select.
Object Name If the statement is related to a task, it shows the name of the object for which the statement
was run.
Schema Name Shows the name of the schema in which the statement is run.
SAP HANA Peak Memory Shows the maximum amount of memory (in MiB) the statement has used during the
runtime in SAP HANA.
Note
You can see the information if the option Enable Expensive Statement Tracing is enabled and if the statement exceeds the thresholds specified in (Configuration) → Monitoring. See Configure Monitoring [page 244].
SAP HANA CPU Time Shows the amount of CPU time (in ms) the statement has used in SAP HANA.
Note
You can see the information if the option Enable Expensive Statement Tracing is enabled and if the statement exceeds the thresholds specified in (Configuration) → Monitoring. See Configure Monitoring [page 244].
Note
The CPU time indicates how much time is used for all threads. It means that if the CPU
time is significantly higher than the duration of the statement, then many threads are
used. If many threads are used for a long time, no other tasks should be scheduled at
that point in time, or resource bottlenecks may occur and tasks may even be canceled.
Statement Details Shows the More link that you can click to view the complete SQL statement.
Note
For MDS queries - If you’ve enabled the tracing of MDS information (see Configure Mon-
itoring [page 244]), the payload of the MDS query that is run by SAP Analytics Cloud is
displayed. If identified in the payload, the following information is also displayed: story
ID, story name and data sources. You can copy or download the displayed information.
Parameters Shows the values of the parameters of the statement that are indicated by the character "?"
in the popup that opens when clicking More in the Statement Details column.
Out-of-memory Shows if the statement has an out-of-memory error ("Yes" is then displayed) or not ("No" is
then displayed).
Task Log ID If the statement is related to a task, it shows the identifier of the task within a link, which
takes you to the Tasks tab filtered on this task.
Elastic Compute Node If the statement exceeds the thresholds specified in the option Enable Expensive Statement
Tracing in (Configuration) → Monitoring (see Configure Monitoring [page 244]):
• Shows the name of the elastic compute node if the statement is run on an elastic
compute node.
• Shows a hyphen (-) if the statement is run on the main instance.
Error Code If the statement has failed, it shows the numeric code of the SQL error. See SQL Error Codes
in the SAP HANA SQL Reference Guide for SAP HANA Platform.
Error Message If the statement has failed, it shows a description of the SQL error.
Workload Class If the statement has an out-of-memory error, it shows the name of the workload class
whose limit has been exceeded.
Start Date Shows at which date the statement has started to run.
Data on statements is kept for a time that depends on the thresholds specified in (Configuration) → Monitoring (see Configure Monitoring [page 244]). As only a certain number of statements are kept (30,000 by default), if very low thresholds are set, the retention period may be very short (for example, only a few hours). To keep the statements for a longer time, set the thresholds accordingly.
You can control the tables in Tasks and Statements in the following ways:
Example
To only see the tasks that have failed on remote tables, in the Include area, select the column
Object Type, then the filtering value contains and enter "REMOTE". Then, add a filter, select the
column Status, then the filtering value contains and enter "FAILED". Once applied, the filter is
displayed above the table.
Note
• The filtering options available depend on the data type of the column you filter on.
• Filters applied to text columns are case-sensitive.
• You can enter filter or sort values in multiple columns.
• You cannot view data in an SQL view if any of its sources is shared from another space and has
an input parameter.
• To increase performance, only the first 1,000 rows are displayed. Use filters to find the data
you are looking for. Filters are applied to all rows, but only the first filtered 1,000 rows are
displayed.
Note
If you filter on one of the following columns and you enter a number, use the “.” (period) character
as the decimal separator, regardless of the decimal separator used in the number formatting that
you’ve chosen in the general user settings ( Settings Language & Region ): SAP HANA Peak
Memory, SAP HANA CPU Time, SAP HANA Used Memory and SAP HANA Used Disk.
You can control which monitoring data is collected and also obtain independent access to the underlying SAP
HANA monitoring views that power the System Monitor.
Procedure
1. In the side navigation area, click (System) (Configuration) and then select the Monitoring tab.
2. To obtain independent access to the underlying SAP HANA monitoring views that power the System
Monitor:
1. Select a space from the drop-down list and click Confirm Selected Space.
2. If you've created the <SAP_ADMIN> space and you want to enable it, click Enable access to SAP
Monitoring Content Space. If there isn't any space named <SAP_ADMIN> in your tenant, this is not
available for selection.
For more information, see Working with SAP HANA Monitoring Views [page 245].
3. To analyze individual SQL queries whose execution exceeds one or more thresholds that you specify, select
Enable Expensive Statement Tracing, specify the following parameters to configure and filter the trace
details, then save your changes.
Property Description
In-Memory Tracing Records Specify the maximum number of records that are stored in the monitoring tables.
For example, if about 5 days are traced in the expensive statement tables and you don't want to change the thresholds, you can double the number of records in In-Memory Tracing Records so that about 10 days are traced. Be aware that increasing this number will also increase the used storage.
Maximum: 100,000
Threshold CPU Time Specify the threshold CPU time of statement execution. Recommended: 0
Threshold Memory Recommended: 1,000 MB
Threshold Duration Recommended: 5
Trace Parameter Values In SQL statements, field values may be specified as parameters (using a "?" in the syntax). If these parameter values are not required, then do not select the option to reduce the amount of data traced.
If expensive statement tracing is not enabled, then statement information and errors are not traced and
you cannot see them in the System Monitor (see Monitoring SAP Datasphere [page 232]).
For more information about these parameters, see Expensive Statements Trace in the SAP HANA Cloud,
SAP HANA Database Administration Guide.
4. To analyze individual SAP HANA multi-dimensional services (MDS) queries, select Enable MDS Information
Tracing and save.
Property Description
MDS Tracing Records Specify the maximum number of records that are stored for MDS requests in the monitoring
tables.
You can increase this number in order to trace more data in the System Monitor.
If the tracing is enabled, you can view information on MDS queries when clicking More in the column
Statement Details of the Statements tab in the System Monitor (see Monitoring SAP Datasphere [page
232]).
5. To trace elastic compute node data, select Enable Elastic Compute Node Data Tracing and save.
• If the tracing is disabled, only the statements of currently running nodes are displayed in the System
Monitor. If a node is stopped, its information is deleted.
• If the tracing is enabled and a node is started and stopped more than once, only the information
about the previous run is displayed. The information is kept for 10 days or is deleted if more than 100
individual elastic compute nodes have run.
You can obtain independent access to the underlying SAP HANA monitoring views that power the System
Monitor to do additional analysis on them and visualize them in SAP Analytics Cloud.
Monitoring information covers all spaces and views, so these views should not be made accessible to all SAP Datasphere users. An administrator can select two spaces dedicated to monitoring information and assign users to these spaces with modeling privileges so that they can work with the monitoring views in the Data Builder.
Note
The data from these monitoring views is available directly in the System Monitor (see Monitoring SAP
Datasphere [page 232]). Working with them independently is optional and allows you to do further analysis
that is not supported in the standard monitor.
As the monitoring spaces you choose will provide unfiltered access to monitoring views, be aware that the
users assigned to the spaces will be able to see all metadata and object definitions of all spaces.
Note
If you have already selected a space for monitoring before version 2021.19, you need to select another
space, then select the initial space again so that you can access all the views.
• <SAP_ADMIN> space - This space can contain the pre-configured monitoring views provided by SAP
via the Content Network. First create the space with the space ID <SAP_ADMIN> and the space name
<Administration (SAP)>, enable access to it, and import the package from the Content Network.
Note
Do not create a space with the space ID <SAP_ADMIN> for another purpose.
Monitoring Views
• SAP HANA SYS Schema Monitoring Views - All SAP HANA monitoring views start with M_. For more information, see Monitoring Views in the SAP HANA Cloud, SAP HANA Database SQL Reference Guide.
The views for monitoring expensive statements are M_EXPENSIVE_STATEMENTS and M_EXPENSIVE_STATEMENT_EXECUTION_LOCATION_STATISTICS (see M_EXPENSIVE_STATEMENTS and M_EXPENSIVE_STATEMENT_EXECUTION_LOCATION_STATISTICS).
The views M_MULTIDIMENSIONAL_STATEMENT_STATISTICS and M_MULTIDIMENSIONAL_STATEMENTS provide extensive information about MDS queries.
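For example, a user with access to these views (such as a database analysis user, or a modeler in the monitoring space) could list the most memory-intensive traced statements with a query along the following lines. This is a minimal sketch; the column selection assumes the standard M_EXPENSIVE_STATEMENTS view definition and that expensive statement tracing is enabled:
SELECT TOP 5
    START_TIME,
    DB_USER,
    MEMORY_SIZE,       -- peak memory used by the statement, in bytes
    CPU_TIME,          -- CPU time consumed by the statement
    STATEMENT_STRING
FROM M_EXPENSIVE_STATEMENTS
ORDER BY MEMORY_SIZE DESC;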
The following monitoring views have the suffix _V_EXT and are ready to use in the DWC_GLOBAL schema:
• SPACE_SCHEMAS_V_EXT:
Column Description
SPACE_ID Identifier of the SAP Datasphere space. Note that one space can contain several schemas.
• SPACE_USERS_V_EXT:
Column Description
SPACE_ID Identifier of the SAP Datasphere space. Note that one space can contain several users.
USER_TYPE Type of user, such as space technical user (for example database user for open SQL
schemas) or global user.
• TASK_SCHEDULES_V_EXT:
Column Description
SPACE_ID (Key) Identifier of the SAP Datasphere space which contains the object with the defined schedule.
OBJECT_ID (Key) Identifier of the SAP Datasphere object for which the schedule is defined.
Note
For each application, you can have multiple activities (for example, replicating or deleting
data).
OWNER Identifier of the user responsible for the schedule. The schedule is executed on this user's behalf, and consent is checked against this user (<DWC User ID>).
NULL (no schedule defined, or a SIMPLE schedule is defined), or the cron expression, for example "0 */1 * * *" for hourly (see Schedule a Data Integration Task (with Cron Expression)).
NULL (no schedule defined, or a CRON schedule is defined), or the schedule definition, for example Daily + start date + time + duration (see Schedule a Data Integration Task (Simple Schedule)).
CHANGED_AT Timestamp (date and time) at which the schedule was last changed.
• TASK_LOGS_V_EXT:
Column Description
SPACE_ID Identifier of the SAP Datasphere space which contains the object with the defined schedule.
OBJECT_ID Identifier of the SAP Datasphere object for which the schedule is defined.
ACTIVITY For each application there can be multiple activities, for example replicating or deleting data.
PEAK_MEMORY Captures the highest peak memory consumption (in bytes). Not available for all apps. Requires Enable Expensive Statement Tracing (see Configure Monitoring [page 244]). Returns NULL if the value is not available for the application, Enable Expensive Statement Tracing is not set, or the defined threshold is not reached; otherwise 0 or the value of the memory consumption.
PEAK_CPU Total CPU time (in microseconds) consumed by the task. Not available for all apps. Requires Enable Expensive Statement Tracing (see Configure Monitoring [page 244]). Returns NULL if the value is not available for the application, Enable Expensive Statement Tracing is not set, or the defined threshold is not reached; otherwise 0 or the value of the CPU time consumption.
RECORDS Shows the number of records of the target table after the task has finished running.
START_TIME Timestamp (date and time) at which the scheduled task was started.
END_TIME Timestamp (date and time) at which the scheduled task was stopped.
TRIGGERED_TYPE Indicates if task execution was triggered manually (DIRECT) or via schedule (SCHEDULED).
APPLICATION_USER The user on whose behalf the schedule was executed (the owner at this point in time).
DURATION Duration of the task execution (also works for ongoing execution).
• TASK_LOG_MESSAGES_V_EXT:
Column Description
MESSAGE_NO (Key) Order sequence of all messages belonging to a certain Tasklog ID.
SEVERITY Indicates if the message provides general information (INFO) or error information (ERROR).
DETAILS Technical additional information. For example, it can be an error stack or a correlation ID.
• TASK_LOCKS_V_EXT:
Column Description
LOCK_KEY (Key) Identifier, flexible field as part of the lock identifier, usually set to WRITE or EXECUTE.
SPACE_ID (Key) Identifier of the SAP Datasphere space which contains the object with the defined schedule.
OBJECT_ID (Key) Identifier of the SAP Datasphere object for which the schedule is defined.
TASK_LOG_ID Uniquely identifies the task execution that set the lock.
Note
Cross-space sharing is active for all SAP HANA monitoring views. The row level access of shared views is
bound to the space read access privileges of the user who consumes the view.
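For example, a simple overview of recent task runs could be built on TASK_LOGS_V_EXT with a query along the following lines. This is a minimal sketch, assuming it is run by a user with read access to the DWC_GLOBAL schema (for example in an SQL view in the monitoring space); the columns are those listed above:
SELECT
    SPACE_ID,
    OBJECT_ID,
    ACTIVITY,
    START_TIME,
    DURATION,
    PEAK_MEMORY
FROM "DWC_GLOBAL"."TASK_LOGS_V_EXT"
WHERE START_TIME >= ADD_DAYS(CURRENT_TIMESTAMP, -7)   -- restrict to the last 7 days
ORDER BY START_TIME DESC;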
These SAP Datasphere monitoring views help you monitor data integration tasks in a more flexible way. They
are built on the V_EXT views, and are enriched with further information as preparation for consumption in an
SAP Analytics Cloud story.
See the blogs SAP Datasphere: Data Integration Monitoring – Sample Content for Reporting (published in
October 2021) and SAP Datasphere: Data Integration Monitoring – Running Task Overview (published in
November 2021).
Monitor the read and change actions (policies) performed in the database with audit logs, and see who did
what and when.
If Space Administrators have enabled audit logs to be created for their space (see Enable Audit Logging), you can get an overview of these audit logs. You can do analytics on audit logs by assigning the audit views to a dedicated space and then working with them in a view in the Data Builder.
Audit logs can consume a large amount of disk space in your database, especially when combined with long retention periods (which are defined at the space level). You can delete audit logs when needed, which will free up disk space. For more information, see Delete Audit Logs [page 251].
All spaces for which auditing is enabled are listed in the Audit Log Deletion area.
For each space, you can delete separately all the audit log entries recorded for read operations and all the audit
log entries recorded for change operations. All the entries recorded before the date and time you specify are
deleted.
Note
Audit logs are automatically deleted when performing the following actions: deleting a space, deleting a
database user (open SQL schema), disabling an audit policy for a space, disabling an audit policy for a
database user (open SQL schema), unassigning an HDI container from a space. Before performing any of
these actions, you may want to export the audit log entries, for example by using SAP HANA Database
Explorer (see Export Audit Logs).
Monitor the changes that users perform on modeling objects (such as spaces and tables) as well as changes to
the system configuration (such as roles and users).
For example:
Note
To delete the activity log, you must be granted the privilege Activity Log with the permission Delete,
which is included in the system owner role and which you can include in a custom role.
For more information, see Track User Activities in the SAP Analytics Cloud Help.
A database analysis user is an SAP HANA Cloud database user with wide-ranging privileges. It can be used to
support monitoring, analyzing, tracing, and debugging of your SAP Datasphere run-time database.
Context
A user with the DW Administrator role can create a database analysis user.
Note
You should only create a database analysis user to resolve a specific database issue and then delete it
immediately after the issue is resolved. This user can access all SAP HANA Cloud monitoring views and all
SAP Datasphere data in all spaces, including any sensitive data stored there.
1. In the side navigation area, click (System) (Configuration) Database Access Database
Analysis Users .
2. Click Create and enter the following properties in the dialog:
Property Description
Database Analysis User Name Suffix Enter the suffix, which is used to create the full name of the user. Can contain a maximum of 31 uppercase letters or numbers and must not contain spaces or special characters other than _ (underscore). See Rules for Technical Names [page 138].
Enable Space Schema Access Select only if you need to grant the user access to space data.
Database analysis user expires in Select the number of days after which the user will be deactivated. We strongly recommend creating this user with an automatic expiration date.
The host name and port, as well as the user password are displayed. Note these for later use.
4. Select your user in the list and then click one of the following and enter your credentials:
• Open SAP HANA Cockpit - Open the Database Overview Monitoring page for the SAP Datasphere run-time database, which offers various monitoring tools. For more information, see Using the Database Overview Page to Manage a Database.
• Open Database Explorer - Open an SQL Console for the SAP Datasphere run-time database. For more information, see Getting Started With the SAP HANA Database Explorer.
A database analysis user can run a procedure in Database Explorer to stop running statements. For
more information, see Stop a Running Statement [page 253].
Note
All actions of the database analysis user are logged in the ANALYSIS_AUDIT_LOG view, which is stored
in the space that has been assigned to store audit logs (see Enable Audit Logging).
The audit log entries are kept for 180 days, after which they are deleted.
Using a database analysis user, you can stop a statement that is currently running.
You may for example want to stop a statement that has been running for a long time and is causing
performance issues.
You can only stop a statement that has been run by space users, analysis users, user group users and Data
Provisioning Agent users.
In SAP HANA Database Explorer, run a database procedure using the following syntax:
ACTION CANCEL Enter CANCEL to run the statement ALTER SYSTEM CANCEL [WORK IN] SESSION (see ALTER SYSTEM CANCEL [WORK IN] SESSION Statement (System Management) in the SAP HANA Cloud, SAP HANA Database SQL Reference Guide).
Note
You can find the connection ID in (System Monitor) Logs Statements , then the column Connection ID.
For more information on the database explorer, see Getting Started With the SAP HANA Database Explorer.
Delete your database analysis user as soon as the support task is completed to avoid misuse of sensitive data.
Procedure
1. In the side navigation area, click (System) (Configuration) Database Access Database
Analysis Users .
2. Select the user you want to delete and then click Delete.
Configure notifications about system events and network connection issues, and define the SMTP server to be
used for email deliveries.
When there are problems with a system, your users would like to know whether it is something that they
control or if the issues are related to the network. You can't create messages for all situations, but you can let
them know when the network connection is unstable.
When the notification is on, everyone who uses the application on that tenant will see the notification in the top
right corner of their application.
By default, when users are added to your SAP Datasphere tenant, they receive a welcome email which contains
a link to the tenant so they can activate their account and log in for the first time. You can disable the welcome
email from being sent to new users. You may want to do so in the following cases:
• When SAML single sign-on (SSO) is set up and it's not necessary for the users to activate their account.
• When you want to set up single sign-on (SSO) before users are given the go-ahead to access the system.
• When the custom SAML Identity Provider (IdP) is changed.
• When you need to import users from a public tenant to a private tenant.
Note
If you disable the welcome email and then add a user who doesn't have an activated account for SAP
Datasphere, they will not be able to access the system. The new user needs to go to the tenant log-on page
and click "Forgot password?". They must enter the email address associated with their account and follow
the instructions of the received email to set up a password.
Configuring an email server of your choice ensures greater security and flexibility while delivering email for your
business.
View a list of users whose authorization consent will expire in less than four weeks.
To view a list of users whose authorization consent will expire within the next four weeks, click
(Configuration) Tasks . Then, in the Consent Expiration section of the Tasks page, click the View
Expiration List link. SAP Datasphere now displays a dialog in which you can view a list of users whose
authorization consent will expire within a given timeframe.