Cortex Cloud Documentation
Cortex Cloud is an easily extensible platform that consolidates Application Security, Cloud Posture Security, Runtime Security, and Security Operations (SOC). It is
enterprise-ready for regulated organizations, with data-residency-preserving scanning worldwide. It provides consolidated, flexible reporting on all cloud security
postures for executives and operators, and includes an AI Copilot across the platform to simplify your day-to-day activities.
Application Security: Prevent issues from getting into your production environment.
Cloud Posture Security: Reduce and prioritize risks already present in your cloud environment.
Cloud Runtime: Stop an attacker from exploiting risks present in your cloud environment.
Discover your cloud inventory: Comprehensive, uniform inventory of all infrastructure types across all cloud providers: Compute, APIs,
Containers, Serverless, Data, AI Services, Identities, Networks.
Visualize asset relationships: Rich relationship graph demonstrates how assets and findings impact each other.
Simple, frictionless onboarding: View your entire cloud estate with agentless scanning.
Vulnerabilities
Misconfigurations
Network exposure
Cloud vulnerability management: Vulnerability management across VMs, Containers, Serverless, and OSS packages:
Prioritize impactful vulnerabilities with context: Understand environmental factors such as severity, exploitability, patch availability, internet exposure,
and more.
Reduce the number of fixes: AI-driven detection consolidates related vulnerabilities with the same root cause—one fix resolves multiple issues.
Track KPIs: Gain insights into the state of vulnerabilities in cloud-native environments and how they evolve over time.
Validate compliance with industry regulations using dozens of built-in standards across security, privacy, and AI: PCI DSS, HIPAA, GDPR, NIST, ISO, and more.
Meet unique organizational requirements: Create custom compliance rules and standards.
Generate audit-ready reports: Export, schedule, and share compliance reports with stakeholders.
Detect critical risks: Discover combinations of individual risk signals that form attack paths.
Prioritize harmful risks: Uncover which misconfigurations enable lateral movement to high-value assets such as sensitive data stores.
Gain end-to-end visibility: Visualize attack paths in a rich graph, gaining full risk context.
Cases:
Dramatically reduce alerts: AI-driven detection consolidates related risks and attack paths into a high-priority case.
Enable full context for effective mitigation: Clearly identify high-impact posture issues, preventing wasted resources on less significant threats.
Powerful Ecosystem: Cortex Cloud integrates with your entire ecosystem through thousands of third-party integrations, ranging from workflow solutions
to security vendors.
Out-of-the-Box Remediation Playbooks: Cortex Cloud offers many playbooks for remediating security issues to reduce MTTR.
Build Your Own Remediation Playbooks: Cortex Cloud offers a no-code automation wizard for building your own security playbooks.
Visibility: Provides agentless, comprehensive visibility across cloud environments, including IaaS, PaaS, Kubernetes, containers, serverless functions,
networks, and storage services in a unified cloud asset inventory.
Posture Management: Supports configuration assessment, compliance monitoring, vulnerability management, code-to-cloud remediation, and reporting.
Includes attack path analysis to aggregate potential risks.
Data, Identity, and Application Security: Deploys quickly to deliver real-time visibility and protection across cloud environments.
After you onboard your cloud or code providers, the data collected by Cortex Cloud is mapped into Asset groups.
The different types of ingested data, such as audit logs, configurations, and data coming from workloads, are displayed on the left.
Identities (human, and non-human such as service accounts) are displayed in the center.
The Command Center also provides details on the total number of assets in your environment, the number of issues closed, and the amount of time saved.
Cortex Cloud includes a unified Asset Inventory, which provides a complete list of your different assets in a single place. Here you can see your AI models and
deployments, Applications, APIs, and all your Compute instances, Data assets, and Identities.
Now that you have an overview of your environment and all the assets, you can navigate to Cases to view and resolve your issues as well as create a New
Case.
Now that you have familiarized yourself with Cortex Cloud, consider taking the following actions to begin securing your cloud resources:
Cloud security engineers or architects design and maintain a system that protects everything their organization hosts across various cloud service providers.
Their goal is to achieve zero critical risks and compliance violations without blocking the productivity of other teams. Their responsibilities include:
Get complete visibility without blind spots across multicloud environments
Keep alerts manageable
Avoid manual parsing that prolongs risk assessment and reporting efforts
Get better risk context that aids prioritization and remediation efforts with cross-functional teams
Security add-ons:
Data Ingestion
Application Security
Forensics
Host Insights
Capacity add-ons:
Data Retention
Query Capacity
Learn about the deployment preparation and procedures to onboard and configure Cortex Cloud.
Plan and prepare your Cortex Cloud deployment. Then, activate and configure your Cortex Cloud tenant using the Deployment steps.
Onboard your cloud assets for automation and core analytics, data ingestion, enterprise runtime security, cloud posture security, and cloud runtime security.
Depending on your license and add-ons, onboard your cloud assets and modules.
Before you get started with Cortex Cloud, consider the following:
Determine the amount of log storage you need for your Cortex Cloud deployment. Talk to your partner or sales representative to determine whether you
must purchase additional storage within the Cortex Cloud tenant.
Determine the region in which you want to host Cortex Cloud and any associated services, such as Directory Sync Service. If you plan to stream data
from a Strata Logging Service instance, it must be in the same region as Cortex Cloud. For more information, see Cortex Cloud supported regions.
Review the plan and prepare considerations, and then use the onboarding checklist to successfully deploy and onboard Cortex Cloud.
Abstract
For more information about setting up 2FA in the Customer Support Portal, see Two Factor Authentication (2FA) Overview. You can also add an IdP, which is
recommended. See How to Enable a Third Party IdP.
To activate a Cortex Cloud tenant, log in to Cortex Gateway, a centralized portal for activating and managing tenants, users, roles, and user groups. After
activating a tenant, you can access it. If you have multiple Cortex Cloud tenants, repeat this task for each tenant. The activation process includes accessing
Cortex Gateway, activating the tenant, and then accessing the tenant.
PREREQUISITE:
Before activating your Cortex Cloud tenant, you need to set up your Customer Support Portal account. See How to Create Your Customer Support
Portal User Account. When you create a Customer Support Portal account you can set up two-factor authentication (2FA) to log into the Customer
Support Portal, by using one of the following:
Okta Verify
Users who create the Customer Support Portal account are granted the Super User role. If you are the first user to access Cortex Gateway with the
Customer Support Portal Super User role, you are automatically granted Account Admin permissions for the gateway.
You can activate new Cortex Cloud tenants, access existing tenants, and create and manage role-based access control (RBAC) for all of your tenants.
How to activate Cortex Cloud
1. Enable and verify access to Cortex Cloud communication servers, storage buckets, and various resources in your firewall configuration. For more
information, see Enable access to required PANW resources.
2. Go to Cortex Gateway.
You can also access the link from the activation email.
3. Sign in with your Customer Support Portal account credentials: enter your username and password, and complete multi-factor authentication if it is set up.
Cortex Gateway displays the following:
Tenants that are allocated to your Customer Support Portal account and are ready for activation. After activation, you cannot move your tenant to a
different Customer Support Portal account.
Tenant details such as license type, number of endpoints, and purchase date.
Tenants that were activated and are now available. If you have more than one Customer Support Portal account, the tenants are displayed
according to the Customer Support Portal account name.
4. In the Available for Activation section, use the serial number to locate the tenant that needs activation, and then click Activate.
Tenant Name: Enter a name for the tenant. Use a name that is unique across your company account and up to 59 characters long.
Region: Geographic location where your tenant will be hosted. For more information, see Cortex Cloud supported regions.
Tenant Subdomain: DNS record associated with your tenant. Enter a name that will be used to access the tenant directly using the full URL:
https://<xsiam-tenant>.xdr.<region>.paloaltonetworks.com
(Optional) If you want to bring your own keys for encrypting your data, under Advanced, select BYOK and follow the instructions of the wizard as
detailed in Encryption Method.
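The Tenant Subdomain and Region settings above combine into the tenant URL; as a quick sketch with hypothetical values (substitute your own subdomain and region code):

```shell
# Hypothetical values; substitute your actual tenant subdomain and region code.
tenant="acme-sec"
region="us"

# The resulting tenant URL follows the template shown above.
echo "https://${tenant}.xdr.${region}.paloaltonetworks.com"
```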
Encryption Method
Cortex Cloud enables you to select the method used to encrypt your tenant data at rest. You can select the encryption method of your tenant only
when creating new tenants. Select the encryption method in Advanced → Encryption Method.
All data stored by Cortex Cloud is encrypted at rest using a dedicated key management system. Cortex Cloud provides strict key access controls
and auditing, and encrypts user data at rest according to AES-256 encryption standards. We recommend all our customers use this default
system.
BYOK (Bring Your Own Keys) enables you to generate your own encryption keys and securely import and manage them via Cortex Gateway to
retain greater control over your tenant data and encryption. This requires further setup.
7. Click Activate.
The activation process can take about an hour and does not require that you remain on the activation page. Cortex Cloud sends a notification to your
email when the process is complete.
8. After activation, in Cortex Gateway, under Available Tenants, hover over the activated tenant and do the following:
In the dialog box, view the tenant status, region, serial number, and license details.
Abstract
Encrypt your tenant data at rest using Bring Your Own Keys (BYOK)
PREREQUISITE:
Access to BYOK (Bring Your Own Keys) functionality is restricted to tenants that were initially activated with BYOK.
Bring Your Own Keys (BYOK) enables you to generate your own encryption keys and securely import and manage them via Cortex Gateway to retain greater
control over your tenant data and encryption.
BYOK supports the following key management operations. Cortex Cloud provides detailed audit logs and email notifications on all key management
operations.
To import a new encryption key, whether for initial tenant setup or key rotation, use the Bring your own keys (BYOK) setup.
If you're performing the activation for the first time, in the Cortex Gateway, follow the Tenant Activation wizard. In Tenant Activation → Define Tenant
Settings, under Advanced, select BYOK (Bring Your Own Keys) and click Create Tenant and Set Up Keys.
The tenant is now initialized, which may take a few minutes. You can set up your keys now, or return at a later stage and click Set Up Encryption Keys
next to the tenant in the gateway to continue the process.
If you've already started the activation process and paused, locate your tenant in the Available Tenants list in the Cortex gateway, click Set Up Encryption
Keys next to your tenant and set up your keys.
To rotate your encryption keys, in the Cortex gateway, open the more options menu next to the tenant, select Rotate Encryption Key, and follow the Bring your
own keys (BYOK) setup.
To resume the process, in Cortex Gateway, open the more options menu next to the tenant, select Continue Rotation, and follow the Bring your own keys (BYOK)
setup.
As long as the rotation hasn't been completed, you can cancel the rotation process from the more options menu next to the tenant.
NOTE:
The new keys you import will serve as primary encryption keys for newly generated data.
To disable your encryption keys, in Cortex Gateway, open the more options menu next to the tenant and select Disable All Keys & Deactivate Tenant.
PREREQUISITE:
To disable your encryption keys and deactivate a tenant, you must have an Account Admin role.
CAUTION:
Disabling all encryption keys and deactivating the tenant renders the tenant inaccessible and non-operational.
Disabling the keys affects the communication with the agents, may prevent the agents from receiving updates to policies, configurations, and crucial
information, and may result in loss of data.
To secure your tenant data and to prevent unauthorized access, re-enabling the keys and re-activating the tenant are strictly controlled and require manual
intervention by the Cortex Cloud Customer Success team.
Cortex BYOK uses two keys for encrypting your data at rest: one for BigQuery and one for all other services within the tenant. You can generate a single key
for both or a separate key for each service.
After completing the process, the imported keys become the primary keys used for encrypting any newly generated data stored within the tenant.
NOTE:
At any stage, you can select Continue Later to pause the process. To resume the process, in the Cortex gateway, select Continue Setup next to the tenant,
and follow the wizard instructions from the point you left off, as detailed below. If you don't use either of your wrapping keys within three days, they expire and
you'll have to restart the process.
Generate a key that meets these requirements using your preferred method or use the provided OpenSSL command:
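As a sketch, a standard way to produce a random 32-byte symmetric key with OpenSSL (the output file name is illustrative):

```shell
# Generate 32 random bytes to use as the symmetric encryption key
# (the file name is illustrative).
openssl rand -out my_encryption.key 32

# Confirm the key is exactly 32 bytes.
wc -c < my_encryption.key
```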
When your encryption key is ready, select I have a 32-byte symmetric encryption key ready and click Next.
2. In the Wrap & Upload screen, repeat the following procedure for both Data lake wrapping key and Services wrapping key.
The wrapping key is valid for up to three days. After three days, you need to download a new wrapping key.
b. Use an OpenSSL editor to wrap your encryption key using the following code:
openssl pkeyutl \
-encrypt \
-pubin \
-inkey <WRAPPING_KEY_FULL_PATH> \
-in <YOUR_32_BYTE_KEY_FULL_PATH> \
-out <TARGET_WRAPPED_KEY_FULL_PATH> \
-pkeyopt rsa_padding_mode:oaep \
-pkeyopt rsa_oaep_md:sha256 \
-pkeyopt rsa_mgf1_md:sha256
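To sanity-check the wrapping step locally before uploading, you can simulate it with a throwaway RSA key pair standing in for the downloaded wrapping key (all file names here are illustrative; the real wrapping key comes from Cortex Gateway):

```shell
# Throwaway RSA key pair standing in for the real wrapping key
# (for local testing only).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out test_priv.pem
openssl pkey -in test_priv.pem -pubout -out test_wrapping_pub.pem

# A 32-byte key to wrap.
openssl rand -out my_encryption.key 32

# Wrap it exactly as in the command above.
openssl pkeyutl -encrypt -pubin \
  -inkey test_wrapping_pub.pem \
  -in my_encryption.key \
  -out my_encryption.key.wrapped \
  -pkeyopt rsa_padding_mode:oaep \
  -pkeyopt rsa_oaep_md:sha256 \
  -pkeyopt rsa_mgf1_md:sha256

# Unwrap with the private half and confirm a byte-identical round trip.
openssl pkeyutl -decrypt \
  -inkey test_priv.pem \
  -in my_encryption.key.wrapped \
  -out roundtrip.key \
  -pkeyopt rsa_padding_mode:oaep \
  -pkeyopt rsa_oaep_md:sha256 \
  -pkeyopt rsa_mgf1_md:sha256

cmp my_encryption.key roundtrip.key && echo "round trip OK"
```

A successful round trip confirms the OAEP parameters match on both sides; a padding mismatch is the most common cause of wrap/unwrap failures.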
Abstract
Supported regions in which you want to host Cortex Cloud and any associated services.
The following table lists the regions available to host Cortex Cloud and any associated services:
United States (US): All Cortex Cloud logs and data remain within the boundaries of the United States.
United Kingdom (UK): All Cortex Cloud logs and data remain within the boundaries of the United Kingdom.
Europe (EU): All Cortex Cloud logs and data remain within the boundaries of Europe.
Singapore (SG): All Cortex Cloud logs and data remain within the boundaries of Singapore.
Japan (JP): All Cortex Cloud logs and data remain within the boundaries of Japan.
Canada (CA): All Cortex Cloud logs and data remain within the boundaries of Canada. However, if you have a WildFire Canada cloud subscription, consider the following: you will not be protected against macOS-borne zero-day threats, although you will receive protection against other macOS malware in regular WildFire updates.
Australia (AU): All Cortex Cloud logs and data remain within the boundaries of Australia.
Germany (DE): All Cortex Cloud logs and data remain within the boundaries of Germany.
India (IN): All Cortex Cloud logs and data remain within the boundaries of India.
Switzerland (CH): All Cortex Cloud logs and data remain within the boundaries of Switzerland.
Poland (PL): All Cortex Cloud logs and data remain within the boundaries of Poland.
Taiwan (TW): All Cortex Cloud logs and data remain within the boundaries of Taiwan.
Qatar (QT): All Cortex Cloud logs and data remain within the boundaries of Qatar.
France (FR): All Cortex Cloud logs and data remain within the boundaries of France.
Israel (IL): All Cortex Cloud logs and data remain within the boundaries of Israel.
Saudi Arabia (SA): All Cortex Cloud logs and data remain within the boundaries of Saudi Arabia.
Indonesia (ID): All Cortex Cloud logs and data remain within the boundaries of Indonesia.
Spain (ES): All Cortex Cloud logs and data remain within the boundaries of Spain.
Italy (IT): All Cortex Cloud logs and data remain within the boundaries of Italy.
South Korea (KR): All Cortex Cloud logs and data remain within the boundaries of South Korea.
Abstract
Learn more about enabling network access to the Cortex Cloud resources.
After you receive your account details, enable and verify access to Cortex Cloud communication servers, storage buckets, and various resources in your
firewall configuration.
Some of the IP addresses required for access are registered in the United States. As a result, some GeoIP databases do not correctly pinpoint the location in
which the IP addresses are used. Regardless of where an IP address is registered, all customer data is stored in your deployment region, and data transmission
through any infrastructure is restricted to that region.
If you use the specific Palo Alto Networks App-IDs indicated in the tables, you do not need to allow access to the resource.
A dash (—) indicates there is no App-ID coverage for a resource. Enable access from the agent to the console; this does not need to be bidirectional.
For IP address ranges in Google Cloud Platform (GCP), refer to these lists for IP address coverage for your deployment:
If you use SSL decryption and experience difficulty in connecting the Cortex XDR agent to the server, we recommend that you add the FQDNs required
for access to your SSL Decryption Exclusion list.
In PAN-OS 8.0 and later releases, you can configure the list in Device → Certificate Management → SSL Decryption Exclusion.
<tenant-name> refers to the selected subdomain of your Cortex Cloud tenant, and <region> is the region in which your tenant is deployed. For more
information, see Cortex Cloud supported regions.
The following table lists the required resources by region, including FQDNs, IP addresses, ports, and App-ID coverage for your deployment:
CA (Canada): 34.120.31.199
JP (Japan): 35.241.28.254
SG (Singapore): 34.117.211.129
AU (Australia): 34.120.229.65
DE (Germany): 34.98.68.183
IN (India): 35.186.207.80
CH (Switzerland): 34.111.6.153
PL (Poland): 34.117.240.208
TW (Taiwan): 34.160.28.41
QT (Qatar): 35.190.0.180
FR (France): 34.111.134.57
IL (Israel): 34.111.129.144
ID (Indonesia): 34.111.58.152
ES (Spain): 34.111.188.248
IT (Italy): 34.8.224.70
Port: 443
Used for the first request in the registration flow, where the agent passes the distribution ID and obtains the ch-<tenant-name>.traps.paloaltonetworks.com address of its tenant.
EU (Europe): 35.244.251.25
CA (Canada): 35.203.99.74
JP (Japan): 34.84.201.32
SG (Singapore): 34.87.61.186
AU (Australia): 35.244.66.177
DE (Germany): 34.107.61.141
IN (India): 35.200.146.253
CH (Switzerland): 34.65.213.226
PL (Poland): 34.118.62.80
TW (Taiwan): 34.80.34.30
QT (Qatar): 34.18.34.73
FR (France): 34.163.57.57
IL (Israel): 34.165.43.106
ID (Indonesia): 34.101.214.157
ES (Spain): 34.175.18.78
IT (Italy): 34.154.154.5
Port: 443
CA (Canada): 34.96.120.25
JP (Japan): 34.95.66.187
SG (Singapore): 34.120.142.18
AU (Australia): 34.102.237.151
DE (Germany): 34.107.161.143
IN (India): 34.120.213.187
CH (Switzerland): 34.149.180.250
PL (Poland): 35.190.13.237
TW (Taiwan): 34.149.248.76
QT (Qatar): 34.107.129.254
FR (France): 34.36.155.211
IL (Israel): 34.128.157.130
ID (Indonesia): 34.128.156.84
ES (Spain): 34.120.102.147
IT (Italy): 34.8.234.58
Port: 443
JP (Japan): 34.95.66.187
SG (Singapore): 34.120.142.18
AU (Australia): 34.102.237.151
DE (Germany): 34.107.161.143
IN (India): 34.120.213.188
CH (Switzerland): 34.149.180.250
PL (Poland): 35.190.13.237
TW (Taiwan): 34.149.248.76
QT (Qatar): 34.107.129.254
FR (France): 34.36.155.211
IL (Israel): 34.128.157.130
ID (Indonesia): 34.128.156.84
ES (Spain): 34.120.102.147
IT (Italy): 34.8.234.58
Port: 443
CA (Canada): 35.203.82.121
JP (Japan): 34.84.125.129
SG (Singapore): 34.87.83.144
AU (Australia): 35.189.18.208
DE (Germany): 34.107.57.23
IN (India): 35.200.158.164
CH (Switzerland): 34.65.248.119
PL (Poland): 34.116.216.55
TW (Taiwan): 35.234.8.249
QT (Qatar): 34.18.46.240
FR (France): 34.155.222.152
IL (Israel): 34.165.156.139
ID (Indonesia): 34.128.115.238
ES (Spain): 34.175.30.176
IT (Italy): 34.154.195.120
Port: 443
CA (Canada): 35.203.35.23
JP (Japan): 34.84.225.105
SG (Singapore): 35.247.161.94
AU (Australia): 35.201.23.188
DE (Germany): 35.242.201.199
IN (India): 35.244.57.196
CH (Switzerland): 34.65.137.215
PL (Poland): 34.116.213.71
TW (Taiwan): 35.229.186.216
QT (Qatar): 34.18.53.229
FR (France): 34.155.110.169
IL (Israel): 34.165.2.110
ID (Indonesia): 34.101.155.198
ES (Spain): 34.175.205.166
IT (Italy): 34.154.230.76
Port: 443
Broker VM Resources
EU (Europe): 34.91.128.226
CA (Canada): 34.95.8.232
JP (Japan): 34.85.74.43
SG (Singapore): 34.87.167.125
AU (Australia): 35.244.93.0
DE (Germany): 35.198.112.13
IN (India): 35.200.234.99
CH (Switzerland): 34.65.51.103
PL (Poland): 34.116.176.97
TW (Taiwan): 34.80.230.166
QT (Qatar): 34.18.37.73
FR (France): 34.155.90.61
IL (Israel): 34.165.24.222
ID (Indonesia): 34.101.101.170
ES (Spain): 34.175.182.55
IT (Italy): 34.154.168.139
Port: 443
Port: 443
pool.ntp.org
Email Notifications
Egress
US (United States)
35.225.156.101
34.69.88.119
EU (Europe)
34.147.67.188
34.90.16.31
CA (Canada)
35.203.57.162
35.203.90.79
UK (United Kingdom)
34.142.3.42
34.142.44.136
JP (Japan)
34.146.60.215
34.84.93.160
SG (Singapore)
35.240.144.192
35.240.255.15
AU (Australia)
35.244.73.76
35.201.22.63
DE (Germany)
34.107.83.197
34.159.53.97
IN (India)
34.93.118.113
35.244.5.205
CH (Switzerland)
34.65.233.60
34.65.222.25
PL (Poland)
34.116.223.119
34.118.92.214
TW (Taiwan)
104.199.223.229
34.81.38.132
QT (Qatar)
34.18.39.0
34.18.32.96
FR (France)
34.155.197.131
34.155.5.100
IL (Israel)
34.165.46.47
34.165.17.246
SA (Saudi Arabia)
34.166.58.243
34.166.54.238
ID (Indonesia)
34.101.125.66
34.101.218.184
ES (Spain)
34.175.255.99
34.175.230.35
IT (Italy)
34.154.229.60
34.154.173.134
KR (South Korea)
34.64.189.205
34.64.45.118
To Collect 3rd Party Data from Customer's SaaS and Cloud resources
US (United States)
34.66.69.154
35.202.21.123
AU (Australia)
35.197.181.108
35.197.175.44
CA (Canada)
34.95.33.72
34.95.62.136
SG (Singapore)
35.247.148.38
35.247.173.40
JP (Japan)
34.85.68.167
34.84.99.239
IN (India)
34.93.3.196
34.93.175.218
DE (Germany)
34.89.197.46
34.107.3.224
UK (United Kingdom)
34.105.227.146
34.105.137.22
EU (Europe)
34.90.70.107
35.204.129.196
CH (Switzerland)
34.65.225.124
34.65.89.6
PL (Poland)
34.118.71.237
34.118.124.130
TW (Taiwan)
35.201.142.86
35.189.176.163
QT (Qatar)
34.18.44.71
34.18.30.132
FR (France)
34.163.125.167
34.163.155.105
IL (Israel)
34.165.131.171
34.165.120.206
SA (Saudi Arabia)
34.166.59.20
34.166.53.242
ID (Indonesia)
34.101.158.32
34.101.79.159
ES (Spain)
34.175.27.251
34.175.198.50
IT (Italy)
34.154.208.247
34.154.243.11
KR (South Korea)
34.64.107.163
34.64.84.25
The following table lists the required resources for the federal government of the United States, including FQDNs, IP addresses, ports, and App-ID coverage
for your deployment:
Port: 443
Broker VM Resources
To Collect 3rd Party Data from Customer's SaaS and Cloud resources
IP addresses (App-ID: cortex-xdr):
34.68.217.16
34.69.175.202
Abstract
After activating your Cortex Cloud tenant, you can start to manage user roles and permissions. Cortex Cloud uses role-based access control (RBAC) to
manage roles with specific permissions for controlling user access. RBAC helps manage access to Cortex Cloud components and Cortex Query Language
(XQL) datasets, so that users, based on their roles, are granted minimal access required to accomplish their tasks.
Cortex Gateway: Manage roles and permissions for multiple tenants linked to the same Customer Support Portal account.
When you activate a tenant for the first time, users who were created in the Customer Support Portal will have access to the tenant, but will not have a
role. The gateway is usually used to assign roles after the activation process. Roles and permissions are applied across all tenants and all Cortex
products. You can exclude different tenants or different Cortex products. For more information, see Cortex Gateway Administrator Guide.
IMPORTANT:
Setting XQL dataset access permissions for a user role can only be performed from Cortex Cloud Access Management. For more information, see
Manage user roles.
Cortex Cloud Access Management: Manage roles and permissions, and authentication settings for a specific Cortex Cloud tenant only. For more
information, see Manage user access.
Role-based access control (RBAC) enables you to use predefined Palo Alto Networks roles to assign access rights to Cortex Cloud users. You can manage
roles for all Cortex Cloud tenants and services in the Gateway or in the Cortex Cloud tenant. By assigning roles, you enforce the separation of access among
functional or regional areas of your organization.
Each role extends specific privileges to users. The way you configure administrative access depends on the security requirements of your organization. Use
roles to assign specific access privileges to administrative user accounts.
You can manage role permissions in Cortex Cloud, which are listed by the various components according to the sidebar navigation in Cortex Cloud. Some
components include additional action permissions, such as pivot (right-click) options, to which you can also assign access, but only when you’ve given the
user View/Edit permissions to the applicable component.
The default Palo Alto Networks roles provide a specific set of access rights to each role. You cannot edit the default roles directly, but you can save them as
new roles and edit the permissions of the new roles. To view the predefined permissions for each default role, go to Settings → Configurations → Access
Management → Roles.
NOTE:
Some features are license-dependent. Accordingly, users may not see a specific feature if the feature is not supported by the license type or if they do not
have access based on their assigned role.
Account Admin: A Super User role that is assigned directly to the user in Cortex Gateway and has full access to all Cortex products in your account, including all tenants added in the future. The Account Admin can assign roles for Cortex instances and activate Cortex tenants specific to the product.
NOTE:
The user who activated the Cortex product is assigned the Account Admin role. You cannot create additional Account Admin roles in the Cortex Cloud tenant. If you do not want the user to have Account Admin permission, you must remove the Account Admin role in Cortex Gateway.
Instance Administrator: View and edit permissions for all components and access to all pages in the Cortex Cloud tenant. The Instance Administrator can also make other users an Instance Administrator for the tenant. If the tenant has predefined or custom roles, the Instance Administrator can assign those roles to other users.
CLI Read Only Role: View scripts, playbooks, credentials, and the CLI tool.
CLI Role: View scripts, playbooks, and credentials. View and edit permission for the CLI tool.
AppSec Admin: Full permissions for all Cloud Application Security related activities. Create and modify detection rules within the Code/Build domain, track progress, and adjust enforcements as needed. Additionally, triage and investigate findings, issues, and cases spanning from code to cloud. The role also includes complete visibility into all cloud assets.
Security Admin: Can triage and investigate issues and cases, respond (excluding Live Terminal), and edit profiles and policies.
Cortex Cloud provides predefined, built-in user roles with specific access rights that cannot be modified. You can also create custom, editable user
roles. If a user does not have any Cortex Cloud access permissions assigned specifically to them, the field displays No-Role.
TIP:
To apply the same settings to multiple users, select them, and then right-click and select Edit Users Permissions.
Select all to view the combined permissions for every role and user group assigned to the user.
Select a specific role assigned to the user to view the available permissions for that role.
IMPORTANT:
Setting Cortex Query Language (XQL) dataset access permissions for a user role can only be performed from Cortex Cloud Access Management. For
more information, see Manage user roles.
6. (Optional) If Scope-Based Access Control is enabled for the tenant, click Scope and select a tag family and the corresponding tags.
If you select a tag family without specific tags, permissions apply to all tags in the family.
The scope is based only on the selected tag families. If you scope only by tags from Family A, then Family B is disregarded in scope
calculations and is considered allowed.
7. You can also set user access permissions for the various Cortex Query Language (XQL) datasets.
8. Click Save.
Users are assigned roles and permissions either by being assigned a role directly or by being assigned membership in one or more user groups. A user
group can only be assigned to a single role, but users can be added to multiple groups if they require multiple roles. You can also nest groups to achieve the
same effect. Users who have multiple roles through either method will receive the highest level of access based on the combination of their roles.
Example 1.
Jane has an Analyst role and is a member of the Tier-1 Analyst user group, which is assigned the Triage role. Jane therefore holds two roles and
has the highest permission based on the combination of both.
John is a member of two user groups - Tier-1 Analyst and Tier-2 Analyst. Each group is configured to use a different role - Triage role and Case
Response role. John is assigned both roles and has the highest permissions based on the combination of all roles.
Jack is a member of the Tier-2 user group which has a Case response role. This user group is included in a Tier-3 user group (Threat Hunter role),
added as a nested group. Jack is assigned both roles and has the highest permissions based on the combination of all roles.
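The combination rule in these examples behaves like a set union of permissions; a toy shell sketch (permission names are hypothetical, not actual Cortex Cloud permission identifiers):

```shell
# Hypothetical permission sets for two roles held by the same user.
triage="view_alerts close_alerts"
case_response="view_alerts run_playbooks"

# The user's effective permissions are the union of both sets:
# each distinct permission appears once, regardless of which role grants it.
printf '%s\n' $triage $case_response | sort -u
```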
On the User Groups page, you can create a new user group for several different system users or groups. You can see the details of all user groups, including
the roles, nested groups, IdP (SAML) groups, when the group was created or updated, and more.
You can also right-click in the table to perform actions such as edit, save as a new group, remove (delete) a group, and copy text to the clipboard.
You can create user groups in the tenant or in Cortex Gateway. Groups created in Cortex Gateway cannot be mapped to SAML groups; only groups created in
the tenant support SAML group mapping. We recommend creating user groups in the Cortex Cloud tenant, because groups created in Cortex Gateway are
available to all tenants, and you may want different user groups in different tenants, such as dev and prod.
2. To create a new user group for several different system users or groups, click New Group, and add the following:
Field Description
Group for product (Cortex Gateway only) If you have other products, select the relevant Cortex product.
Role Select the group role associated with this user group. You can only have a single role designated per group.
In Cortex Gateway, you can only select either Instance Administrator or a custom role created in the Gateway.
Users Select the users you want to belong to this user group.
Nested Groups Lists any nested groups associated with this user group. If you have an existing group, you can add it as a nested group.
User groups can include multiple users and nested groups; nested groups inherit the permissions of parent user groups, and the user
group has the highest level of permission.
For example:
If you add Group A as a nested group in Group B, Group A inherits Group B's permissions (Tier-1 and Tier-2 permissions).
In Cortex Gateway, you can only add user groups that are created in Cortex Gateway.
SAML Group Mapping (Relevant when creating a user group in the Cortex Cloud tenant only.)
When SSO is enabled, you can see your organization's Identity Provider (IdP) groups, which are automatically mapped to the
user group.
NOTE:
When using Azure AD for SSO, the SAML group mapping needs to be provided using the group object ID (GUID) and not the
group name.
3. Click Create to create a new user group and assign the relevant users to the group.
For more information about additional tasks such as creating a custom role, modifying a user's role, or removing a user's role, see Manage user access or
Cortex Gateway Administrator Guide.
Abstract
Explains how to onboard cloud service providers from the Data Source page.
The cloud service provider (CSP) onboarding wizard is designed to facilitate the seamless setup of CSP data into Cortex Cloud. The guided experience
requires minimal user input; simply define the scope of your CSP accounts and specify the scan mode. For full control of the CSP setup, you can use the
advanced settings. Based on the onboarding settings, Cortex Cloud generates an authentication template that establishes trust with the CSP and grants permissions
to Cortex Cloud. The template must be executed in the CSP to complete the onboarding process. Executing the template grants the permissions and
includes a component that notifies Cortex Cloud of the execution details, after which a new cloud instance is created.
NOTE:
The cloud accounts being onboarded must be owned by the customer performing the onboarding process.
You can leverage your CSP hierarchy and choose whether to onboard individual accounts one at a time or a collection of accounts (such as an organization in AWS
and GCP, or a management group in Azure). Various options are available for each CSP to allow you to customize your data collection.
Cloud scan: (Recommended) The scanning takes place within the Cortex Cloud cloud environment. No additional setup is needed.
Outpost scan: The scanning is performed on infrastructure deployed to a CSP account owned by you. The CSP account should be a dedicated account
for the outpost, free from other resources. Each CSP account can host only one outpost. This mode requires additional cloud provider permissions and
may incur additional cloud costs.
To allow you to fine-tune your CSP data collection, you can modify the scope of data collection by including or excluding specific regions. If you chose to
collect data from an organizational unit that is not the lowest in the CSP hierarchy (such as an organization or organizational unit in AWS, an organization or folder in
GCP, or a tenant or management group in Azure), you can also modify the scope by including or excluding specific accounts, projects, or subscriptions. If you
choose to include specific accounts, only those specified accounts will be included, even if additional accounts are added to the CSP after onboarding. If you
choose to exclude specific accounts, any new accounts added to the CSP after onboarding will be included in the scope. Excluded accounts are not visible in
Cortex Cloud.
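The include/exclude semantics described above can be sketched as a simple predicate. Account IDs here are placeholders:

```python
# Sketch of the documented scope semantics; account IDs are illustrative.
def in_scope(account_id, include=None, exclude=None):
    """Include list: only the listed accounts are in scope, even if new
    accounts are added to the CSP after onboarding.
    Exclude list: everything except the listed accounts, so accounts
    added after onboarding are automatically in scope."""
    if include is not None:
        return account_id in include
    if exclude is not None:
        return account_id not in exclude
    return True  # no modification: the whole hierarchy is in scope

# With an include list, a newly added account is NOT scanned:
in_scope("acct-new", include={"acct-1", "acct-2"})   # False
# With an exclude list, a newly added account IS scanned:
in_scope("acct-new", exclude={"acct-3"})             # True
```

This is why an include list is the safer choice for a tightly controlled scope, while an exclude list keeps coverage growing with the environment.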
The advanced settings allow you to select which Cortex Cloud modules you want to enable for this CSP. By default, the following security capabilities are
enabled:
Discovery engine
XSIAM analytics: Analyzes your endpoint data to develop a baseline and raise Analytics and Analytics BIOC alerts when anomalies and malicious
behaviors are detected.
Data security posture management: An agentless multi-cloud data security solution that discovers, classifies, protects, and governs sensitive data.
Registry scanning: Scan container registry images for vulnerabilities, malware, and secrets. You can configure your initial preference for scanning your
registry. Any newly discovered registry, repository, or image in the account will be scanned by default.
The Container Registry Scanning feature automatically detects and scans all cloud-native container registries within your onboarded cloud accounts, including
Amazon Elastic Container Registry (ECR), Azure Container Registry (ACR), and Google Artifact Registry (GAR). It identifies vulnerabilities, malware, and
secrets, ensuring comprehensive protection for containerized applications across various cloud environments without requiring manual intervention. After you
onboard your container registries, Cortex Cloud ensures that all containers and images are scanned at regular intervals and that you are notified about any
deviation from your organization's security policies and best practices.
To understand how container registry scanning works, it's essential to understand its core components:
Container image repository: Within a container registry, container images are organized into multiple repositories to improve management, access
control, collaboration, and security isolation. Each repository should ideally contain images related to a specific application, service, or project, allowing
for granular permissioning and security policies. Images within a repository often share a common base image or purpose, making it easier to apply
consistent security controls across related components.
Image Tags: Image tags are essential for identifying and managing container image versions within a repository, enabling the selection and deployment
of appropriate builds. From a security perspective, tags facilitate tracking vulnerable images, deploying patched versions, and maintaining image
provenance for auditing. There are two common formats for referencing image tags:
image:tag – A human-readable label that can be reassigned to different versions. For example, myapp:latest or myapp:v1.0.0.
image@sha – A cryptographic hash that provides an immutable reference to a specific image version. For example, myapp@sha256:abc123.
While human-readable tags like myapp:latest (reassignable) and myapp:v1.0.0 are common, using immutable tags such as myapp@sha256:abc123 provides
a cryptographically secure and verifiable reference, crucial for ensuring the integrity and trustworthiness of deployed images.
Image Digest: A cryptographic digest (SHA-256 hash) uniquely identifies a container image's content, providing a strong guarantee of immutability.
Unlike user-defined image tags, which can be reassigned, using the digest as a tag ensures that even if an image is renamed or retagged, its content
remains verifiably identical, making it a critical element for security auditing and ensuring the integrity of deployed applications. Relying on image digests
helps prevent potential supply chain attacks where malicious actors might attempt to replace images with compromised versions.
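The two reference formats above can be told apart mechanically. The helper below is illustrative only; real registry reference grammar (registry hosts, ports, namespaces) is richer than this sketch:

```python
# Sketch: distinguishing tag references from digest references.
def parse_reference(ref):
    """Split an image reference into (name, kind, value)."""
    if "@" in ref:
        name, digest = ref.split("@", 1)
        return name, "digest", digest   # immutable, content-addressed
    if ":" in ref:
        name, tag = ref.rsplit(":", 1)
        return name, "tag", tag         # mutable, human-readable label
    return ref, "tag", "latest"         # untagged references default to :latest

parse_reference("myapp:v1.0.0")         # ('myapp', 'tag', 'v1.0.0')
parse_reference("myapp@sha256:abc123")  # ('myapp', 'digest', 'sha256:abc123')
```

Because a tag like `myapp:latest` can be reassigned at any time, only the digest form gives the scanner a stable identity to track across re-scans and audits.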
The process of container registry scanning consists of three key phases: discovery, scanning, and evaluation.
1. Discovery involves detecting registries, repositories, and image tags within an environment. This step ensures that all container images, regardless of
their source, are accounted for and available for analysis.
2. Scanning analyzes the discovered images for vulnerabilities, malware, and secrets.
3. Evaluation creates compliance findings based on the scan results. These findings identify vulnerabilities, compliance violations, or potential threats that
require remediation before an image is deployed. Scan results are evaluated for vulnerabilities, malware, and secrets, and findings are created accordingly.
Setting up container registry scanning ensures that organizations deploy only verified and compliant images across cloud environments.
6. Under Additional Security Capabilities, select Registry Scanning, then click Edit Preferences.
Latest Tag: Scans only those images that are tagged “latest”.
Days Modified: Scans images modified within the last specified number of days. Enter the number of days in the text box.
8. Click Save.
Using the Modify Scanning Scope option, you can define conditions to automatically exclude selected scopes from scanning. These conditions can be based
on the registry, repository, or tag. After you set the scope, the exclusion conditions are automatically applied to newly discovered images in the account.
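The exclusion conditions described above amount to pattern matching on image attributes. The condition shapes and field names below are illustrative, not the product's actual rule syntax:

```python
# Sketch of exclusion conditions on registry, repository, or tag.
# Patterns and fields are illustrative placeholders.
from fnmatch import fnmatch

def excluded(image, conditions):
    """Return True if any condition matches the image. Matched images are
    skipped by the scanner, including newly discovered ones."""
    return any(
        all(fnmatch(image.get(field, ""), pattern)
            for field, pattern in cond.items())
        for cond in conditions
    )

conditions = [{"repository": "sandbox/*"}, {"tag": "dev-*"}]
excluded({"repository": "sandbox/test", "tag": "v1"}, conditions)   # True
excluded({"repository": "prod/api", "tag": "v1"}, conditions)       # False
```

Because the predicate is evaluated on discovery, exclusions keep applying to images that appear in the account after the scope was set.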
2. In the Cloud Provider section, locate the provider where your assets are stored and click View Details.
3. On the Cloud Instances page, click the instance name for which you want to modify the scope.
4. Under the Accounts section, select the account, right-click, and choose Edit.
6. From the list of images, select the image you want to modify.
7. Alternatively, filter for a specific image by clicking the Filter icon, selecting the Registry, Repository, or Tags option, and then adding the
desired value to refine your search.
The search results are applied automatically, even if you do not select Save.
This ensures that the specified scanning scope is customized based on your needs.
After the container registry scanning is complete, the Container Image page displays a list of the scanned container images. Each unique image is listed only
once, regardless of how many times it appears in different cloud locations. This ensures efficient security analysis and reporting.
The Container Image page provides comprehensive details of all the scanned container images. Understanding these details helps security teams prioritize
vulnerabilities, track software components, and ensure compliance across cloud environments. By consolidating scan results into a single view, the Container
Image Page simplifies container security management.
When you click an image on the Container Image page, you will see various tabs with in-depth details about the scan results.
Overview tab
The Overview tab contains essential metadata about the container image, including:
Last Scanned – The date and time of the most recent scan.
In the Overview tab, the Scan Information section displays details about the scanning process, including:
Column Description
Scanner Name Name of the scanner used for scanning the image.
Column Description
Secrets Flags if any sensitive information (e.g., API keys, credentials) is found.
Agentless Disk Scanner Specifies whether the image was scanned using an agentless method.
In addition to container registry scanning (default scanning for container images), multiple scanners may be used to analyze the image, such as:
SBOM tab
The SBOM tab provides a list of all software packages included in the image. Essentially, it serves as a manifest detailing the components inside the container.
Vulnerabilities tab
This tab lists all detected security vulnerabilities for the image:
Applications tab
This tab identifies any embedded applications within the image, helping you assess security risks associated with the bundled software.
After the initial scan has been completed, the scan re-evaluation process ensures that container images remain secure over time without requiring a full re-
scan.
Instead of manually triggering new scans, the scan re-evaluation process automatically reassesses existing scan results every 24 hours using the
latest threat intelligence feeds. This approach reduces the need for resource-intensive re-scans while maintaining up-to-date security assessments.
By continuously monitoring container images for emerging threats, organizations can proactively mitigate risks and ensure compliance with security best
practices.
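The re-evaluation idea can be sketched as matching a stored package list against a refreshed vulnerability feed. The data shapes below are illustrative, not the actual scan-result schema:

```python
# Sketch: re-evaluating stored scan results against a fresh threat feed,
# instead of re-scanning the image itself.
def reevaluate(sbom, vuln_feed):
    """Match the stored package list (SBOM) against the latest feed.
    A CVE newly added to the feed produces a new finding with no re-scan."""
    findings = []
    for pkg in sbom:
        key = (pkg["name"], pkg["version"])
        findings.extend(vuln_feed.get(key, []))
    return findings

sbom = [{"name": "openssl", "version": "1.1.1"}]
feed_day1 = {}
feed_day2 = {("openssl", "1.1.1"): ["CVE-2023-0001"]}
reevaluate(sbom, feed_day1)   # [] on day 1, no known issues
reevaluate(sbom, feed_day2)   # ['CVE-2023-0001'] on day 2, a new finding
```

The expensive work (pulling and unpacking the image) happens once at scan time; only the cheap matching step repeats as intelligence changes.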
Abstract
Follow the GCP onboarding wizard and Cortex creates a custom authentication template to be executed in GCP.
Follow this wizard to onboard your Google Cloud Platform (GCP) environment. The GCP onboarding wizard is designed to facilitate the seamless setup of GCP
data into Cortex Cloud. The guided experience requires minimal user input; simply define the scope of your GCP accounts and specify the scan mode. For full
control of the setup, you can use the advanced settings. Based on the onboarding settings, Cortex Cloud generates an authentication template that establishes
trust with GCP and grants permissions to Cortex Cloud. The template must be executed in GCP to complete the onboarding process. Executing the template
grants the permissions and includes a component that notifies Cortex Cloud of the execution details, after which a new cloud instance is created.
PREREQUISITE:
Permission to upload a template and create the required resources in Google Cloud Deployment Manager
3. On the Add Data Sources page, search for and select Google Cloud Platform and click Connect.
4. In the onboarding wizard, choose the scope for this data source:
Cloud Scan: (Recommended) Security scanning is performed in the Cortex cloud environment.
Scan with Outpost: Security scanning is performed on infrastructure deployed to a cloud account owned by you. If you select this option, choose
the outpost account to use for this instance.
NOTE:
Scanning with an outpost may require additional CSP permissions and may incur additional CSP costs.
Scope Modifications: To allow you to fine tune your GCP data collection, you can modify the scope by including or excluding specific regions.
Additionally, if you selected organization or folder as the scope, you can modify the scope by including or excluding specific projects. If you
choose to include specific projects, only those specified projects will be included, even if additional projects are added to your GCP environment
after onboarding. If you choose to exclude specific projects, any new projects added to your GCP environment after onboarding will be included in
the scope. Excluded projects are not visible in Cortex Cloud.
Additional Security Capabilities: Enable additional Cortex security add-ons, if available. This may require additional cloud provider permissions. For
detailed information on the permissions required, see Cloud service provider permissions. The additional security capabilities you can enable
include:
XSIAM analytics: Analyzes your endpoint data to develop a baseline and raise Analytics and Analytics BIOC alerts when anomalies and
malicious behaviors are detected.
Data security posture management: An agentless multi-cloud data security solution that discovers, classifies, protects, and governs
sensitive data.
Registry scanning: Scan container registry images for vulnerabilities, malware, and secrets. You can configure your initial preference for
scanning your registry. Any newly discovered registry, repository, or image in the account will be scanned by default.
Serverless scanning: Implement serverless scanning to detect and remediate vulnerabilities within serverless functions during the
development lifecycle. Seamless integration into CI/CD pipelines enables automated security scans for a continuously secure pre-
production environment.
Cloud Tags: Define tags and tag values to be added to any new resource created by Cortex in the cloud environment.
Log Collection Configuration: To maximize security coverage, include collection of audit logs (GCP Pub/Sub). This may require additional cloud
service provider permissions. For detailed information on the permissions required, see Cloud service provider permissions.
8. Click Save.
9. Download the template file by clicking Download Terraform and then click Close.
When the file has downloaded, proceed to manually upload the template to GCP.
Abstract
Learn how to manually deploy the Terraform template file in Google Cloud Console.
When you have downloaded the Terraform template file in the onboarding wizard, you must connect to Google Cloud Console to create a stack using the
template file.
PREREQUISITE:
A GCP account
Terraform installed on your local machine. You can download Terraform from the official Terraform website and follow the installation instructions for
your operating system.
3. Create a directory on your local machine to store and run the Terraform code. If you have more than one GCP connector, you need a separate directory
for each one:
4. Navigate to the directory you created and extract the Terraform files. Ensure all necessary Terraform files are present (main.tf,
template_params.tfvars, etc).
cd ~/terraform/gcp-connector-1
tar -xzvf <your_template>.tar.gz
terraform init
6. Apply your Terraform configuration using the downloaded parameter file. When prompted, enter the project ID if you configured one in the onboarding
wizard:
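A minimal sketch of the apply command for step 6. The parameter file name matches the files extracted in step 4; any variable names (such as the project ID) are defined inside the downloaded template, so verify them against your template before running:

```shell
# Apply the configuration using the downloaded parameter file.
# Terraform prompts for any variables (e.g. the project ID) not set in the file.
terraform apply -var-file="template_params.tfvars"
```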
When the template is successfully uploaded to GCP, the initial discovery scan is started. When the scan is complete, you can view your cloud assets in Asset
Inventory.
Abstract
Follow the AWS onboarding wizard and Cortex creates a custom authentication template to be executed in AWS.
Follow this wizard to onboard your Amazon Web Services (AWS) environment. The AWS onboarding wizard is designed to facilitate the seamless setup of AWS
data into Cortex Cloud. The guided experience requires minimal user input; simply define the scope of your AWS accounts and specify the scan mode. For full
control of the setup, you can use the advanced settings. Based on the onboarding settings, Cortex Cloud generates an authentication template that establishes
trust with AWS and grants permissions to Cortex Cloud. The template must be executed in AWS to complete the onboarding process. Executing the template
grants the permissions and includes a component that notifies Cortex Cloud of the execution details, after which a new cloud instance is created.
PREREQUISITE:
3. On the Add Data Sources page, search for and select Amazon Web Services (AWS) and click Connect.
4. In the onboarding wizard, select the scope for this data source:
Organizational Unit: A group of AWS accounts within an organization. It can also contain other organizational units.
Cloud Scan: (Recommended) Security scanning is performed in the Cortex cloud environment.
Scan with Outpost: Security scanning is performed on infrastructure deployed to a cloud account owned by you. If you select this option, choose
the outpost account to use for this instance.
NOTE:
Scanning with an outpost may require additional CSP permissions and may incur additional CSP costs.
Scope Modifications: To allow you to fine tune your AWS scope, you can modify the scope by including or excluding specific regions. Additionally,
if you selected organization or organizational unit as the scope, you can modify the scope by including or excluding specific accounts. If you
choose to include specific accounts, only those specified accounts will be included, even if additional accounts are added to your AWS
environment after onboarding. If you choose to exclude specific accounts, any new accounts added to your AWS environment after onboarding
will be included in the scope.
Additional Security Capabilities: Choose from which security capabilities you want to benefit. Some security capabilities are enabled by default and
can be modified. Adding security capability typically requires additional cloud provider permissions. For detailed information on the permissions
required, see Cloud service provider permissions. The additional security capabilities you can enable include:
XSIAM analytics: Analyzes your endpoint data to develop a baseline and raise Analytics and Analytics BIOC alerts when anomalies and
malicious behaviors are detected.
Data security posture management: An agentless data security scanner that discovers, classifies, protects, and governs sensitive data.
Registry scanning: A container registry scanner that scans registry images for vulnerabilities, malware, and secrets.
Cloud Tags: Define tags and tag values to be added to any new resource created by Cortex in the cloud environment.
Log Collection Configuration: To maximize security coverage, include collection of audit logs using CloudTrail. This may require additional cloud
service provider permissions. For detailed information on the permissions required, see Cloud service provider permissions.
8. To complete the process, execute the template in AWS using one of the following methods:
Automated: (Recommended) Click Execute in AWS to connect to AWS CloudFormation and create the stack.
Manual: Click Download CloudFormation and follow the instructions to manually execute the CloudFormation template file in AWS CloudFormation.
NOTE:
The template is reusable and can be executed as many times as you want to create new instances with the settings you defined in the wizard.
9. Click Close.
When the template is successfully uploaded to AWS and the stack creation is complete, a new instance is created and the initial discovery scan is
started. When the scan is complete, you can view the discovered assets in Asset Inventory.
Abstract
Learn how to manually create a stack in AWS Management Console using the CloudFormation file downloaded in the onboarding wizard.
When you have downloaded the CloudFormation template file in the onboarding wizard, you must connect to the AWS Management Console to create a stack using the
template file.
PREREQUISITE:
An AWS account
2. On the Stacks page, click Create stack, and then select With new resources (standard).
3. On the Create stack page, in Prerequisite - Prepare template, select Choose an existing template.
4. In Specify template, select Upload a template file, then click Choose file and upload the template downloaded from your Cortex Platform. Click Next.
6. In Parameters, enter a unique Amazon Resource Name (ARN) for the custom CortexPrismaRoleName role, and an ExternalID. Click Next and Next
again.
When the template is successfully uploaded to AWS and the stack creation is complete, the initial discovery scan is started. When the scan is complete, you
can view the discovered assets in Asset Inventory.
Abstract
Follow the Azure onboarding wizard and Cortex creates a custom authentication template to be executed in Azure.
Follow this wizard to onboard your Microsoft Azure environment. The Azure onboarding wizard is designed to facilitate the seamless setup of Azure data into
Cortex Cloud. The guided experience requires minimal user input; simply define the scope of your Azure accounts and specify the scan mode. For full control
of the setup, you can use the advanced settings. Based on the onboarding settings, Cortex Cloud generates an authentication template that establishes trust with
Azure and grants permissions to Cortex Cloud. The template must be executed in Azure to complete the onboarding process. Executing the template grants
the permissions and includes a component that notifies Cortex Cloud of the execution details, after which a new cloud instance is created.
PREREQUISITE:
An Azure subscription
Permission to deploy a custom template and create its resources in Microsoft Azure Portal (Owner or Global Admin).
Tenant ID and subscription ID. You can view these in Microsoft Azure Portal in Management groups.
3. On the Add Data Sources page, search for and select Microsoft Azure and click Connect.
4. In the onboarding wizard, choose the scope for this data source:
Tenant: (Default) A specific instance of Azure Active Directory, which can contain several subscriptions.
Cloud Scan: (Recommended) Security scanning is performed in the Cortex cloud environment.
Scan with Outpost: Security scanning is performed on infrastructure deployed to a cloud account owned by you. If you select this option, choose
the outpost account to use for this instance.
NOTE:
Scanning with an outpost may require additional CSP permissions and may incur additional CSP costs.
6. Select an approved tenant ID from the Tenant ID list. If no tenant IDs have been approved, enter the tenant ID. Click Approve in Azure to add Cortex as
an approved application on this tenant. When the tenant ID is approved, it appears with a green check next to it.
Scope Modifications: To allow you to fine tune your Azure data collection, you can modify the scope by including or excluding specific regions.
Additionally, if you selected tenant or management group as the scope, you can modify the scope by including or excluding specific subscriptions.
If you choose to include specific subscriptions, only those specified subscriptions will be included, even if additional subscriptions are added to
your Azure environment after onboarding. If you choose to exclude specific subscriptions, any new subscriptions added to your Azure environment
after onboarding will be included in the scope. Excluded subscriptions are not visible in Cortex Cloud.
Additional Security Capabilities: Enable additional Cortex security add-ons, if available. This may require additional cloud provider permissions. For
detailed information on the permissions required, see Cloud service provider permissions. The additional security capabilities you can enable
include:
XSIAM analytics: Analyzes your endpoint data to develop a baseline and raise Analytics and Analytics BIOC alerts when anomalies
and malicious behaviors are detected.
Data security posture management: An agentless multi-cloud data security solution that discovers, classifies, protects, and governs
sensitive data.
Registry scanning: Scan container registry images for vulnerabilities, malware, and secrets. You can configure your initial preference
for scanning your registry. Any newly discovered registry, repository, or image in the account will be scanned by default.
Serverless scanning: Implement serverless scanning to detect and remediate vulnerabilities within serverless functions during the
development lifecycle. Seamless integration into CI/CD pipelines enables automated security scans for a continuously secure pre-
production environment.
Cloud Tags: Define tags and tag values to be added to any new resource created by Cortex in the cloud environment.
Log Collection Configuration: To maximize security coverage, include collection of audit logs (Event Hub). This may require additional cloud service
provider permissions. For detailed information on the permissions required, see Cloud service provider permissions.
8. Click Save.
Download Terraform: Download a Terraform file and proceed to manually upload the template to Azure.
Azure Resource Manager: Download a JSON file and proceed to upload the file to Azure Resource Manager.
When the template is successfully executed in Azure Resource Manager, the initial discovery scan is started. When the scan is complete, you can view
the discovered assets in Asset Inventory.
Abstract
When you have downloaded the Terraform template file in the onboarding wizard, you must log in to Microsoft Azure to deploy the template file.
PREREQUISITE:
An Azure subscription.
Permission to deploy a custom template and create its resources in Microsoft Azure (Owner or Global Admin).
Tenant ID and subscription ID. You can view these in Microsoft Azure Portal in Management groups.
Terraform installed on your local machine. You can download Terraform from the official Terraform website and follow the installation instructions for
your operating system.
az login
3. Elevate access for a Global Administrator to grant you the User Access Administrator role at root scope (/):
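One way to perform the elevation in step 3 is with the Azure CLI, as sketched below. A Global Administrator must run this, and you should verify the call against current Azure documentation for your environment:

```shell
# Elevate a Global Administrator to User Access Administrator at root scope (/).
az rest --method post \
  --url "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-12"
```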
4. Create a directory on your local machine to store and run the Terraform code. If you have more than one Azure connector, you need a separate directory
for each one:
5. Navigate to the directory you created and extract the Terraform files. Ensure all necessary Terraform files are present (main.tf,
template_params.tfvars, etc).
cd ~/terraform/azure-connector-1
tar -xzvf <your_template>.tar.gz
terraform init
7. Apply your Terraform configuration using the downloaded parameter file. When prompted, enter the subscription ID:
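A minimal sketch of the apply command for step 7. The parameter file name matches the files extracted in step 5; the exact variable names (such as the subscription ID) are defined inside the downloaded template, so verify them before running:

```shell
# Apply the configuration using the downloaded parameter file.
# Terraform prompts for any variables (e.g. the subscription ID) not set in the file.
terraform apply -var-file="template_params.tfvars"
```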
8. When prompted, review the actions the Terraform will perform and approve them by entering yes.
When the template is successfully uploaded to Azure, the initial discovery scan is started. When the scan is complete, you can view your cloud assets in Asset
Inventory.
Abstract
To onboard your Kubernetes cluster, specify the connection method and settings and download the custom installer file. Execute the file in your Kubernetes
environment to grant Cortex Cloud permissions to collect the data.
Follow this wizard to deploy your Kubernetes Connector. The Kubernetes onboarding wizard is designed to facilitate the seamless setup of Kubernetes data
into Cortex Cloud. The guided experience requires minimal user input; simply enter a name for the installer file and select the type of connector you want to
install. For full control of the setup, you can use the advanced settings. Based on the onboarding settings, Cortex Cloud then creates a custom installer file for
running in your Kubernetes environment. This file, once executed in your Kubernetes environment, grants Cortex Cloud the necessary permissions to collect
the data. The installer file must be executed in your Kubernetes environment to complete the onboarding process. The connector then appears in Kubernetes
Connectors.
3. On the Add Data Sources page, search for and select Kubernetes and click Connect.
4. In the Connect Kubernetes onboarding wizard, enter a name for the installer file, which is the deployment YAML script generated by the selections
you make in this wizard.
Connector: A lightweight solution that provides additional Kubernetes related capabilities, such as enhanced inventory with relations mapping and
policy enforcement.
XDR for Cloud: A unified solution that reduces maintenance by providing protection and vulnerability management capabilities for cloud native
environments (supports Linux servers, Kubernetes, OpenShift, etc.)
NOTE:
For GKE or EKS clusters with the metadata service disabled, the cluster URI (cluster name) must be specified.
Scan Cadence: Define how often to scan (every 1 to 24 hours). Default is 12 hours.
Admission Controller: Select to allow enforcement policies to be configured, ensuring that only compliant resources are admitted into the cluster.
Version: Select which version of the Kubernetes Connector to install. For detailed information on each version, see What's new in Kubernetes
Connector?.
7. Click Next.
8. To complete the onboarding of the Kubernetes Connector, download the YAML script Connector_yaml and run it in your Kubernetes environment:
kubectl apply -f <installer_name.yml>
9. Verify that the deployment succeeded by confirming that the message "Script deployed successfully." is displayed.
When the Kubernetes Connector is deployed, the initial discovery scan is started and the connector appears in Kubernetes Connectors.
This topic describes the changes, additions, known issues, and fixes for each version of the Kubernetes Connector.
Cortex Cloud supports the following current Kubernetes Connector versions. Click the link to view the new features, addressed issues, and known issues per
release.
Version Release Notes Date
0.2 (Beta) Kubernetes Connector version 0.2 (Beta) February 16, 2025
0.1 (Beta), initial release Kubernetes Connector version 0.1 (Beta) November 3, 2024
New features
The following section describes the new features introduced in Kubernetes Connector version 0.2 (Beta).
Feature Description
KSPM Dashboard v1 The Kubernetes Security Posture Management dashboard is now available,
featuring key widgets such as Kubernetes Inventory, Kubernetes Security
Issues & Cases, Protection Coverage, Clusters with Malware, Secrets
Detected, and Top Vulnerable Clusters.
Kubernetes Connector custom detection rules You can choose to allow enforcement policies to be configured, ensuring
that only compliant resources are admitted into the cluster.
Admission prevention by tag Control which workloads are admitted based on predefined tags.
Snapshot ingestion to Unified Asset Inventory Ensures the asset inventory stays up to date by automatically removing
deleted resources.
Known limitations
The following table describes known limitations in the Kubernetes Connector 0.2 (Beta) release.
Feature Description
Kubernetes Connectors Management experience Deleting the Kubernetes Connector is supported only by following a
manual deletion guide.
New features
The following section describes the new features introduced in Kubernetes Connector version 0.1 (Beta).
Feature Description
Kubernetes Connector Onboarding experience Onboard Kubernetes clusters in Cortex Cloud at Settings → Data Source →
Add Data Source.
Kubernetes Connectors Management experience Kubernetes Connector instances can be managed in Cortex Cloud at
Settings → Data Source → Kubernetes.
Kubernetes Assets Inventory experience Explore the different Kubernetes clusters and resources assets in Cortex
Cloud at Assets Inventory → Compute → Kubernetes Cluster.
Kubernetes Compliance Scanner for CIS benchmarks The Kubernetes Connector scans Compliance Violations as part of the CIS
Benchmark.
Image Vulnerability Policy Enforcement via Admission Controller The Kubernetes Connector interacts with the Kubernetes Admission
Controller and enforces Image Vulnerability policies when a Vulnerability
Rule is configured in Cortex Cloud at Detection Rule & Threat Intel →
Cloud Workload → Security Posture Rules → Vulnerability (Rule Type).
Compliance Policy Enforcement via Admission Controller The Kubernetes Connector interacts with the Kubernetes Admission
Controller and enforces Compliance policies when a Compliance Rule
is configured in Cortex Cloud at Detection Rule & Threat Intel → Cloud
Workload → Security Posture Rules → Compliance (Rule Type).
Known limitations
The following table describes known limitations in the Kubernetes Connector 0.1 (Beta) release.
Feature Description
Kubernetes Connector Onboarding experience You must onboard the cloud account where the Kubernetes cluster
resides using the Cortex Cloud Onboarding wizard.
You must select a single Kubernetes cluster from the drop-down list,
download the YAML script, and apply it manually on the specific
cluster. The installation script cannot be used for a different cluster; it
is specifically configured for the selected cluster.
Kubernetes Connectors Management experience Editing and updating the Kubernetes Connector are supported only by
downloading and executing an edit/update flow manually.
Abstract
Grant the correct cloud service provider permissions for Cortex Cloud.
When you set up Cortex Cloud to collect data from your cloud environments, the onboarding wizard ensures that the correct permissions are granted to
Cortex Cloud. The following tables list the permissions required for each option available in the onboarding wizards.
Microsoft Azure
Abstract
ADS
ec2:ModifySnapshotAttribute Snapshots with managed_by: paloaltonetworks tag Share snapshot with the outpost account
ec2:CreateTags Only as part of CreateSnapshot and CopySnapshot operations Add tags for permission scoping and cost visibility
ec2:CopySnapshot Snapshots copied with managed_by: paloaltonetworks tag Re-encrypt snapshot with PANW's KMS key
DSPM
Permission
rds:DeleteDBClusterSnapshot
rds:DeleteDBSnapshot
Permission
rds:AddTagsToResource
rds:CancelExportTask
rds:CreateDBClusterSnapshot
rds:CreateDBSnapshot
rds:Describe*
rds:List*
rds:StartExportTask
s3:PutObject*
s3:ListBucket
s3:GetObject*
s3:DeleteObject*
s3:GetBucketLocation
kms:DescribeKey
kms:GenerateDataKeyWithoutPlaintext
kms:CreateGrant
iam:PassRole
arn:aws:iam::aws:policy/AmazonMemoryDBReadOnlyAccess
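Taken together, the individual DSPM actions above form an IAM policy document (the AmazonMemoryDBReadOnlyAccess entry is an AWS managed policy attached separately by ARN). A minimal sketch, assuming a policy file name of our choosing and listing only a subset of the actions; extend the Action list to match the full table:

```shell
# Write a customer-managed policy covering part of the DSPM permission set.
cat > cortex-dspm-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CortexDspmSubset",
      "Effect": "Allow",
      "Action": [
        "rds:CreateDBSnapshot",
        "rds:CreateDBClusterSnapshot",
        "rds:DeleteDBSnapshot",
        "rds:DeleteDBClusterSnapshot",
        "rds:StartExportTask",
        "rds:CancelExportTask",
        "rds:AddTagsToResource",
        "rds:Describe*",
        "rds:List*",
        "s3:GetObject*",
        "s3:PutObject*",
        "s3:DeleteObject*",
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "kms:DescribeKey",
        "kms:GenerateDataKeyWithoutPlaintext",
        "kms:CreateGrant",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Then create it with the AWS CLI (requires IAM admin credentials):
#   aws iam create-policy --policy-name CortexDSPM \
#     --policy-document file://cortex-dspm-policy.json
echo "wrote cortex-dspm-policy.json"
```

In production you would scope Resource more tightly than "*" where the table's notes allow it.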
Discovery Engine
Permission
arn:aws:iam::aws:policy/AmazonSQSReadOnlyAccess
arn:aws:iam::aws:policy/ReadOnlyAccess
arn:aws:iam::aws:policy/SecurityAudit
DS:DescribeDirectories
DS:ListTagsForResource
DirectConnect:DescribeConnections
DirectConnect:DescribeDirectConnectGateways
DirectConnect:DescribeVirtualInterfaces
Glue:GetSecurityConfigurations
WorkSpaces:DescribeTags
WorkSpaces:DescribeWorkspaceDirectories
WorkSpaces:DescribeWorkspaces
apigateway:GetDomainNames
bedrock-agent:GetAgents
bedrock-agent:GetDataSource
bedrock-agent:GetKnowledgeBases
bedrock-agent:ListAgentAliases
bedrock-agent:ListAgentKnowledgeBases
bedrock-agent:ListAgents
bedrock-agent:ListDataSource
bedrock:ListCustomModel
cloudcontrolapi:GetResource
cloudformation:AmazonCloudFormation
cloudformation:StackStatus
cloudformation:StackSummary
cloudwatch:describeAlarms
comprehendmedical:ListEntitiesDetectionV2Jobs
configservice:DescribeDeliveryChannels
elasticfilesystem:DescribeFileSystemPolicy
elasticloadbalancingv2:DescribeSSLPolicies
forecast:ListTagsForResource
glue:GetConnections
glue:GetResourcePolicies
iam:AmazonIdentityManagement
iam:AttachedPolicy
iam:PolicyRole
iam:RoleDetail
opensearchserverless:ListCollections
s3-control:GetAccessPointPolicy
s3-control:GetAccessPointPolicyStatus
s3-control:GetPublicAccessBlock
s3-control:ListAccessPoints
servicecatalog-appregistry:ListApplications
servicecatalog-appregistry:ListAttributeGroups
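The Discovery Engine list mixes AWS managed policies (the arn:aws:iam::aws:policy/... entries, attached by ARN) with individual actions (granted inline). A dry-run sketch that only echoes the attach commands; the role name is a placeholder:

```shell
ROLE="CortexDiscoveryRole"   # placeholder role name

# Managed policies are attached by ARN; the remaining individual actions in
# the table go into an inline or customer-managed policy instead.
for ARN in \
  "arn:aws:iam::aws:policy/AmazonSQSReadOnlyAccess" \
  "arn:aws:iam::aws:policy/ReadOnlyAccess" \
  "arn:aws:iam::aws:policy/SecurityAudit"
do
  # Drop the leading 'echo' to actually run the attachment.
  echo aws iam attach-role-policy --role-name "$ROLE" --policy-arn "$ARN"
done
```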
Registry Scan
Permission
ecr:BatchGetImage
ecr:GetDownloadUrlForLayer
ecr:GetAuthorizationToken
ecr-public:GetAuthorizationToken
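These four actions are exactly what an image pull needs: ecr:GetAuthorizationToken (and its ecr-public counterpart) obtains a short-lived registry credential, while ecr:BatchGetImage and ecr:GetDownloadUrlForLayer fetch image manifests and layers for analysis. The equivalent manual flow, sketched with placeholder account and region values:

```shell
REGION="us-east-1"                                        # placeholder region
REGISTRY="123456789012.dkr.ecr.${REGION}.amazonaws.com"   # placeholder account ID

# Exchange ecr:GetAuthorizationToken for a docker credential, then pull:
#   aws ecr get-login-password --region "$REGION" \
#     | docker login --username AWS --password-stdin "$REGISTRY"
#   docker pull "$REGISTRY/my-repo:latest"
echo "registry endpoint: $REGISTRY"
```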
Log Collection
Permission
s3:GetObject
s3:ListBucket
sqs:ReceiveMessage
sqs:DeleteMessage
sqs:GetQueueAttributes
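The log-collection set maps to a standard S3-notification pipeline: the collector receives an SQS message announcing a new log object, reads the object from the bucket, and deletes the message. Sketched as echoed commands with placeholder queue and bucket names:

```shell
QUEUE_URL="https://sqs.us-east-1.amazonaws.com/123456789012/cortex-log-queue"  # placeholder
BUCKET="my-log-bucket"                                                         # placeholder

# sqs:ReceiveMessage + sqs:GetQueueAttributes -- poll for new-object notifications.
echo aws sqs receive-message --queue-url "$QUEUE_URL"
# s3:GetObject + s3:ListBucket -- fetch the log object named in the message.
echo aws s3api get-object --bucket "$BUCKET" --key logs/example.gz ./example.gz
# sqs:DeleteMessage -- acknowledge so the notification is not redelivered.
echo aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle RECEIPT
```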
Abstract
ADS
compute.snapshots.setLabels Snapshots with the "cortex-scan-" prefix Add snapshot labels for cost visibility
DSPM
Permission Notes
bigquery.bireservations.get
bigquery.capacityCommitments.get
bigquery.capacityCommitments.list
bigquery.config.get
bigquery.datasets.get
bigquery.datasets.getIamPolicy
bigquery.models.getData
bigquery.models.getMetadata
bigquery.models.list
bigquery.routines.get
bigquery.routines.list
bigquery.tables.export
bigquery.tables.get
bigquery.tables.getData
bigquery.tables.getIamPolicy
bigquery.tables.list
cloudsql.backupRuns.get
cloudsql.backupRuns.create
cloudsql.backupRuns.delete
cloudsql.backupRuns.list
Permission Notes
artifactregistry.repositories.downloadArtifacts
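On Google Cloud, the equivalent packaging for these permissions is a custom role. A sketch, assuming a role file name of our choosing; the includedPermissions list is a subset of the DSPM table above and should be extended to match it:

```shell
cat > cortex-dspm-role.yaml <<'EOF'
title: Cortex DSPM Scanner
stage: GA
includedPermissions:
- bigquery.datasets.get
- bigquery.tables.getData
- bigquery.tables.export
- cloudsql.backupRuns.create
- cloudsql.backupRuns.get
- cloudsql.backupRuns.delete
- cloudsql.backupRuns.list
- artifactregistry.repositories.downloadArtifacts
EOF

# Create it in your project (requires gcloud and role-admin rights):
#   gcloud iam roles create cortexDspm --project=my-project \
#     --file=cortex-dspm-role.yaml
echo "wrote cortex-dspm-role.yaml"
```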
Discovery Engine
Permission Notes
serviceusage.services.use
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.list
storage.buckets.listEffectiveTags
storage.buckets.listTagBindings
storage.objects.getIamPolicy
run.services.list
run.jobs.list
run.jobs.getIamPolicy
cloudscheduler.jobs.list
baremetalsolution.instances.list
baremetalsolution.networks.list
baremetalsolution.nfsshares.list
baremetalsolution.volumes.list
baremetalsolution.luns.list
analyticshub.dataExchanges.list
analyticshub.listings.getIamPolicy
analyticshub.listings.list
notebooks.locations.list
notebooks.schedules.list
composer.imageversions.list
datamigration.connectionprofiles.list
datamigration.connectionprofiles.getIamPolicy
datamigration.conversionworkspaces.list
datamigration.conversionworkspaces.getIamPolicy
datamigration.migrationjobs.list
datamigration.migrationjobs.getIamPolicy
datamigration.privateconnections.list
datamigration.privateconnections.getIamPolicy
aiplatform.batchPredictionJobs.list
aiplatform.nasJobs.list
cloudsecurityscanner.scans.list
Log Collection
Permission Notes
Registry Scan
Permission
artifactregistry.repositories.downloadArtifacts
roles/iam.serviceAccountTokenCreator
Abstract
ADS
DSPM
Permission
Microsoft.Storage/storageAccounts/PrivateEndpointConnectionsApproval/action
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
Microsoft.Storage/storageAccounts/fileServices/fileshares/files/read
Microsoft.Storage/storageAccounts/listKeys/action
Microsoft.Storage/storageAccounts/ListAccountSas/action
Microsoft.Storage/storageAccounts/PrivateEndpointConnectionsApproval/action
Microsoft.Storage/*/read
Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action
Microsoft.DocumentDB/databaseAccounts/listKeys/
Microsoft.Storage/storageAccounts/tableServices/tables/entities/read
Microsoft.Storage/storageAccounts/fileServices/fileshares/files/read
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
Microsoft.CognitiveServices/*/read
Microsoft.CognitiveServices/*/action
Discovery Engine
Permission
Domain.Read.All
EntitlementManagement.Read.All
User.Read.All
Policy.ReadWrite.AuthenticationMethod
GroupMember.Read.All
RoleManagement.Read.All
Group.Read.All
AuditLog.Read.All
Policy.Read.All
IdentityProvider.Read.All
Directory.Read.All
Organization.Read.All
Microsoft.ContainerInstance/containerGroups/containers/exec/action
Microsoft.ContainerRegistry/registries/webhooks/getCallbackConfig/action
Microsoft.DocumentDB/databaseAccounts/listConnectionStrings/action
Microsoft.DocumentDB/databaseAccounts/listKeys/action
Microsoft.DocumentDB/databaseAccounts/readonlykeys/action
Microsoft.Network/networkInterfaces/effectiveNetworkSecurityGroups/action
Microsoft.Network/networkInterfaces/effectiveRouteTable/action
Microsoft.Network/networkWatchers/queryFlowLogStatus/*
Microsoft.Network/networkWatchers/read
Microsoft.Network/networkWatchers/securityGroupView/action
Microsoft.Network/virtualwans/vpnconfiguration/action
Microsoft.Storage/storageAccounts/listKeys/action
Microsoft.Web/sites/config/list/action
Microsoft.Advisor/configurations/read
Microsoft.AlertsManagement/prometheusRuleGroups/read
Microsoft.AlertsManagement/smartDetectorAlertRules/read
Microsoft.AnalysisServices/servers/read
Microsoft.ApiManagement/service/apis/diagnostics/read
Microsoft.ApiManagement/service/apis/policies/read
Microsoft.ApiManagement/service/apis/read
Microsoft.ApiManagement/service/identityProviders/read
Microsoft.ApiManagement/service/portalsettings/read
Microsoft.ApiManagement/service/products/policies/read
Microsoft.ApiManagement/service/products/read
Microsoft.ApiManagement/service/read
Microsoft.ApiManagement/service/tenant/read
Microsoft.AppConfiguration/configurationStores/read
Microsoft.AppPlatform/Spring/apps/read
Microsoft.AppPlatform/Spring/read
Microsoft.Attestation/attestationProviders/read
Microsoft.Authorization/classicAdministrators/read
Microsoft.Authorization/locks/read
Microsoft.Authorization/permissions/read
Microsoft.Authorization/policyAssignments/read
Microsoft.Authorization/policyDefinitions/read
Microsoft.Authorization/roleAssignments/read
Microsoft.Authorization/roleDefinitions/read
Microsoft.Automanage/configurationProfiles/Read
Microsoft.Automation/automationAccounts/credentials/read
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read
Microsoft.Automation/automationAccounts/read
Microsoft.Automation/automationAccounts/runbooks/read
Microsoft.Automation/automationAccounts/variables/read
Microsoft.AzureStackHCI/Clusters/Read
Microsoft.Batch/batchAccounts/pools/read
Microsoft.Batch/batchAccounts/read
Microsoft.Blueprint/blueprints/read
Microsoft.BotService/botServices/read
Microsoft.Cache/redis/firewallRules/read
Microsoft.Cache/redis/read
Microsoft.Cache/redisEnterprise/read
Microsoft.Cdn/profiles/afdendpoints/read
Microsoft.Cdn/profiles/afdendpoints/routes/read
Microsoft.Cdn/profiles/customdomains/read
Microsoft.Cdn/profiles/endpoints/customdomains/read
Microsoft.Cdn/profiles/endpoints/read
Microsoft.Cdn/profiles/origingroups/read
Microsoft.Cdn/profiles/read
Microsoft.Cdn/profiles/securitypolicies/read
Microsoft.Chaos/experiments/read
Microsoft.ClassicCompute/VirtualMachines/read
Microsoft.ClassicNetwork/networkSecurityGroups/read
Microsoft.ClassicNetwork/reservedIps/read
Microsoft.ClassicNetwork/virtualNetworks/read
Microsoft.ClassicStorage/StorageAccounts/read
Microsoft.CognitiveServices/accounts/read
Microsoft.CognitiveServices/accounts/deployments/read
Microsoft.CognitiveServices/accounts/raiPolicies/read
Microsoft.CognitiveServices/models/read
Microsoft.CognitiveServices/accounts/models/read
Microsoft.Communication/CommunicationServices/Read
Microsoft.Compute/availabilitySets/read
Microsoft.Compute/cloudServices/read
Microsoft.Compute/cloudServices/roleInstances/read
Microsoft.Compute/diskEncryptionSets/read
Microsoft.Compute/disks/beginGetAccess/action
Microsoft.Compute/disks/read
Microsoft.Compute/galleries/images/read
Microsoft.Compute/galleries/read
Microsoft.Compute/hostGroups/read
Microsoft.Compute/snapshots/read
Microsoft.Compute/virtualMachineScaleSets/networkInterfaces/read
Microsoft.Compute/virtualMachineScaleSets/publicIPAddresses/read
Microsoft.Compute/virtualMachineScaleSets/read
Microsoft.Compute/virtualMachineScaleSets/virtualMachines/networkInterfaces/ipConfigurations/publicIPAddresses/read
Microsoft.Compute/virtualMachineScaleSets/virtualMachines/read
Microsoft.Compute/virtualMachineScaleSets/virtualmachines/instanceView/read
Microsoft.Compute/virtualMachines/extensions/read
Microsoft.Compute/virtualMachines/instanceView/read
Microsoft.Compute/virtualMachines/read
Microsoft.Confluent/organizations/Read
Microsoft.ContainerInstance/containerGroups/containers/exec/action
Microsoft.ContainerInstance/containerGroups/read
Microsoft.ContainerRegistry/registries/metadata/read
Microsoft.ContainerRegistry/registries/pull/read
Microsoft.ContainerRegistry/registries/read
Microsoft.ContainerRegistry/registries/webhooks/getCallbackConfig/action
Microsoft.ContainerService/managedClusters/read
Microsoft.DBforMariaDB/servers/firewallRules/read
Microsoft.DBforMariaDB/servers/read
Microsoft.DBforMySQL/flexibleServers/configurations/read
Microsoft.DBforMySQL/flexibleServers/databases/read
Microsoft.DBforMySQL/flexibleServers/firewallRules/read
Microsoft.DBforMySQL/flexibleServers/read
Microsoft.DBforMySQL/servers/firewallRules/read
Microsoft.DBforMySQL/servers/read
Microsoft.DBforMySQL/servers/virtualNetworkRules/read
Microsoft.DBforPostgreSQL/flexibleServers/configurations/read
Microsoft.DBforPostgreSQL/flexibleServers/databases/read
Microsoft.DBforPostgreSQL/flexibleServers/firewallRules/read
Microsoft.DBforPostgreSQL/flexibleServers/read
Microsoft.DBforPostgreSQL/servers/configurations/read
Microsoft.DBforPostgreSQL/servers/firewallRules/read
Microsoft.DBforPostgreSQL/servers/read
Microsoft.DBforPostgreSQL/serversv2/firewallRules/read
Microsoft.Dashboard/grafana/read
Microsoft.DataBoxEdge/dataBoxEdgeDevices/read
Microsoft.DataFactory/datafactories/read
Microsoft.DataFactory/factories/integrationruntimes/read
Microsoft.DataFactory/factories/linkedservices/read
Microsoft.DataFactory/factories/read
Microsoft.DataLakeAnalytics/accounts/dataLakeStoreAccounts/read
Microsoft.DataLakeAnalytics/accounts/firewallRules/read
Microsoft.DataLakeAnalytics/accounts/read
Microsoft.DataLakeAnalytics/accounts/storageAccounts/read
Microsoft.DataLakeStore/accounts/firewallRules/read
Microsoft.DataLakeStore/accounts/read
Microsoft.DataLakeStore/accounts/trustedIdProviders/read
Microsoft.DataLakeStore/accounts/virtualNetworkRules/read
Microsoft.DataMigration/services/read
Microsoft.DataShare/accounts/read
Microsoft.Databricks/accessConnectors/read
Microsoft.Databricks/workspaces/read
Microsoft.Datadog/monitors/read
Microsoft.DesktopVirtualization/applicationgroups/read
Microsoft.DesktopVirtualization/hostpools/read
Microsoft.DesktopVirtualization/hostpools/sessionhostconfigurations/read
Microsoft.DesktopVirtualization/hostpools/sessionhosts/read
Microsoft.DesktopVirtualization/workspaces/providers/Microsoft.Insights/diagnosticSettings/read
Microsoft.DesktopVirtualization/workspaces/read
Microsoft.DevCenter/devcenters/read
Microsoft.DevTestLab/schedules/read
Microsoft.Devices/iotHubs/Read
Microsoft.Devices/iotHubs/privateLinkResources/Read
Microsoft.DigitalTwins/digitalTwinsInstances/read
Microsoft.DocumentDB/cassandraClusters/read
Microsoft.DocumentDB/databaseAccounts/listConnectionStrings/action
Microsoft.DocumentDB/databaseAccounts/listKeys/action
Microsoft.DocumentDB/databaseAccounts/read
Microsoft.DocumentDB/databaseAccounts/readonlykeys/action
Microsoft.DomainRegistration/domains/Read
Microsoft.Easm/workspaces/read
Microsoft.Elastic/monitors/read
Microsoft.EventGrid/domains/privateLinkResources/read
Microsoft.EventGrid/domains/read
Microsoft.EventGrid/namespaces/read
Microsoft.EventGrid/partnerNamespaces/read
Microsoft.EventGrid/topics/privateLinkResources/read
Microsoft.EventGrid/topics/read
Microsoft.EventHub/Namespaces/PrivateEndpointConnections/read
Microsoft.EventHub/clusters/read
Microsoft.EventHub/namespaces/authorizationRules/read
Microsoft.EventHub/namespaces/eventhubs/authorizationRules/read
Microsoft.EventHub/namespaces/eventhubs/read
Microsoft.EventHub/namespaces/ipfilterrules/read
Microsoft.EventHub/namespaces/read
Microsoft.EventHub/namespaces/virtualnetworkrules/read
Microsoft.HDInsight/clusters/applications/read
Microsoft.HDInsight/clusters/read
Microsoft.HealthBot/healthBots/Read
Microsoft.HealthcareApis/workspaces/read
Microsoft.HybridCompute/machines/read
Microsoft.Insights/ActivityLogAlerts/read
Microsoft.Insights/Components/read
Microsoft.Insights/DataCollectionEndpoints/Read
Microsoft.Insights/DataCollectionRules/Read
Microsoft.Insights/LogProfiles/read
Microsoft.Insights/MetricAlerts/Read
Microsoft.Insights/actionGroups/read
Microsoft.Insights/diagnosticSettings/read
Microsoft.Insights/eventtypes/values/read
Microsoft.IoTCentral/IoTApps/read
Microsoft.KeyVault/vaults/keys/read
Microsoft.KeyVault/vaults/privateLinkResources/read
Microsoft.KeyVault/vaults/read
Microsoft.Kusto/Clusters/Databases/read
Microsoft.Kusto/Clusters/read
Microsoft.Kusto/clusters/read
Microsoft.LabServices/labs/read
Microsoft.LoadTestService/loadTests/read
Microsoft.Logic/integrationAccounts/read
Microsoft.Logic/workflows/read
Microsoft.Logic/workflows/versions/read
Microsoft.MachineLearningServices/workspaces/computes/read
Microsoft.MachineLearningServices/workspaces/outboundRules/read
Microsoft.MachineLearningServices/workspaces/read
Microsoft.ManagedIdentity/userAssignedIdentities/read
Microsoft.ManagedServices/marketplaceRegistrationDefinitions/read
Microsoft.ManagedServices/registrationAssignments/read
Microsoft.Management/managementGroups/descendants/read
Microsoft.Management/managementGroups/read
Microsoft.Management/managementGroups/subscriptions/read
Microsoft.Maps/accounts/read
Microsoft.Migrate/moveCollections/read
Microsoft.MixedReality/ObjectAnchorsAccounts/read
Microsoft.NetApp/netAppAccounts/capacityPools/read
Microsoft.NetApp/netAppAccounts/capacityPools/volumes/read
Microsoft.NetApp/netAppAccounts/read
Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/read
Microsoft.Network/applicationGateways/read
Microsoft.Network/applicationSecurityGroups/read
Microsoft.Network/azurefirewalls/read
Microsoft.Network/bastionHosts/read
Microsoft.Network/connections/read
Microsoft.Network/ddosProtectionPlans/read
Microsoft.Network/dnsZones/read
Microsoft.Network/expressRouteCircuits/authorizations/read
Microsoft.Network/expressRouteCircuits/peerings/connections/read
Microsoft.Network/expressRouteCircuits/peerings/peerConnections/read
Microsoft.Network/expressRouteCircuits/peerings/read
Microsoft.Network/expressRouteCircuits/read
Microsoft.Network/expressRouteCrossConnections/peerings/read
Microsoft.Network/expressRouteCrossConnections/read
Microsoft.Network/expressRouteGateways/expressRouteConnections/read
Microsoft.Network/expressRouteGateways/read
Microsoft.Network/expressRoutePorts/authorizations/read
Microsoft.Network/expressRoutePorts/links/read
Microsoft.Network/expressRoutePorts/read
Microsoft.Network/expressRoutePortsLocations/read
Microsoft.Network/firewallPolicies/read
Microsoft.Network/frontDoorWebApplicationFirewallPolicies/read
Microsoft.Network/frontDoors/backendPools/read
Microsoft.Network/frontDoors/frontendEndpoints/read
Microsoft.Network/frontDoors/healthProbeSettings/read
Microsoft.Network/frontDoors/loadBalancingSettings/read
Microsoft.Network/frontDoors/read
Microsoft.Network/frontDoors/routingRules/read
Microsoft.Network/frontDoors/rulesEngines/read
Microsoft.Network/loadBalancers/read
Microsoft.Network/localnetworkgateways/read
Microsoft.Network/locations/usages/read
Microsoft.Network/natGateways/read
Microsoft.Network/networkInterfaces/effectiveNetworkSecurityGroups/action
Microsoft.Network/networkInterfaces/effectiveRouteTable/action
Microsoft.Network/networkInterfaces/read
Microsoft.Network/networkSecurityGroups/defaultSecurityRules/read
Microsoft.Network/networkSecurityGroups/read
Microsoft.Network/networkSecurityGroups/securityRules/read
Microsoft.Network/networkWatchers/queryFlowLogStatus/*
Microsoft.Network/networkWatchers/read
Microsoft.Network/networkWatchers/securityGroupView/action
Microsoft.Network/p2sVpnGateways/read
Microsoft.Network/privateDnsZones/ALL/read
Microsoft.Network/privateDnsZones/read
Microsoft.Network/privateEndpoints/privateDnsZoneGroups/read
Microsoft.Network/privateEndpoints/read
Microsoft.Network/privateLinkServices/read
Microsoft.Network/publicIPAddresses/read
Microsoft.Network/publicIPPrefixes/read
Microsoft.Network/routeFilters/read
Microsoft.Network/routeFilters/routeFilterRules/read
Microsoft.Network/routeTables/read
Microsoft.Network/routeTables/routes/read
Microsoft.Network/serviceEndpointPolicies/read
Microsoft.Network/serviceEndpointPolicies/serviceEndpointPolicyDefinitions/read
Microsoft.Network/trafficManagerProfiles/read
Microsoft.Network/virtualNetworkGateways/read
Microsoft.Network/virtualNetworks/read
Microsoft.Network/virtualNetworks/subnets/read
Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read
Microsoft.Network/virtualWans/read
Microsoft.Network/virtualwans/vpnconfiguration/action
Microsoft.Network/vpnServerConfigurations/read
Microsoft.NetworkFunction/azureTrafficCollectors/read
Microsoft.NotificationHubs/Namespaces/NotificationHubs/read
Microsoft.NotificationHubs/Namespaces/read
Microsoft.OperationalInsights/clusters/read
Microsoft.OperationalInsights/querypacks/read
Microsoft.OperationalInsights/workspaces/read
Microsoft.OperationalInsights/workspaces/tables/read
Microsoft.Orbital/spacecrafts/read
Microsoft.PowerBIDedicated/capacities/read
Microsoft.PowerBIDedicated/servers/read
Microsoft.Quantum/Workspaces/Read
Microsoft.RecoveryServices/Vaults/backupProtectedItems/read
Microsoft.RecoveryServices/Vaults/read
Microsoft.RecoveryServices/vaults/backupPolicies/read
Microsoft.RedHatOpenShift/openShiftClusters/read
Microsoft.Relay/Namespaces/read
Microsoft.Resources/Resources/read
Microsoft.Resources/subscriptions/providers/read
Microsoft.Resources/subscriptions/read
Microsoft.Resources/subscriptions/resourceGroups/read
Microsoft.Resources/subscriptions/resourceGroups/write
Microsoft.Resources/templateSpecs/read
Microsoft.SaaS/applications/read
Microsoft.Search/searchServices/read
Microsoft.Security/advancedThreatProtectionSettings/read
Microsoft.Security/autoProvisioningSettings/read
Microsoft.Security/automations/read
Microsoft.Security/iotSecuritySolutions/read
Microsoft.Security/locations/jitNetworkAccessPolicies/read
Microsoft.Security/locations/read
Microsoft.Security/pricings/read
Microsoft.Security/secureScores/read
Microsoft.Security/securityContacts/read
Microsoft.Security/settings/read
Microsoft.Security/workspaceSettings/read
Microsoft.ServiceBus/namespaces/authorizationRules/read
Microsoft.ServiceBus/namespaces/networkrulesets/read
Microsoft.ServiceBus/namespaces/privateEndpointConnections/read
Microsoft.ServiceBus/namespaces/providers/Microsoft.Insights/diagnosticSettings/read
Microsoft.ServiceBus/namespaces/queues/read
Microsoft.ServiceBus/namespaces/read
Microsoft.ServiceBus/namespaces/topics/read
Microsoft.ServiceBus/namespaces/topics/subscriptions/read
Microsoft.ServiceFabric/clusters/read
Microsoft.SignalRService/SignalR/read
Microsoft.SignalRService/WebPubSub/read
Microsoft.Solutions/applications/read
Microsoft.Sql/managedInstances/databases/read
Microsoft.Sql/managedInstances/databases/transparentDataEncryption/read
Microsoft.Sql/managedInstances/encryptionProtector/Read
Microsoft.Sql/managedInstances/read
Microsoft.Sql/managedInstances/vulnerabilityAssessments/Read
Microsoft.Sql/servers/administrators/read
Microsoft.Sql/servers/auditingSettings/read
Microsoft.Sql/servers/databases/auditingSettings/read
Microsoft.Sql/servers/databases/dataMaskingPolicies/read
Microsoft.Sql/servers/databases/dataMaskingPolicies/rules/read
Microsoft.Sql/servers/databases/read
Microsoft.Sql/servers/databases/securityAlertPolicies/read
Microsoft.Sql/servers/databases/transparentDataEncryption/read
Microsoft.Sql/servers/encryptionProtector/read
Microsoft.Sql/servers/firewallRules/read
Microsoft.Sql/servers/read
Microsoft.Sql/servers/securityAlertPolicies/read
Microsoft.Sql/servers/vulnerabilityAssessments/read
Microsoft.SqlVirtualMachine/sqlVirtualMachines/read
Microsoft.Storage/storageAccounts/blobServices/read
Microsoft.Storage/storageAccounts/fileServices/read
Microsoft.Storage/storageAccounts/fileServices/shares/read
Microsoft.Storage/storageAccounts/listKeys/action
Microsoft.Storage/storageAccounts/providers/Microsoft.Insights/diagnosticSettings/read
Microsoft.Storage/storageAccounts/queueServices/read
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/tableServices/read
Microsoft.StorageCache/Subscription/caches/read
Microsoft.StorageCache/caches/read
Microsoft.StorageMover/storageMovers/read
Microsoft.StorageSync/storageSyncServices/privateLinkResources/read
Microsoft.StorageSync/storageSyncServices/read
Microsoft.StreamAnalytics/clusters/Read
Microsoft.StreamAnalytics/streamingjobs/Read
Microsoft.Subscription/Policies/default/read
Microsoft.Synapse/privateLinkHubs/privateLinkResources/read
Microsoft.Synapse/privateLinkHubs/read
Microsoft.Synapse/workspaces/privateLinkResources/read
Microsoft.Synapse/workspaces/read
Microsoft.Synapse/workspaces/sparkConfigurations/read
Microsoft.Synapse/workspaces/sqlPools/geoBackupPolicies/read
Microsoft.Synapse/workspaces/sqlPools/read
Microsoft.VideoIndexer/accounts/read
Microsoft.VisualStudio/Account/Read
Microsoft.Web/certificates/read
Microsoft.Web/customApis/read
Microsoft.Web/hostingEnvironments/Read
Microsoft.Web/serverfarms/Read
Microsoft.Web/sites/Read
Microsoft.Web/sites/basicPublishingCredentialsPolicies/Read
Microsoft.Web/sites/config/list/action
Microsoft.Web/sites/config/read
Microsoft.web/sites/config/appsettings/read
Microsoft.Web/sites/privateEndpointConnections/Read
Microsoft.Web/sites/read
Microsoft.Web/sites/slots/Read
microsoft.web/serverfarms/sites/read
Microsoft.Web/staticSites/Read
Microsoft.Workloads/monitors/read
Microsoft.classicCompute/domainNames/read
microsoft.app/containerapps/read
microsoft.monitor/accounts/read
microsoft.network/virtualnetworkgateways/connections/read
Registry Scan
Permission
Microsoft.ContainerRegistry/registries/metadata/read
Microsoft.ContainerRegistry/registries/pull/read
Microsoft.ContainerRegistry/registries/read
Microsoft.ContainerRegistry/registries/webhooks/getCallbackConfig/action
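The Azure registry-scan actions fit naturally into a custom role definition. A sketch with a placeholder role name, file name, and subscription ID; the four actions are taken from the table above:

```shell
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"   # placeholder

cat > cortex-registry-scan-role.json <<EOF
{
  "Name": "Cortex Registry Scan (example)",
  "IsCustom": true,
  "Description": "Read registry metadata and pull images for scanning",
  "Actions": [
    "Microsoft.ContainerRegistry/registries/read",
    "Microsoft.ContainerRegistry/registries/metadata/read",
    "Microsoft.ContainerRegistry/registries/pull/read",
    "Microsoft.ContainerRegistry/registries/webhooks/getCallbackConfig/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/${SUBSCRIPTION_ID}"]
}
EOF

# Register the role (requires the Azure CLI and rights to define roles):
#   az role definition create --role-definition @cortex-registry-scan-role.json
echo "wrote cortex-registry-scan-role.json"
```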
Abstract
Learn more about setting up the Cortex Cloud environment based on your preferences.
To create a more personalized user experience, Cortex Cloud enables you to customize and configure the following:
Server settings
Security settings
Log forwarding
Abstract
Configure server settings such as keyboard shortcuts, timezone, and timestamp format.
You can configure server settings such as keyboard shortcuts, timezone, and timestamp format, to create a more personalized user experience in Cortex
Cloud. Go to Settings → Configurations → General → Server Settings.
NOTE:
Keyboard shortcuts, timezone, and timestamp format are not set universally and only apply to the user who sets them.
Keyboard Shortcuts Enables you to change the default shortcut settings. The shortcut value
must be a keyboard letter, A through Z, and cannot be the same for both
shortcuts.
Timezone Select a specific timezone. The timezone affects the timestamps displayed
in Cortex Cloud, auditing logs, and when exporting files.
Timestamp Format The format in which to display Cortex Cloud data. The format affects the
timestamps displayed in Cortex Cloud, auditing logs, and when exporting
files.
Email Contacts A list of email addresses Cortex Cloud can use as distribution lists. The
defined email addresses are used to send product maintenance, updates,
and new version notifications. These addresses are in addition to email
addresses registered with your Customer Support Portal account.
Password Protection (for downloaded files) Enable password protection when downloading files retrieved from an
endpoint. This helps prevent users from inadvertently opening potentially malicious files.
Scoped Server Access Enforces access restrictions on users with an assigned scope. A user can
inherit scope permissions from a group, or have a scope assigned directly
on top of the role assigned from the group.
If enabled, you must select the SBAC Mode, which is defined per tenant:
Permissive: Enables users with at least one scope tag to access the
relevant entity with that same tag.
Restrictive: Users must have all the scope tags assigned to the
relevant entity in the system.
Data Ingestion Monitoring (Beta) Data ingestion health monitoring tracks the availability and overall health of data
collection. When enabled, Cortex Cloud creates health alerts for data
ingestion issues.
Related information
View all health alerts on the Health Alerts page. For more information,
see About health issues.
By default, this setting is set to false and field values are evaluated as
case insensitive. This setting overwrites any other default configuration
except for BIOCs, which will remain case insensitive no matter what this
configuration is set to.
Define the incidents target MTTR per incident severity Determines the number of days and hours within which you want incidents
resolved, according to the incident severity: Critical, High, Medium, or Low.
Impersonation Role The type of role permissions granted to the Palo Alto Networks Support
team when opening support tickets. We recommend granting role
permissions only for a specific time frame, and granting full administrative
permissions only when specifically requested by the Support team.
LICENSE TYPE:
Prisma Cloud Compute Tenant Pairing
Requires a Cortex XSIAM or Cortex XDR Pro license
For more information, see Pairing Prisma Cloud with Cortex XSIAM.
Custom Content Export all custom content: Exports custom content, such as
playbooks and scripts, as a content bundle that you can import to
another Cortex Cloud tenant.
Alerts Create timer fields that display in the alerts table and alert layouts. For more
information, see Configure timer fields.
Unified Incident View Enable the Unified Incident view to see a consolidated view of all incidents
across your distributed environment and perform actions on child tenants.
Abstract
Configure security settings such as session expiration, user login expiration, and dashboard expiration.
You can configure security settings such as how long users can be logged into Cortex Cloud, and from which domains and IP ranges users can log in.
Session Expiration User Login Expiration The number of hours (between 1 and 24) after
which the user login session expires. You can
also choose to automatically log users out after a
specified period of inactivity.
Allowed Sessions Approved Domains The domains from which you want to allow user
access (login) to Cortex Cloud. You can add or
remove domains as necessary.
Approved IP Ranges The IP ranges from which you want to allow user
access (login) to Cortex Cloud. You can also
choose to limit API access from specific IP
addresses.
User Expiration Deactivate Inactive User Deactivate an inactive user, and also set the user
deactivation trigger period. By default, user
expiration is disabled. When enabled, enter the
number of days after which inactive users should
be deactivated.
Allowed Domains Domain Name The domain names that can be used in your
distribution lists for reports. This ensures that,
when generating a report, the reports are not
sent to email addresses outside your
organization.
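The effect of the allowed-domains setting can be sketched as a simple filter over report recipients; the domain list and helper name below are illustrative, not a Cortex Cloud API:

```python
# Hypothetical allowed-domain list; domains are compared case-insensitively.
ALLOWED_DOMAINS = {"examplecompany.com"}

def filter_recipients(recipients):
    """Keep only addresses whose domain is on the allowed list."""
    return [r for r in recipients
            if r.rsplit("@", 1)[-1].lower() in ALLOWED_DOMAINS]
```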
Abstract
Learn how to manage access for users, user roles, user groups, and Single Sign-On (SSO) for users on a specific Cortex Cloud tenant.
You can manage access for users, and create and assign user roles and user groups for a specific tenant. When Single Sign-On (SSO) is enabled, you can
manage SSO for users.
Users
You can manage access permissions and activities for users allocated to a specific Customer Support Portal account and tenant.
User roles
User roles enable you to define the type of access and actions a user can perform. User roles are assigned to users, or to user groups.
Cortex Cloud provides predefined built-in user roles that provide specific access rights that cannot be modified. You can also create custom, editable user
roles.
You can also set dataset access permissions using user roles or set specific permissions using role-based access control (RBAC). Configuring administrative
access depends on the security requirements of your organization. Dataset permissions control dataset access for all components, while RBAC controls
access to a specific component. By default, dataset access management is disabled, and users have access to all datasets. If you enable dataset access
management, you must configure access permissions for each dataset type, and for each user role. When a dataset component is enabled for a particular
role, the Issues and Cases pages include information about datasets. For more information on how to set dataset access permissions, see Manage user roles.
User groups
You can use user groups to streamline configuration activities by grouping together users whose access permission requirements are similar. Import user
groups from Active Directory, or create them from scratch in Cortex Cloud.
Single Sign-On
Manage your SSO integration with the Security Assertion Markup Language (SAML) 2.0 standard to securely authenticate system users across enterprise-wide
applications and websites, with one set of credentials. This configuration allows system users to authenticate using your organization's Identity Provider (IdP),
such as Okta or PingOne. You can integrate any IdP with Cortex Cloud supported by SAML 2.0.
SSO with SAML 2.0 configuration activities are dependent on your organization’s IdP. Some of the field values need to be obtained from your organization’s IdP,
and some values need to be added to your organization’s IdP. It is your responsibility to understand how to access your organization’s IdP to provide these
fields, and to add any fields from Cortex Cloud to your IdP.
After SSO configuration is complete, when you sign in as an SSO user, the Cortex Cloud permissions granted to you after logging in, either from the group
mapping or from the default role configuration, are effective throughout the entire session for the defined maximum session length. Maximum session length is
defined in your Cortex Cloud Session Security Settings. This applies even if the default role configuration is updated, or the group membership settings were
changed.
Abstract
Update a user's role, add a user to a user group, and view permissions based on the role and user groups assigned to the user.
If Scope-Based Access Control (SBAC) is enabled for the tenant, you can use specific tags to assign user permissions. For more information, see Manage
user scope.
NOTE:
You can only reduce the permissions of an Account Admin user via Cortex Gateway.
TIP:
To apply the same settings to multiple users, select them, and then right-click and select Edit User Permissions.
Select all to view the combined permissions for every role and user group assigned to the user.
Select a specific role assigned to the user to view the available permissions for that role.
b. Under Components, expand each list to view the permissions to the various Cortex Cloud components.
c. Under Datasets, there are two possibilities for viewing a user's dataset access permissions:
When dataset access management is enabled and the user has access to certain Cortex Query Language (XQL) datasets, the datasets are
listed.
When dataset access management is disabled and users have access to all XQL datasets, the text No dataset has been selected is
displayed.
6. (Optional) If Scope-Based Access Control is enabled for the tenant, click Scope and select a tag family and the corresponding tags.
NOTE:
If you select a tag family without specific tags, permissions apply to all tags in the family.
The scope is based only on the selected tag families. If you scope only based on tags from Family A, then Family B is disregarded in scope
calculations and is considered as allowed.
7. Click Save.
Use a CSV file to import users who belong to a Customer Support Portal account, and assign them roles that are defined in Cortex Cloud. You can use the
CSV template provided in Cortex Cloud, or prepare a CSV file from scratch.
To use the CSV template, click Download example file, and replace the example values with your values.
Prepare a CSV file from scratch. Make sure the file includes these columns:
User email: Email address of the user belonging to a Customer Support Portal account, for example, john.smith1@exampleCompany.com.
Role name: Name of the role that you want to assign to this user, for example, Privileged Responder. The role must already exist in Cortex
Cloud.
Is an account role: A boolean value that defines whether the user is designated with an Account Admin role in Cortex Gateway. Set the value
to TRUE to designate an account role; otherwise, leave the value as FALSE (default).
5. Click Import.
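A CSV with the three columns described above can be generated with a short script. The header names below are taken from the documented fields, but they are assumptions about the exact spelling; verify them against the downloadable example file before importing:

```python
import csv

# One row per user; values follow the examples in the documentation.
rows = [
    {"User email": "john.smith1@exampleCompany.com",
     "Role name": "Privileged Responder",
     "Is an account role": "FALSE"},
]

with open("users.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["User email", "Role name", "Is an account role"])
    writer.writeheader()
    writer.writerows(rows)
```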
TIP:
To apply the same settings to multiple users, select them, and then right-click and select Edit User Permissions.
Select all to view the combined permissions for every role and user group assigned to the user.
Select a specific role assigned to the user to view the available permissions for that role.
4. Under Components, expand each list to view the permissions to the various Cortex Cloud components.
5. Under Datasets, there are two possibilities for viewing a user's dataset access permissions:
When dataset access management is enabled and the user has access to certain Cortex Query Language (XQL) datasets, the datasets are listed.
When dataset access management is disabled and users have access to all XQL datasets, the text No dataset has been selected is displayed.
Hide user
There might be instances where you want to hide a user from the list of users, for example, a user that has a Customer Support Portal Super User role but isn't
active on your Cortex Cloud tenant. After you hide a user, they will no longer be displayed in the list of users when Show User Subset is selected on the Users
page.
TIP:
To apply the same settings to multiple users, select them, and then right-click and select Edit User Permissions.
4. Click Save.
Deactivate user
3. Click Deactivate.
3. Click Remove.
Abstract
Manage user roles that are assigned to Cortex Cloud users or user groups in Cortex Cloud Access Management.
NOTE:
Managing Roles requires an Account Admin or Instance Administrator role. For more information, see Predefined user roles.
Manage user roles that are assigned to Cortex Cloud users or user groups in Cortex Cloud Access Management. User roles enable you to define the type of
access and actions a user can perform.
You can only set dataset access permissions from a user role in Cortex Cloud Access Management. When creating user roles from the Cortex Gateway, these
settings are disabled. By default, dataset access management is disabled, and users have access to all datasets. If you enable dataset access management,
you must configure access permissions for each dataset type, and for each user role. When a dataset component is enabled for a particular role, the Issues
and Cases pages include information about datasets.
5. Under Components, expand each list and select the permissions for each of the components.
6. Under Datasets (Disabled), you have two options for setting the Cortex Query Language (XQL) dataset access permissions for the user role:
Set the user role with access to all XQL datasets by leaving the dataset access management as disabled (default).
Set the user role with limited access to certain XQL datasets by selecting the Enable dataset access management toggle and selecting the
datasets under the different dataset category headings.
7. Click Save.
3. (Optional) Under Role Name, modify the name for the user role.
4. (Optional) Under Description, enter a description for the user role or modify the current description.
5. Under Components, expand each list and select the permissions for each of the components.
6. Under Datasets, you have two options for setting the Cortex Query Language (XQL) dataset access permissions for the user role:
Set the user role with access to all XQL datasets by disabling the Enable dataset access management toggle.
Set the user role with limited access to certain XQL datasets by selecting the Enable dataset access management toggle and selecting the
datasets under the different dataset category headings.
7. Click Save.
2. Right-click the relevant user role, and select Save As New Role.
3. (Optional) Under Role Name, modify the name for the user role.
4. (Optional) Under Description, enter a description for the user role or modify the current description.
5. Under Components, expand each list and select the permissions for each of the components.
6. Under Datasets, you have two options for setting the Cortex Query Language (XQL) dataset access permissions for the user role:
Set the user role with access to all XQL datasets by disabling the Enable dataset access management toggle.
Set the user role with limited access to certain XQL datasets by selecting the Enable dataset access management toggle and selecting the
datasets under the different dataset category headings.
7. Click Save.
Abstract
Learn about Scope-Based Access Control (SBAC) and how to assign users to specific tags of different types in your organization.
With Scope-Based Access Control (SBAC), Cortex Cloud enables you to assign users to specific tags of different types in your organization. By default, all
users have management access to all tags in the tenant. However, after you (as an administrator) assign a management scope to a Cortex Cloud user (non-administrator), the user is then able to manage only the specific tags and their associated entities that are predefined within that scope. To enable SBAC per
server, see Configure server settings.
The permissions in user or group settings define which entity the user can access, and the scope defines what the user can view within the entity.
Policy Management: Create and edit Prevention policies and profiles, Extension policies and profiles, and global and device Exceptions that are within
the scope of the user.
Action Center: View and take actions only on endpoints that are within the scope of the user.
Cases and Issues: View and manage cases and issues filtered according to the scope of the user or group.
CAUTION:
The rest of the functional areas and their permissions in Cortex Cloud do not support SBAC. Accordingly, if these permissions are granted to a scoped user,
the user will be able to access all endpoints in the tenant within this functional area. For example, a scoped user with permission to view incidents can view all
incidents in the system without limitation to a scope; however, they will not be able to generate an issue or device exception.
Also, note that the Agent Installation widget is not available for scoped users.
The currently assigned scope of each user is displayed in the Scope column of the Users table.
Select All
Endpoint Groups: User is scoped according to endpoint groups. The tag selected refers to the specific endpoint group.
Endpoint Tags: User is scoped according to endpoint tags. The tag selected refers to the specific endpoint tag.
4. If you selected a Tag Family option, from the Tags field, select the relevant tags associated with the family.
NOTE:
If you select a tag family without specific tags, permissions apply to all tags in the family.
The scope is based only on the selected Tag Families. If you scope only based on tags from Family A, then Family B is disregarded in scope
calculations and considered as allowed.
5. Click Save.
The users to whom you have scoped particular endpoints are now able to use Cortex Cloud only within the scope of their assigned endpoints.
NOTE:
Make sure to assign the required default permissions for scoped users. This depends on the structure and divisions within your organization and the particular
purpose of each organizational unit to which scoped users belong.
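The scope rule described in the note above can be sketched as a small check, with illustrative data shapes (not a Cortex Cloud API): only tag families selected in the user's scope constrain access, a family selected without specific tags allows every tag in that family, and unselected families are disregarded:

```python
def in_scope(user_scope, asset_tags):
    """user_scope: {family: set of allowed tags, or None for the whole family}.
    asset_tags: {family: tag} describing one asset."""
    for family, allowed in user_scope.items():
        if allowed is None:
            continue                      # family selected without tags: all tags allowed
        if asset_tags.get(family) not in allowed:
            return False                  # asset's tag is outside the scoped tags
    return True                           # families not in the scope are disregarded
```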
Dashboards consist of visualized data powered by fully customizable widgets, which enable you to analyze data from inside or outside Cortex Cloud, in
different formats such as graphs, pie charts, or text. Cortex Cloud displays the predefined dashboards when you log in. You can also create custom
dashboards that are based on the predefined dashboards, or built to your specifications, and you can save any of your dashboards as reports.
Cortex Cloud also provides Command Center dashboards that display interactive overviews of your system activity, with drilldowns to additional dashboards
and associated pages.
From the Dashboard & Reports menu, you can view and manage your dashboards and reports from the dashboard and incidents table, and view alert
exclusions.
Dashboard: Provides dashboards that you can use to view high-level statistics about your agents and incidents.
Reports: View all the reports that Cortex Cloud administrators have run.
Dashboards Manager: Add new dashboards with customized widgets to surface the statistics that matter to you most.
Reports Templates: Build reports using pre-defined templates, or customize a report. Reports can be generated on demand or on a schedule.
Widget Library: Search, view, edit, and create widgets based on predefined widgets and user-created custom widgets.
Abstract
The All Assets page provides a centralized repository containing information about all assets within your environment, including enterprise, multi-cloud, code,
and external surfaces. Dedicated asset modules allow multi-method asset coverage, such as agent, agentless, logs, from various sources. Having full visibility
of assets allows for timely incident response, effective threat hunting, and attack surface reduction.
The asset card provides a unified view of an asset, consolidating attributes, enhancements, and related cases, issues, or findings. The Highlights section
provides an overview of the security risks associated with the asset. When you click an asset, the asset card opens in a tab, enabling users to easily switch
between multiple asset cards at the same time.
Leave comments for collaboration, and perform actions on the asset, depending on the type.
View asset data: See all relevant data and raw information connected to the asset.
Category, class, and type are terms used to facilitate the organization and classification of assets.
Class: represents the highest-level grouping of assets based on their general purpose or domain. It is a broad classification that defines the overall
function of the assets.
Category: represents a more detailed grouping within a class. It categorizes assets based on their normalized function or common type, regardless of
the provider or implementation.
Type: the most specific level of classification and represents the provider-specific name for a particular asset within a category. This level directly refers
to the specific implementation of an asset.
Examples: For the Virtual Machine category: EC2 Instance (AWS), Compute Engine Instance (GCP).
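The three-level classification can be sketched as nested mappings, using only the example given above (class Compute, category Virtual Machine, provider-specific types); any further entries would follow the same shape:

```python
# class -> category -> provider-specific type
ASSET_TAXONOMY = {
    "Compute": {
        "Virtual Machine": {
            "AWS": "EC2 Instance",
            "GCP": "Compute Engine Instance",
        },
    },
}

def type_for(asset_class, category, provider):
    """Resolve the provider-specific type name for a class/category pair."""
    return ASSET_TAXONOMY[asset_class][category][provider]
```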
The following is a list of the fields displayed on the All Assets page. The assets shown, and their data, depend on your system's licensing.
Column Description
Provider The provider that hosts cloud assets, such as GCP, AWS, or Azure.
Class Grouping of assets according to industry standards. For example, Compute, Network, and Storage.
Category Asset types given by each cloud vendor are normalized into this field.
Type A type is the most specific level of classification and represents the provider-specific name for a particular asset within a category.
NOTE:
Tags Users can add information about the asset by adding tags.
Critical cases When a critical Case is attached to an asset, the number of High or Critical cases appears in brackets.
Critical issues When a critical Issue is attached to an asset, the number of High or Critical issues appears in brackets.
Groups Users can group assets using asset groups. The asset group indicates which assets are grouped together.
First observed Timestamp of when the asset was first observed by the source that reported it.
Last observed Timestamp of when the asset was last observed by the source that reported it.
Asset tabs
Assets are separated by their respective classes. The following table describes the tabs shown under All Assets.
Tab Description
Code This section provides an overview of code assets, including all code
repositories, Infrastructure as Code (IaC) resources, CI/CD pipelines, and
software packages.
Data The Data Inventory provides an overview of data assets and their
associated risks, including the number of Assets at Risk, data stored in
AWS, Azure, and GCP, Sensitive Assets, and assets marked as Open to the
World.
Device Overview of assets with devices that have a Cortex XDR agent installed.
External Surface The All External Surface Inventory provides an overview of external-facing
assets, including services versus websites, domains versus certificates,
and their distribution across providers.
Security Services This section provides a complete overview of the security services being
actively managed within your environment.
To see all resources within a connected cluster, click on the cluster, then on Resource Explorer. The Resource Explorer provides granular visibility of the cluster,
helping detect security breaches and fully understand the cluster's components. Disconnected clusters show no data. Ensure all clusters are connected for
maximum protection.
Click Kubernetes Connectivity Management to manage the connector-connectivity of cluster assets, including connector versions, upgrades, statuses, and
more. Here, you can check if a cluster is connected, view the status, and see the connector version. When a new version from Palo Alto Networks is available,
the user can update the connector version here.
API security refers to the process of ensuring your APIs are protected from unauthorized access and potential vulnerabilities.
Using Cortex Cloud API asset management capabilities, you can monitor, manage, and enforce the security policies set up for your APIs.
Start by integrating your cloud services with Cortex Cloud. Then navigate to API Inventory to analyze, manage, and monitor the data ingested from the API.
API inventory
Drill down to review findings, cases, and issues associated with the APIs across the cloud and services in your environment.
Cortex Cloud API inventory of endpoints provides an overview of the API assets across networks and applications, enabling you to manage and implement
security measures to safeguard against security risks and potential vulnerabilities.
At a glance, you can see a graphical representation of the APIs across the cloud and services.
The following table describes the fields in the API Inventory page table view.
Field Description
GET
POST
PUT
PATCH
DELETE
HEAD
OPTIONS
TRACE
CONNECT
Risk factors Indication of the risk type associated with the API:
Internet Exposure
Sensitive Data
No Authentication
No Encryption
Insecure Encryption
Unknown Encryption
API spec name The API specification name is obtained from the title field of the specification imported to Cortex Cloud.
API spec conformance Indicates if the endpoint was found/not found in the specification.
Undefined: Indicates that the endpoint from the gateway is not found in the specification.
Match: Indicates there's a match between the API path of the endpoint and the specification.
Mismatch: Indicates that the API path is the same in the endpoint and specification, but there is a missing query
parameter in the specification.
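The three conformance states above can be sketched as a small classifier. Real matching in Cortex Cloud is richer than this; the comparison here covers only the path and query parameters, and the data shapes are assumptions:

```python
def conformance(endpoint_path, endpoint_params, spec):
    """spec: {path: set of documented query parameters}."""
    if endpoint_path not in spec:
        return "Undefined"          # endpoint not found in the specification
    if set(endpoint_params) <= spec[endpoint_path]:
        return "Match"              # path and query parameters agree
    return "Mismatch"               # path matches, but a parameter is missing from the spec
```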
GCP
AWS
Azure
ONPREM
Apigee
Inspected The number of network traffic packets or connections that have been analyzed and verified by Cortex Cloud.
Request/Response Sensitive Data Shows the sensitive data type contained in the request/response, such as passwords, credit card numbers, SSNs, or
bank account numbers.
Request/Response Content Types The data format sent/received in the request/response of the API calls.
application/json
application/xml
application/x-www-form-urlencoded
multipart/form-data
HTTP
HTTPS
API key
Basic
Learning
OAuth
OIDC
Unknown
Unknown
HTTP
Logs
Traffic mirroring
Configuration
Apigee
API Gateway
APIM
Kong
Click the API asset to open the side card. Each tab includes detailed information from the parsed data of the API asset.
Overview
AWS
GCP
Azure
API Endpoint
API Specification
Account ID:
Cases/Issues/Findings: The link from the number opens the page where you can review the details. Refer to Cases and issues for detailed information.
Vulnerabilities
The vulnerabilities highlight the exposed elements of the API, enabling you to address the issues.
Vulnerability Findings
Packages
Layers
Applications
Endpoint Data
Shows the details of the API endpoint, and the components associated with authentication, such as token type, request/response body schema, and usage
statistics.
Cortex Cloud offers the option to import API specifications that comply with the OpenAPI format. This includes the format, file structure, and data types.
The following table describes the fields in the API Specification page table view.
Field Description
User
CI/CD Pipeline
Traffic
Code Repository
Servers List The server where the API specification is hosted. This field is automatically filled if the specification contains the server URL or
host. If there is no URL or host in the specification, you must manually add the URL or host address.
NOTE:
Even if you already imported the specification, you can edit the API asset and add or update the server list.
API Versions The API version obtained from the API specification.
Associated Endpoints The number of matches between the path of the endpoint and the specification.
You can right-click and select View Associated Endpoints to see the matched paths in the API Endpoints table.
Spec File Name The specification file name that was imported to Cortex Cloud.
Findings The total number of findings, divided by severity. Issues are generated for the highest severity.
Click the API asset to open the side card. Each tab includes detailed information from the parsed data of the API asset.
Overview
Asset ID
AWS
GCP
Azure
API Endpoint
API Specification
Cases/Issues/Findings: The link from the number opens the page where you can review the details. Refer to Cases and issues for detailed information.
Code
Insights
At a glance, you can see a graphical representation of the specification scan results by severity and by category.
You can drill down by clicking a severity to see the details of the scan result. The Details page of the scan results item includes a description of the
issue and a link to OpenAPI.
The page also shows where there are issues with the specification.
Cortex Cloud enables you to import YAML or JSON files. After importing the file, Cortex Cloud analyzes the data to identify vulnerabilities to help you effectively
manage and enforce security measures.
3. Drop or browse for the API specification file and add the server where the file is hosted. This field is automatically filled if the file contains the server
URL or host. If there is no URL or host in the file, you must manually add the URL or host address.
NOTE:
Even if you already imported the file, you can edit the API asset and add or update the server list.
4. Click Import.
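Before importing, you can sanity-check a JSON specification locally: the title field becomes the API spec name, and a missing servers list means you must add the server manually. The file name and helper below are illustrative, not a Cortex Cloud API, and YAML specs would need a YAML parser instead of json:

```python
import json

# Minimal sample spec written to disk for demonstration purposes.
SAMPLE = {
    "openapi": "3.0.0",
    "info": {"title": "Petstore"},
    "servers": [{"url": "https://api.example.com"}],
}

with open("spec.json", "w") as f:
    json.dump(SAMPLE, f)

def check_spec(path):
    """Return (title, has_servers) for a JSON OpenAPI spec file."""
    with open(path) as f:
        spec = json.load(f)
    title = spec.get("info", {}).get("title")    # becomes the API spec name
    has_servers = bool(spec.get("servers"))      # empty -> add the server manually
    return title, has_servers
```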
Abstract
Cortex Cloud Network Configuration provides a representation of your network assets by collecting and analyzing your network resources.
Network asset visibility is a crucial investigative tool for discovering rogue devices and preventing malicious activity within your network. The number of
managed and unmanaged assets in your network provides vital information for assessing security exposure and tracking network communication effectively.
Cortex Cloud Network Configuration accurately represents your network assets by collecting and analyzing the following network resources:
User-defined IP Address Ranges and Domain Names associated with your internal network.
ARP Cache
In addition to the network resources, Cortex Cloud allows you to configure a Windows Agent Profile to scan your endpoints using Ping. This scan provides
updated identifiers of your network assets, such as IP addresses and OS platforms. The scan is automatically distributed by Cortex Cloud to all the agents
configured in the profile and cannot be initiated on demand.
With the data aggregated by Cortex Cloud Network Configuration, you can locate and manage your assets more effectively and reduce the amount of
research required to:
Monitor network data communications both within and outside your network.
Abstract
Define the IP address ranges and domain names used by Cortex Cloud to identify your network assets.
Internal IP address ranges and domain names must be defined in order to track and identify assets in the network. This enables Cortex Cloud to analyze,
locate, and display your network assets.
By default, Cortex Cloud creates Private Network ranges that specify reserved industry-approved ranges. These ranges can only be renamed.
Create New.
1. In the Create IP Address Range dialog box, enter the IP address Name and IP Address, Range or CIDR values.
NOTE:
You can add a range that is fully contained in an existing range; however, you cannot add a new range that partially intersects with another
range.
2. Click Save.
1. In the Upload IP Address Range dialog box, drag and drop or search for a CSV file listing the IP address ranges. Download example file
to view the correct format.
2. Click Add.
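The containment rule in the note above (full containment is allowed, partial intersection is not) can be sketched with the standard-library ipaddress module; the range representation and function name are illustrative, not a Cortex Cloud API:

```python
import ipaddress

def can_add(new_range, existing_ranges):
    """Ranges are (start_ip, end_ip) string pairs, inclusive."""
    ns = int(ipaddress.ip_address(new_range[0]))
    ne = int(ipaddress.ip_address(new_range[1]))
    for start, end in existing_ranges:
        es = int(ipaddress.ip_address(start))
        ee = int(ipaddress.ip_address(end))
        if ne < es or ns > ee:
            continue                       # disjoint ranges: no conflict
        contained = es <= ns and ne <= ee  # new range fully inside existing
        contains = ns <= es and ee <= ne   # new range fully covers existing
        if not (contained or contains):
            return False                   # partial intersection: rejected
    return True
```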
Abstract
The External IP Address Ranges page lists all external IP addresses attributed to your organization.
NOTE:
Viewing external IP address ranges requires the Attack Surface Management add-on.
An external IP address range is an IP address range that Cortex Cloud has discovered through ASM scans and attributed to your organization. The complete
list of external IP Address Ranges can be viewed on the External IP Address Ranges page, as explained in the following steps. External IP address range
information is also available on asset details pages when an external IP address is used to attribute an asset to your organization.
1. In Cortex Cloud, select Assets → Network Configuration → IP Address Ranges → External IP Address Ranges.
Active Responsive IPs count: Number of IP addresses in the range that are currently active and responsive.
Date Added: The first time that Cortex Cloud identified this IP range.
Organization Handles: Unique identifiers for the organizations managing the IP range.
2. In the Internal Domain Suffixes section, +Add the domain suffix you want to include as part of your internal network. For example, acme.com.
FIELD DESCRIPTION
Active Assets Number of assets within the defined range that have reported Cortex Agent logs or appeared in your Network Firewall Logs.
Active Managed Assets Number of assets within the defined range that have reported Cortex Cloud Agent logs.
Modification Time The timestamp of when this range was last changed.
An asset group is a method of scoping within a system, whereby groups of assets with shared attributes are addressed and resolved collectively. When an
asset belongs to a group, the asset group information is displayed on the asset inventory page, in the overview section of the clicked asset.
By grouping assets, you can perform actions on all items within the group simultaneously, instead of managing them individually.
Dynamic Groups: Use the filters Provider, Realm, or Type Name to define and create a dynamic group. Click Create Dynamic Group.
Static Groups: To manually select and group all other specific assets, create a static group by selecting the desired assets. Click Create Static
Group.
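The contrast between the two group kinds can be sketched as follows: a dynamic group is a saved filter evaluated against the inventory, while a static group is an explicit set of asset IDs. The data shapes here are illustrative, not a Cortex Cloud API:

```python
assets = [
    {"id": "vm-1", "provider": "AWS", "type": "EC2 Instance"},
    {"id": "vm-2", "provider": "GCP", "type": "Compute Engine Instance"},
]

def dynamic_group(inventory, **filters):
    """Membership is recomputed on each call, so new matching assets join automatically."""
    return [a["id"] for a in inventory
            if all(a.get(k) == v for k, v in filters.items())]

static_group = {"vm-2"}   # fixed membership, chosen by hand
```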
Abstract
Perform a vulnerability assessment of all endpoints in your network using Cortex Cloud. This includes CVE, endpoint, and application analysis.
Cortex Cloud vulnerability assessment enables you to identify and quantify the security vulnerabilities on an endpoint. After evaluating the risks to which each
endpoint is exposed and the vulnerability status of an installed application in your network, you can mitigate and patch these vulnerabilities on all the endpoints
in your organization.
PREREQUISITE:
The following are prerequisites for Cortex Cloud to perform a vulnerability assessment of your endpoints.
Requirement Description
Windows
Cortex Cloud lists only CVEs relating to the operating system, and not CVEs
relating to applications provided by other vendors.
Cortex Cloud retrieves the latest data for each CVE from the NIST National
Vulnerability Database as well as from the Microsoft Security Response Center
(MSRC).
Cortex Cloud collects KB and application information from the agents, but
calculates CVEs only for KBs, based on the data collected from MSRC and other
sources.
Cortex Cloud does not display open CVEs for endpoints running Windows releases
for which Microsoft no longer fixes CVEs.
Linux
Cortex Cloud collects all the information about the operating system and the
installed applications, and calculates CVEs based on the latest data retrieved
from NIST.
MacOS
Cortex Cloud collects only the applications list from MacOS without CVE
calculation.
If Cortex Cloud doesn't match any CVE to its corresponding application, the error message
"No CVEs Found" is displayed.
Setup and Permissions Ensure Host Inventory Data Collection is enabled for your Cortex XDR agent.
Limitations Cortex Cloud calculates CVEs for applications according to the application version, and not
according to application build numbers.
The Enhanced Vulnerability Assessment mode uses an advanced algorithm to collect extensive details on CVEs from comprehensive databases and to
produce an in-depth analysis of the endpoint vulnerabilities. Turn on the Enhanced Vulnerability Assessment mode from Settings → Configurations →
Vulnerability Assessment. This option may be disabled for the first few days after updating Cortex Cloud, while the Enhanced Vulnerability Assessment engine is
initialized.
PREREQUISITE:
The following are prerequisites for Cortex Cloud to perform an Enhanced Vulnerability Assessment of your endpoints.
Requirement Description
Cortex Cloud collects all the information about the operating system and the
installed applications, and calculates CVE based on the latest data retrieved from
the NIST.
CVEs that apply to applications that are installed by one user aren't detected when
another user without the application installed is logged in during the scan.
MacOS
Cortex Cloud collects all the information about the operating system and the
installed applications, and calculates CVE based on the latest data retrieved from
the NIST.
Setup and Permissions Ensure Host Inventory Data Collection is enabled for your Cortex XDR agent.
Limitations Some CVEs may be outdated if the Cortex XDR agent wasn't updated recently.
Application versions which have reached end-of-life (EOL) may have their version listed
as 0. This doesn't affect the detection of the CVEs.
Some applications are listed twice. One of the instances may display an invalid
version; however, this doesn't affect functionality.
The scanning process may impact the performance of the Cortex XDR agent. The scan
may take up to two minutes.
You can access the Vulnerability Assessment panel from Inventory → Endpoints → Host Inventory → Vulnerability Assessment.
After enabling the feature for the first time, it may take up to a week for the updated data to reach the platform. Re-collecting the data from all endpoints in your
network could take up to six hours. After that, Cortex Cloud initiates periodic recalculations to rescan the endpoints and retrieve the updated data. If at any
point you want to force data recalculation, click Recalculate. A recalculation performed by any user on a tenant updates the list displayed to every user on
the same tenant.
CVE Analysis
To evaluate the extent and severity of each CVE across your endpoints, you can drill down into each CVE in Cortex Cloud and view all the endpoints and
applications in your environment that are impacted by the CVE. Cortex Cloud retrieves the latest information from the NIST public database. From Inventory →
Endpoints → Host Inventory → Vulnerability Assessment, select CVEs on the upper-right bar. This information is also available in the va_cves dataset, which
you can use to build queries in XQL Search.
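As a starting point, a query against the va_cves dataset might look like the following sketch. The field names shown here are illustrative assumptions, not confirmed by this document; check the dataset schema in XQL Search before relying on them.

```
dataset = va_cves
// Assumed field names; verify against your tenant's schema
| filter severity = "CRITICAL"
| fields cve_id, severity, severity_score, affected_endpoints
| sort desc severity_score
```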
If you have the Identity Threat Module enabled, you can also view the CVE analysis in the Host Risk View. To do so, from Inventory → Assets → Asset Scores,
select the Hosts tab, right click on any endpoint, and select Open Host Risk View.
For each vulnerability, Cortex Cloud displays the following default and optional values.
Value Description
Affected endpoints The number of endpoints that are currently affected by this CVE. For
excluded CVEs, the affected endpoints are N/A.
TIP:
You can click each individual CVE to view in-depth details about it on a
panel that appears on the right.
Excluded Indicates whether this CVE is excluded from all endpoint and application
views and filters, and from all Host Insights widgets.
Platforms The name and version of the operating system affected by this CVE.
Severity The severity level (Critical, High, Medium, or Low) of the CVE as ranked in
the NIST database.
Severity score The CVE severity score is based on the NIST Common Vulnerability Scoring
System (CVSS). Click the score to see the full CVSS description.
You can perform the following actions from Cortex Cloud as you analyze the existing vulnerabilities:
View CVE details: Left-click the CVE to view in-depth details about it on a panel that appears on the right. Use the in-panel links as needed.
View a complete list of all endpoints in your network that are impacted by a CVE: Right-click the CVE and then select View affected endpoints.
Learn more about the applications in your network that are impacted by a CVE: Right-click the CVE and then select View applications.
Exclude irrelevant CVEs from your endpoints and applications analysis: Right-click the CVE and then select Exclude. You can add a comment if needed,
as well as Report CVE as incorrect for further analysis and investigation by Palo Alto Networks. The CVE is grayed out and labeled Excluded and no
longer appears on the Endpoints and Applications views in Vulnerability Assessment, or in the Host Insights widgets. To restore the CVE, you can right-
click the CVE and Undo exclusion at any time.
NOTE:
The CVE will be removed from, or reinstated to, all views, filters, and widgets after the next vulnerability recalculation.
Endpoint Analysis
To help you assess the vulnerability status of an endpoint, Cortex Cloud provides a full list of all installed applications and existing CVEs per endpoint and also
assigns each endpoint a vulnerability severity score that reflects the highest NIST vulnerability score detected on the endpoint. This information helps you to
determine the best course of action for remediating each endpoint. From Inventory → Endpoints → Host Inventory → Vulnerability Assessment, select Endpoints
on the upper-right bar. This information is also available in the va_endpoints dataset. In addition, the host_inventory_endpoints preset lists all endpoints, CVE
data, and additional metadata regarding the endpoint information. You can use this dataset and preset to build queries in XQL Search.
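For example, a query over the va_endpoints dataset could be sketched as follows. The field names are assumptions for illustration; adjust them to the schema shown in your tenant.

```
dataset = va_endpoints
// Assumed field names; adjust to your tenant's schema
| filter severity_score >= 7
| fields endpoint_name, cves, last_reported_timestamp
| sort desc severity_score
```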
For each endpoint, Cortex Cloud displays the following default and optional values.
Value Description
CVEs A list of all CVEs that exist on applications that are installed on the endpoint.
TIP:
You can click each individual endpoint to view in-depth details about it on
a panel that appears on the right.
Last Reported Timestamp The date and time when the Cortex XDR agent last started the
process of reporting its application inventory to Cortex Cloud.
Severity The severity level (Critical, High, Medium, or Low) of the CVE as ranked in
the NIST database.
Severity score The CVE severity score based on the NIST Common Vulnerability Scoring
System (CVSS). Click the score to see the full CVSS description.
You can perform the following actions from Cortex Cloud as you investigate and remediate your endpoints:
View endpoint details: Left-click the endpoint to view in-depth details about it on a panel that appears on the right. Use the in-panel links as needed.
View a complete list of all applications installed on an endpoint: Right-click the endpoint and then select View installed applications. This list includes the
name and version of each application on the endpoint. If an installed application has known vulnerabilities, Cortex Cloud also displays the list of
CVEs and the highest severity.
(Windows only) Isolate an endpoint from your network: Right-click the endpoint and then select Isolate the endpoint before or during your remediation to
allow the Cortex XDR agent to communicate only with Cortex Cloud.
(Windows only) View a complete list of all KBs installed on an endpoint: Right-click the endpoint and then select View installed KBs. This list includes all
the Microsoft Windows patches that were installed on the endpoint and a link to the Microsoft official Knowledge Base (KB) support article. This
information is also available in the host_inventory_kbs preset, which you can use to build queries in XQL Search.
Retrieve an updated list of applications installed on an endpoint: Right-click the endpoint and then select Rescan endpoint.
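A query over the host_inventory_kbs preset might be sketched as follows. The endpoint name and field names here are hypothetical, used only to illustrate the preset syntax.

```
preset = host_inventory_kbs
// "endpoint_name" and "kb_id" are assumed field names
| filter endpoint_name = "WIN-SRV-01"
| fields endpoint_name, kb_id
```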
Application Analysis
You can assess the vulnerability status of applications in your network using the Host inventory. Cortex Cloud compiles an application inventory of all the
applications installed in your network by collecting from each Cortex XDR agent the list of installed applications. For each application on the list, you can see
the existing CVEs and the vulnerability severity score that reflects the highest NIST vulnerability score detected for the application. Any new application
installed on the endpoint will appear in Cortex Cloud within 24 hours. Alternatively, you can rescan the endpoint to retrieve the most up-to-date list.
NOTE:
Starting with macOS 10.15, Mac built-in system applications are not reported by the Cortex XDR agent and are not part of the Cortex Cloud Application
Inventory.
To view the details of all the endpoints in your network on which an application is installed, right-click the application and select View endpoints.
To view in-depth details about the application, left-click the application name.
Abstract
A case provides the full contextual story of a problem that impacts your organization's security, giving you an end-to-end view of the problem and streamlining
your understanding of what needs to be solved and how.
Related issues: Issues represent problems that were detected in your environment that exceed defined thresholds, or surpass your organization's
accepted level of risk and threat tolerance.
Affected assets: Cortex Cloud identifies the assets impacted in a case, and how they fit into the story.
Artifacts: Cortex Cloud Artifacts are objects to which we can attribute behavior or influence, such as filenames, file signers, processes, domains, and IP
addresses.
The following tabs describe the main features of cases and the problems they can help you to solve:
Smart grouping
Problems: Understanding and remediating the full scope of a problem, and struggling to identify the real issues due to alert fatigue
To reduce the noise in your environment, Cortex Cloud groups related issues in a single case so that you can address the workload as a whole, and see the full
picture of the related risks and vulnerabilities.
In addition, to enable you to focus your efforts on the most critical issues and identify the issues that need to be addressed first, issues are ranked by severity.
Remediation suggestions
Cortex Cloud provides smart suggestions and guidance to help you to investigate and remediate the issues in a case.
A case can contain one or more related issues. Issues are linked to cases by matching their content. If a new issue is triggered in the system that doesn't
match any of the existing cases, a new case is created. When an issue is linked to a case, all associated assets and artifacts are also linked to the case. Each
case is individually configured and requires its own independent investigation. For more information about working with cases, see Investigate cases.
Once a case is resolved, Cortex Cloud can reopen that case for up to six hours if a new issue is triggered that matches the case. After the six-hour period, any
new issues are linked to a new case for a new investigation. To see a list of all cases, go to Cases & Issues → Cases.
4.1.1.1 | Issues
Abstract
Issues identify the problems that you need to solve in your environment.
Issues identify the problems that you need to solve in your environment. Cortex Cloud creates issues when problems occur in your environment that cross
defined thresholds, or surpass your organization's accepted level of risk and threat tolerance.
How your environment is impacted: the affected assets, or the impact of this issue on your environment
Issues are created from findings or from events that occur in your environment. When an issue is created, Cortex Cloud assesses the content of the issue and
assigns it to a new or existing case. In addition, based on its content, the issue is assigned to a domain that reflects its operational use case, such as Security
or Posture.
On the Cases page you see the issues that are linked to a case in the Issues & Insights tab. To start your investigation into an issue, open an issue in the issue
investigation pane to review issue details, review the causality chain, run automations, and see the suggested remediations.
Cortex Cloud offers the flexibility to manually link and unlink issues from cases. Issues can also be linked to multiple cases. In addition, you can create your
own custom issues from rules that you define. For example, correlation rules, malware rules, and vulnerability rules. For more information about setting up
rules, see What are detection rules?
Abstract
Findings and events form the core of our knowledge data lake. Findings provide context about the current state of the assets in your environment and Events
are logged activities that occur in your environment.
Findings and events form the core of our knowledge data lake.
Findings
Findings are non-actionable, informational objects that provide context about the current state of the assets in your environment.
To gather findings, Cortex Cloud periodically scans the assets in your environment and collects raw data about vulnerabilities, compliance, exposures,
malware, secrets, and other posture-related information about the asset. This raw data is processed, saved to datasets, and recorded as findings. Each time
the assets are scanned, the findings are updated to reflect the current state of the assets.
Each finding is categorized according to its context, for example Configuration, Vulnerability, Compliance, or Identity, and is related directly to the scanned
asset. When you investigate an asset through the Asset Inventory, you can see any findings that were collected for the asset.
Findings themselves are not issues; however, findings that match a specific logic can generate issues. You can also set up your own rules to trigger issues
when certain types of findings are recorded. For example, you can set up Compliance rules that create issues if specific compliance failures are identified in
compliance findings.
You can view all findings in the Findings Table. Access the Findings Table from the Issues page or the Issues tab in a case. You can also search the Findings
dataset to see the findings collected over time for an asset.
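A search over findings for recent vulnerability context could be sketched as follows. Both the dataset name and the field names are assumptions in this sketch; confirm them in XQL Search.

```
dataset = findings
// Assumed dataset and field names
| filter category = "Vulnerability"
| sort desc _time
| limit 100
```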
Events
Cortex Cloud collects event logs that audit the activities that occur in your environment. The logs are ingested from various sources, such as Palo Alto
Networks Next-Generation Firewall (NGFW), Prisma Access, third-party sources, and EDRs. These logs provide a complete picture of the events that occur in
the environment and the activities surrounding the events.
When certain malicious objects (such as malware) are discovered in the event logs, an issue is created. During case investigation, you can query your event
logs to see information about the actors and processes that triggered the issue.
Abstract
Cortex Cloud assigns each case and issue to a domain. Domains help you to organize and manage your work efforts, and differentiate between use cases.
Depending on the objects identified in a case or issue, each case and issue is assigned to a domain that reflects the root cause and the system areas of
operation.
Domains are a contextual boundary that allow you to manage and prioritize each use case and help you to differentiate between your security use cases and
non-security use cases. Domains help you to organize and manage your work efforts, streamline the assignment of cases, and enable you to create tailored
experiences for each domain.
When an issue is created, Cortex Cloud automatically assigns it to a domain, and the same domain is assigned to the associated case. If you create your own
case, you can select the domain to which you want to assign it.
Built-in domains
Domain Description
Security For cases and issues that are associated with case response activities for detecting, preventing, and
blocking threats as they occur in runtime.
For example, identification of malware in a file, a compromised endpoint, or a phishing attempt. These
cases can be assigned to a SOC analyst that specializes in blocking and remediating attacks.
Posture For cases and issues that are associated with risk management activities to detect and mitigate risks to
assets in the environment before they occur in runtime, and improve resilience.
For example, misconfigurations in cloud instances, over-permissive users, or the detection of secrets or
shadow data. These cases can be assigned to an analyst that specializes in strengthening the security
posture.
The Posture domain has subcategories that define the posture issue (Configurations, Vulnerability, Identity,
etc).
Health For cases and issues that are associated with health monitoring activities to ensure optimal platform
performance and gain insights into health drifts. For example, disruptions in data ingestion, collector
connectivity errors, correlation rule errors, and event forwarding errors.
Hunting For cases and issues that are associated with identifying and mitigating potential security threats before
they cause any damage. For example, monitoring network traffic, analyzing logs, and conducting
vulnerability assessments.
IT For cases and issues that are associated with operational activities for ensuring availability and reliability in
system performance. For example, server outages, network connectivity issues, application performance
problems, or IT tasks.
You can't merge cases with different domains; however, a case can be associated with issues from different domains.
For SBAC, there is a tag family for Case Domains, and new tags for each domain that enable you to control access to your domains.
Domains might affect custom content that is connected to cases and issues. Review your custom content to ensure it is associated with the intended
domains. This includes:
Starring Rules
Notifications
Issue Exclusions
Scoring Rules
XQL queries that access the cases or issues datasets in Scheduled Queries and Widgets
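For instance, a Scheduled Query or Widget that counts open issues per domain might resemble this sketch (the field names are assumptions, not confirmed by this document):

```
dataset = issues
// Assumed field names; verify against the issues dataset schema
| filter status != "Resolved"
| comp count() as issue_count by domain
```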
Cortex Cloud Posture Cases group together issues and findings that can be resolved with a single fix, helping your security team focus on issues with the most
impactful outcomes. Cases are prioritized based on security context and issues. Features include:
Security Fix Efficiency: Using machine learning and generative models, Posture Cases help summarize tasks across the various issues that impact the
same asset, ensuring a comprehensive plan to reduce issues with the least number of required steps.
Execution and Delegation: Leverage your integrations on the Cortex Cloud platform to help delegate security fixes to your team.
Posture Cases offer seamless integration with existing cloud security tools such as:
Cloud Infrastructure and Entitlement Management (CIEM): Identities of instances and users are stored in this module. Analysts can access this
information when reviewing a Posture Case.
Attack Paths: Explore the attack paths to any asset in a Posture Case to learn more about potential threats.
Integration: Data ingestion across all platform tools to gather and group together findings, gives you more time to focus on faster resolution.
Follow the steps below to access and explore the Posture Cases menu:
1. Navigate to Home → Cases & Issues → Cases and select Posture Domain from the Domains drop-down.
2. Each case includes a brief summary of risks detected, total number of issues, and their source. The Posture Cases navigation menu includes the
following views and filtering options:
b. Use the Assigned and Status drop-downs to filter cases by Status and Assignee.
c. Select the Issues widget to sort cases by issue status. Criticality is selected by default. It takes into account not only the number of affected parts
of your system, but also security data in the context of your business.
3. Assigned — Click on the Assignee filter to update the assignments of any selected Posture Case.
From the Cases dashboard click on any Posture Case from the list view to get a closer look at the issues addressed by the case, as well as the number of
impacted assets.
Each Posture Case groups issues to help you burn them down more efficiently. Issues are logically grouped together so that a single action on one asset can
remove the corresponding issue.
Posture Cases consist of asset specific details to help security teams rapidly remediate the risk. Sections include:
Overview— Provides a summary of the posture case findings, the primary asset affected, as well as the ability to trace through the impact of the grouped
issues by viewing attack paths, issues, vulnerabilities, and findings.
Issues in this Case— Lists all the issues associated with the posture case. Click on any Issue to view Attack Paths, Vulnerabilities and other details
related to the alert. This view is highly customizable and issues can be individually addressed or snoozed from the Posture Case.
Actions— Leverages machine learning and large language models to combine remediation steps across several issues that affect an asset, providing step-by-step
recommendations to remediate the risk.
For example, a Posture Case can include specific steps to address an issue with an identity role that needs to be updated to resolve several issues. Use the
Assigned button to update case ownership to any team member for further resolution.
Key Assets & Artifacts—Click on Posture Cases > Assets and select Related Assets from the menu to view the Issues impacting the selected asset.
Alerts & Insights—Select any Issue to further investigate the root cause. Use the Issues module, to resolve specific issues. Alternatively, you have the
option to create a policy to resolve issues based on your specific needs.
Timeline—Provides an audit trail of all relevant actions pertaining to the Posture Case. Events can be filtered by: Issues, Response/Case Management
Actions, Automatic Case Updates, and Automation.
Incident War Room—Available as a CLI investigation tool for select use cases.
Abstract
On the Cases page you can track, investigate, and take remedial action on your cases.
On the Cases page you can track, investigate, and take remedial action on your cases. Go to Cases & Issues → Cases and locate the case you want to
investigate.
NOTE:
If you do not have permissions to access an asset of a case (which is shown as grayed out and locked), check your scoping permissions in Manage Users or
Manage User Groups.
If the case contains unassigned issues, or the issues are not assigned to the case assignee, a dialog opens with options for assigning the issues.
To keep cases fresh and relevant, Cortex Cloud implements the following thresholds. When the case reaches a threshold, it stops accepting issues and groups
subsequent related issues in a new case.
You can track the threshold status in the Issues Grouping Status field in the cases table.
You can link and unlink issues from cases. An issue can be assigned to more than one case, and the case domain can be different from the issue domain.
From the Issues page, select one or more issues that you want to link, right-click, and select Link to case. You can select one or more cases to link the issues to.
From the Issues page, select the issue that you want to unlink, right-click, and select Unlink from case. You can select one or more cases to unlink the issue from.
You cannot bulk select issues to unlink.
Abstract
When you resolve a case or issue you must also specify a resolution reason. The following table describes the resolution reasons available for selection.
Resolved - True Positive The case was correctly identified by Cortex Cloud as a real threat, and the case was successfully handled and resolved.
NOTE:
Cases resolved as True Positive and False Positive help Cortex Cloud to identify real threats in your environment by
comparing future cases and associated issues to the resolved cases. Therefore, the handling and scoring of future cases is
affected by these resolutions.
Resolved - Security Testing The case is related to security testing or simulation activity such as a BAS, pentest, or red team activity.
Resolved - Known Issue The case is related to an existing issue or an issue that is already being handled.
Abstract
You can manually create a new case, assign it to a specific domain, and define custom fields for the case.
PREREQUISITE:
To create a case manually, you must have View/Edit permission for Cases and Issues selected under Settings → Configurations → Access Management →
Roles → Components → Cases & Issues.
2. Under Case Details, specify the name, severity, and (Optional) description.
3. (Optional) Select MITRE ATT&CK tactics and techniques to assign to the case.
4. Under Issue Details, select the issues to link to the case, or create a new issue.
TIP:
The issues that you link to a case can be linked to multiple cases, and the issue domains do not need to match the case domain.
Each case creation generates one issue. The name, severity, and description of the generated issue mirror those of the case.
Abstract
You can investigate and manage your cases on the Cases page.
Start your investigation by reviewing the cases on the Cases page. On this page you can see details about the cases in your environment and take actions to
assign, investigate, and remediate your cases. For more information about the connection between cases, issues, and findings, see What are cases?.
Abstract
Use the Cases page to review case details and take remedial action.
The Cases page is the first stop for investigating cases and issues. On the Cases page you can see information about all of the cases in your environment. You
can track and manage your cases, investigate issues linked to a case, and take remedial action. If your cases are configured with SLAs, you can monitor their
progress and make sure that they are aligned with your company's objectives.
By default open cases are displayed, but you can change the filters to browse through resolved cases too. To make it easy for you to identify the most critical
cases, Cortex Cloud provides color coded icons that indicate severity, case scores, and starred cases. In addition, saved table views provide customizable
filter configurations, enabling you to switch seamlessly between work queues and concentrate on the cases that matter most to you.
You can access the Cases page from Cases & Issues → Cases. The page is available in the following modes, and any changes that you make to the case
fields will persist between modes. To change between modes, click on the menu icon .
Detailed view (default): Displays cases in a split-pane format that provides key details of each case and makes it easy to prioritize the most urgent
cases.
The detailed view is a split-pane format consisting of the list pane and the details pane. The list pane consolidates key information for each case based on
filtering options. From the list you can identify the most critical attacks and start prioritizing your cases. Click a case in the list pane to see its full details in the
details pane.
Using the detailed view you can quickly see a summary of each case, its severity, assigned domain, and score. Using this information you can start to triage
and prioritize your cases.
The details pane is split into the following sections and tabs:
Cases header Displays detailed key information and provides administration actions for the case, such as assigning an analyst and
setting the status. Hover and click on a field for more information, and edit where required.
Overview Displays the main case information including the MITRE ATT&CK tactics and techniques identified in the case, the number
of issues linked to the case, automation information about playbooks, and the key artifacts and assets involved in the case.
You can click any of the widgets to start your investigation.
Key Assets & Artifacts Displays asset and artifact information of the key artifacts, hosts, and users associated with the case. Hover over an icon
for more information, or click the more options icon to see the available views and actions. For more information about
investigating key assets and artifacts, see Investigate artifacts and assets.
Issues & Insights Displays a list of issues and insights linked to the case. Click on an issue or insight to open the investigation panel.
Timeline Displays a chronological representation of issues and actions relating to the case. Each timeline entry represents a type of
action that was triggered in the issue.
Issues that include the same artifacts are grouped into one timeline entry and display the common artifact in an interactive
link. Click on an entry to view additional details in the Details panel. You can also filter the timeline by action type.
Depending on the type of action, you can select the entry to further investigate and take action on the entry.
Executions Displays the causality chains associated with the case. On this tab you can investigate a causality chain and take actions
on a host.
TIP:
Set a default tab in the investigation pane by selecting the pin icon next to a tab.
Click on the Notepad, Messenger, and Case Context Data icons to add notes on the case, and see existing notes from other analysts.
To see a full list of case fields and descriptions, run the following query in the Query Builder:
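The exact query is not shown in this excerpt. A likely form, assuming a cases dataset as referenced elsewhere in this documentation, would be:

```
dataset = cases
// Returns one record so you can inspect all case fields
| fields *
| limit 1
```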
Abstract
Saved table views help you to filter table data and create filter configurations that support your workflow.
Saved table views are saved filter configurations of table data that help you to focus on the data that most matters to you. You can filter your table data by
domain, context, work queue, or other criteria, and save configurations that support your workflow.
Click the button on the left side of the filters to open your predefined and custom table views. You can select a table view from the list, or create your own
custom configuration. Your saved table views are available on any page where saved table views are supported.
You can create custom table views from scratch, or by editing an existing table view.
You can also click Revert View to revert the filters to the last saved version of the table view.
3. Click the save icon and type a name for the table view. You can also choose whether to share the view with other users.
1. Click the trash icon to remove all filters from the table.
3. Click the save icon and type a name for the table view. You can also choose whether to share the view with other users.
Abstract
On the Cases page you can track cases, investigate case details, see the issues linked to a case, and take remedial action. Navigate to Cases & Issues →
Cases and click on the case that you want to investigate.
NOTE:
If you do not have permissions to access an asset of a case (which is shown as grayed out and locked), check your scoping permissions in Manage Users or
Manage User Groups.
The following sections walk you through investigating case details on the Cases page.
The case list provides a short summary of each case to help you quickly assess and prioritize your cases:
1. Review the case severity, score, assigned domain, and assignee. Select whether to Star the case.
2. Review the status of the case and when it was last updated.
3. Review the user name associated with the case. If there is more than one user, select the [+x] to display the additional user names.
4. Hover over the issue source icons to display the issue source type. Select the issue source icon to display the three most common issues that were
triggered and how many issues of each are associated with the case.
5. Click on a case to open the case in the right panel. In the case header you can update various data, such as the severity, case name, score, and merge
cases.
The default severity is based on the issue with the highest severity in the case. To manually change the severity, select the severity tag and choose the
new severity.
Click on the assigned score to investigate how the score was calculated. The Manage Case Score dialog displays all rules that contributed to the case
total score, including rules that have been deleted. Deleted scores appear with an N/A.
You can override the Rule Based Score by selecting Set score manually or change the scoring method.
Select the assignee (or Unassigned) and begin typing the assignee's email address for automated suggestions. Users must have logged in to the app to
appear in the auto-generated list.
6. Select the case Status to update the status to New, Under Investigation, or Resolved. By updating the status you can indicate which cases have
been reviewed and filter by status in the cases table.
When setting a case to Resolved, select the resolution reason, add an optional comment, and select whether to Mark all issues as
resolved. For more information, see Resolution reasons for cases and issues.
7. Merge cases you think belong together. Click the more options icon and select Merge Cases.
Rule Based Score: Recalculates the case score to include the merged case scores.
If both cases have been assigned, the merged case takes the target case assignee.
If the target case is assigned and the source case is unassigned, the merged case takes the target assignee.
If the target case is unassigned and the source case is assigned, the merged case takes the source assignee.
In the merged case, all source case context data is lost, regardless of whether the target case contains context data. If the target case contains context
data, that context data is preserved in the merged case.
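The merge rules above can be sketched as a small model; this is illustrative only (function and field names are invented, not a Cortex Cloud API):

```python
def merged_assignee(target_assignee, source_assignee):
    """Sketch of the assignee rules: the target case's assignee wins
    whenever it is set; otherwise the source case's assignee is kept."""
    return target_assignee or source_assignee

def merged_rule_based_score(target_rule_scores, source_rule_scores):
    """Rule Based Score: the merged case recalculates over both cases' rules."""
    return sum(target_rule_scores) + sum(source_rule_scores)

print(merged_assignee("alice@example.com", "bob@example.com"))  # alice@example.com
print(merged_assignee(None, "bob@example.com"))                 # bob@example.com
print(merged_rule_based_score([30, 20], [10]))                  # 60
```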
8. Create an exclusion.
3. Filter the Issues table to define the issues that you want to include in the policy.
5. Click Create.
9. Review the remediation suggestions. Click the more options icon to open the Remediation Suggestions dialog.
Review the number of issues, issue sources, hosts, users, and WildFire hits associated with the case. Select Hosts, Users, and WildFire Hits to display the
asset details.
Add notes or comments to track your investigative steps and any remedial actions taken.
Select the Notepad ( ) to add and edit the case notes. You can use notes to add code snippets to the case or add a general description of the
threat.
Use the Case Messenger ( ) to coordinate the investigation between analysts and track the progress of the investigation. Select the comments to
view or manage comments.
The case Overview tab displays the MITRE tactics and techniques, summarized timeline, and interactive widgets that visualize the number of issues, types of
sources, hosts, and users associated with the case.
Cortex Cloud displays the number of issues associated with each tactic and technique. Select the centered arrow at the bottom of the widget to expand
the widget and display the sub-techniques. Hover over a number of issues to display a link to the MITRE ATT&CK official site.
NOTE:
In some cases, the number of issues associated with the techniques does not match the number for the parent tactic, because of missing tags or
because an issue belongs to several techniques.
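The note above can be illustrated with a small count: an issue tagged with several techniques is counted once per technique, so technique totals can exceed the parent tactic's total. The sample data below is invented for illustration:

```python
from collections import Counter

# Invented sample: issue 1 is tagged with two techniques, so it is counted
# once per technique but only once under the tactic.
issues = [
    {"id": 1, "tactic": "Execution", "techniques": ["T1059", "T1204"]},
    {"id": 2, "tactic": "Execution", "techniques": ["T1059"]},
]

tactic_count = len(issues)  # 2 issues under the Execution tactic
technique_counts = Counter(t for issue in issues for t in issue["techniques"])
print(tactic_count)                    # 2
print(sum(technique_counts.values()))  # 3 -- exceeds the parent tactic's count
```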
2. Investigate information about the Issues, Automation, Issues Sources, and Assets associated with the case.
Read more...
Review the Total number of issues and the colored line indicating the issue severity. Select the severity tag to pivot to the Issues & Insights
table filtered according to the selected severity.
Select each of the issue source types to pivot to the Issues & Insights table filtered according to the selected issue source.
Select See All to pivot to the Key Assets and Artifacts tab.
Select the host names to display the Details panel. The panel is only available for hosts with Cortex XDR agent installed and displays the
host name, whether it’s connected, along with the Endpoint Details, Agent Details, Network, and Policy information. Use the available actions
listed in the top right-hand corner to take remedial actions.
3. Review the artifacts and assets that are associated with the case.
You can click the more options icon next to an asset or artifact to open an associated view, or you can see more details in the Key Assets & Artifacts tab.
The Key Assets & Artifacts tab displays the hosts, users, and key artifacts associated with the case.
1. Investigate artifacts.
In the Artifacts section, search for and review the artifacts associated with the case. Each artifact displays, if available, the artifact information and
available actions according to the artifact type: File, IP Address, or Domain.
2. Investigate hosts.
In the Hosts section, search for and review the hosts associated with the case. Each host displays, if available, host information and available actions.
3. Investigate users.
In the Users section, search for and review the users associated with the case. Each user displays, if available, the user information and available actions.
The Issues & Insights tab displays a table of the issues and insights associated with the case.
1. Use the table tabs to switch between issues and insights, and add filters to the table to refine the displayed entries.
2. Click an issue to open the issue investigation panel. This panel provides detailed information about an issue, enables you to take actions on an issue,
open the causality, and start remediation.
3. If required, you can unlink the issue from the case or link it to other related cases. Click the more options icon and select Link to case or Unlink from
case.
The case Timeline tab is a chronological representation of issues and actions relating to the case.
1. Navigate to the Timeline tab and filter the actions according to the action type.
Each timeline entry is a representation of a type of action that was triggered in the issue. Issues that include the same artifacts are grouped into one
timeline entry and display the common artifact in an interactive link. Depending on the type of action, you can select the entry, host names, and artifacts
to further investigate the action:
For Response Actions and Case Management Actions, you can add and view comments relating to the action.
For Issues, Automatic Case Updates, and Automation actions, click the action to open the Details panel. In the panel, go to the Issues tab to
view the issues table filtered by issue ID, and the Key Assets tab to view a list of Hosts and Users associated with the issue; you can also add
Comments.
Hash artifact: Displays the Verdict, File name, and Signature status of the hash value. Select the hash value to view the WildFire Analysis
Report, Add to Block list, Add to Allow list, and Search file.
Domain artifact: Displays the IP address and VT score of the domain. Select the domain name to Add to EDL.
IP address: Displays whether the IP address is Internal or External, the Whois findings, and the VT score. Expand Whois to view the findings
and Add to EDL.
In action entries that involve additional artifacts, expand Additional artifacts found to investigate further.
The Executions tab displays all the causality chains associated with the case. The causality chains are aggregated according to the following types of
groupings:
Host Name
Multiple Hosts
Undetected Host
User Name
Multiple Users
Undetected Users
NOTE:
Prisma Cloud Compute issues are displayed in the Host Name grouping.
How to investigate case executions
In the Executions section, search for and review the hosts associated with the case. Review the host information and click the more options icon to
perform actions on the host, or open related views.
The causality chains are listed according to the Causality Group Owner (CGO). Expand the CGO card you want to investigate. Each CGO card displays
the CGO name, the following CGO event details, and the causality chain:
CGO Name
Expand the causality chain to further investigate and perform available Causality View actions. For more information, see Causality view.
Abstract
Cortex Cloud generates issues to bring your attention to security risks in your framework.
Issues help you monitor and control the security of your system framework by notifying you about security risks. Cortex Cloud
generates issues from the following:
Findings
Findings themselves are not issues; however, findings that match a specific logic can generate issues.
Firewalls
Integrations
Integrations enable you to ingest events, such as phishing emails and SIEM events, from third-party security and management vendors. You might need to
configure the integrations to determine how events are classified. For example, for email integrations, you might want to classify items based
on the subject field, but for SIEM events, you might want to classify by event type.
Abstract
The Issues page consolidates all non-informational issues from your detection sources.
The Issues page consolidates all non-informational issues from your detection sources. By default, the Issues page displays the security issues received over
the last seven days. To access the Issues page, go to Cases & Issues → Issues.
Each issue is linked to one or more cases. A case provides the full story of a problem by linking related issues, assets, and artifacts in one place. To
understand how an issue fits into the bigger picture, we recommend that you start your investigation from the Cases page. You can
see the issues linked to a case in the Issues & Insights tab of the selected case. Click an issue to open the Issue investigation panel. For more information,
see Issue investigation panel.
Issues associated with the Health domain are not linked to cases and should be investigated individually. You can also see Health domain
issues on the Health Issues page. For more information, see About health issues.
NOTE:
Every 12 hours, the system enforces a cleanup policy to remove the oldest issues once the maximum limit is exceeded. The default issue retention period in
Cortex Cloud is 186 days.
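The cleanup described in the note can be sketched roughly as follows. This is an illustrative model only; the field names and the exact pruning order are assumptions, not the product's implementation:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 186  # default issue retention period noted above

def prune_issues(issues, max_issues, now=None):
    """Rough sketch: drop issues past the retention period, then keep
    only the newest issues up to the cap. Field names are guesses."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    recent = [i for i in issues if i["created"] >= cutoff]
    recent.sort(key=lambda i: i["created"], reverse=True)  # newest first
    return recent[:max_issues]
```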
On the Issues page you can change the displayed information by changing the table view. When you open the page, the Security Domain table view is
displayed. Click the displayed table view to see your predefined and custom table views. You can create custom table views from scratch, or by editing the
predefined options. For more information, see Saved table views.
Cortex Cloud processes and displays the names of users in the following standardized format, also termed “normalized user”.
<company domain>\<username>
As a result, any issue triggered based on network, authentication, or login events displays the User Name in the standardized format in the Issues and Cases
pages. This impacts every issue for Analytics and Cortex Cloud Analytics BIOC, including Correlation, BIOC, and IOC issues triggered on one of these event
types.
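The normalized format is straightforward to produce and split programmatically, which is useful when correlating users across data sources. A minimal sketch; the helper names are hypothetical:

```python
def normalize_user(company_domain, username):
    """Format a user in the normalized form: <company domain>\\<username>."""
    return f"{company_domain}\\{username}"

def split_normalized_user(normalized_user):
    """Split a normalized user back into (company domain, username)."""
    company_domain, _, username = normalized_user.partition("\\")
    return company_domain, username

print(normalize_user("acme.com", "jdoe"))       # acme.com\jdoe
print(split_normalized_user("acme.com\\jdoe"))  # ('acme.com', 'jdoe')
```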
Issue fields
To see a full list of issue fields and descriptions, run the following query in the Query Builder:
Abstract
Investigate an issue to view more detailed information and take any action as required.
Click an issue to start your investigation. The issue investigation panel is context specific; the content and tabs displayed in the panel reflect the context
of the selected issue.
In the issue investigation panel, you can learn more about the cause of the issue, take any actions required, and see the remediation suggestions. The
following tabs are common to most issues:
Tab Description
Overview Displays a description of the issue and provides key information, such as the assignee, status, and time that the issue was created and
updated.
You can also see the affected assets and link to the asset card, and see the number of cases linked to the issue.
Alert Overview Displays a summary of the issue, such as issue details, outstanding tasks, and indicators. Some fields are informational and some are
editable. Includes the following sections (depending on the layout):
ISSUE DETAILS: A summary of the issue, such as type, severity, and when the issue occurred. You can update these fields as
required.
NOTES: Help you understand specific actions taken, and allow you to view conversations between analysts to see how they
arrived at a certain decision. You can see the thought process behind identifying key evidence and similar cases.
Technical Displays an overview of the information collected about the investigation, such as indicators, email information, URL screenshots, etc.
Information When you run a playbook, the sections are automatically completed.
Investigation Enables you to take action on the issue, such as converting a JSON file to CSV and checking whether an IP address is in a CIDR range.
Tools
Abstract
You can triage, investigate, and take actions on issues from the Issues table.
Issues are displayed in the Issues table on the Issues page, and in the Issues & Insights tab on the Cases page. Use the following steps to investigate and
triage an issue:
1. Review the data shown in the issue such as the command-line arguments (CMD), process info, etc.
When the app correlates an issue with additional endpoint data, the Issues table displays a green dot to the left of the row to indicate the issue is eligible
for analysis in the causality view. If the issue has a gray dot, the issue is not eligible for analysis in the causality view. This can occur when there is no
data collected for an event, or the app has not yet finished processing the EDR data. To view the reason analysis is not available, hover over the gray
dot.
3. If deemed malicious, consider responding by isolating the endpoint from the network.
Abstract
You can copy issue text into memory and paste it into an email. This is helpful if you need to share or discuss a specific issue with someone. If you copy a
field value, you can also paste it into a search or begin a query.
1. From the Issues page, right-click the issue you want to send.
3. Paste the URL into an email or use it as needed to share the information.
Abstract
Learn more about analyzing issues in the issue investigation view, and the causality view.
To help you understand the full context of an issue, Cortex Cloud provides the issue investigation view and the causality view, which enable you to quickly
make a thorough analysis of the issue.
The causality view is available for XDR agent issues that are based on endpoint data and for issues raised on network traffic logs that have been stitched with
endpoint data. In addition, you can use the cloud causality view to analyze Cortex Cloud issues and cloud audit logs, while the SaaS causality view
enables you to analyze and investigate software-as-a-service (SaaS) related issues for audit stories, such as Office 365 audit logs and normalized logs.
1. From the Issues table, click an issue to open the issue investigation panel, or right-click an issue and select Investigate Causality Chain.
2. Review the chain of execution and available data for the process and, if available, navigate through the process tree.
Abstract
For Cortex XDR agent related issues, you can create profile exceptions for Windows processes, BTP, and Java deserialization issues directly from the Issues
table.
Profile: Apply the exception to an existing profile or click and enter a Profile Name to create a new profile.
b. In the Profiles table, locate the OS in which you created your global or profile exception and right-click to view or edit the exception properties.
Abstract
You can review issue details offline by exporting issues to a TSV file.
To archive, continue investigation offline, or parse issue details, you can export issues to a tab-separated values (TSV) file:
1. From the Issues page, adjust the filters to identify the issues you want to export.
2. When you are satisfied with the results, click the download icon ( ).
Cortex Cloud exports the filtered result set to the TSV file.
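Because the export is plain tab-separated text, you can post-process it with standard tooling. A minimal sketch, assuming you have read the downloaded file's contents into a string; the column names in the sample are hypothetical and depend on your table view:

```python
import csv
import io

def parse_issue_export(tsv_text):
    """Parse the exported tab-separated text into a list of row dicts.
    Column names depend on your table view, so none are assumed here."""
    return list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))

# Hypothetical two-column export, for illustration only.
rows = parse_issue_export("SEVERITY\tNAME\nhigh\tSuspicious process\n")
print(rows[0]["SEVERITY"])  # high
```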
Abstract
During the process of triaging and investigating issues, you might determine that an issue does not indicate a threat. You can choose to exclude the issue, which
hides the issue, excludes it from cases, and excludes it from search query results.
You can also set up issue exclusion rules that automatically exclude issues that match certain criteria. For more information, see Issue exclusions.
1. From the Issues page, locate the issue you want to exclude.
Abstract
You can run queries on case and issue data with the cases and issues datasets.
You can query case and issue data in the cases and issues datasets. When using the issues dataset, keep in mind the following:
Issue fields are limited to certain fields available in the API. For the full list, see Get Alerts Multi-Events v2 API.
The issues dataset comprises issues from the Security and Health domains. To query only security issues, use the following XQL:
Abstract
Causality is the idea of telling a story in a simple and coherent manner and in a proper context, with the purpose of leading security teams to actionable
outcomes.
Palo Alto Networks products, such as Next-Generation Firewall (NGFW) or the Cortex XDR Agent, can be configured to send rich and detailed data about all
activities to the Strata Logging Service, not only items related to attacks. This means that millions of data points are collected about every entity every single
day. Analyzing so much data as log lines is practically impossible, so Cortex Cloud takes these data points and continuously stitches them automatically into
‘Causality Chains’. This automates the dot-connection process that an investigator would otherwise have to do manually during an investigation. This process
happens constantly for all collected data points, such as processes, files, network connections, and more, regardless of prevention, detection, or alerts of any
kind. With causality, when analysts decide to investigate alerts or go on a hunt, they don't need to manually connect the dots or get distracted by millions of
irrelevant data points; instead, they can focus only on data related to the investigation.
Even the most complicated investigations take just a few moments for a novice analyst, during which causality reveals answers to critical questions, such as:
Who’s involved?
What can be done to reduce the risk of the same thing happening again?
To achieve this, Palo Alto Networks invested in and patented the causality engine and the way it works.
How it works
Causality chains are built using a deep understanding of each operating system (OS) and the way it works, which processes fulfill the various functions, and
more. Causality chains in Windows, macOS, and Linux follow the same guidelines, with different processes and methods used to decide how to build the
chains.
There are some processes in the OS that have very specific roles to fill. For example, services.exe and explorer.exe are used mainly to spawn other
processes. This means that causality chains don’t show these processes by default and start from their child processes as these are only OS processes doing
their job; yet, you can manually add them by right-clicking on the Causality Group Owner (CGO) and adding the parent process.
Cortex Cloud tracks Remote Procedure Call (RPC) requests between processes and doesn't break the causality chain into sub-chains, so the analyst still
sees the full chain of execution, including actions done via RPC. The same goes for code injection: Cortex Cloud tracks the new threads that are started as a
result of such actions and can tie anything that happens as a result to the original injecting process and its causality chain.
Spawners
Processes that are used to spawn other sub-processes are called spawners. These processes are known to start other processes as part of the normal flow of
the operating system (OS). Examples of such processes are explorer.exe, services.exe, wininit.exe, userinit.exe, and more. When spawner
processes are started by a non-spawner process, they are not considered spawners. In Cortex Cloud, we don't distinguish between a Causality Group Owner
(CGO) and a spawner, referring to both as CGO.
Example 2.
userinit.exe starts explorer.exe: explorer.exe is considered a spawner, as this is what we expect to see in the OS.
cmd.exe starts explorer.exe: explorer.exe is NOT considered a spawner, as it's not the role of cmd.exe to start explorer.exe.
The child processes of a spawner are considered CGOs, and they start off the causality chain.
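The spawner and CGO rules above can be sketched as small predicates. This is an illustrative simplification, not the actual engine logic:

```python
# Illustrative simplification of the spawner/CGO rules, not the engine logic.
SPAWNERS = {"explorer.exe", "services.exe", "wininit.exe", "userinit.exe"}

def is_spawner(process_name, parent_name):
    """A known spawner counts as one only when it was started as part of the
    normal OS flow (by another spawner, or as an OS root process)."""
    return process_name in SPAWNERS and (parent_name is None or parent_name in SPAWNERS)

def is_cgo(process_name, parent_name, grandparent_name=None):
    """The CGO is the first non-spawner child of a spawner."""
    return (not is_spawner(process_name, parent_name)
            and is_spawner(parent_name, grandparent_name))

print(is_spawner("explorer.exe", "userinit.exe"))        # True: expected OS flow
print(is_spawner("explorer.exe", "cmd.exe"))             # False: started by a non-spawner
print(is_cgo("cmd.exe", "explorer.exe", "userinit.exe")) # True: starts a causality chain
```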
Causality Chain
When a malicious file, behavior, or technique is detected, Cortex Cloud correlates available data across your detection sensors to display the sequence of
activity that led to the alert. This sequence of events is called the causality chain. The causality chain is built from processes, events, insights, and alerts
associated with the activity. During the alert investigation, you should review the entire causality chain to fully understand why the alert occurred.
The Causality Analysis Engine correlates activity from all detection sensors to establish causality chains that identify the root cause of every alert. The Causality
Analysis Engine also identifies a complete forensic timeline of events that helps you determine the scope and damage of an attack and provides an actionable outcome.
The Causality Group Owner (CGO) is the process in the causality chain that the Causality Analysis Engine identified as being responsible for or causing the
activities that led to the alert. A CGO is always the child of a spawner, so it’s the first process in the operating system (OS) chain of execution that is not loaded
by default as part of what’s expected in a normal OS flow. All sub-processes started by the CGO are linked to it, and help analysts quickly identify the root
cause of why something happened.
NOTE:
There are no CGOs in the Cloud Causality View, when investigating Cortex Cloud alerts and Cloud Audit Logs, or the SaaS Causality View, when
investigating SaaS-related alerts for audit stories, such as Office 365 audit logs and normalized logs.
CID
Each causality chain gets a unique ID called a CID. All actions on this chain, such as process execution, registry changes, and network connections, receive
the same ID. This means that whenever the user queries about a given action, for example, who connected to a malicious IP, the response not only includes the
process that performed it or the user; it also includes all actions related to the same CID. This shows the entire chain of execution alongside all other actions
performed with the connection to the malicious IP.
This concept is important because any alert that is triggered about any action is also mapped to the same CID, meaning that one chain of execution displays
all processes and alerts associated with the relevant CID. Alerts on the same CID are also one of the methods Cortex Cloud uses to group alerts into an
incident.
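The CID lookup described above can be sketched as a simple grouping. This is an illustrative model only; the field names and sample data are invented:

```python
from collections import defaultdict

def group_by_cid(actions):
    """Group recorded actions by their causality chain ID (CID)."""
    chains = defaultdict(list)
    for action in actions:
        chains[action["cid"]].append(action)
    return chains

def chain_for(actions, matching):
    """Return the full chain for any action that satisfies a predicate,
    e.g. 'connected to a malicious IP'."""
    result = []
    for chain in group_by_cid(actions).values():
        if any(matching(action) for action in chain):
            result.extend(chain)
    return result

actions = [
    {"cid": "c1", "type": "process", "name": "cmd.exe"},
    {"cid": "c1", "type": "network", "ip": "203.0.113.7"},
    {"cid": "c2", "type": "process", "name": "svchost.exe"},
]
# Querying the malicious IP returns the whole chain of execution, not just
# the single network action.
print(len(chain_for(actions, lambda a: a.get("ip") == "203.0.113.7")))  # 2
```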
Abstract
See the causality of an issue—the entire process execution chain that led up to the issue in the Cortex Cloud app.
The causality view provides an interactive visualization of a Causality Instance (CI) associated with an issue. On this view you can see the causality (cause and
effect) of events of the entire process execution chain that led up to the issue. By automating the dot-connection process, Cortex Cloud helps you to streamline
your investigations by providing immediate, actionable insights into security issues and the related processes in the causality chain.
To open the causality view, right-click an issue on the Cases or Issues pages. The causality view comprises the causality instance chain, Information overview,
Forensics highlights, and the All Events table. Click nodes on the causality chain to see details about each entity in the Information overview and All Events
table. You can also take actions on the processes in the chain by clicking Actions or right-clicking a specific node.
Show me more
The following sections describe the different areas of the causality view:
Includes the graphical representation of the Causality Instance (CI), built from process nodes, events, and issues. The chain presents the process execution
and might include events that the processes caused, and issues that were triggered by the events or processes.
The Causality Group Owner (CGO) is displayed on the left side of the chain. The CGO is the process that is responsible for all the other processes, events, and
issues in the chain. You need the entire CI to fully understand why the issue occurred. The process node displays icons to indicate when an RPC protocol or code injection is involved.
Injected Node
Remote IP address
Visualization of the branch between the CGO and the actor process of the issue/event.
Displays up to nine additional process branches that reveal issues related to the issue/event. Branches containing issues with the nearest timestamp to
the original issue/event are displayed first.
Causality cards that contain more causality data display a Showing Partial Causality flag. You can manually add additional child or parent process
branches by right-clicking on the process nodes displayed in the graph.
Navigation
You can move the chain, extend it, and modify it. To adjust the appearance of the CI chain, use the size controls on the right. You can also move the chain by
selecting and dragging it. To return the chain to its original position and size, click in the lower-right of the CI graph.
When the Identity Threat Module is enabled, Cortex Cloud displays the anomaly that triggered the issue against the backdrop of baseline behavior for some
issues. To see the profiles that are generated by the detector, select Open Issue Visualization. Each tab displays the factors that triggered the issue, the event, and the
baseline information in tabular format or in timeline format, depending on the type of event. The graphs display the information in full mode, covering 30 days.
The tabular view displays the baseline behavior in a table, with the anomaly highlighted and in a separate line.
The timeline view displays the highlighted atypical value, and if applicable, the minimum, maximum, and average values, for the selected period.
Actions
Hover over a process node to display a Process Information pop-up listing useful information about the process. From any process node, you can also right-
click to display additional actions that you can perform during your investigation:
Show parents and children: If the parent is not displayed by default, you can display it. If the process has children, Cortex Cloud opens a dialog
displaying the Children Process Start Time, Name, CMD, and Username details.
Add to block list or allow list, terminate, or quarantine a process: If after investigating the activity in the CI chain, you want to take action on the process,
you can select the desired action to allow or block the process across your organization.
In the causality view of a Detection (Post Detected) type issue, you can also Terminate process by hash.
Information overview
If you select an issue node, you can see the issue name, source, timestamp, severity, the action taken, the tags assigned to it, and MITRE ATT&CK tactics and
techniques identified. If more than one issue is available, you can scroll through the related issues.
If you select a process node, you can see the path, parent PID, SHA256, associated username, and MITRE ATT&CK details. You can also see the WildFire Score
and download the WildFire report.
Forensics Highlights
Forensics Highlights serves as the central cockpit for investigating and navigating the entire causality view, offering a comprehensive breakdown of events,
processes, and different activities to uncover and respond to potential threats with precision. In each section, you can click on data points to highlight the
related process in the CI. Forensic Highlights includes the following sections:
Script Engines: Delve into detailed activity logs of script engines to uncover potential execution of malicious scripts and code.
Issues: Gain clarity on triggered issues for the entire causality chain.
Process: Investigate process activities to identify unusual behavior or unauthorized process executions.
Network: Analyze forensic data related to network activities, highlighting potential threats in communication flows.
File: Uncover file-related forensic evidence to pinpoint suspicious file operations or unauthorized access.
System Calls: Track low-level system call activities for signs of exploitation or atypical behavior.
RPC Calls: Analyze RPC (Remote Procedure Call) forensic data to trace unauthorized remote operations.
The All Events table displays up to 100,000 related events for the process node that match the issue criteria but were not triggered as issues and
are informational. The Prevention Actions tab displays the actions Cortex Cloud takes on the endpoint based on the threat type discovered by the agent.
To continue the investigation, you can perform the following actions from the right-click pivot menu:
Add <path type> to malware profile allow list from the Process and File table. For example, target_process_path, src_process_path, file_path, or
os_parent_path.
For the behavioral threat protection results, you can take action on the initiator to add it to an allow list or block list, terminate it, or quarantine it.
Revise the event results to see possible related events near the time of an event using an updated timestamp value to Show rows 30 days prior or 30
days after.
TIP:
To view statistics for files on VirusTotal, you can pivot from the Initiator MD5 or SHA256 value of the file on the Files tab.
Abstract
The network causality view shows a chain of individual network processes that triggered an issue as part of a particular sequence of operation.
On the network causality view you can analyze and respond to stitched firewall and endpoint issues. On this view you can see the causality (cause and effect)
of events of the entire process execution chain that led up to the issue. The network causality view presents the network processes that triggered the issue,
generated by Cortex Cloud, Palo Alto Networks next-generation firewalls, and supported sources, such as third-party network sources.
On each node in the CI chain, Cortex Cloud provides information to help you understand what happened around the issue. The CI chain visualizes the firewall
logs, endpoint files, and network connections that triggered issues connected to a security event.
NOTE:
The network causality view displays only the information it collects from the detectors. It is possible that the CI may not show some of the firewall or agent
processes.
The following sections describe the different areas of the network causality view:
Includes the graphical representation of the Causality Instance (CI) along with other information and capabilities to enable you to conduct your analysis.
The Causality View presents a CI chain for each of the processes and the network connection. The CI chain is built from process nodes, events, and issues.
The chain presents the process execution and might also include events that these processes caused and issues that were triggered by the events or
processes. The Causality Group Owner (CGO) is displayed on the left side of the chain. The CGO is the process that is responsible for all the other processes,
events, and issues in the chain. You need the entire CI to fully understand why the issue occurred.
Yellow: Grayware.
Red: Malware.
You can view and download the WildFire report in the Entity Data section.
Navigation
You can move the chain, extend it, and modify it. To adjust the appearance of the CI chain, use the size controls on the right. You can also move the chain by
selecting and dragging it. To return the chain to its original position and size, click in the lower-right of the CI graph.
Actions
Hover over a process node to display a Process Information pop-up listing useful information about the process. From any process node, you can also right-
click to display additional actions that you can perform during your investigation:
Show parents and children: If the parent is not displayed by default, you can display it. If the process has children, Cortex Cloud opens a dialog
displaying the Children Process Start Time, Name, CMD, and Username details.
Add to block list or allow list, terminate, or quarantine a process: If after investigating the activity in the CI chain, you want to take action on the process,
you can select the desired action to allow or block the process across your organization.
In the causality view of a Detection (Post Detected) type issue, you can also Terminate process by hash.
Information Overview
Summarizes information about the issue you are analyzing, including the host name, the process name on which the issue was raised, and the host IP address.
For issues raised on endpoint data or activity, this section also displays the endpoint connectivity status and operating system.
Host isolation
You can choose to isolate the host on which the issue was triggered from the network, or initiate a live terminal session to the host to continue investigation
and remediation.
Displays all related events for the process node which match the issue criteria that were not triggered in the issue table but are informational. You can also
export the table results to a tab-separated values (TSV) file.
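Exported results can be post-processed outside the app. As a minimal sketch (the column names below are invented for the example, not the actual export schema), a TSV export can be loaded with Python's standard csv module:

```python
import csv
import io

# Sample rows standing in for an exported TSV file; the column
# names here are illustrative, not the real export schema.
tsv_data = (
    "event_name\tinitiator\ttimestamp\n"
    "File Create\texplorer.exe\t2024-01-01T00:00:00Z\n"
)

# DictReader with a tab delimiter maps each row to its header names.
reader = csv.DictReader(io.StringIO(tsv_data), delimiter="\t")
rows = list(reader)
print(rows[0]["initiator"])  # explorer.exe
```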
For the Behavioral Threat Protection table, right-click to add to allow list or block list, terminate, and quarantine a process.
TIP:
To view statistics for files on VirusTotal, you can pivot from the Initiator MD5 or SHA256 value of the file on the Files tab.
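The Initiator MD5 and SHA256 values used for such pivots are standard cryptographic digests of the file contents. As a generic illustration (not a platform API), the same digests can be computed locally with Python's standard library for comparison:

```python
import hashlib

def file_digests(data: bytes) -> dict:
    """Return the MD5 and SHA256 hex digests of a file's contents."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

# In practice you would pass the bytes read from the file on disk.
digests = file_digests(b"hello")
print(digests["sha256"])
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```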
Abstract
See the causality of a cloud-type issue—the entire process execution chain that led up to the issue in the Cortex Cloud app.
On the cloud causality view you can analyze and respond to Cortex Cloud issues and cloud audit logs. On this view you can see the causality (cause and
effect) of events of the entire process execution chain that led up to the issue. The cloud causality view presents the event identity and/or IP address and the
actions performed by the identity on the cloud resource. On each node in the CI chain, Cortex Cloud provides information to help you understand what
happened around the event.
The following sections describe the different areas of the cloud causality view:
Includes the graphical representation of the Causality Instance (CI) along with other information and capabilities to enable you to conduct your analysis.
The view presents a single event CI chain. The CI chain is built from Identity and Resource nodes. The Identity node represents entities such as keys, service
accounts, and users, while the Resource node represents entities such as network interfaces, storage buckets, or disks. When available, the chain might also
include an IP address and issues that were triggered on the Identity and Cloud Resource.
Identity node: Displays the name of the identity, generated issue information, and if available the associated IP address.
1. Hover over an Identity node to display, if available, the identity Analytics Profiles.
2. Select the Identity node to display in the Entity Data section additional information about the Identity entity.
3. Select the issue icon to display additional information in the Forensic Highlights section.
Operations: Lists the type of operations performed by the identity on the cloud resources. Hover over the operation to display the original operation name
as provided by the cloud Provider.
Cloud resource node: Displays the referenced resource on which the operation was performed. For more information about the cloud resources icons,
see Key of cloud causality icons.
1. Hover over a resource node to display, if available, the resource Analytics Profiles and Resource Editors statistics.
2. Select the resource node to display in the Entity Data section additional information about the resource entity.
Navigation
You can move the chain, extend it, and modify it. To adjust the appearance of the CI chain, use the size controls on the right. You can also move the chain by
selecting and dragging it. To return the chain to its original position and size, click in the lower-right of the CI graph.
Information Overview
Summarizes information about the issue you are analyzing, including the type of Cloud Provider, Project, and Region on which the event occurred. Select View
Raw Log to view the raw log as provided by the Cloud Provider in JSON format.
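The raw log is provider-specific JSON. As an illustration only (the field names below are invented for the example, not a documented schema), such a log can be inspected programmatically:

```python
import json

# Hypothetical raw audit log entry; real field names depend
# entirely on the cloud provider that produced the log.
raw_log = (
    '{"eventName": "PutObject",'
    ' "userIdentity": {"userName": "svc-backup"},'
    ' "awsRegion": "us-east-1"}'
)

event = json.loads(raw_log)
# Pull out the identity and the operation it performed.
print(event["userIdentity"]["userName"], event["eventName"])
```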
Displays up to 100,000 related events and up to 1,000 related issues. In the All Events table, Cortex Cloud displays detailed information about each of the
related events. To simplify your investigation, Cortex Cloud scans your Cortex Cloud data aggregating the events that have the same Identity or Resource and
displays the entry with an aggregated icon. Right-click and select Show Grouped Events to view the aggregated entries.
Entries highlighted in red indicate that the specific event created an issue. To continue the investigation, right-click an entry and select View in XQL, or
right-click an issue in the Issues table to see the available actions.
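The aggregation described above groups events that share the same Identity or Resource into a single entry. Conceptually (a simplified sketch, not the platform's implementation), grouping by an (identity, resource) pair looks like this:

```python
from collections import defaultdict

# Hypothetical event records; field names are illustrative only.
events = [
    {"identity": "svc-backup", "resource": "bucket-a", "operation": "read"},
    {"identity": "svc-backup", "resource": "bucket-a", "operation": "write"},
    {"identity": "alice", "resource": "disk-1", "operation": "delete"},
]

# Group events that share the same identity and resource.
groups = defaultdict(list)
for ev in events:
    groups[(ev["identity"], ev["resource"])].append(ev)

# Entries with more than one event would get the aggregated icon.
aggregated = {k: v for k, v in groups.items() if len(v) > 1}
print(len(aggregated))  # 1
```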
Disk resource
General resource
Image resource
Abstract
Learn more about the SaaS causality view used to identify and investigate SaaS-specific data associated with SaaS-related issues and SaaS audit logs.
The SaaS causality view provides a powerful way to analyze and investigate software-as-a-service (SaaS) related issues for audit stories, such as Office 365
audit logs and normalized logs, by highlighting the most relevant events and issues associated with a SaaS-related issue. The view enables you to swiftly
investigate a SaaS issue by displaying the series of events and artifacts that are shared with the issue, helping you identify and investigate SaaS-specific
data associated with SaaS-related issues and SaaS audit logs.
A SaaS causality view is only available when Cortex Cloud is configured to collect SaaS audit logs and data. For example, this is possible by configuring an
Office 365 data collector or Google Workspace data collector with the applicable SaaS audit logs. This enables you to investigate any Cortex Cloud issue
generated from any IOC, BIOC, or correlation rules, including SaaS events. The SaaS causality view is available from the Issues table, or from the Query
Results after running a query on the SaaS related data. From both places, you can right-click to pivot to the SaaS causality view.
The scope of the SaaS causality view is the Causality Instance (CI) of an event to which this issue pertains. The SaaS causality view presents the event identity
and/or IP address and the actions performed by the identity on the SaaS resource. On each node in the CI chain, Cortex Cloud provides information to help
you understand what happened around the event.
Information Overview
Summarizes information about the issue you are analyzing, including the type of SaaS provider, project, and region on which the event occurred. Select View
Raw Log to view the raw log as provided by the SaaS provider in JSON format.
Includes the graphical representation of the SaaS Causality Instance (CI) along with other information and capabilities to enable you to conduct your analysis.
The SaaS causality view presents a single event CI chain. The CI chain is built from Identity and Resource nodes. The Identity node represents entities such
as keys, service accounts, and users, while the Resource node represents entities such as network interfaces, storage buckets, or disks. When available, the
chain can also include an IP address and issues that were triggered on the Identity and SaaS resource.
Identity node: Displays the name of the identity, generated issue information, and if available the associated IP address.
1. Hover over an Identity node to display, if available, the identity Analytics Profiles.
2. Select the Identity node to display in the Entity Data section additional information about the Identity entity.
3. Select the issue icon to display additional information in the Forensics Highlights tab.
Resource node: Displays the referenced resource on which the operation was performed. Cortex Cloud displays information on the following resources.
1. Hover over a Resource node to display, if available, the resource Analytics Profiles and Resource Editors statistics.
2. Select the Resource node to display in the Entity Data section additional information about the Resource entity.
You can move the chain, extend it, and modify it. To adjust the appearance of the CI chain, use the size controls on the right. You can also move the chain by
selecting and dragging it. To return the chain to its original position and size, click in the lower-right of the CI graph.
Displays up to 100,000 related events and up to 1,000 related issues. In the All Events table, Cortex Cloud displays detailed information about each of the
related events. To simplify your investigation, Cortex Cloud scans your Cortex Cloud data aggregating the events that have the same Identity or Resource and
displays the entry with an aggregated icon. Right-click and select Show Grouped Events to view the aggregated entries.
Entries highlighted in red indicate that the specific event created an issue. To continue the investigation, right-click an entry and select View in XQL, or
right-click an issue in the Issues table to see the available actions.
4.2.2.4.5 | Timeline
Abstract
From the Cortex Cloud tenant you can view the sequence (or timeline) of events and issues that are involved in any particular threat.
The Timeline provides a forensic timeline of the sequence of events, issues, and informational BIOCs, and correlation rules involved in an attack. While the
causality view of an issue surfaces related events and processes that Cortex Cloud identifies as important or interesting, the Timeline displays all related
events, issues, and informational BIOCs and correlation rules over time.
NOTE:
The Timeline view is not available when investigating cloud-type Cortex Cloud issues and cloud audit logs, or SaaS-related issues for audit stories, such as
Office 365 audit logs and normalized logs. Only the applicable cloud causality view and SaaS causality view are available for this data.
Cortex Cloud displays the Causality Group Owner (CGO) and the host on which the CGO ran in the top left of the timeline. The CGO is the parent process in
the execution chain that Cortex Cloud identified as being responsible for initiating the process tree. In the example above, wscript.exe is the CGO and the
host it ran on was HOST488497. You can also click the blue corner of the CGO to view and filter related processes from the Timeline. This will add or remove
the process and related events or issues associated with the process from the Timeline.
Timespan
By default, Cortex Cloud displays a 24-hour period from the start of the investigation and displays the start and end time of the CGO at either end of the
timescale. You can move the slide bar to the left or right to focus on any time-gap within the timescale. You can also use the time filters above the table to
focus on set time periods.
Activity
Depending on the type of activities involved in the CI chain of events, the activity section can present any of the following lanes across the page:
BIOCs and correlation rules: The category of the issue is displayed on the left (for example, tampering or lateral movement). Each BIOC event also
indicates a color associated with the issue severity. An informational severity can indicate that something interesting happened but no issues were
triggered. These events are likely benign but are byproducts of the actual issue.
Event Information: The event types include process execution, outgoing or incoming connections, failed connections, data upload, and data download.
Process execution and connections are indicated by a dot. One dot indicates one connection, while many dots indicate multiple connections. Uploads
and downloads are indicated by a bar graph that shows the size of the upload and download.
The lanes depict when the activity occurred and provide additional statistics that can help you investigate. For BIOC, correlation rules, and issues, the lanes
also depict activity nodes, highlighted with their severity color: high (red), medium (yellow), low (blue), or informational (gray), and provide additional
information about the activity when you hover over the node.
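The severity-to-color convention used for activity nodes can be represented as a simple lookup (a sketch of the mapping described above, not an API):

```python
# Severity colors as described for Timeline activity nodes.
SEVERITY_COLORS = {
    "high": "red",
    "medium": "yellow",
    "low": "blue",
    "informational": "gray",
}

def node_color(severity: str) -> str:
    """Return the display color for an activity node's severity."""
    return SEVERITY_COLORS[severity.lower()]

print(node_color("High"))  # red
```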
Cortex Cloud displays up to 100,000 issues, BIOCs and Correlation Rules (triggered and informational), and events. Click on a node in the activity area of the
Timeline to filter the results. You also can create filters to search for specific events.
Abstract
You can investigate specific artifacts and assets on dedicated views related to IP address, Network Assets, and File and Process Hash information.
From the Cases view, open the Key Assets & Artifacts tab to see the assets and artifacts that are associated with the case, including hosts, IP addresses, and
users. Icons represent properties of the artifacts and assets. Hover over an icon for more information. Click the more options icon to drill down in dedicated
views, or take actions on the asset or artifact. The Key Assets & Artifacts tab shows the following information:
Artifacts
To aid you with threat investigation, Cortex Cloud displays the WildFire-issued verdict for each key artifact in a case. To provide additional verification
sources, you can integrate external threat intelligence services with Cortex Cloud.
Assets
Displays Hosts and Users details. For hosts with a Cortex XDR agent installed, click on the host name to see more information in the Details panel.
Abstract
Investigate cases, connections, and threat intelligence reports related to a specific IP address on the IP View.
Drill down on an IP address on the IP View. On this view you can investigate and take actions on IP addresses, and see detailed information about an IP
address over a defined 24-hour or 7-day time frame. In addition, to help you determine whether an IP address is malicious, the IP View displays an interactive
visual representation of the collected activity for a specific IP address.
Right-click the IP address that you want to investigate and select Open IP View.
b. Review the location of the IP address. By default, Cortex Cloud displays information on whether the IP address is an internal or external IP
address.
External—Connection Type: Incoming, indicating the IP address is located outside of your organization. The country flag is displayed if the location
information is available.
Internal—Connection Type: Outgoing, indicating the IP address is from within your organization. The XDR Agent icon is displayed if the endpoint
identified by the IP address had an agent installed at that point in time.
The IP address value is color-coded to indicate the IOC severity.
Depending on the threat intelligence sources that are integrated with Cortex Cloud, the following threat intelligence might be available:
NOTE:
Recent Open Cases lists the most recent cases that contain the IP address as part of the case’s key artifacts, according to the Last Updated
timestamp. If the IP address belongs to an endpoint with a Cortex XDR agent installed, the cases are displayed according to the host name rather
than the IP address. To dive deeper into a specific case, select the case ID.
3. In the right-hand view, use the filter criteria to refine the scope of the IP address information that you want to visualize in the map.
In the Type field, select Host Insights to pivot to the Asset View of the host associated with the IP address, or select Network Connections to display the
IP View of the network connections made with the IP address.
Select Recent Outgoing Connections to view the most recent connections made by the IP address. Search all Outgoing Connections to run a
Network Connections query on all the connections made by the IP address.
Abstract
Investigate host assets and view host insights on the Asset View.
Drill down on an asset on the Asset View. On this view you can investigate host assets, view host insights, and see a list of cases related to a host.
NOTE:
The Asset view is available for hosts with a Cortex XDR agent installed.
Identify a host with a Cortex XDR agent installed and select Open Asset View.
The overview displays the host name and any related cases.
Recent Open Cases lists the most recent cases that contain the host as part of the case’s key artifacts, according to the Last Updated timestamp.
To dive deeper into a specific case, select the Case ID.
Network Connections: Pivot to the IP view displaying the IP addresses associated with the host.
Host Risk View: View insights and profiling information. Available with the Identity Threat Module.
Select Run insights collection to initiate a new collection. The next time the Cortex XDR agent connects, the insights are collected and displayed.
Abstract
LICENSE TYPE:
The Host Risk View requires the Identity Threat Module add-on.
Drill down on a host on the Host Risk View. On this view you can see insights and profiling information about a host. When investigating issues and cases, you
can view anomalies in the context of the host that can help you to make better and faster decisions about risks. On the Host Risk View you can take the
following actions:
Analyze the host's behavior over time, and compare it to peer hosts with the same asset role.
1. Right-click the host that you want to investigate and select Open Host Risk View.
TIP:
You can also see a list of all hosts under Inventory → Assets → Asset Scores.
The overview displays network operations, cases, actions, and threat intelligence information relating to the selected host. You can see the host score,
the metadata aggregated by Cortex Cloud, and review the CVEs breakdown by severity. The displayed information and available actions are context
specific.
Common Vulnerabilities and Exposures (CVE) are grouped by severity. For more information on each of the CVEs, refer to Related CVEs.
The graph is based on new cases created within the selected time frame, and updates on past cases that are still active. The straight line represents the
host score, which is based on the scores of the cases associated with the host.
The bubbles in the graph represent the number of issues and insights generated on the selected day. Bigger bubbles indicate more issues and insights,
and a possible risk.
4. Drill down on a score for a specific day by clicking a bubble. Alternatively, review the host information for the selected timeframe (Last 7D, 30D, or
custom timeframe). The widgets in the right panel reflect the selected timeframe.
5. Review the Related Cases for the selected timeframe or score selected in the Score Trend graph. If you are drilling down on a score, you can see the
cases that contributed to the total score on the selected day. Review the following data:
The Status column provides visibility into the reason for the score change. For example, if a case is resolved, its score will decrease, bringing down
the host score.
The Points column displays the risk score that the case contributed to the host score. The points are calculated according to SmartScore or Case
Scoring Rules.
6. Review the Related Issues and Insights for the selected timeframe or score selected in the Score Trend graph.
The timeline displays all detection activities associated with the host. The issues are grouped into buckets according to MITRE ATT&CK tactics. Click on
a tactic to filter the issues in the table. To further investigate an issue, click the issue to open the Issue Panel and click Investigate.
You can see details of the related login attempts, and whether the attempts were successful. To further investigate login activity for the host, click View In
XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language you can create queries to refine your search.
8. Review the host's Latest Authentication Attempts during the selected timeframe or on the day selected in the Score Trend graph.
You can see details of the related authentication attempts, and whether the attempts were successful. To further investigate authentication attempts by
the host, click View In XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language you can create queries to refine your search.
9. Review the Related CVEs during the selected timeframe or on the day selected in the Score Trend graph.
You can see details of the specified CVEs. This information can help you to assess and prioritize security threats on each of the endpoints. To further
investigate related CVEs, click View In XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language you can create queries to
refine your search.
10. For hosts with associated asset roles, compare the data with other peer hosts with the same asset role. In the Score Trend graph click Compare To and
select an asset role to which you want to compare the data.
The dashed line presents the average score for peers with the same asset role as the host, over the same time period. Hover over a bubble on the
dashed line to see the Average score for the selected peer, and a breakdown of the score per endpoint. Click Show x Hosts to see a full breakdown of
the score on the Peer Score Breakdown, filtered by the selected asset role. From the Peer Score Breakdown you can select any host name and pivot to
additional views for further investigation.
In the left panel, click Actions to see a list of available actions. Actions are context specific.
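The score mechanics described in steps 4 and 5 — each case contributes points while active, and resolving a case removes its contribution, bringing the host score down — can be sketched as follows (a simplified illustration; the actual SmartScore and Case Scoring Rules calculations are not documented here):

```python
# Hypothetical cases associated with a host; "points" is the score each
# case contributes while it remains active. This simple sum only
# approximates the platform's scoring, which is governed by SmartScore
# or Case Scoring Rules.
cases = [
    {"id": "CASE-1", "points": 40, "status": "active"},
    {"id": "CASE-2", "points": 25, "status": "active"},
    {"id": "CASE-3", "points": 30, "status": "resolved"},
]

def host_score(cases):
    """Sum the points of active cases; resolved cases stop contributing."""
    return sum(c["points"] for c in cases if c["status"] == "active")

print(host_score(cases))  # 65
```

Under this sketch, resolving CASE-1 would drop the host score by its 40 points on the next evaluation.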
Abstract
Investigate cases, actions, and threat intelligence reports related to a specific file or process hash on the Hash View.
Drill down on a file or process hash on the Hash View. On this view you can investigate and take actions on SHA256 process and file hashes, and see
information about a specific SHA256 hash over a defined 24-hour or 7-day time frame. In addition, you can drill down on each of the process executions, file
operations, cases, actions, and threat intelligence reports relating to the hash.
Identify the file or process hash that you want to investigate and select Open Hash View.
The hash value is color-coded to indicate the WildFire report verdict:
Blue—Benign
Yellow—Grayware
Red—Malware
Depending on the threat intelligence sources that are integrated with Cortex Cloud, the following threat intelligence might be available:
NOTE:
IOC Rule, if applicable, including the IOC Severity, Number of hits, and Source according to the color-coded values:
Quarantined, select the number of endpoints to open the Quarantine Details view.
f. Review the recent open cases that contain the hash as part of the case's Key Artifacts according to the Last Updated timestamp. To dive deeper
into specific cases, select the Case ID.
3. In the right-hand view, use the filter criteria to refine the scope of the hash information that you want to visualize.
Filter criteria:
Event Type: Main set of values that you want to display. The values depend on the selected type of process or file.
Primary: Set of values that you want to apply as the primary set of aggregations. Values depend on the selected Event Type.
Secondary: Set of values that you want to apply as the secondary set of aggregations.
Timeframe: Time period over which to display your defined set of values.
To view the most recent processes executed by the hash, select Recent Process Executions. To run a query on the hash, select Search all Process
Executions.
Abstract
Drill down on a user in the User Risk View or the User View. On these views Cortex Cloud aggregates all of the data collected for a user, displays the information in
graphs and tables, and provides further drilldown options for easy investigation. Cortex Cloud uses Identity Analytics to aggregate information on a user and
displays insights about the user.
LICENSE TYPE:
If the Identity Threat module is enabled you can open the User Risk View. This view displays insights and profiling information to help you investigate issues
and cases. Viewing anomalies in the context of baseline behavior facilitates risk assessment and shortens the time required to reach a verdict.
If the Identity Threat module is not enabled you can open the User View. This view displays an overview of the user and information about the user's score
and activity.
(User Risk View only) Review the user's working hours and related issues.
(User Risk View only) Analyze the user's behavior over time and compare to their peers with the same asset role.
1. Right-click a user name and select Open User Risk View or Open User Card.
TIP:
You can also see a list of all users under Assets → Asset Scores.
NOTE:
Cortex Cloud normalizes and displays case and issue times in your time zone. If you're in a half-hour time zone, the activity in the Normal Activity and
the Actual Activity charts is displayed in the whole-hour time slot preceding it. For example, if you're in a UTC +4.5 time zone, the time displayed for the
activity will be UTC +4.5; however, the visualization in the Normal Activity and the Actual Activity charts will be in the UTC +4 slot.
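The half-hour time-zone behavior follows from flooring the offset to a whole hour for chart placement. For example (a sketch of the bucketing arithmetic described in the note, not the platform's code):

```python
import math

def chart_slot_offset(utc_offset_hours: float) -> int:
    """Floor a (possibly half-hour) UTC offset to the whole-hour chart slot."""
    return math.floor(utc_offset_hours)

# A UTC +4.5 time zone is visualized in the UTC +4 slot.
print(chart_slot_offset(4.5))  # 4
```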
Review the sections of the User Risk View. Depending on your permissions, some information might be limited by your scope.
User Score: Displays the score assigned on the last day of the selected time frame and the change in the score for the selected time frame.
The score is updated continuously as new issues are associated with cases.
Common Locations: Displays the countries from which the user connected most in the past few weeks.
Common UAs: Displays the user agents that the user used most in the past few weeks.
Regular Activity Hours: This data is based on the preceding several weeks and takes into account holidays and seasonality to present an
accurate picture. Cortex Cloud leverages endpoint telemetry to provide the activity data.
The graph is based on new cases created within the selected time frame, and updates on past cases that are still active. The straight line
represents the user score, which is based on the scores of the cases associated with the user.
The bubbles in the graph represent the number of issues and insights generated on the selected day. Bigger bubbles indicate more issues and
insights, and a possible risk.
3. Drill down on a score for a specific day by clicking a bubble. Alternatively, review the user information for the selected timeframe (Last 7D, 30D, or
custom timeframe). The widgets in the right panel reflect the selected timeframe.
4. Review the Related Cases for the selected timeframe or score selected in the Score Trend graph. If you are drilling down on a score, you can see
the cases that contributed to the total score on the selected day. Review the following data:
The Status column provides visibility into the reason for the score change. For example, if a case is resolved, its score will decrease, bringing
down the user score.
The Points column displays the risk score that the case contributed to the user score. The points are calculated according to SmartScore or
Case Scoring Rules.
5. Review the Related Issues and Insights for the selected timeframe or score selected in the Score Trend graph.
The timeline displays all detection activities associated with the user. The issues are grouped into buckets according to MITRE ATT&CK tactics.
Click on a tactic to filter the issues in the table. To further investigate an issue, click the issue to open the Issue Panel and click Investigate.
6. Review the user activity per day in the Actual Activity widget.
In this widget Cortex Cloud compares the user's actual activity data with the Regular Activity Hours, and highlights any differences or anomalies in
the user's expected activity.
The cells are marked according to the activity that took place. A dashed frame indicates that Cortex Cloud detected uncommon activity in the
time slot.
A dashed ribbon highlights discrepancies between regular activity hours and actual activity.
A numbered ribbon indicates the number of issues and insights that occurred on a specific day/hour.
You can see details of the related login attempts, and whether the attempts were successful. To further investigate login activity for the user, click
View In XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language you can create queries to refine your search.
8. Review the user's Latest Authentication Attempts during the selected timeframe or on the day selected in the Score Trend graph.
You can see details of the related authentication attempts, and whether the attempts were successful. To further investigate authentication
attempts by the user, click View In XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language you can create queries to
refine your search.
9. Review the user's SaaS log activity during the selected timeframe or on the day selected in the Score Trend graph. You can see details of the
SaaS logs that were ingested into the platform in context of the user.
To further investigate SaaS log activity for the user, click View In XQL to link to a prefilled query in the Query Builder. Using Cortex Query Language
you can refine your search.
10. For users with associated asset roles, compare the data with other peers with the same asset role. In the Score Trend graph click Compare To and
select an asset role to which you want to compare the data.
The dashed line presents the average score for peers with the same asset role as the user, over the same time period. Hover over a bubble on the
dashed line to see the Average score for the selected peer, and a breakdown of the score per endpoint. Click Show x Hosts to see a full
breakdown of the score on the Peer Score Breakdown, filtered by the selected asset role. From the Peer Score Breakdown you can select any user
name and pivot to additional views for further investigation.
User View
Review the sections of the User View. Depending on your permissions, some information might be limited by your scope.
1. In the left panel, review the overview of the user. The displayed information is aggregated by Cortex Cloud from cases, Workday, and Active
Directory data.
The User Score displays the score that is currently assigned to the user and is updated continuously as new issues are associated with cases.
The graph is based on new cases created within the selected time frame, and updates on past cases that are still active. The straight line
represents the user score, which is based on the scores of the cases associated with the user.
Select a score to display in the Cases table the cases that contributed to the total user score on a specific day.
3. Click a score to drill down on the score for a specific day. Alternatively, review the user information for the selected timeframe (Last 7D, 30D, or
custom timeframe).
4. Review the Related Cases for the selected timeframe or score selected in the Score Trend graph. If you are drilling down on a score, you can see
the cases that contributed to the total score on the selected day. Review the following data:
The Status column provides visibility into the reason for the score change. For example, if a case is resolved, its score will decrease, bringing
down the user score.
The Points column displays the risk score that the case contributed to the user score. The points are calculated according to SmartScore or
Case Scoring Rules.
Recent Login
Recent Authentications
Quarantine files.
Abstract
You can review and manage all files that have been quarantined by the agent due to a security case.
When the agent detects malware on a Windows endpoint, you can take additional precautions to quarantine the file. When the agent quarantines malware, it
moves the file from the location on a local or removable drive to a local quarantine folder (%PROGRAMDATA%\Cyvera\Quarantine) where it isolates the file.
This prevents the file from attempting to run again from the same path or causing any harm to your endpoints.
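The quarantine folder is a fixed Windows path under %PROGRAMDATA%. As a small sketch (assuming the default ProgramData location; on a real endpoint you would resolve the environment variable instead of hardcoding it), the path can be constructed with Python's ntpath module:

```python
import ntpath

# Default %PROGRAMDATA% location on Windows; resolve it from the
# environment on a real endpoint rather than hardcoding.
program_data = r"C:\ProgramData"

# Local quarantine folder documented as %PROGRAMDATA%\Cyvera\Quarantine.
quarantine_dir = ntpath.join(program_data, "Cyvera", "Quarantine")
print(quarantine_dir)  # C:\ProgramData\Cyvera\Quarantine
```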
To evaluate whether an executable file is considered malicious, the agent calculates a verdict using information from the following sources in order of priority:
Local analysis
Enable the agent to automatically quarantine malicious executables by configuring quarantine settings in a Malware prevention profile. For more
information, see Set up malware prevention profiles.
Right-click a specific file from the causality view and select Quarantine. For more information, see ???.
Abstract
For each file, Cortex Cloud receives a file verdict and the WildFire Analysis Report detailing additional information you can use to assess the nature of a file.
For each file, Cortex Cloud receives a file verdict and the WildFire Analysis Report. This report contains detailed sample information and behavior analysis in
different sandbox environments, leading to the WildFire verdict. You can use the report to assess whether the file poses a real threat on an endpoint. The
details in the WildFire analysis report for each event vary depending on the file type and the behavior of the file.
WildFire analysis details are available for files that receive a WildFire verdict. The Analysis Reports section includes the WildFire analysis for each testing
environment based on the observed behavior for the file.
If you are investigating a case in the case detail view, you can see artifact details on the Key Assets & Artifacts tab. Under Artifacts, identify a file with a
WildFire verdict and click WildFire Analysis Report ( ). If you are analyzing an issue, hover over the issue and select Investigate. You can open ( ) the
WildFire report of any file included in the issue's Causality Chain.
NOTE:
Cortex Cloud displays a preview of WildFire reports that were generated within the last two years. To view a report that was generated more
than two years ago, download the report.
On the left side of the report, you can see all the environments in which the WildFire service tested the sample. If a file is low risk and WildFire can easily
determine that it is safe, only static analysis is performed on the file. Select the testing environment to review the summary and additional details. To learn
more about the behavior summary, see WildFire Analysis Reports—Close Up.
If you want to download the WildFire report as it was generated by the WildFire service, click ( ). The report is downloaded in PDF format.
If you know the WildFire verdict is incorrect, for example, WildFire assigned a Malware verdict to a file you wrote and know to be Benign, you can report an
incorrect verdict to Cortex Cloud to request the verdict change.
1. Open the WildFire report and verify the verdict that you are reporting.
4. Under Comment, enter any details that can help us to better understand why you disagree with the verdict.
The threat team will perform further analysis of the sample to determine whether it should be reclassified. If a malware sample is determined to be safe,
the signature for the file is disabled in an upcoming antivirus signature update. If a benign file is determined to be malicious, a new signature is
generated. After the investigation is complete, you will receive an email describing the action that was taken.
Abstract
The Query Builder facilitates threat detection, case expansion, and data analytics for suspected threats.
The Query Builder aids in the detection of threats by allowing you to search for indicators of compromise and suspicious patterns within data sources. It assists
in expanding cases investigations by identifying related events and entities, such as activities associated with specific user accounts or network lateral
movement. In addition, the Query Builder enables data analytics on suspected threats, helping organizations analyze large volumes of data to identify trends,
anomalies, and correlations that may indicate potential security issues.
To support investigation and analysis, you can search all of the data ingested by Cortex Cloud by creating queries in the Query Builder. You can create queries
that investigate leads, expose the root cause of an issue, perform damage assessment, and hunt for threats from your data sources.
Cortex Cloud provides different options in the Query Builder for creating queries:
You can use the Cortex Query Language (XQL) to build complex and flexible queries that search specific datasets or presets, or the entire xdr_data
dataset. With XQL Search you create queries based on stages, functions, and operators. To help you build your queries, Cortex Cloud provides tools in
the interface that provide suggestions as you type, or you can look up predefined queries, common stages and examples. For more information, see
How to build XQL queries.
Abstract
Learn more about how to build XQL queries in the Query Builder.
The Cortex Query Language (XQL) enables you to query data ingested into Cortex Cloud for rigorous endpoint and network event analysis, returning up to 1M
results. To help you create an effective XQL query with the proper syntax, the query field in the user interface provides suggestions and definitions as you type.
XQL forms queries in stages. Each stage performs a specific query operation and is separated by a pipe character (|). Queries require a dataset, or data
source, to run against. Unless otherwise specified, the query runs against the xdr_data dataset, which contains all log information that Cortex Cloud collects
from all Cortex product agents, including EDR data, and PAN NGFW data. In XDM queries, you must specify the dataset mapped to the XDM that you want to
run your query against.
IMPORTANT:
Forensic datasets are not included by default in XQL query results, unless the query is explicitly defined to use a forensic dataset.
In a dataset query, if you are running your query against a dataset that has been set as the default, there is no need to specify a dataset; otherwise, specify a
dataset in your query. The Dataset Queries page lists the available datasets, depending on system configuration.
NOTE:
Users with different dataset permissions can receive different results for the same XQL query.
An administrator or a user with a predefined user role can create and view queries built with an unknown dataset that currently does not exist in Cortex
Cloud. All other users can only create and view queries built with an existing dataset.
When you have more than one dataset or lookup, you can change your default dataset by navigating to Settings → Configurations → Data
Management → Dataset Management, right-click on the appropriate dataset, and select Set as default. For more information about setting default
datasets, see Dataset management.
The basic syntax structure for querying datasets that are not mapped to the XDM is:
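As a general sketch, such a query names its data source and then chains stages with the pipe character; the stage placeholders below stand for any supported stage:

dataset = <dataset name>
| <stage>
| <stage>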
You can specify a dataset using one of the following formats, which is based on the data retention offerings available in Cortex Cloud.
Hot Storage queries use the format dataset = <dataset name>. This is the default option.
Example 3.
dataset = xdr_data
Cold Storage queries use the format cold_dataset = <dataset name>.
Example 4.
cold_dataset = xdr_data
NOTE:
You can build a query that investigates data in both a cold dataset and a hot dataset in the same query. In addition, as the hot storage dataset format is
the default option and represents the fully searchable storage, this format is used throughout this guide for investigation and threat hunting. For more
information on hot and cold storage, see Dataset management.
When you use the hot storage default format, the query returns every xdr_data record contained in your Cortex Cloud instance over the time range that you
provide in the Query Builder user interface. This can be a large amount of data, which may take a long time to retrieve. You can use a limit stage to specify
how many records you want to retrieve.
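For example, a minimal sketch that caps retrieval at an arbitrary 1,000 records:

dataset = xdr_data
| limit 1000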
There is no practical limit to the number of stages that you can specify. See Stages for information on all the supported stages.
In the xdr_data dataset, every user field included in the raw data for network, authentication, and login events has an equivalent normalized user field
associated with it that displays the user information in the following standardized format:
<company domain>\<username>
For example, the login_data field has the login_data_dst_normalized_user field to display the content in the standardized format. To ensure the most
accurate results, we recommend that you use these normalized_user fields when building your queries.
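As an illustrative sketch, the following query filters on the normalized field from the example above for a hypothetical username:

dataset = xdr_data
| filter login_data_dst_normalized_user contains "jdoe"
| limit 100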
Additional components
XQL queries can contain different components, such as functions and stages, depending on the type of query you want to build. For a complete list of the
syntax options available with example queries, see Stages and Functions.
Abstract
Learn more about some important information before getting started with XQL queries.
Before you begin running XQL queries, consider the following information:
Cortex Cloud offers features in the XQL search interface to help you to build queries. For more information, see Useful XQL user interface features.
Before you run a query, review this list to better understand query behavior and results. For more information, see Expected results when querying fields.
If you have existing Splunk queries, you can translate them to XQL. For more information, see Translate to XQL.
Abstract
The user interface contains several useful features for querying data, and for viewing results:
Translate to XQL: Converts your existing Splunk queries to the XQL syntax. When you enable Translate to XQL, both an SPL query field and an XQL
query field are displayed. You can easily add a Splunk query, which is converted automatically into XQL in the XQL query field. This option is disabled by
default.
Query Results: After you create and run an XQL query, you can view, filter, and visualize your Query Results.
XQL Helper: Describes common stage commands and provides examples that you can use to build a query.
Query Library: Contains common, predefined queries that you can use or modify to your liking. It also includes a personal query library for saving
and managing your own queries, which you can share with others; queries can likewise be shared with you. For more information, see Manage your personal
query library.
Schema: Contains schema information for every field found in the result set, across all datasets involved in the query. This information includes the field
name, data type, descriptive text (if available), and the dataset that contains the field.
Abstract
Cortex Cloud includes built-in mechanisms for mitigating long-running queries, such as default limits on the maximum number of returned results. The
following suggestions can help you to streamline your queries:
When no limit is explicitly stated in the query, a query returns a maximum of 1,000,000 results by default. Queries based on XQL query entities
are limited to 10,000 results. Adding a smaller limit can greatly reduce the response time.
Example 5.
dataset = microsoft_windows_raw
| fields *host*
| limit 100
Use a small time frame for queries by specifying the specific date and time in the custom option, instead of picking the nearest larger option available.
Use filters that exclude data, along with other possible filters.
Select the specific fields that you would like to see in the query results.
Abstract
If specific fields are stated in the fields stage, those exact fields will be returned.
The _time system field will not be added to queries that contain the comp stage.
All current system fields will be returned, even if they are not stated in the query.
Each new column in the result set created by the alter stage will be added as the last column. You can specify a different column order by modifying the
field order in the fields stage of the query.
Each new column in the result set created by the comp stage will be added as the last column. Other fields that are not in the group by clause or the
calculated column will be removed from the result set, including the core fields and the _time system field.
When no limit is explicitly stated in a datamodel query, a maximum of 1,000,000 results are returned by default. When a limit is applied to results using
the limit stage, it is indicated in the user interface.
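As a sketch of the comp behavior described above, the following query returns only the group-by column (event_type) and the computed count; all other fields, including the _time system field, are dropped from the result set:

dataset = xdr_data
| comp count(_product) by event_type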
Abstract
Build Cortex Query Language (XQL) queries to analyze raw log data stored in Cortex Cloud. You can query datasets using specific syntax.
1. From Cortex Cloud, select Investigation & Response → Search → Query Builder.
2. Click XQL.
3. (Optional) To change the default time period against which to run your query, at the top right of the window, select the required time period, or create a
customized one.
NOTE:
Whenever the time period is changed in the query window, the config timeframe is automatically set to that time period, but it is not visible as
part of the query. The timeframe appears in the query only if you type the config timeframe in manually.
4. (Optional) To translate Splunk queries to XQL queries, enable Translate to XQL. If you choose to use this feature, enter your Splunk query in the Splunk
field, click the arrow icon ( ) to convert to XQL, and then go to Step 6.
5. Create your query by typing in the query field. Relevant commands, their definitions, and operators are suggested as you type. When multiple
suggestions are displayed, use the arrow keys to select a suggestion and to view an explanation for each one.
a. Specify a dataset only if you are running your query against a dataset that you have not set as default. Otherwise, the query runs
against the xdr_data dataset. For more information, see How to build XQL queries.
Example 6.
dataset = xdr_data
b. Press Enter, and then type the pipe character (|). Select a command, and complete the command using the suggested options.
Example 7.
dataset = xdr_data
| filter agent_os_type = ENUM.AGENT_OS_MAC
| limit 250
6. Run the query immediately, or schedule it to run at a specific date and time by selecting the calendar icon ( ).
7. (Optional) The Save As options save your query for future use:
BIOC Rule: When compatible, saves the query as a BIOC rule. The XQL query must contain a filter for the event_type field.
Correlation Rule: When compatible, saves the query as a Correlation Rule. For more information, see What's a correlation rule?.
Query to Library: Saves the query to your personal query library. For more information, see Manage your personal query library.
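As an illustrative sketch, the following query includes the event_type filter required for saving it as a BIOC rule; the process name is arbitrary, and the field names are assumptions based on the xdr_data schema:

dataset = xdr_data
| filter event_type = ENUM.PROCESS and action_process_image_name = "powershell.exe"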
TIP:
While the query is running, you can navigate away from the page. A notification is sent when the query has finished. You can also Cancel the query or run a
new query, where you have the option to Run only new query (cancel previous) or Run both queries.
Abstract
Learn more about reviewing the results returned from an XQL query.
The results of a Cortex Query Language (XQL) query are displayed in a tab called Query Results.
NOTE:
It's also possible to graph the results displayed. For more information, see Graph query results.
Use the following options in the Query Results tab to investigate your query results:
Option Use
Table tab: Displays results in rows and columns according to the entity fields. Columns can be filtered using their filter icons.
More options (kebab icon ) displays table layout options, which are divided into different sections:
In the Appearance section, you can Show line breaks for any text field in the Query Results. By default, the text in these fields is
wrapped unless the Show line breaks option is selected. In addition, you can change the way rows and columns are displayed.
In the Log Format section, you can change the way that logs are displayed:
JSON: Condensed JSON format with key value distinctions. NULL values are not displayed.
TREE: Dynamic view of the JSON hierarchy with the option to collapse and expand the different hierarchies.
In the Search column section, you can find a specific column; enable or disable display of columns using the checkboxes.
Show and hide rows according to a specific field in a specific event: select a cell, right-click it, and then select either Show rows with … or
Hide rows with …
Graph tab: Use the Chart Editor to visualize the query results.
Advanced tab: Displays results in a table format that aggregates the entity fields into one column. You can change the layout, decide whether to Show
line breaks for any text field in the results table, and change the log format from the menu.
Show more in the bottom left corner of each row opens the Expanded View of the event results, which also includes NULL values. Here,
you can toggle between the JSON and Tree views, search, and Copy to clipboard.
More options ( ) works in a similar way to how it works on the Table tab.
Log format options change the way that logs are displayed:
JSON: Condensed JSON format with key value distinctions. NULL values are not displayed.
TREE: Dynamic view of the JSON hierarchy with the option to collapse and expand the different hierarchies.
Free text search: Searches the query results for text that you specify. Click the Free text search icon to reveal or hide the free text
search field.
Filter: Enables you to filter on a particular field using the interface that is displayed, where you specify your filter criteria.
For integer, boolean, and timestamp (such as _time) fields, we recommend that you use the Filter instead of the Free text search, in order
to retrieve the most accurate query results.
Fields menu: Filters query results. To quickly set a filter, Cortex Cloud displays the top ten results from which you can choose to build your filter. This
option is only available in the Table and Advanced tabs.
From within the Fields menu, click on any field (excluding JSON and array fields) to see a histogram of all the values found in the result set
for that field. This histogram includes:
A count of the total number of times a value was found in the result set.
The value's frequency as a percentage of the total number of values found for the field.
NOTE:
In order for Cortex Cloud to provide a histogram for a field, the field must not contain an array or a JSON object.
You can also use the Save As options from the Query Results tab:
BIOC Rule: When compatible, saves the query as a BIOC rule. The XQL query must contain a filter for the event_type field.
Correlation Rule: When compatible, saves the query as a Correlation Rule. For more information, see What's a correlation rule?.
Query to Library: Saves the query to your personal query library. For more information, see Manage your personal query library.
You can continue investigating the query results in the Causality View or Timeline by right-clicking the event and selecting the desired view. This option is
available for the following types of events:
Network
File
Registry
Injection
Load image
System calls
For network stories, you can pivot to the Causality View only. For Cortex Cloud events and Cloud Audit Logs, you can pivot only to the Cloud Causality
View, while for software-as-a-service (SaaS) related issues for audit stories, such as Office 365 audit logs and normalized logs, you can pivot only to the
SaaS Causality View.
Add a file path to your existing Malware Profile allowed list by right-clicking a <path> field, such as target_process_path, and selecting Add <path type> to
malware profile allow list.
Abstract
To help you easily convert your existing Splunk queries to the Cortex Query Language (XQL) syntax, Cortex Cloud includes a toggle called Translate to XQL in
the query field in the user interface. When you are building your XQL query and this option is selected, both an SPL query field and an XQL query field are
displayed, so you can easily add a Splunk query, which is converted to XQL in the XQL query field. This option is disabled by default, so only the XQL query
field is displayed.
IMPORTANT:
This feature is still in beta, and not all Splunk queries can be converted to XQL. Support for translating additional Splunk queries will be added in upcoming
releases.
The following table details the supported functions in Splunk that can be converted to XQL in Cortex Cloud with an example of a Splunk query and the
resulting XQL query. In each of these examples, the xdr_data dataset is used.
bin
  Splunk: index = xdr_data | bin _time span=5m
  XQL: dataset in (xdr_data) | bin _time span=5m
count
  Splunk: index=xdr_data | stats count(_product) BY _time
  XQL: dataset in (xdr_data) | comp count(_product) by _time
ctime
  Splunk: index=xdr_data | convert ctime(field) as field
  XQL: dataset in (xdr_data) | alter field = format_timestamp(
earliest
  Splunk: index = xdr_data earliest=24d
  XQL: dataset in (xdr_data) | filter _time >= to_timestamp(ad
eval
  Splunk: index=xdr_data | eval field = "test"
  XQL: dataset in (xdr_data) | alter field = "test"
floor
  Splunk: index=xdr_data | eval floor_test = floor(1.9)
  XQL: dataset in (xdr_data) | alter floor_test = floor(1.9)
json_extract
  Splunk: index= xdr_data | eval London=json_extract(dfe_labels,"dfe_labels{0}")
  XQL: dataset in (xdr_data) | alter London = dfe_labels -> df
join
  Splunk: join agent_hostname [index = xdr_data]
  XQL: join type=left conflict_strategy=right (dataset in (xdr
len
  Splunk: index = xdr_data | where uri != null | eval length = len(agent_ip_address)
  XQL: dataset in (xdr_data) | filter agent_ip_addresses != nu len(agent_ip_addresses)
lower
  Splunk: index = xdr_data | eval field = lower("TEST")
  XQL: dataset in (xdr_data) | alter field = lowercase("TEST")
md5
  Splunk: index=xdr_data | eval md5_test = md5("test")
  XQL: dataset in (xdr_data) | alter md5_test = md5("test")
mvcount
  Splunk: index = xdr_data | where http_data != null | eval http_data_array_length = mvcount(http_data)
  XQL: dataset in (xdr_data) | filter http_data != null | alte
mvexpand
  Splunk: index = xdr_data | mvexpand dfe_labels limit = 100
  XQL: dataset in (xdr_data) | arrayexpand dfe_labels limit 10
pow
  Splunk: index=xdr_data | eval pow_test = pow(2, 3)
  XQL: dataset in (xdr_data) | alter pow_test = pow(2, 3)
relative_time(X,Y)
  Splunk: index ="xdr_data" | where _time > relative_time(now(),"-7d@d")
  XQL: dataset in (xdr_data) | filter _time > to_timestamp(add(to_epoch(date_floor(current_time(
  Splunk: index ="xdr_data" | where _time > relative_time(now(),"+7d@d")
  XQL: dataset in (xdr_data) | filter _time > to_timestamp(add(to_epoch(date_floor(current_time(
replace
  Splunk: index= xdr_data | eval description = replace(agent_hostname,"\("."NEW")
  XQL: dataset in (xdr_data) | alter description = replace(age
round
  Splunk: index=xdr_data | eval round_num = round(3.5)
  XQL: dataset in (xdr_data) | alter round_num = round(3.5)
search
  Splunk: index = xdr_data | eval ip="192.0.2.56" | search ip="192.0.2.0/24"
  XQL: dataset in (xdr_data) | alter ip = "192.0.2.56" | filte
sha256
  Splunk: index = xdr_data | eval sha256_test = sha256("test")
  XQL: dataset in (xdr_data) | alter sha256_test = sha256("tes
sort (ascending order)
  Splunk: index = xdr_data | sort action_file_size
  XQL: dataset in (xdr_data) | sort asc action_file_size | lim
sort (descending order)
  Splunk: index = xdr_data | sort -action_file_size
  XQL: dataset in (xdr_data) | sort desc action_file_size | li
spath
  Splunk: index = xdr_data | spath output=myfield input=action_network_http path=headers.User-Agent
  XQL: dataset in (xdr_data) | alter myfield = json_extract(ac
split
  Splunk: index = xdr_data | where mac != null | eval split_mac_address = split(mac, ":")
  XQL: dataset in (xdr_data) | filter mac != null | alter
stats dc
  Splunk: index = xdr_data | stats dc(_product) BY _time
  XQL: dataset in (xdr_data) | comp count_distinct(_product) b
sum
  Splunk: index=xdr_data | where action_file_size != null | stats sum(action_file_size) by _time
  XQL: dataset in (xdr_data) | filter action_file_size != null
table
  Splunk: index = xdr_data | table _time, agent_hostname, agent_ip_addresses, _product
  XQL: dataset in (xdr_data) | fields _time, agent_hostname, a
upper
  Splunk: index=xdr_data | eval field = upper("test")
  XQL: dataset in (xdr_data) | alter field = uppercase("test")
var
  Splunk: index=xdr_data | stats var(event_type) by _time
  XQL: dataset in (xdr_data) | comp var(event_type) by _time
2. Toggle to Translate to XQL, where both a SPL query field and XQL query field are displayed.
The XQL query field displays the equivalent Splunk query using the XQL syntax.
You can now decide what to do with this query based on the instructions explained in Create XQL query.
Abstract
Cortex Cloud enables you to generate helpful visualizations of your XQL query results.
LICENSE TYPE:
Building Cortex Query Language (XQL) queries in the Query Builder requires a Data Collection add-on.
To help you better understand your Cortex Query Language (XQL) query results and share your insights with others, Cortex Cloud enables you to generate
graphs and outputs of your query data directly from the query results page.
Example 8.
dataset = xdr_data
| fields action_total_upload, _time
| limit 10
The query returns action_total_upload, a number field, and _time, a timestamp field, for up to 10 results.
Navigate to Query Results → Chart Editor ( ) to manually build and view the graph using the selected graph parameters:
Main
Graph Type: Type of graphs and output options available: Area, Bubble, Column, Funnel, Gauge, Line, Map, Pie, Scatter, Single Value, or
Word Cloud.
Subtype and Layout: Depending on the selected type of graph, choose from the available display options.
Data
Depending on the selected type of graph, customize the Color, Font, and Legend.
You can express any chart preferences in XQL. This is helpful when you want to save your chart preferences in a query and generate a chart every time
that you run it. To define the parameters:
Example 9.
view graph type = column subtype = grouped header = "Test 1" xaxis = _time yaxis = _product,action_total_upload
Select ADD TO QUERY to insert your chart preferences into the query itself.
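Combining Example 8 with a view clause produces a query that renders its own chart every time it runs; the header text below is arbitrary:

dataset = xdr_data
| fields action_total_upload, _time
| limit 10
| view graph type = column subtype = grouped header = "Upload volume" xaxis = _time yaxis = action_total_upload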
To easily track your query results, you can create custom widgets based on the query results. The custom widgets you create can be used in your
custom dashboards and reports. For more information, see ???.
Select Save to Widget Library to pivot to the Widget Library and generate a custom widget based on the query results.
Abstract
Learn more about the Cortex Query Language (XQL) entities available in the Query Builder.
With the Query Builder, you can build complex queries for entities and entity attributes so that you can surface and identify connections between them. Cortex
Cloud provides Cortex Query Language (XQL) queries for different types of entities in the Query Builder that search predefined datasets. The Query Builder
searches the raw data and logs stored in the Cortex Cloud tenant and, for the entities and attributes you specify, returns up to 1,000,000 results.
The Query Builder provides queries for the following types of entities:
Process: Search on process execution and injection by process name, hash, path, command line arguments, and more. See Create process query.
File: Search on file creation and modification activity by file name and path. See Create file query.
Network: Search network activity by IP address, port, host name, protocol, and more. See Create network query.
Image Load: Search on module load into process events by module IDs and more. See Create image load query.
Registry: Search on registry creation and modification activity by key, key value, path, and data. See Create registry query.
Event Log: Search Windows event logs and Linux system authentication logs by username, log event ID (Windows only), log level, and message. See
Create event log query.
Network Connections: Search security events from firewall logs and endpoint raw data across your network. See Create network connections query.
Authentications: Search on authentication events by identity, target, outcome, and more. See Create authentication query.
All Actions: Search across all network, registry, file, and process activity by endpoint or process. See Query across all entities.
The Query Builder also provides flexibility for both on-demand query generation and scheduled queries.
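Although the entity queries are built in the user interface, an equivalent search can be sketched directly in XQL. As an illustrative example of a network entity search, the field names below are assumptions based on the xdr_data schema, so adjust them to your data:

dataset = xdr_data
| filter event_type = ENUM.NETWORK and action_remote_port = 443
| fields _time, agent_hostname, action_remote_ip, action_remote_port
| limit 100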
Abstract
From the Query Builder, you can investigate authentication activity across all ingested authentication logs and data.
1. From Cortex Cloud, select Investigation & Response → Search → Query Builder.
2. Select AUTHENTICATION.
By default, Cortex Cloud will return the activity that matches all the criteria you specify. To exclude a value, toggle the = option to =!.
Select the calendar icon to schedule a query to run on or before a specific date, or select Run to run the query immediately and view the results in the
Query Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
5. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate Windows and Linux event log attributes and investigate event logs across endpoints.
From the Query Builder you can search Windows and Linux event log attributes and investigate event logs across endpoints with a Cortex XDR agent installed.
1. From Cortex Cloud, select Investigation & Response → Search → Query Builder.
3. Enter the search criteria for your Windows or Linux event log query.
Define any event attributes for which you want to search. By default, Cortex Cloud will return the events that match the attribute you specify. To exclude an
attribute value, toggle the = option to =!. Attributes are:
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
5. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
Select the calendar icon to schedule a query to run on or before a specific date, or select Run to run the query immediately and view the results in the
Query Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
7. When you are ready, view the results of the query. For more information, see Review XQL query results.
Abstract
Learn more about creating a query to investigate the connections between file activity and endpoints.
From the Query Builder you can investigate connections between file activity and endpoints. The Query Builder searches your logs and endpoint data for the
file activity that you specify. To search for files on endpoints instead of file-related activity, build an XQL query. For more information, see How to build XQL
queries.
1. From Cortex Cloud, select Investigation & Response → Search → Query Builder.
2. Select FILE.
File attributes: Define any additional file attributes for which you want to search. Use a pipe (|) to separate multiple values (for example
notepad.exe|chrome.exe). By default, Cortex Cloud will return the events that match the attribute you specify. To exclude an attribute value,
toggle the = option to =!. Attributes are:
ACTION_IS_VFS: Denotes if the file is on a virtual file system on the disk. This is relevant only for files that are written to disk.
DEVICE TYPE: Type of device used to run the file: Unknown, Fixed, Removable Media, CD-ROM.
DEVICE SERIAL NUMBER: Serial number of the device type used to run the file.
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
Select +PROCESS and specify one or more of the following attributes for the acting (parent) process.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for process, Causality, and OS actors—The causality actor—also referred to as the causality group owner (CGO)—is the parent
process in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent
process that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating
processes. To configure different attributes for the parent or initiating process, clear this option.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
7. Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
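The same file search can also be expressed directly in XQL and run from XQL Search. The following sketch is illustrative only; the dataset and field names (xdr_data, action_file_name, and so on) are assumptions that should be verified against your tenant's schema:

```xql
dataset = xdr_data
| filter event_type = ENUM.FILE and action_file_name in ("notepad.exe", "chrome.exe")
| fields agent_hostname, actor_process_image_name, action_file_name, action_file_path
| limit 100
```

Here the in (...) clause plays the role of the pipe-separated value list in the Query Builder.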
Abstract
Learn more about creating a query to investigate the connections between image load activity, acting processes, and endpoints.
From the Query Builder, you can investigate connections between image load activity, acting processes, and endpoints.
1. From Cortex Cloud, select Investigation & Response → Search → Query Builder.
3. Enter the search criteria for the image load activity query.
Identifying information about the image module: Full Module Path, Module MD5, or Module SHA256.
By default, Cortex Cloud will return the activity that matches all the criteria you specify. To exclude a value, toggle the = option to !=.
4. (Optional) To limit the scope to a specific source, click the + to the right of the value and specify the exception value.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for both the process and the Causality actor: The causality actor—also referred to as the causality group owner (CGO)—is the parent
process in the execution chain that the app identified as being responsible for initiating the process tree. Select this option if you want to apply the same
search criteria to the causality actor. If you clear this option, you can then configure different attributes for the causality actor.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
7. Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
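As a rough XQL equivalent of this template, you could filter image load events by module hash. The field names used here (action_module_sha256, action_module_path) are assumed for illustration and may differ in your schema:

```xql
dataset = xdr_data
| filter event_type = ENUM.LOAD_IMAGE and action_module_sha256 = "<module SHA256>"
| fields agent_hostname, actor_process_image_name, action_module_path
| limit 100
```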
Abstract
Learn more about creating a query to investigate the connections between firewall logs, endpoints, and network activity.
From the Query Builder, you can investigate network events stitched across endpoints and the Palo Alto Networks Next-Generation Firewall logs.
1. From Cortex Cloud, select Investigation & Response → Search → Query Builder.
Network attributes: Define any additional network attributes for which you want to search. Use a pipe (|) to separate multiple values (for example
80|8080). By default, Cortex Cloud will return the events that match the attribute you specify. To exclude an attribute value, toggle the = option to
!=. Options are:
PROTOCOL: Network transport protocol over which the traffic was sent.
SESSION STATUS
PRODUCT
VENDOR
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
4. (Optional) To limit the scope to a specific source, click the + to the right of the value and specify the exception value.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for both the process and the Causality actor: The causality actor—also referred to as the causality group owner (CGO)—is the parent
process in the execution chain that the app identified as being responsible for initiating the process tree. Select this option if you want to apply the
same search criteria to the causality actor. If you clear this option, you can then configure different attributes for the causality actor.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
Destination: TARGET HOST, NAME, PORT, HOST NAME, PROCESS USER NAME, HOST IP, CMD, HOST OS, MD5, PROCESS PATH, USER ID,
SHA256, SIGNATURE, or PID
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
7. Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
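Stitched endpoint and firewall network events can also be queried in XQL. The following is a hedged sketch; the event type and field names are assumptions that may not match your data sources:

```xql
dataset = xdr_data
| filter event_type = ENUM.STORY and action_remote_port in (80, 8080)
| fields agent_hostname, action_local_ip, action_remote_ip, action_remote_port
| limit 100
```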
Abstract
Learn more about creating a query to investigate the connections between network activity, acting processes, and endpoints.
From the Query Builder, you can investigate connections between network activity, acting processes, and endpoints.
1. From Cortex Cloud, select Investigation & Response → Search → Query Builder.
Network traffic type: Select the type or types of network traffic you want to search: Incoming, Outgoing, or Failed.
Network attributes: Define any additional network attributes for which you want to search. Use a pipe (|) to separate multiple values (for example
80|8080). By default, Cortex Cloud will return the events that match the attribute you specify. To exclude an attribute value, toggle the = option to
!=. Options are:
NOTE:
When you run the query, depending on the outcome of the results, the value specified in this field might be displayed in the dst_ip field in
the query results. This occurs if an RDP event is recorded whereby a user connected from the source IP to the destination IP.
LOCAL IP: Local IP address related to the communication. Matches can return additional data if a machine has more than one NIC.
PROTOCOL: Network transport protocol over which the traffic was sent.
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
4. (Optional) To limit the scope to a specific source, click the + to the right of the value and specify the exception value.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for process, Causality, and OS actors: The causality actor—also referred to as the causality group owner (CGO)—is the parent process
in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent process
that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating
processes. To configure different attributes for the parent or initiating process, clear this option.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
7. Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
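A comparable network-activity search can be sketched in XQL, with a != filter mirroring the exclusion toggle in the Query Builder. Dataset and field names are illustrative assumptions:

```xql
dataset = xdr_data
| filter event_type = ENUM.NETWORK and action_remote_port in (80, 8080)
| filter actor_process_image_name != "chrome.exe"
| fields agent_hostname, action_local_ip, action_remote_ip, actor_process_image_name
| limit 100
```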
Abstract
Learn more about creating a query to investigate connections between processes, child processes, and endpoints.
From the Query Builder you can investigate connections between processes, child processes, and endpoints.
For example, you can create a process query to search for processes executed on a specific endpoint.
1. From Cortex Cloud, select Investigation & Response → Search → Query Builder.
2. Select PROCESS.
Process action: Select the type of process action you want to search: On process Execution or Injection into another process.
Process attributes—Define any additional process attributes for which you want to search.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
By default, Cortex Cloud will return results that match the attribute you specify. To exclude an attribute value, toggle the operator from = to !=.
Attributes are:
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
PROCESS_FILE_INFO: Metadata of the process file, including file property details, file entropy, company name, encryption status, and
version number.
PROCESS_SCHEDULED_TASK_NAME: Name of the task scheduled by the process to run in the Task Scheduler.
DEVICE TYPE: Type of device used to run the process: Unknown, Fixed, Removable Media, CD-ROM.
DEVICE SERIAL NUMBER: Serial number of the device type used to run the process.
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
Select +PROCESS and specify one or more of the following attributes for the acting (parent) process.
CMD: Command-line used to initiate the parent process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signed, Unsigned, N/A, Invalid Signature, Weak Hash
Run search on process, Causality and OS actors: The causality actor—also referred to as the causality group owner (CGO)—is the parent process
in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent process
that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating
processes. To configure different attributes for the parent or initiating process, clear this option.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
INSTALLATION TYPE can be either Cortex XDR agent or Data Collector. For more information about the data collector applet, see Activate Pathfinder.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
7. Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
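The process template above corresponds loosely to an XQL query over process execution events. Field names such as causality_actor_process_image_name are assumed for illustration and should be checked against your schema:

```xql
dataset = xdr_data
| filter event_type = ENUM.PROCESS and action_process_image_name = "powershell.exe"
| filter causality_actor_process_image_name = "winword.exe"
| fields agent_hostname, action_process_image_name, action_process_image_command_line
| limit 100
```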
Abstract
Learn more about creating a query to investigate connections between registry activity, processes, and endpoints.
From the Query Builder you can investigate connections between registry activity, processes, and endpoints.
1. From Cortex Cloud, select Investigation & Response → Search → Query Builder.
2. Select REGISTRY.
Registry attributes: Define any additional registry attributes for which you want to search. By default, Cortex Cloud will return the events that match
the attribute you specify. To exclude an attribute value, toggle the = option to !=. Attributes are:
To specify an additional exception (match this value except), click the + to the right of the value and specify the exception value.
4. (Optional) To limit the scope to a specific source, click the + to the right of the value and specify the exception value.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
CMD: Command-line used to initiate the process including any arguments, up to 128 characters.
SIGNATURE: Signing status of the parent process: Signature Unavailable, Signed, Invalid Signature, Unsigned, Revoked, Signature Fail.
Run search for process, Causality, and OS actors: The causality actor—also referred to as the causality group owner (CGO)—is the parent process
in the execution chain that the Cortex XDR agent identified as being responsible for initiating the process tree. The OS actor is the parent process
that creates an OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating
processes. To configure different attributes for the parent or initiating process, clear this option.
Specify one or more of the following attributes: Use a pipe (|) to separate multiple values.
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
6. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
7. Select the calendar icon to schedule a query to run on or before a specific date or Run to run the query immediately and view the results in the Query
Center.
While the query is running, you can always navigate away from the page and a notification is sent when the query completes. You can also Cancel the
query or run a new query, where you have the option to Run only new query (cancel previous) or Run both queries.
8. When you are ready, view the results of the query. For more information, see Review XQL query results.
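A registry query of this kind can be sketched in XQL as follows; the registry field names are assumptions to verify against your tenant:

```xql
dataset = xdr_data
| filter event_type = ENUM.REGISTRY and action_registry_key_name contains "CurrentVersion\Run"
| fields agent_hostname, actor_process_image_name, action_registry_key_name, action_registry_value_name
| limit 100
```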
Abstract
From the Cortex Cloud management console, you can search for endpoints and processes across all endpoint activity.
From the Query Builder you can perform a simple search for hosts and processes across all file events, network events, registry events, process events, event
logs for Windows, and system authentication logs for Linux.
1. From Cortex Cloud, select Investigation & Response → Search → Query Builder.
Select Add Process to your search, and specify one or more of the following attributes for the acting (parent) process. Use a pipe (|) to separate multiple
values. Use an asterisk (*) to match any string of characters.
Field Description
CMD Command line used to initiate the parent process including any arguments, up to 128 characters.
SIGNATURE Signing status of the parent process: Signed, Unsigned, N/A, Invalid Signature, Weak Hash.
Run search on process, Causality and OS actors: The causality actor, also referred to as the causality group owner (CGO), is the parent process in
the execution chain that the agent identified as being responsible for initiating the process tree. The OS actor is the parent process that creates an
OS process on behalf of a different initiator. By default, this option is enabled to apply the same search criteria to initiating processes. To configure
different attributes for the parent or initiating process, clear this option.
Select Add Host to your search and specify one or more of the following attributes:
HOST: HOST NAME, HOST IP address, HOST OS, HOST MAC ADDRESS, or INSTALLATION TYPE.
PROCESS: NAME, PATH, CMD, MD5, SHA256, USER NAME, SIGNATURE, or PID.
Use a pipe (|) to separate multiple values. Use an asterisk (*) to match any string of characters.
5. Specify the time period for which you want to search for events.
Options are Last 24H (hours), Last 7D (days), Last 1M (month), or select a Custom time period.
6. Select the calendar icon to schedule a query to run on or before a specific date or Run the query immediately and view the results in the Query Center.
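The simple host and process search maps naturally onto a broad XQL filter across all event types. Host and field names below are illustrative:

```xql
dataset = xdr_data
| filter agent_hostname = "HOST-01" and actor_process_image_name = "powershell.exe"
| fields event_type, agent_hostname, actor_process_image_name, actor_process_command_line
| limit 100
```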
Abstract
Learn more about viewing the results of a query, modifying a query, and rerunning queries from Query Center.
The Query Center displays information about all queries that were run in the Query Builder. From the Query Center you can manage your queries, view query
results, and adjust and rerun queries. Right-click a query to see the available options.
The Query Description column displays the parameters that were defined for a query. If necessary, use the Filter to reduce the number of queries that
Cortex Cloud displays.
Queries that were created from a Query Builder template are prefixed with the template name.
4. (Optional) Export to file to export the results to a tab-separated values (TSV) file.
Right-click a value in the results table to see the options for further investigation.
Modify a query
After you run a query, you might need to change your search parameters to refine the search results or correct a search parameter. You can modify a query
from the Results page:
For queries created in XQL, the Results page includes the XQL query builder with the defined parameters. Modify the query and Run, schedule, or save
the query.
For queries created with a Query Builder template, the defined parameters are shown at the top of the Results page. Select Back to edit to modify the
query with the template format or Continue in XQL to open the query in XQL.
If you want to rerun a query, you can either schedule it to run on or before a specific date, or you can rerun it immediately. Cortex Cloud creates a new query in
the Query Center, and when the query completes, it displays a notification in the notification bar.
To rerun a query immediately, right-click anywhere in the query and then select Rerun Query.
1. In the Query Center, right-click anywhere in the query and then select Schedule.
2. Choose a schedule option and the date and time that the query should run:
Cortex Cloud creates a new query and schedules it to run on or by the selected date and time.
4. View the status of the scheduled query on the Scheduled Queries page.
You can also make changes to the query, edit the frequency, view when the query will next run, or disable the query. For more information, see Manage
scheduled queries.
Abstract
NOTE:
Some fields are exposed by default and others are hidden. An asterisk (*) appears beside every field that is exposed by default.
Field Description
Native search has been deprecated; this field allows you to view data for queries
performed before deprecation.
COMPUTE UNIT USAGE Number of query units that were used to execute the API query and Cold Storage
query.
EXECUTION ID Unique identifier of Cortex Query Language (XQL) queries in the tenant. The
identifier is generated for queries executed in Cortex Cloud and via the XQL query API.
PUBLIC API Whether the source executing the query was an XQL query API.
QUERY NAME* For saved queries, the Query Name identifies the query specified by the
administrator.
For scheduled queries, the Query Name identifies the auto-generated name
of the parent query. Scheduled queries also display an icon to the left of the
name to indicate that the query is recurring.
Queued: The query is queued and will run when there is an available slot.
Running
Failed
Partially completed: The query was stopped after exceeding the maximum
number of permitted results. By default, a query returns a maximum of
1,000,000 results when no limit is explicitly stated in the query. Queries
based on XQL query entities are limited to 10,000 results. To reduce the
number of results returned, you can adjust the query settings and rerun.
Completed
SIMULATED COMPUTE UNITS Number of query units that were used to execute the Hot Storage query.
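Because an unbounded query is capped at 1,000,000 results by default, it is good practice to state an explicit limit stage in XQL. A minimal sketch (dataset name is illustrative):

```xql
dataset = xdr_data
| filter event_type = ENUM.PROCESS
| limit 500
```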
Abstract
The Scheduled Queries page displays information about your scheduled and recurring queries. From this page, you can edit scheduled query parameters,
view previous executions, disable, and remove scheduled queries. Right-click a query to see the available options.
2. Locate the scheduled query for which you want to view previous executions.
3. Right-click anywhere in the query row, and select Show executed queries.
Abstract
The table below lists the common fields in the Scheduled Queries page.
NOTE:
Some fields are exposed by default and others are hidden. An asterisk (*) appears beside every field that is exposed by default.
Field Description
Native search has been deprecated; this field allows you to view data for queries performed before
deprecation.
NEXT EXECUTION For queries that are scheduled to run at a specific frequency, this displays the next execution time.
For queries that were scheduled to run at a specific time and date, this field will show None.
PUBLIC API Whether the source executing the query was an XQL query API.
QUERY NAME For saved queries, the Query Name identifies the query specified by the administrator.
For scheduled queries, the Query Name identifies the auto-generated name of the parent query.
Scheduled queries also display an icon to the left of the name to indicate that the query is
recurring.
SCHEDULE TIME Frequency or time at which the query was scheduled to run.
Abstract
As part of the Query Library, Cortex Cloud provides a personal library for saving and managing your own queries.
As part of the Query Library, Cortex Cloud provides a personal query library for saving and managing your own queries. When creating a query in XQL Search
or managing your queries from the Query Center, you can save queries to your personal library. You can also decide whether a query is shared with others
(on the same tenant) in their Query Library, or unshare it so that it is only visible to you. You can also view the queries that are shared by others (on the same
tenant) in your Query Library.
The queries listed in your Query Library have different icons to help you identify the different states of the queries:
The Query Library contains a powerful search mechanism that enables you to search in any field related to the query, such as the query name, description,
creator, query text, and labels. In addition, adding a label to your query enables you to search for these queries using these labels in the Query Library.
2. Locate the query that you want to save to your personal query library.
3. Right-click anywhere in the query row, and select Save query to library.
Query Name: Specify a unique name for the query. Query names must be unique in both private and shared lists, which includes other people’s
queries.
Labels (Optional): Specify a label that is associated with your query. You can select a label from the list of predefined labels or add your label and
then select Create Label. Adding a label to your query enables you to search for queries using this label in the Query Library.
Share with others: You can either keep the query private and accessible only by you (default) or move the Share with others toggle so that
other users on the same tenant can access the query in their Query Library.
4. Click Save.
A notification appears confirming that the query was saved successfully to the library, and closes on its own after a few seconds.
The query that you added is now listed as the first entry in the Query Library. The query editor is opened to the right of the query.
As needed, you can return to your queries in the Query Library to manage your queries. Here are the actions available to you.
Edit the name, description, labels, and parameters of your query by selecting the query from the Query Library, hovering over the line in the query
editor that you want to edit, and selecting the edit icon to edit the text.
Search query data and metadata: Use the Query Library’s powerful search mechanism that enables you to search in any field related to the query,
such as the query name, description, creator, query text, and label. The Search query data and metadata field is available at the top of your list of
queries in the Query Library.
Show: Filter the list of queries from the Show menu. You can filter by the Palo Alto Networks queries provided with Cortex Cloud, filter by the
queries Created by Me, or filter by the queries Created by Others. To view the entire list, Select all (default).
Save as new: Duplicate the query and save it as a new query. This action is available from the query menu by selecting the 3 vertical dots.
Share with others: If your query is currently unshared, you can share your query with other users on the same tenant, making it available in their
Query Library. This action is only available from the query menu by selecting the 3 vertical dots when your query is unshared.
Unshare: If your query is currently shared with other users, you can Unshare the query and remove it from their Query Library. This action is only
available from the query menu by selecting the 3 vertical dots when your query is shared with others. You can only Unshare a query that you
created. If another user created the query, this option is disabled in the query menu.
Delete the query. You can only delete queries that you created. If another user created the query, this option is disabled in the query menu when
selecting the 3 vertical dots.
Abstract
The Quick Launcher provides a quick, in-context shortcut that you can use to search for information, perform common investigation tasks, or initiate actions.
The Quick Launcher provides a quick, in-context shortcut that you can use to search for information, perform common investigation tasks, or initiate response
actions from any place in Cortex Cloud. The tasks that you can perform with the Quick Launcher include:
Search for a host, username, IP address, domain, filename, file path, or timestamp to easily launch the artifact and asset views.
NOTE:
For hosts, Cortex Cloud displays results for exact matches but supports the use of a wildcard (*), which changes the search to return matches that
contain the specified text. For example, a search of compy-7* will return any hosts beginning with compy-7, such as compy-7000, compy-7abc, and
so forth.
Search the Asset Inventory for a specific asset name or IP address. In addition, the following actions are available when searching for Asset Inventory
data.
Change search to <host name of asset> to display additional actions related to that host. This option is only relevant when searching for an IP
address that is connected to an asset.
Open in Asset Inventory is a pivot available when the host name of an asset is selected.
Begin Go To mode. Enter forward slash (/) followed by your search string to filter and navigate to Cortex Cloud pages. For example, / rules searches
for all pages that include rules and allows you to navigate to those pages. Press Esc to exit Go To mode.
Isolate an endpoint
You can open the Quick Launcher by clicking the Quick Launcher icon located in the top navigation bar, or from the application menus, or by using the default
keyboard shortcut: Ctrl+Shift+X on Windows or Cmd+Shift+X on macOS. To change the default keyboard shortcut, select Settings → Configurations →
General → Server Settings → Keyboard Shortcuts. The shortcut value must be a keyboard letter, A through Z, and cannot be the same as the Artifact and
Asset Views defined shortcut.
You can also prepopulate searches in Quick Launcher by selecting text in the app or selecting a node in the Causality or Timeline Views.
Abstract
Cortex Cloud enables you to investigate any threat, also referred to as a lead, that has been detected.
This topic describes the steps you can take to investigate a lead. A lead can be:
An issue from a non-Palo Alto Networks system with information relevant to endpoints or firewalls.
Information from online articles or other external threat intelligence that provides well-defined characteristics of the threat.
1. Use threat intelligence to build a Cortex Query Language (XQL) query using the Query Builder.
For example, if external threat intelligence indicates a confirmed threat involving specific files or behaviors, search for those characteristics.
2. Review and refine the query results by using filters and running follow-up queries to find the information you are looking for.
Review the chain of execution and data, navigate through the processes on the tree, and analyze the information.
4. Open the Timeline to view the sequence of events over time. If deemed malicious, take action using one or more of the response actions.
5. Inspect the information again, and identify any characteristics you can use to create a BIOC or correlation rule.
If you can create a BIOC or correlation rule, test and tune it as needed. For more information, see Create a correlation rule and Create a BIOC rule.
Abstract
Dashboards help you to monitor system activity in your environment. Select a dashboard from the drop-down menu, or take actions on your dashboards from
the Dashboard Manager.
Dashboards offer graphical overviews of your tenant's activities, enabling you to effectively monitor your cases and overall activity in your environment. Each
dashboard comprises widgets that summarize information about your endpoints in graphical or tabular format.
When you sign in to Cortex Cloud your default dashboard is displayed. To change the displayed dashboard, in the dashboard header choose from the list of
predefined and custom dashboards. You can also manage all of your dashboards from the Dashboard Manager.
On each dashboard, you can see the selected Time Range on the right side of the header. An indicator shows the time that the dashboard was last
updated, and if the data is not up-to-date you can click Refresh to update all widget data. You can also select widget-specific time frames from the menu on an
XQL widget. If you select a different time frame for a widget, a clock icon is displayed.
Predefined dashboard filters are displayed in the dashboard header. A filter icon on a widget indicates that the widget data is filtered. Hover over the icon to
see details of the filters applied. To change or add filters, click the filter icon in the right-hand corner of the dashboard.
To see additional options for the dashboard, including saving the dashboard as a report template and disabling the background animation, open the
dashboard menu.
Types of dashboards
Command Center dashboards provide instant visibility into your security operations, with drilldowns to additional dashboards and associated pages. For
more information, see Command Center dashboards.
Predefined dashboards
Predefined dashboards are configured for different system set-ups and use cases, and to assist SOC analysts in their investigations. You can create
reports and custom dashboards that are based on predefined dashboards. For more information, see Predefined dashboards.
Custom dashboards
Custom dashboards provide the flexibility to design dashboards that are built to your own specifications. You can base custom dashboards on the
predefined dashboards or create a new dashboard from scratch, and save your custom dashboards as reports. For more information, see Custom
dashboards.
You can see all of your predefined and custom dashboards in the Dashboard Manager, and take the following actions:
NOTE:
You cannot edit the predefined dashboards but you can create a new dashboard that is based on a dashboard template.
You can import and export dashboards in a JSON format, which enables you to transfer your configurations between environments for onboarding,
migration, backup, and sharing. You can also bulk export and import multiple dashboards at a time.
NOTE:
If you import a dashboard template that already exists in the system, the imported template will overwrite the existing template. If you do not want
to overwrite the existing template, duplicate and rename the existing template before importing the new template.
Abstract
The Command Center dashboards provide interactive overviews of your Cortex Cloud system status, including data ingestion metrics, case and issue data,
automations, and more.
From the Command Center dashboards, click on elements of interest to drill down to additional dashboards and associated pages.
NOTE:
Access to the dashboards requires RBAC View permissions for Dashboards & Reports and Command Center Dashboards.
The dashboards are available in dark mode only. They are not editable, and you can't create dashboard templates or reports from them.
Some of the dashboard’s animations are not fully supported by the Safari web browser. We recommend that you view the dashboard with an alternative
web browser.
Cortex Cloud’s Cloud Security Command Center dashboard provides you with a high-level, holistic view of your organization's cloud security posture, providing
insights into data ingestion, asset coverage, security risks, and recommended posture cases. The dashboard simplifies the process of monitoring the overall
security landscape to improve your overall cloud security posture.
A high-level overview of the cloud security environment that you can share with leadership to view the most recent attack paths, risks, and the
recommended remediation steps.
Real-time visibility into key metrics such as assets covered, security issues detected, and actions taken.
Guided next steps leveraging Posture Cases to help resolve attack paths and risks with the most impact instead of sifting through individual issues.
Centralized, top-down view of assets with security metrics across connected cloud providers.
Integrated focus on both security posture, including risks and attack paths, as well as business value metrics such as time saved and resources
protected.
NOTE:
The Cloud Security Command Center dashboard is currently available by default only to users with the Instance Administrator or Viewer role. Custom roles
that include Cloud Security Command Center View/Edit role permissions also have access.
Choose from a range of Command Center filters to help you focus on specific issues that have the highest return on investment. Select one of the options
below to fine-tune your search:
Once you’ve selected a view, you can further narrow your search by limiting it to a specific Cloud Service Provider (CSP) listed under View
By.
Choose from the options listed under Applications to view the issues in assets for a specific application or a group of applications.
Data Sources
The left portion of the dashboard highlights the sources ingested and scanned to identify potential security vulnerabilities. The dashboard ingests the following
data sources to detect security issues:
Audit Log
Flow Log
Cloud Configs
Workloads
Hovering over data types such as Audit and Flow Logs reveals the volume and count of data ingested and detailed information on any ingestion issues,
whereas Cloud Configs and Workloads indicate the count of ingested data. In the event of a data ingestion error, select a data source to be redirected to the
Data Sources page to investigate the ingestion issue.
The central portion of the dashboard provides a high-level view of the total assets monitored, broken out by asset class and associated security posture,
highlighting assets with incidents, attack paths and risks. Assets are visualized as bubbles and broken down into the following classes:
AI
Compute
Data
Identity
Network
APIs
Other
Inside each asset bubble, the highlighted dots represent the assets with security issues. The dots are color coded to indicate issues of varying number and
severity as detailed below:
Assets with more than one type of issue will be color coded based on the following priority: incidents > attack paths > risks.
The Security Outcomes portion of the dashboard highlights incidents that generated Security Cases and attack paths, and risks that feed into Posture Cases.
Reference the sections below to learn more about Security and Posture Cases:
Security Cases
Security cases provide a count of open and resolved cases, and additional details on the incident including asset classes that comprise the case.
Posture Cases
Posture cases summarize security tasks, providing you with a clear direction of what problems to address, their potential impact on overall security, specific
orchestration options and outcomes. Posture cases are dynamically updated based on your current security posture. Hover over any asset bubble to view a
dynamically updated count of Posture Cases.
The steps below use one sample asset class to outline how you can get the most out of Posture Cases in the Command Center dashboard:
1. On the Cloud Security Command Center dashboard select the Identity bubble.
2. Select from one of the Asset Categories provided on the left to further narrow your search. For example, for the IAM asset class filtering options include:
Human/Non-Human Identity, Cloud Service Account, IAM Group, and Policy.
3. Select an Asset Class to view the impacted assets highlighted in the bubble view. The associated Security and Posture cases if any are also available.
Both Security and Posture cases are created from attack paths and risks that are grouped together. Some risks may not be accounted for by a case, but
their totals are still listed.
Value Metrics
The key metrics bar provides you with a closer look at the value provided by the Cortex Cloud platform. Metrics include data points such as Total Assets,
Cloud Data Ingestions, Security Issues Closed, Cloud Agents, Time Saved, and Analyst Savings.
NOTE:
The Time Saved estimate is based on the total amount of savings derived from the reduction of issues by grouping and addressing them as Posture Cases
and Attack Paths. Analyst savings are estimates based on an average Security Analyst’s wage coupled with the reduction in time spent addressing Issues on
an individual basis.
The Cloud Security Operations dashboard helps you rapidly assess your security posture and resolve issues with the largest impact. As a security architect or
engineer you can leverage the dashboard to assess the efficiency with which your team responds to security issues on an ongoing basis, without spending
any extra time gathering and grouping issue details, identifying owners, and kickstarting the remediation process. Contextual views also link to other areas of
the Cortex Cloud platform for deeper security context. With the Cloud Security Operations dashboard you can:
Improve situational awareness and visibility: The dashboard interface helps you learn about your security estate, identify security gaps, and track
progress against key performance indicators such as Issue Burn Down and Mean Time To Remediation (MTTR).
Customize your view: The dashboard provides a default view for each of the widgets while giving you the option to customize views to capture the
insights you need.
Dashboard Widgets
The Cloud Security Operations dashboard provides the widgets described below to help you rapidly remediate the issues that require immediate attention.
The following widgets are available:

Issue Resolution Snapshot: Provides a count of the total number of issues you have resolved over the selected time period across all issue categories, and compares it with the number of issues resolved over the previous equivalent time period. By default, the count reflects the number of Critical and High severity issues you have resolved over the last 7-day period, while the percentage change indicates the relative change from the previous 7-day period. Issues are based on rule violations on a specified scope of resources. Select any portion of the highlighted issues to see a list view of resolved issues.

Open Issues: Provides a cumulative snapshot count of the total number of issues that remain unresolved in your environment and tracks the relative change in this count over the selected time frame. By default, the count reflects the total number of Critical and High severity issues still unresolved in your environment, while the percentage change indicates the relative change in this count over the last 7 days. Choose one of the time ranges specified in the filter options to narrow your search.

Issues Burndown: Provides a trendline of the total number of open and resolved issues over time across all issue categories. By default, the trendlines track the number of open and resolved issues over the last 7 days. This daily point-in-time snapshot can be adjusted by severity level. Select the filter option to narrow the issues displayed by Issue Type (Attack Paths, Configuration, Data, etc.) or Time Range.

Mean Time to Remediation (MTTR): Provides a graphical view of the Mean Time to Remediation (MTTR) for issues across all categories over a selected time range. By default, the chart displays the MTTR trends for Critical and High severity issues, as well as the combined MTTR across both severities, over the last 7-day period. Switch to the table view to compare the 7-day average MTTR with the average across the previous 7-day period. The severity level displayed in the list view is set by the levels selected in the global filter. This can be adjusted on the View MTTR Insights side panel, which also lists the top ten Accounts/Issues with the highest MTTR for further analysis. Select the filter option to narrow down issues displayed by Issue Type (Attack Paths, Configuration, Data, etc.) or time range.

Top 3 Posture Cases: Displays the top three unresolved Posture Cases based on the count of issues within the cases with the posture domain. Click any Posture Case to be redirected to a detailed view of the case. Select the filter option to narrow down issues displayed by Posture Case status or time range.

Open Issues by Type: Provides a breakdown of all open issues listed by Issue Type (Attack Paths, Configuration, Data, etc.) and severity. Click any issue to be redirected to the Issues view. Select the filter option to narrow down issues by a specific time range. You can also toggle between graph and table views here.

Top Impacted Assets: Displays the top five assets with the highest number of issues, along with additional account and asset details. Click any asset to view more details in the Assets side panel. Assets can be filtered by type, category, and time range. A graph and table toggle is also available to customize your view.

Top Impacted Accounts: Lists the accounts with the highest number of unresolved issues, sorted by issue count and broken down by severity. Select a filter to narrow your search by time range or issue type.
NOTE:
The Last updated time indicated on each widget may differ as widget data is gathered at varying intervals.
Generate Reports
You can also share Cloud Security Operations dashboard reports with stakeholders to keep them abreast of the security status of your cloud assets. Select
Save as report template to create a shareable template. Next, navigate to Report Templates to Edit, Delete, or Generate a Report that can be scheduled for
wider distribution.
Filter Options
Use one of the multiple filter options provided to further focus on the most impactful issues. Filter options include:
Severity Filter: Select a severity level from the drop-down to apply the filter globally across all widgets. Click Run to update all existing widgets to the
selected severity level. Severity can also be adjusted individually at the widget level. Filter settings at the widget level are saved; global filter settings
are not.
Time Range Filter: By default, the time range is set to 7 days. You can change it to 24 hours, 7 or 30 days, or a custom time frame, and apply it
across all widgets.
Abstract
Predefined dashboards are set up to help you monitor different aspects of your environment.
To change your default dashboard, go to Dashboards & Reports → Dashboard Manager. In the Dashboard Manager you can also create custom dashboards
based on existing dashboards, and save dashboards as report templates.
API Security Management: Provides an overview of your API security landscape. You can view all the information and statistics applicable to threats and vulnerabilities of APIs across the cloud and services in your environment. Using this information, you can manage and implement security measures to safeguard the APIs running in your environment. In the predefined dashboard for API security management, you can view data for:
Attacks by region
Number of APIs
Application Security: Provides an overview of application security posture with asset and code/pipeline issue insights.
KSPM: Provides insights into your Kubernetes environment, including clusters, assets, and resources. Receive critical security information related to vulnerabilities, malware, secrets, and other available scanners. Identify areas lacking protection and take action to secure your clusters. For more information on onboarding your Kubernetes environment, see Onboard the Kubernetes Connector.
My Dashboard: Provides an overview of the cases and MTTR for the logged-in user.
5.1.1.3 | Reports
Abstract
Create, edit, and customize reports in Cortex Cloud. Schedule reports with Cron expressions.
Reports contain statistical data in the form of widgets, which enable you to analyze data from inside or outside Cortex Cloud in different formats, such as
graphs, pie charts, or text. After generating a report, it also appears in the Reports tab, so you can use the report again.
Abstract
On the Report Templates page, you can view, delete, import, export, create, and modify report templates. You can also select and generate multiple reports.
Abstract
Custom dashboards and reports can support your day-to-day operations by providing options that are tailored to your unique workflow.
Dashboards and reports are built from widgets. You can drag any widgets from the Widget Library on to your dashboard or report and arrange them. Cortex
Cloud provides predefined widgets for you to use. In addition, you can create custom XQL widgets that are built on Cortex Query Language (XQL) queries and
provide the flexibility to query specific data, and select the graphical format you require (such as table, line graph, or pie chart).
Abstract
Build customized dashboards to display and filter the information that is most relevant to you.
You can base your custom dashboards on predefined dashboard templates, or you can build a new dashboard from scratch.
1. Select Dashboards & Reports → Customize → Dashboard Manager → + New Dashboard, or in the Dashboard Manager right-click an existing
dashboard and select Save as new.
2. In the Dashboard Builder, under Dashboard Name enter a unique name for the dashboard.
3. Under Dashboard Type, choose a built-in dashboard template or a blank template, and click Next.
4. To get a feel for how the data will look, Cortex Cloud provides mock data. To see how the dashboard would look with real data in your environment,
change the toggle to Real Data.
5. Add widgets to the dashboard. From the widget library, drag widgets onto the dashboard.
NOTE:
For agent-related widgets, you can apply an endpoint scope to refine the displayed data to only show results from specific endpoint groups.
Select the menu on the top right corner of the widget, select Groups, and select one or more endpoint groups.
For case-related widgets, you can refine the displayed data to only show results from cases that match a case starring configuration. A purple star
indicates that the widget is displaying only starred cases. For more information, see Case starring.
6. (Optional) Add fixed filters to your dashboard to provide dashboard users with useful dashboard filters that are based on predefined or dynamic input values. Any
defined filters are displayed in the dashboard header.
Fixed filters are based on XQL widgets with dynamic parameters. If a dashboard contains these widgets, the Add Filters & Inputs option is displayed. For
more information, see Configure filters and inputs for custom XQL widgets.
7. (Optional for dashboards with custom XQL widgets) Configure dashboard drilldowns.
Add drilldowns to your dashboard to provide interactive data insights when clicking on data points, table rows, or other visualization elements.
Dashboard drilldowns are based on XQL widgets. To add a drilldown to an XQL widget, click on the widget menu, and select Add drilldown. For more
information, see Configure dashboard drilldowns.
By default, the widgets use the dashboard time frame. You can change the widget time frame from the widget menu.
10. To set the custom dashboard as your default dashboard, select Define as default dashboard.
11. To keep this dashboard visible only for you, select Private.
Otherwise, the dashboard is public and visible to all Cortex Cloud users with the appropriate roles to view dashboards.
Abstract
Create, search, and view custom widgets in Cortex Cloud, or use predefined widgets.
From the Widget Library you can take the following actions:
Create custom widgets based on XQL search queries. For more information see Create custom XQL widgets.
Search for custom and predefined widgets. Widgets are grouped by category.
NOTE:
Any dashboards or reports that include the widget are affected by the changes.
Abstract
Fine-tune your custom dashboards and reports by adding custom XQL widgets, fixed filters and inputs, and dashboard drilldowns.
You can fine-tune your custom dashboards and reports to tailor them to suit your specific needs. The following dashboard features enhance the functionality of
your dashboards, by refining the data displayed and enabling dashboard users to filter and manipulate the displayed data. Click on the tabs to see each
feature.
Personalize the information that you display on your custom dashboards and reports by creating custom XQL widgets and adding them to your dashboard.
These widgets can query specific information that is unique to your workflow, and define the graphical format you require (such as table, line graph, or pie
chart). In addition, you can add variables to your custom XQL widget that provide dynamic input values for dashboard filters.
Enable dashboard users to alter the scope of a dashboard by selecting from predefined or dynamic input values.
Dashboard drilldowns
Enable dashboard users to access interactive data insights when clicking on data points in widgets. Drilldowns can trigger contextual changes on the
dashboard, or they can link to an XQL search, a custom URL, another dashboard, or a report. Users can hover over a widget to see details about the
drilldown, and click a value to trigger the drilldown.
Abstract
You can create custom XQL widgets based on a Cortex Query Language (XQL) query, and add parameters that you can configure as fixed filters or
dashboard drilldowns.
With custom XQL widgets you can personalize the information that you display on your custom dashboards and reports. You can build widgets that query
specific information that is unique to your workflow, and define the graphical format you require (such as table, line graph, or pie chart).
All of your predefined and custom XQL widgets are available in the Widget Library under Dashboards & Reports → Customize → Widget Library. From the
Widget Library, you can browse all widgets by category, create new XQL widgets, and edit and delete existing XQL widgets.
3. Define an XQL query that searches for the data you require. Select XQL Helper to view XQL search and schema examples. For more information, see
How to build XQL queries.
TIP:
You can create a generic dashboard for multiple views of the same dataset by defining the dataset in the XQL widget as dataset =
<dataset_name>*. The placement of the asterisk (*) in the dataset name ensures that any view containing this prefix text is displayed in the results.
Example 10.
If there are multiple datasets that begin with amazon_aws_raw in their name, such as amazon_aws_raw_eu_view and
amazon_aws_raw_us1_view, these views will be included.
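As an illustrative sketch, a widget query using this wildcard (the dataset name is taken from the example above; adapt it to your own datasets) might look like:

dataset = amazon_aws_raw*
| limit 10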
NOTE:
Cortex Query Language (XQL) queries generated from the Widget Library do not appear in the Query Center. The results are used only for creating the
custom widget.
5. In the Widget section, define how you want to visualize the results.
You can use parameters to filter widget data on a dashboard or report, and create drilldowns on dashboards. Base your filters on fields and values in the
query results.
To specify parameters with a single predefined value, use the = operator. To specify parameters with multiple values (predefined or dynamic), use
the IN operator.
The following query defines the $domain parameter for filtering dashboard data by domain, based on the domain field in the agent_auditing
dataset.
Single value parameters are based on static predefined values. In this example, the dashboard user will be able to select a domain from a list of
predefined domains.
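A minimal sketch of such a query, assuming the agent_auditing dataset and domain field described above (exact field names may differ in your environment):

dataset = agent_auditing
| filter domain = $domain
| fields endpoint_name, domain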
The following query defines the $endpointname parameter for filtering dashboard data by one or more endpoint names, based on the
endpoint_name field in the agent_auditing dataset.
You can configure this parameter with static predefined values, or dynamic values that are pulled from an XQL query.
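A minimal sketch of the multi-value version, assuming the IN operator usage described above (the exact XQL syntax for parameter lists may vary):

dataset = agent_auditing
| filter endpoint_name in ($endpointname)
| fields endpoint_name, domain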
3. (Optional) Under Assign Parameters (default values), define default values for the parameters. When you add the widget to a dashboard or report,
the data will be automatically populated. Alternatively, you can configure all input values when you set up a dashboard or report.
Abstract
Configure fixed filters that enable dashboard users to alter the scope of the dashboard by selecting predefined and dynamic values.
LICENSE TYPE:
Fixed dashboard filters are supported in Cortex XDR Pro and Cortex XSIAM only.
Define fixed filters on your dashboards to enable dashboard users to alter the scope of the dashboard by selecting from predefined or dynamic values. You
can define filters with free text, single select, and multiple select input values. After configuration, anyone who views your dashboard can use the fixed filters in
the dashboard header.
PREREQUISITE:
Fixed filters are based on parameters that are defined in custom XQL widgets. Before you can configure fixed filters, take the following steps:
1. Create custom XQL widgets with parameters. For more information, see Create custom XQL widgets.
2. Add the widgets to a Custom dashboard. For more information, see Build a custom dashboard.
This option only appears if the dashboard contains custom XQL widgets with defined parameters.
4. On the FILTERS & INPUTS panel, click +Add an input and select one of the following options:
Guidelines
Select an option that corresponds with the parameter configured in the XQL widget. Parameters with single predefined or free text values use the =
operator, and parameters with multiple values use the IN operator.
Predefined values are most suitable for filtering fields that have static values, such as status fields with a limited number of available options.
Dynamic values help you to filter with values that change often. You can configure an XQL query that extracts all of the values that are available for
that field. For example, in the endpoints dataset, the endpoint_name field values can change frequently.
5. Click Parameter and select the parameter that you want to configure.
The parameters are extracted from the XQL queries of the widgets on the dashboard. You can define up to four parameter filters on a report or
dashboard.
6. If you selected Single Select or Multi Select values, click Dropdown Options and specify input values. When you generate the dashboard, these input
values appear in a dropdown list for selection.
To configure Predefined inputs for Single Select and Multi Select values, manually type the list values.
Guidelines
The values must support the parameter type. For example, for $name specify characters and for $num specify numbers.
If the values are numbers stored as strings, specify each number in quotes, for example "500".
To configure Dynamic inputs for Multi Select values, click XQL Query to fetch dynamic values.
Guidelines
In the XQL Query Builder, configure a query that includes the field stage and the name of the column from which to take the dropdown values.
All values in the specified field will be available for selection, and the values are dynamically updated.
In this example, the endpoint_name field is configured. The dashboard user will be able to filter by one or more values from the endpoint_name
field.
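For example, a sketch of a dropdown query that pulls endpoint names from the endpoints dataset mentioned earlier (exact dataset and field names may differ in your environment):

dataset = endpoints
| fields endpoint_name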
NOTE:
If you specify more than one field, only the first field value is used.
7. Under Default Value, select a value from the list of defined values. Specifying a default value ensures that the widget is automatically populated when
you open the dashboard.
TIP:
After the initial setup, when you access your dashboard the filters and inputs might need further refinement. You can make changes to the configured
parameters in the XQL widgets, and update the Filters & Inputs on your dashboard until you are satisfied with the results.
Abstract
Configure drilldowns on custom dashboards to provide users with interactive data insights when clicking on data points in a widget
Dashboard drilldowns can trigger contextual changes on the dashboard, or they can link to an XQL search, a custom URL, another dashboard, or a report.
You configure drilldowns on individual widgets. After a drilldown is configured, clicking the widget triggers the drilldown.
PREREQUISITE:
To configure drilldowns your dashboard must contain custom XQL widgets. In addition, if you want to configure in-dashboard drilldowns your custom XQL
widget must contain one or more parameters. For more information about configuring parameters in custom XQL widgets, see Create custom XQL widgets.
2. Identify the widget to which you want to apply a drilldown, click on the widget menu, and select Add drilldown.
Field Action
Parameters Select the parameter by which to filter. You can choose any parameter that is defined in the XQL query of the widget.
NOTE:
If the selected parameter is configured in other XQL widgets on the dashboard, these widgets are also affected by the
drilldown.
Value When a user clicks the widget, the dashboard is filtered by this value.
Select a variable from which to capture the clicked value, for example, the $y-axis.value in a chart. For more information,
see Variables in drilldowns.
Field Action
(Optional) Parameter Select parameters by which to filter the data on the target dashboard. Parameters are only available if there are
parameters defined in the widgets on the target dashboard.
(Optional) Value When a user clicks the widget, this value is configured as a parameter on the target dashboard.
Select a variable from which to capture the clicked value in the source dashboard, for example, the $y-axis.value in
a chart. For more information, see Variables in drilldowns.
Open XQL Search: Runs an XQL query based on the clicked value.
In the following example two parameters are passed from a table widget to an XQL query. The first parameter with the cell value that the user
clicked on, and a second parameter with the cell value in the request_url column in the row that the user clicked.
dataset = xdr_data
| filter event_type = $y_axis.value and requestUri = $row.request_url
| fields action_download, action_remote_ip as remote_ip,
actor_process_image_name as process_name
| comp count_distinct(action_download) as total_download by process_name,
remote_ip
| sort desc total_download
| limit 10
| view graph type = single subtype = standard xaxis = remote_ip yaxis = total_download
In the following URL, the $x_axis.value parameter represents Cortex product names. On drilldown, the $x_axis.value is replaced with the
product name clicked in the pie chart.
https://www.paloaltonetworks.com/cortex/cortex-$x_axis.value
Abstract
Learn about the widget variable values that you can use in dashboard drilldowns.
The following tabs are organized by widget type and describe the widget variables that are available in drilldowns. The variable defines the value to
capture in the drilldown, according to the element that is clicked. The captured value is then configured as a parameter by which to filter data on drilldown.
Chart
(Area, Bubble, Column, Funnel, Line, Map, Pie, Scatter, or Word Cloud)
$y_axis.name: Selects the y-axis name that the single value represents.
Table
$row.<field_name>: Selects the field (column) from the clicked table row.
Abstract
You can run reports that are based on dashboard templates, or you can create reports from scratch.
You can generate reports using pre-designed dashboard templates, or create custom reports from scratch with widgets from the Widget Library. You can also
schedule your reports to run regularly or just once. All reports are saved under Dashboards & Reports → Reports.
To take actions on existing report templates, go to Dashboards & Reports → Customize → Report Templates. On this page you can also import and export
report templates in a JSON format, which enables you to transfer your configurations between environments for onboarding, migration, backup, and sharing.
You can bulk export and import multiple report templates at a time.
NOTE:
If you import a report template that already exists in the system, the imported template will overwrite the existing template. If you do not want to
overwrite the existing template, duplicate and rename the existing template before importing the new template.
2. Right-click the dashboard from which you want to generate a report, and select Save as report template.
3. Enter a unique name for the report and an optional description, and click Save.
To run the report without making any modifications, hover over the report name, and select Generate Report.
To modify or schedule the report, hover over the report name, and select Edit.
6. After your report completes, you can download it from the Dashboards & Reports → Reports page.
You can base your report on an existing template, or you can start with a blank template.
NOTE:
The report name and description will be displayed in the report header and are not editable during customization.
3. Under Data Timeframe, select the time frame from which to run the report. Custom time frames are limited to one month.
4. Under Report Type select the report template on which to base the report, or select a blank template to build the report from scratch and click Next.
5. Cortex Cloud offers mock data to help you visualize the data's appearance. To see how the report would look with real data in your environment, switch
to Real Data. Select Preview in A4 to see how the report is displayed in an A4 format.
6. Add or remove widgets on the report. From the widget library, drag widgets onto the report.
NOTE:
For agent-related widgets, you can apply an endpoint scope to refine the displayed data. Select the menu on the top right corner of the widget, select Groups, and select one or more endpoint groups.
For case-related widgets, you can refine the displayed data to only show results from cases that match a case starring configuration. A purple star
indicates that the widget is displaying only starred cases. For more information, see Case starring.
7. (Optional) Add filters to the report. Adding filters and inputs to the report gives you the flexibility to filter report data based on default values that you
define.
If you selected a report template with default filters, the filters are displayed at the top of the dashboard. To edit the filters, click + Add Filters & Inputs.
You can configure basic filters that provide predefined static values, as explained in the following steps. Alternatively, you can define dynamic filters that
are based on predefined parameters in custom XQL widgets, as explained in Configure filters and inputs for custom XQL widgets.
2. On the FILTERS & INPUTS panel, select a parameter for which to configure a filter.
If no values are selected, the filter name shows an error symbol and you cannot save the filter.
4. Add more filters as required. You can drag the filters to change the priority.
8. When you have finished customizing your report template, click Next.
9. If you are ready to run the report select Generate now, or define options for scheduling the report.
10. (Optional) Under Email Distribution and Slack workspace add the recipients that you want to receive a PDF version of your report.
Select Add password used to access report sent by email and Slack to set password encryption. Password encryption is only available in PDF format.
11. (Optional) Select Attach CSV to attach CSV files of your XQL query widgets to the report.
From the menu, select one or more of your custom widgets to attach to the report. The CSV files of the widgets are attached to the report along with the
report PDF. Depending on how you selected to send the report, the CSV file is attached as follows:
Email: Sent as separate attachments for each widget. The total size of the attachment in the email cannot exceed 20 MB.
Slack: Sent within a ZIP file that includes the PDF file.
13. After your report completes, you can download it from the Dashboards & Reports → Reports page.
In the Name field, icons indicate the number of attached files for each report. Reports with multiple PDF and CSV files are marked with a zip icon.
Reports with a single PDF are marked with a PDF icon.
You can receive an email alert if a report fails to run due to a timeout or fails to upload to the GCP bucket.
2. Enter a name and a description for your rule, and under Log Type, select Management Audit Logs.
3. Use a filter to select the Type as Reporting, Subtype as Run Report, and Result as Fail.
4. Under Distribution List, select the email address to send the notification to.
5. Click Done.
Abstract
A basic overview of the Cortex Cloud AI Security overview page, assets inventory, risks, and benefits.
Cortex Cloud AI Security provides a comprehensive overview of the AI assets within an organization. It is designed to ensure AI security by offering tools to
review and prioritize AI risks effectively.
Comprehensive Visibility: Obtains a full picture of AI components, including models, agents, data flows, and infrastructure across all cloud environments.
This broad visibility ensures that every AI asset is accounted for and continuously monitored, reducing blind spots in the AI ecosystem.
Full supply chain protection: Maps the dependencies between data, models, and cloud resources to remediate risks such as poisoned datasets or
unsanctioned models. Maintains the integrity of your AI bill of materials (AI-BOM).
Detailed asset inventory: Access an in-depth inventory of all AI assets, enriched with contextual details. This deep insight into each asset’s specifics and
functionalities facilitates a better understanding and more effective management of these resources.
Advanced risk assessment: Proactively identifies and issues alerts on misconfigurations and security flaws in AI assets. Cortex Cloud AI Security
employs sophisticated detection mechanisms to tackle AI-specific risks, manage permissions, and ensure that robust security practices are upheld
throughout the AI supply chain.
Dynamic risk prioritization: Utilizes insights into data sensitivity and the broader security context to effectively understand and prioritize risks. This
strategic approach enables organizations to target and mitigate the most critical threats swiftly, thereby enhancing the overall security landscape.
Governance and control: Implements comprehensive guardrails and controls for AI models both during development and in production. Ensures that AI
assets operate within defined security parameters, reducing the likelihood of security breaches and data leaks.
Compliance assurance: Regularly tests AI systems against emerging AI regulations and industry standards, such as the OWASP Top 10 for Large
Language Models (LLMs). Gets clear guidelines on corrective actions needed to achieve full compliance and ensures that AI assets align with both
current and future regulations.
These benefits mean that with Cortex Cloud AI Security you can maintain a robust security posture across your AI environment, proactively manage risks,
and align with compliance and internal security policies.
The Cortex Cloud AI Security overview dashboard serves as the central hub for information on the AI ecosystem within the organization. It provides a
comprehensive overview of AI security posture and is designed to help users quickly access relevant information. The layout and organization of the
dashboard are tailored to guide you in understanding the AI environment and determining the next steps to take for effective AI governance.
AI assets inventory
You can view all AI assets in your environment, regardless of deployment mode or cloud provider. Connected assets are discovered, contextualized, and
presented with detailed information. You can dive deeper into the asset context as required.
Cortex Cloud AI Security provides visibility into how sensitive data is being utilized and potentially impacted by AI systems. By identifying the AI assets that
interact with sensitive data, the platform helps ensure that appropriate protection protocols are applied where most needed, thereby enhancing overall data
security and reducing the risk of data breaches and leakage.
AI security issues
Cortex Cloud AI Security provides risk assessment for the supported AI assets, with risk rules created by the research team. These risk rules are designed to
detect misconfigurations and security flaws in AI assets and send alerts about them. In addition to the provided default risk rules, Cortex Cloud AI Security also
supports custom risk rule creation, so you can codify and integrate internal policies into the Cortex Cloud AI Security risk engine, streamlining your remediation
efforts.
When insecure models and deployments are used, several types of attacks can occur, such as the following:
Data Poisoning Attacks: In "Training Data Poisoning", malicious actors manipulate the training data to introduce biases or vulnerabilities into the model,
causing it to make incorrect or harmful predictions.
Model Inversion Attacks: Attackers can infer sensitive information about the training data by querying the model, potentially leading to data breaches and
loss of intellectual property.
Adversarial Attacks: Crafted inputs can deceive the model into making incorrect predictions, which is particularly dangerous in critical applications like
autonomous driving or medical diagnosis.
Evasion Attacks: Evasion attacks are a prevalent threat to machine learning models during inference. This type of attack involves crafting inputs that
appear normal to humans but are misclassified by machine learning systems. For instance, an adversary might alter a few pixels in an image prior to
submission, causing an image recognition system to misidentify it.
Model Extraction Attacks: Attackers can approximate a model's functionality by repeatedly prompting it, effectively stealing the intellectual property and
potentially using it for malicious purposes.
Data Leakage: If a model unintentionally reveals sensitive information it was trained on or data that is used in inference, it can lead to breaches of
confidential or personal data.
Model Manipulation: Unauthorized access to the model can allow attackers to alter its parameters or behavior, leading to compromised functionality and
trustworthiness.
Inference Attacks: Attackers exploit the model to deduce whether specific data was part of the training set, potentially exposing sensitive information.
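To make the evasion and adversarial attack types above concrete, the following is a toy sketch: an FGSM-style perturbation against a hypothetical linear classifier. The model, the epsilon budget, and the inputs are all invented for illustration; real attacks target trained neural networks, but the mechanic of a small, bounded nudge flipping the prediction is the same.

```python
import numpy as np

# Toy linear "classifier": score = w @ x; positive means class A, negative class B.
# The weights stand in for a trained model; this is not a real detection target.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # model weights
x = rng.normal(size=100)   # a legitimate input

def classify(w, x):
    return "A" if w @ x > 0 else "B"

original = classify(w, x)

# FGSM-style evasion: move every feature by at most epsilon in the direction
# that pushes the score toward the opposite class. To a human observer the
# perturbed input is nearly identical to the original.
epsilon = 0.5
direction = -np.sign(w) if original == "A" else np.sign(w)
x_adv = x + epsilon * direction

print(original, classify(w, x_adv))   # the perturbed input lands in the other class
print(float(np.max(np.abs(x_adv - x))))  # yet no feature moved by more than epsilon
```

The point of the sketch is the asymmetry that makes evasion attacks dangerous: a perturbation invisible at the feature level can dominate the model's decision because it is aligned with the model's own weights.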
Abstract
A list of platforms and services that are compatible with Cortex Cloud AI Security.
The following lists the various services that are compatible with Cortex Cloud AI Security, detailing the specific platforms and services where Cortex Cloud AI
Security can be effectively used to ensure security and compliance:
AWS: Bedrock
GCP: Vertex AI
Abstract
Introduction to AI applications
The AI application ecosystem comprises several critical components that work together to enable the functionality of AI-driven applications. The following
explains the main concepts and shows some examples.
Model
The model is the core component of the AI ecosystem. It is the trained machine learning model that takes input data, processes it, and produces output. In the
context of large language models (LLMs), this involves understanding and generating human-like text based on the given input.
Example 14.
OpenAI GPT-4 model, which can generate coherent and contextually relevant text, answer questions, and perform various other natural language processing
tasks.
Model endpoint
The model endpoint is the interface through which applications interact with the AI model. It acts as an access point for sending inputs to the model and
receiving outputs. The endpoint is responsible for managing requests, routing them to the appropriate model instance, and returning the results to the
application.
Example 15.
A Microsoft Azure OpenAI deployment of the OpenAI GPT-4 model, which you can use to integrate natural language processing capabilities into your
applications by sending text prompts and receiving generated text in response.
Example 16.
Amazon Web Services (AWS) EC2 instances with GPU acceleration running Llama 2 by Meta, which supports an application that communicates with the EC2
instance.
Plugin
A plugin is an auxiliary but highly capable model or tool that acts as a helper to the primary AI model. Plugins extend the functionality of the main model by
providing specialized capabilities, such as accessing inference datasets, performing specific computations, or interfacing with other services. This approach,
known as retrieval-augmented generation (RAG), enhances the primary model's ability to generate more accurate and contextually relevant outputs. For more
information, see Inference datasets and Retrieval-Augmented Generation.
Example 17.
A weather plugin integrated with an AI chatbot that allows the chatbot to fetch and provide real-time weather updates based on user queries. Another example
is a language translation plugin that helps the main model translate text between different languages.
Training datasets
Training is a fundamental stage in the AI development process where the model learns to perform its tasks by processing large amounts of data. During this
phase, the model is exposed to various examples and adjusts its internal parameters to minimize errors in predictions or classifications. The dataset is
integral to this process: the insights the model learns are shaped directly by the training data.
Example 18.
Training a model like GPT-4 involves using vast text corpora from various sources to help the model understand language patterns, context, and nuances,
enabling it to generate coherent and contextually relevant text.
Inference datasets
Inference datasets are specialized collections of data used during the inference phase of AI models, which is the stage where the model makes predictions or
generates outputs based on new input data. Unlike training datasets, which are used to teach the model how to understand and process information, inference
datasets help improve the model's performance by providing realistic, real-world data inputs for better contextual answering.
Example 19.
When building a chatbot for customers to learn more about their spending habits, financial institutions use customer transactions as inference data to provide
contextually accurate answers.
Fine-tuning
Fine-tuning in machine learning refers to the process of adapting a pre-trained model to perform specific tasks or cater to particular use cases. This technique
has become essential in deep learning, especially for training foundation models used in generative AI. Fine-tuning leverages data (similarly to training) in
order to adjust the responses of the model to certain inputs, making it more suitable for the intended business case.
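As a rough sketch of what fine-tuning data looks like in practice, many providers accept supervised examples as JSONL records of example conversations. The field names below follow a common chat-style layout, but formats vary by provider, so treat both the schema and the file name as illustrative.

```python
import json

# One supervised fine-tuning example: a full conversation the model should
# learn to reproduce. Real datasets contain thousands of such records.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a billing support assistant."},
        {"role": "user", "content": "Why was I charged twice?"},
        {"role": "assistant", "content": "Duplicate charges are usually pending holds that drop off within a few days."},
    ]},
]

# JSONL layout: one JSON object per line.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Because fine-tuning adjusts the model toward whatever is in these records, this file is itself a sensitive data asset: anything placed in the `content` fields may surface in the model's future outputs.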
Retrieval-Augmented Generation
Retrieval-Augmented Generation (RAG) enhances large language model (LLM) responses by incorporating information from knowledge bases and other
sources. This allows the model to reference up-to-date inference data before generating a response, improving contextual accuracy. This approach is cost-
effective and ensures the output remains relevant, accurate, and useful across different contexts.
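The RAG flow described above can be sketched minimally: retrieve the most relevant knowledge-base entry, then prepend it to the prompt before it reaches the model. The keyword-overlap retriever below is a deliberately simple stand-in for real vector search, and the assembled prompt would then be sent to whatever model endpoint you actually use.

```python
# Minimal RAG sketch: ground the model's answer in a retrieved document.
# The knowledge base and retriever are illustrative, not a real implementation.

KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday, 9am-5pm.",
    "Premium accounts include priority routing.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the doc sharing the most words with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    context = retrieve(query, KNOWLEDGE_BASE)
    return f"Context: {context}\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("How long do refunds take?"))
```

Note the security implication: whatever the retriever returns flows directly into the model's context, which is why RAG data stores (the inference datasets above) need the same protection as training data.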
To illustrate how these components work together, consider an AI-powered customer support chatbot:
Model: The GPT-4 model receives the user's query, processes it, and generates a relevant and contextually appropriate response based on the
information and nuances provided in the query.
Plugin: The chatbot integrates a customer database plugin that allows it to fetch user-specific inference data, such as order status or account details, to
provide more personalized and accurate support. The customer database used by the plugin is the inference dataset.
Training dataset: The chatbot undergoes fine-tuning using a dataset of previous customer interactions and support tickets, making it adept at handling
common inquiries and issues in the specific industry.
Application: The customer support platform integrates the chatbot with a user-friendly interface.
Abstract
Learn about use cases that are relevant for Cortex Cloud AI Security.
Understanding your AI ecosystem is crucial for identifying potential vulnerabilities and ensuring the robustness of your AI operations. A comprehensive view of
your AI landscape helps in pinpointing where sensitive data is processed and stored, as well as how data flows between systems.
To understand your AI ecosystem, use the AI Security Dashboard, which provides visibility into all the AI components. You can also see how your AI assets
relate to any other asset in the environment using the Graph Search. The complete list of your AI assets can be found under AI Inventory, where you can
investigate each individual asset.
Investigate an AI asset
To understand a specific component of your AI ecosystem and identify any findings or security issues related to it, use its asset card and links to findings,
issues, and cases created for the asset. When you select an asset, you can review all the tabs on its asset card. These tabs on the asset cards provide
information about the following: overview, access, data, vulnerabilities, applications, and AI ecosystems.
Detecting the AI security issues early is pivotal to safeguarding AI-powered applications and the sensitive data they handle. AI systems, due to their
complexity, can often be opaque, making it difficult to identify vulnerabilities using traditional methods. To detect security issues in your AI ecosystem, use the
AI Security Issues page.
Securing the data utilized by AI systems is critical. Cortex Cloud AI Security helps you identify the data that is impacted by your AI ecosystem, whether it's
training data, data used for RAG (Retrieval Augmented Generation) or any other related data such as prompt logs. It also classifies this data, using Cortex
Cloud Data Security. Data classification across your AI ecosystem allows you to identify models that are trained on sensitive data and to prioritize all identified
risks and issues based on their data impact. For example, missing guardrails on a sensitive model should be treated differently due to its context.
Abstract
The Cortex Cloud Data Security solution is an agentless multi-cloud data security platform that discovers, classifies, protects, and governs sensitive data.
Managing your data assets in the cloud requires the implementation of comprehensive data security capabilities. The mission of Cortex Cloud Data
Security is to provide you with those capabilities, ensuring complete visibility and real-time control over potential security risks to your data.
Capabilities
As a cloud-native data security solution, Cortex Cloud Data Security utilizes several technologies to discover, contextualize, monitor, and protect your cloud
data assets in real time. Cortex Cloud Data Security collects data from a variety of cloud deployments and data servers, both managed (such as buckets, file
storage, databases) and self-hosted (such as MongoDB and MySQL running on virtual machines). The Cortex Cloud Data Security platform also discovers
data analytic environments (DBaaS) such as Snowflake, offering you a complete data landscape view. By using cloud-native APIs and methods, Cortex Cloud
Data Security collects the metadata of the monitored assets and administrative logs such as CloudTrail, activity logs, and audit logs. Using this information,
Cortex Cloud Data Security can detect and remediate the following issues or risks:
Shadow data: An example of shadow data is database snapshots and backups created by development teams as they make changes to files or move
them around the cloud. This type of shadow data is not protected by existing data governance frameworks, and security teams often do not even know it
exists, even though it may contain sensitive information.
Compliance violations: The flexibility of cloud infrastructure makes it harder for you to stay compliant with security regulations such as HIPAA, GDPR, and
PCI, and harder still to prove that compliance to auditors. Cortex Cloud Data Security provides your compliance teams with an easy way to classify data
under these regulations, ensure it is handled properly, and intervene when a violation is detected.
Data exfiltration or theft: Cortex Cloud Data Security enables you to easily detect exposures in the data element layer and limit access to them in a way
that prevents cybersecurity attacks and data breaches.
Ransomware: The real-time threat detection tools of Cortex Cloud Data Security enable you to stop ransomware attacks early in the kill chain.
Data misuse: While typically not malicious, data misuse can lead to unintentional data compromise. Cortex Cloud Data Security can prevent such data
misuse by enforcing security policies across multi-cloud architectures, which prevents users and developers from storing files in inappropriate places.
Benefits
Using the data detection and security capabilities of Cortex Cloud Data Security enables you to:
Discover and visualize all your data assets across the different cloud services, which will help you understand where the sensitive data is, how it is used
and how it is moving across the organization.
Reduce the attack surface on your sensitive data by identifying and eliminating the data threat vector early in the kill chain.
Reduce cost due to detection of unused, duplicated, and stale data which allows for better data hygiene and operation.
Combine different technology sets such as DSPM and DDR capabilities to provide the highest level of data protection. See Cortex Cloud Data Security
use cases for further elaboration on these capabilities.
Create a centralized view of all data exposure issues by applying a single policy across multiple cloud deployments.
Reduce cloud costs by identifying orphaned snapshots, shadow backups, and stale assets that contribute to unnecessary storage expenses.
Abstract
A basic summary of the supported assets in the Cortex Cloud Data Security module.
The Cortex Cloud Data Security solution helps you discover, classify, protect, and govern your data across multi-cloud environments. With Cortex Cloud Data
Security, you can reduce data misuse, achieve compliance, and prevent ransomware attacks and data breaches.
Cortex Cloud Data Security offers data classification for the following assets and services:
Azure
GCP
Analytics: BigQuery
NOTE:
The list above refers only to data classification; however, Cortex Cloud Data Security discovers and monitors all cloud assets and services for usage and
misconfigurations.
Abstract
The following is a list of the basic concepts related to Cortex Cloud Data Security:
Database: A data asset that contains structured data in tables and columns. It can also contain semi-structured data (non-tabular data).
Disk: A type of data asset that is a VM disk in cloud environments such as EBS for AWS, managed disks in Microsoft Azure, and Persistent Disk in
Google Cloud Platform (GCP). These can host files, folders, and databases.
Data classification: The process of scanning data for sensitive records and identifying the class and quantity of sensitive records within a data asset.
Object: An instance of either files or columns, in a storage asset or database asset respectively.
Data pattern: The basic structure of data that is discovered in an object such as email address, IP address, phone number, name, credit card number,
and bank account number.
Data profile: A group or category of multiple data patterns sharing similar attributes. For example, personally identifiable information (PII) is a data profile
that could include an email address, phone number, or name. Another example of a data profile is developer secrets, which might include a token, AWS
key, or certificate.
Sensitive record: A record identified when a data pattern matches content in a data object.
False positive: A case where certain data is detected as being a specific data pattern but actually matches a different data pattern or possibly should
not match any data pattern at all.
Data security finding: Findings are security-related insights that are generated as part of data scanning but are not necessarily actionable. For example,
"shadow backups found" is a finding that can be generated by the Cortex Cloud Data Security scanner.
Data security issues: Issues reflect actionable security risks that are generated by a Data Policy. For example, "sensitive public object in private asset" is
an issue describing a scenario where an object is publicly accessible even though its containing asset is not configured to be public.
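To make the data pattern and sensitive record concepts concrete, here is a deliberately simplified sketch. The regexes below are illustrative only; real classifiers use far stricter patterns plus validation steps (for example, checksum tests on card numbers) to keep false positives down.

```python
import re

# Illustrative data patterns (simplified; not production-grade detection).
DATA_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_sensitive_records(obj: str) -> dict[str, list[str]]:
    """Scan one object; every pattern match counts as a sensitive record."""
    return {name: pat.findall(obj)
            for name, pat in DATA_PATTERNS.items()
            if pat.findall(obj)}

sample = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
print(find_sensitive_records(sample))
```

In these terms, `sample` is the object, each regex is a data pattern, each match is a sensitive record, and grouping `email_address` under PII would make PII the data profile.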
Abstract
Learn more about the main use cases in Cortex Cloud Data Security.
Discover and visualize all your data assets across the different cloud service providers, SaaS applications, and on-premise data stores, which will help
you understand where the sensitive data is, how it is used and how it is moving across the organization.
Classify and identify sensitive information stored in data assets, including personally identifiable information (PII), regulated data (PHI, PCI, SOX), and
corporate secrets defined in structured, semi-structured, and unstructured data.
To understand your data, use the Data Security Dashboard, which provides visibility into all the data assets. The complete list of your data assets can be
found under Data Inventory, where you can investigate each individual asset.
Utilizing the Cortex Cloud Identity Security module, understand which entities, human and non-human, have access to your sensitive information and
how it is used within the organization and outside of the organization.
Reduce the attack surface on your sensitive data by identifying and eliminating the data threat vector early in the kill chain, such as publicly accessible
sensitive data, insecure data movement, lack of backup or versioning, lack of encryption, and more.
Create a centralized view of all data exposure issues by applying a single policy across multiple cloud deployments.
Ensure compliance
Address your compliance requirements and avoid penalties for security standards such as GDPR, PCI DSS, NIST, HIPAA, and others.
Significantly reduce cloud costs by identifying orphaned snapshots, shadow backups, and stale assets that contribute to unnecessary storage
expenses. Cortex Cloud Data Security automates data retention policy analysis, ensuring compliance while preventing excessive data storage costs.
By managing large asset inventories and prioritizing financial efforts on critical resources, you can optimize cloud utilization. Additionally, its data
freshness analysis helps remove outdated assets, further cutting expenses.
Abstract
Cortex Cloud Identity Security can help you address the security challenges of managing identity in cloud environments.
Cortex Cloud Identity Security is a set of tools that provides the following capabilities to improve the security posture of your identity estate:
Cloud Infrastructure Entitlement Management (CIEM): Provides full and clear visibility into identities and permissions in your cloud environments, and
helps with rightsizing permissions to achieve least privilege. The main idea behind the principle of least privilege is to make sure that only those who
should have access to a cloud resource and actually must use it are granted that access. All unused and unnecessary permissions expose your
organization to additional risk, and therefore these need to be eliminated. When all users and applications have been granted only the specific
permissions they need, your organization has achieved least privilege access. Core CIEM capabilities also include removing unused permissions,
monitoring administrators, and reducing risky permissions across human and non-human identities, third-party vendors, and cross-account and cross-
cloud access.
Identity Security Posture Management (ISPM): Helps you prevent identity misconfigurations by analyzing all identities across your cloud providers,
identity providers (IdPs), and SaaS applications. By collecting and analyzing information from various services, ISPM creates advanced insights about
your identity estate, helping you monitor and mitigate issues such as identity misconfigurations, shadow admins, and excessive permissions.
Data Access Governance (DAG): By combining access information with data-related insights generated by Cortex Cloud Data Security, Cortex Cloud
Identity Security detects and identifies which identities can access sensitive data, which sensitive data types can be accessed, and where specifically
this data is stored. DAG capabilities are used to remove unnecessary or unintentional access to sensitive data in order to reduce the risk of sensitive
data exposure.
Identity Threat Detection and Response (ITDR): Collects and analyzes real-time events from your cloud providers and IdPs in order to establish usage
and access patterns. ITDR detects identity-related anomalies in real time and triggers automatic responses to keep any unwanted party away from your
environment.
Managing access and entitlement is an essential step in reducing your cloud attack surface. This includes mitigating identity misconfigurations in order to
eliminate infiltration risk, and implementing least privilege access in order to minimize lateral movement, privilege escalation, or attack impact possibilities.
Cortex Cloud Identity Security can assist you with discovering your entire identity estate, fixing security gaps, and removing unused, excessive and risky
permissions to achieve the principle of least privilege. Additionally, you can use Cortex Cloud Identity Security to ensure that your environment meets any
relevant compliance standards.
Cortex Cloud Identity Security can correlate identity information with configuration data, giving you the required depth of visibility and control. For example, if
you use the Amazon S3 storage service, Cortex Cloud Identity Security can discover and identify sensitive data, the Cloud Network Analyzer (CNA) module
can calculate true internet exposure, and Cortex Cloud Identity Security can provide granular insights into exactly who has access to the data and make
appropriate recommendations to enforce least-privilege access.
You can use Cortex Cloud Identity Security to evaluate the effective permissions assigned to users, workloads, human identities, groups, roles, cloud service
accounts, applications, identity providers (IdPs), and external accounts on your cloud provider so that you can properly administer identity and access
management (IAM) policies and enforce access using the principle of least privilege.
Visibility: Discover your entire cloud identity estate and get a detailed inventory of all the identity assets in your environment. You can also get a detailed
and precise modeling of who has permissions for which actions, and on which assets.
Posture: Using a set of detection rules, find all privilege and misconfiguration security risks, with detailed reports of where exactly the issues are
occurring and why they are important.
Detection and response: Detect identity-related security events in real time and trigger automatic responses to make sure attackers do not gain access to
your environment.
Compliance: Test your identity estate against a wide set of compliance standards, and get a detailed report of what needs to be fixed in order for your
assets to be 100% compliant.
Remediation: Use Cortex Cloud Identity Security to create fixes for all your security and compliance issues.
Abstract
The following explains the principle concepts of Cortex Cloud Identity Security:
Asset categories
The Cortex Cloud Identity Security inventory is organized according to these asset categories:
Human identities: All cloud, identity provider (IdP), and platform users.
Non-human identities: Machine identities, such as VMs and functions, that can assume permissions and perform cloud Identity and Access Management
(IAM) actions.
Cloud service accounts: A category unifying AWS roles, Microsoft Azure service accounts and managed identities, and Google Cloud Platform (GCP)
service accounts.
Policies: Permission documents, such as AWS policies, Azure roles, and GCP roles.
One of the core capabilities of Cortex Cloud Identity Security is its ability to analyze cloud permissions. Once you have onboarded your cloud organization,
Cortex Cloud Identity Security gathers all the relevant information necessary for calculating permissions and then calculates the net effective permissions in
your environment, which is a precise depiction of which identity can perform which specific actions and where they can be performed. Net effective
permissions are used to provide context for other product features such as access tables, access to cloud resources and services count, as well as detection
rules and issues.
Example 20.
Source: The identity that can perform an action. This can be a human identity or a non-human identity. In certain cases, permissions for cloud service
accounts are calculated as sources as well. In some special cases, such as where public access is granted, or when permissions are granted to an
entire specific cloud account, a source can be named “all”. In such cases, notice the source’s cloud account to identify whether the permissions are
granted to the public or to all entities within an account.
Destination: The resource on which the source can perform actions. In cases of wildcards, the relevant wildcard appears.
Policy: The document where permissions are written, such as an IAM policy, Azure or GCP role, resource-based policy, or inline policy.
Granter: The asset that connects the source and the policy. Traditionally, this can be a group or a cloud service account. In the case of a direct
attachment, or the use of inline policy, the granter and the source would be the same asset. In the case of a resource-based policy, the granter and the
destination would be the same asset.
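The four elements above can be sketched as a simple record; the class and field names below are illustrative only and are not part of any Cortex Cloud API.

```python
from dataclasses import dataclass

@dataclass
class NetEffectivePermission:
    """Illustrative record for one net effective permission (not a product API)."""
    source: str       # identity that can perform the action ("all" for public or account-wide grants)
    destination: str  # resource the action applies to; may contain a wildcard
    policy: str       # document granting the permission (IAM policy, role, or inline policy)
    granter: str      # asset linking source and policy; equals the source for direct or
                      # inline attachment, and equals the destination for resource-based policies

# Hypothetical example of an inline policy attachment, so granter == source.
perm = NetEffectivePermission(
    source="role/app-server",
    destination="arn:aws:s3:::billing-data/*",
    policy="inline:app-server-s3",
    granter="role/app-server",
)
assert perm.granter == perm.source
```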
Once you have onboarded to Cortex Cloud Identity Security, information about permission usage is collected. Cortex Cloud Identity Security keeps track of the
last time each permission was used, creating a live picture of which permissions are used, and which are not. This information is presented in various features
in Cortex Cloud Identity Security, helping you quickly identify permissions that are unused for 90 days or longer, and should therefore be considered for
removal.
Defining AWS policies, Azure roles, or GCP roles that grant excessive permissions is considered a deviation from identity and permissions best practices.
Excessive policies are defined as those granting permissions on a very wide range of resources or allowing any action to be performed on resources.
AWS: A policy is considered excessive when it includes a full wildcard (Action and Resource are *), a service-level action wildcard (such as s3:*), or a
full wildcard in a resource, meaning that an action can be performed on all resources of a service.
Azure: A policy is considered excessive when a role contains a wildcard and is bound to an entity on a management group scope, or grants an action
wildcard and is bound to an identity at the subscription level.
GCP: A policy is considered excessive when binding a specific predefined role on the organization, folder or project level.
NOTE:
When a policy is analyzed and categorized as excessive, a relevant finding and highlight are attached to that policy and to the various identities granted
excessive permissions.
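As a rough illustration of the AWS wildcard rules above, a check over a single policy statement might look like the following sketch. The statement layout follows AWS IAM policy JSON; the function itself is hypothetical, not the product's detection logic.

```python
def is_excessive_aws_policy(statement: dict) -> bool:
    """Hypothetical check for the AWS excessive-policy conditions described above:
    a full wildcard, a service-level action wildcard (e.g. "s3:*"),
    or a full wildcard in the Resource element."""
    actions = statement.get("Action", [])
    resources = statement.get("Resource", [])
    # IAM allows a single string or a list in both elements; normalize to lists.
    if isinstance(actions, str):
        actions = [actions]
    if isinstance(resources, str):
        resources = [resources]
    full_wildcard = "*" in actions and "*" in resources
    service_action_wildcard = any(a.endswith(":*") for a in actions)
    resource_wildcard = "*" in resources
    return full_wildcard or service_action_wildcard or resource_wildcard

# A statement granting every S3 action on every resource is excessive.
stmt = {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
assert is_excessive_aws_policy(stmt)
```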
Access categorization
Access categorization provides you with a high-level view of the type of access each identity has without your having to analyze each permission one by one.
Read: Permits users to read data found in various cloud resources. For example, users with Read permissions may open and read files in object storage,
and run queries on databases.
Write: Permits users to write data into cloud resources, including writing or deleting files and editing rows in databases.
Config: Permits users to edit and configure cloud resources, such as editing firewall rules and adding users to groups.
Administrative: Permits users to perform highly privileged actions such as creating a new group or deleting an organization policy.
NOTE:
The administrative tag is an additional tag for administrative actions, along with one of the other access level types. For example, the AWS action
iam:CreateGroup is categorized as config and has the administrative tag as well.
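The categorization and the additional administrative tag can be pictured as a lookup table. The mapping below covers only a few hypothetical entries; the real categorization is maintained by Cortex Cloud and covers far more actions.

```python
# Illustrative action-to-access-level mapping (not the product's actual catalog).
# Each entry maps an action to (access level, has administrative tag).
ACCESS_LEVELS: dict[str, tuple[str, bool]] = {
    "s3:GetObject": ("read", False),
    "s3:PutObject": ("write", False),
    "ec2:AuthorizeSecurityGroupIngress": ("config", False),
    "iam:CreateGroup": ("config", True),  # config plus the administrative tag
}

def categorize(action: str) -> tuple[str, bool]:
    """Return (access level, administrative tag) for an action."""
    return ACCESS_LEVELS.get(action, ("unknown", False))

level, is_admin = categorize("iam:CreateGroup")
assert level == "config" and is_admin
```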
We recommend reviewing and removing all dormant identities from your environment, as well as reducing unused permissions. Inactive identities and unused
permissions in your environment pose an unnecessary risk and widen your security gaps against potential attacks.
Unused permissions: Using the last access feature, you may find unused permissions, which are mentioned throughout the various features of Cortex
Cloud Identity Security, such as the overview page and access tables. A permission is considered unused when at least 90 days have passed since it
was last used.
Inactive identities: Cortex Cloud Identity Security uses user login information and cloud service account usage to detect inactive users and inactive
cloud service accounts. Per the industry standard, users and cloud service accounts are considered inactive when they have not been active for at
least 90 days. When an asset is considered inactive, it is highlighted on the asset page and a relevant attribute appears on the asset, allowing you to
query for it in the inventory.
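The 90-day rule for both unused permissions and inactive identities reduces to a simple timestamp comparison; this sketch assumes a caller-supplied last-use timestamp.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

DORMANCY_THRESHOLD = timedelta(days=90)  # the 90-day industry standard described above

def is_dormant(last_used: Optional[datetime], now: datetime) -> bool:
    """A permission or identity counts as unused/inactive when it has never
    been used, or when its last use was at least 90 days before `now`."""
    return last_used is None or (now - last_used) >= DORMANCY_THRESHOLD

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
assert is_dormant(datetime(2025, 1, 1, tzinfo=timezone.utc), now)      # ~151 days ago
assert not is_dormant(datetime(2025, 5, 1, tzinfo=timezone.utc), now)  # 31 days ago
assert is_dormant(None, now)                                           # never used
```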
Account access
Two core capabilities of Cortex Cloud Identity Security are analyzing and identifying cross-account and external access.
Information about permissions to your environment and access to sensitive data, together with visibility into cross-account or external access, helps you
identify and remove unauthorized access.
In Cortex Cloud Identity Security, access is labeled in one of the following ways:
Same account access: When the source and the destination are in the same cloud account.
Internal known access: When the source and the destination are in different accounts, while both these accounts have been onboarded to Cortex Cloud
Identity Security. Additionally, permissions are categorized as internal known access when the source is from an onboarded SAML/OIDC provider; for
example, an onboarded Okta account.
Internal unknown access: When the source and destination are in different accounts, while one of the accounts is not onboarded but is part of the
onboarded organization (when not all accounts under the organization have been onboarded). Additionally, permissions are categorized as internal
unknown access when the source is from a non-onboarded SAML/OIDC provider; for example, a non-onboarded Okta account.
3rd-party access: When the source account belongs to a 3rd-party vendor whose account is known to Cortex Cloud Identity Security.
External unknown access: When the source and destination are in different accounts, and the source is in a non-onboarded account. Another example
of this type of access is when the source is from an unknown web identity provider.
Public access: When permission is granted to the public; for example, when an AWS role can be assumed by all users, or when an Amazon S3 bucket
has a widely permissive resource-based policy.
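The labels above can be illustrated with a hypothetical classifier. The account sets are inputs the caller supplies; the real product derives them from onboarding data and considers additional signals (SAML/OIDC providers, web identity providers, and so on).

```python
def label_access(src, dst, onboarded, org_accounts, known_vendors, public=False):
    """Hypothetical sketch of the access labels described above; checks run
    from most specific to least specific."""
    if public:
        return "public access"
    if src == dst:
        return "same account access"
    if src in onboarded and dst in onboarded:
        return "internal known access"
    if src in org_accounts or dst in org_accounts:
        return "internal unknown access"
    if src in known_vendors:
        return "3rd-party access"
    return "external unknown access"

onboarded = {"111", "222"}
org = {"333"}      # part of the organization but not onboarded
vendors = {"555"}  # known 3rd-party vendor accounts
assert label_access("111", "222", onboarded, org, vendors) == "internal known access"
assert label_access("333", "111", onboarded, org, vendors) == "internal unknown access"
assert label_access("999", "111", onboarded, org, vendors) == "external unknown access"
```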
Abstract
A wide variety of tools are used by security teams to grant or revoke permissions. Cortex Cloud Identity Security takes the following parameters into
consideration when calculating effective permissions.
Organization policy
Using organization policies, you can enable and disable various types of policies across accounts and organizational units. Cortex Cloud Identity Security
analyzes organization policies as part of the permission calculation for the following:
Deny statements
Deny statements can appear in the various permission tools in different cloud providers and are taken into consideration in the effective permissions
calculation, when they appear in the following:
GCP: Block actions are supported in the scope of organizations, folders, and projects.
"NotAction" element
In a policy, actions specified under the NotAction policy element are subtracted from the permissions that are granted in that policy.
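The NotAction subtraction can be sketched as follows; the action universe here is a small caller-supplied list, not a real AWS action catalog, and the function is illustrative only.

```python
import fnmatch

def effective_actions(universe, statement):
    """Illustrative NotAction handling: everything in the candidate action
    universe EXCEPT what the NotAction patterns match is granted."""
    not_actions = statement.get("NotAction", [])
    if isinstance(not_actions, str):
        not_actions = [not_actions]
    return [a for a in universe
            if not any(fnmatch.fnmatch(a, pat) for pat in not_actions)]

universe = ["s3:GetObject", "s3:PutObject", "iam:CreateUser"]
stmt = {"Effect": "Allow", "NotAction": "iam:*", "Resource": "*"}
# All IAM actions are subtracted; the S3 actions remain granted.
assert effective_actions(universe, stmt) == ["s3:GetObject", "s3:PutObject"]
```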
Permissions boundaries
You can use the IAM permissions boundary tool to limit the amount or scope of permissions granted to a principal.
Resource-based policies
Resource-based policies are permissions that are configured on the destination resource. You can use resource-based policies to grant or deny permissions
on a resource that is based on multiple parameters; for example, allowing or preventing a certain principal from acting on a resource.
AWS: Resource-based policies are calculated for the following services: AWS Lambda, Amazon S3, Amazon Simple Queue Service (SQS), Amazon
Simple Notification Service (SNS), AWS Secrets Manager, AWS Key Management Services (KMS), and Amazon Elastic Container Registry (ECR).
Azure: Permissions granted at any scope are considered for the effective permissions calculation.
GCP: Resource-based policies are calculated for the following Google Cloud services: Cloud Storage, BigQuery, Cloud Pub/Sub, Cloud Key
Management (KMS), Cloud Spanner, Cloud Run, Compute Engine, Cloud Functions, and Dataproc.
The Cortex Cloud Identity Security dashboard provides an overview of various aspects of identity posture, which you can use to quickly assess your identity
posture and the main identity security gaps. Each widget on the dashboard provides you with a specific context that you can use to learn about your
environment, understand the most pressing issues, and access the relevant pages in the product so you can remediate them.
Identity inventory
The row of identity inventory boxes across the top of the dashboard shows you how many assets have been discovered in each asset category:
Human Identities: Displays all cloud, identity provider (IdP), and platform users.
Non-human Identities: Machine identities that can assume permissions and conduct cloud IAM actions. This includes, for example, VMs and functions.
Cloud Service Accounts: A category unifying AWS roles, Azure service principals and managed identities, and GCP service accounts.
Policies: Permission documents, such as AWS policies and Azure and GCP roles.
See the Top Critical Issues to Address area to view a refined list of the most important identity issues to take note of in your environment. Issues are prioritized
based on the severity of detection rules, the number of violating assets, and their association with MITRE tactics and techniques. Use this view to quickly focus
on the most critical security gaps and strengthen your security posture.
Use the Identity Security Findings widget to find common misconfigurations in your environment:
The Identities with excessive policies section shows identities such as users, machines, groups, and cloud service accounts with excessive policies
attached. The definition of excessive policy includes:
AWS: A policy is considered excessive if it includes a full wildcard (Action and Resource are *), a service-level action wildcard (such as
s3:*), or a full wildcard in a resource (meaning an action can be performed on all the resources of a service).
Microsoft Azure: When a role contains a wildcard and is bound to an entity on a management group scope, or grants an action wildcard and is
bound to an identity at the subscription level.
GCP infrastructure platform: When binding a specific predefined role on the organization, folder, or project level.
Admins Summary
The Admins Summary widget provides you with valuable insight regarding the identities that are granted administrative permissions, in any way possible, in
your environment. Admins are not necessarily found only in your administrators group, because identities can be granted administrative permissions in
numerous ways, whether intentionally or unintentionally. Use this widget to analyze how many administrators there are in your environment, and at which
level they are granted administrative permissions.
In the Top Identities at Risk widget, you can view the identities with the highest number and severity of risks, to identify the riskiest identities in your
environment.
3rd-Party Access
Use the 3rd-Party Access widget to analyze third-party access to your environment. This is supported only in GCP and AWS.
For each vendor that has access to your environment, you can see:
Which level of access the vendor has in your environment according to access levels.
Use case 2: Use identity issues to improve your identity posture
You can find your identity issues in one of the following ways:
By filtering the issues page according to issues detected by Cortex Cloud Identity Security.
By following the identity issues link under the Cortex Cloud Identity Security module menu.
When opening each identity security issue, you are provided with relevant evidence, such as an explanation about the issue, what's causing it, why it is
important and what you can do to fix it. Additionally, you are provided with detailed information about the risky permissions, misconfigurations, or any other
detail relevant to understanding and solving the issue. For more information, see Use case 5: Remediate identity security and compliance issues.
The identity inventory is built on scanning and discovering all your cloud assets after having completed your onboarding to Cortex Cloud Identity Security. Your
identity inventory comprises five categories:
Non-human Identities: Machine identities that can assume permissions and conduct cloud IAM actions. This includes, for example, VMs and functions.
Cloud Service Accounts: A category unifying AWS roles, Azure service principals and managed identities, and GCP service accounts.
Policies: Permission documents, such as AWS policies and Azure and GCP roles.
Each asset has related attributes that you can use to analyze your inventory data to gain insights about your environment. Attributes can be information
collected directly from the cloud, or calculated by Cortex Cloud Identity Security to provide you with more context about your assets.
There is an Access to Resources attribute for identities that sums up the number of resources that an identity has access to, that a cloud service account
grants access to, or that a group grants access to. For wildcard permissions, only resources that are considered assets on the platform are counted. Wildcard
permissions for other resources are counted as one resource.
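The wildcard counting rule can be illustrated with a short sketch; the asset list and the glob-style matching below are simplified assumptions, not the product's implementation.

```python
import fnmatch

def access_to_resources(destinations, platform_assets):
    """Illustrative count for the Access to Resources attribute: explicit
    destinations count individually; a wildcard destination counts each
    matching platform asset, or one resource when no tracked asset matches."""
    counted = set()
    for dest in destinations:
        if "*" in dest:
            matches = fnmatch.filter(platform_assets, dest)
            counted.update(matches if matches else [dest])
        else:
            counted.add(dest)
    return len(counted)

assets = ["bucket-a", "bucket-b", "db-1"]
# The wildcard expands to the two matching buckets; "db-1" adds one more.
assert access_to_resources(["bucket-*", "db-1"], assets) == 3
```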
You can review an identity's access by using the access table, which is found on the Access tab of an asset, and appears for the following assets:
Groups
Because each asset plays a different role in a permission (a human identity performs actions, while a group grants permissions), each asset is associated
with a slightly different table, showing the permission information relevant to that asset.
Note that some assets can have more than one role in permissions. For example, a non-human identity can be both a source and a destination. For that
reason, such assets get two tables. For example, a cloud service account has the following two tables:
What can this cloud service account do?
To whom does it grant permissions?
For each row, which represents a direct relationship between a source, a destination and a granter, the access table shows additional information about the
relevant permission, such as:
How many unused and excessive permissions are granted. Clicking on the number of unused or excessive permissions opens a new window, showing a
detailed list of all these permissions.
Which sensitive data labels are found on the permission's destination (where relevant).
Additionally, the destination asset's overview page shows a list of identities that can access it. This table displays the top 100 most permissive identities for
the asset, ordered according to the latest access performed on the asset.
Use case 5: Remediate identity security and compliance issues
Identity issues include specific evidence, meant to provide you with the relevant information to help you understand:
Where is it occurring?
Which MITRE tactics and techniques are involved with this issue?
Follow the evidence information and instructions to analyze and remediate the issues.
For various reasons, you may choose to grant a third-party vendor access to your account. However, are you monitoring for over-privileged third-party
vendors? Are you removing unused permissions granted to third-party vendors? Use the following capabilities to ensure that no excess access is granted to
third-party vendors:
Dashboard widget: Use the 3rd-Party Access dashboard widget to view all vendors with access to your environment. You can see how many resources
each vendor can access, and at which access levels.
Access tables: Filter a destination's access table on the 3rd-party access label to view all third-party vendors that can access the
destination. You can then see what each vendor can do with the asset, whether each vendor has been granted excessive access, and whether there are
any unused permissions that can be removed.
Abstract
Vulnerability management helps you identify, assess, prioritize, and remediate security vulnerabilities across your entire IT infrastructure including endpoints,
code, and cloud.
Managing vulnerabilities effectively is crucial to proactively maintaining the security, integrity, and availability of IT infrastructure. Cortex Cloud provides a
comprehensive vulnerability management platform, helping you identify, assess, prioritize, and remediate security vulnerabilities across your entire IT
infrastructure including endpoints, code, and cloud.
Cortex Cloud leverages advanced detection techniques, real-time threat intelligence, and automated workflows to streamline the vulnerability management
process. This allows your security team to focus on the most critical issues, reduce risk exposure, and ensure compliance with industry standards and
regulations.
Cortex Cloud helps identify and prevent vulnerabilities across the entire application lifecycle, while prioritizing risk for your cloud-native environments. Integrate
vulnerability management into any CI process, while continuously monitoring, identifying, and preventing risks to all the hosts and images in your environment.
Cortex Cloud combines vulnerability detection with an always up-to-date threat feed and knowledge about your runtime deployments to prioritize risks
specifically for your environment.
NOTE:
Cortex Cloud vulnerability management provides the ability to identify and assess runtime vulnerabilities in every asset across traditional IT and cloud
environments. For vulnerabilities detected in your software development lifecycle through application security scans, refer to the Cortex Cloud Application
Security documentation.
Abstract
Vulnerability
A vulnerability is a CVE or other known software security weakness that can occur in a network or system. Vulnerabilities are typically defined by the National
Vulnerability Database (NVD) and other established security information sources, such as GitHub Security Advisories or Red Hat Security Advisories.
NOTE:
CVE is an acronym for Common Vulnerabilities and Exposures, which is a list of publicly disclosed security threats. We often use the term "CVE" to refer to a
vulnerability that has been assigned a CVE ID. Cortex Cloud identifies both CVE and non-CVE vulnerabilities.
Vulnerability findings
A vulnerability finding is a specific instance of a vulnerability that was discovered in your system through a vulnerability scan. Findings include both actionable
and informational context, including information about the asset on which the vulnerability was discovered. Some findings might be critical and should be
addressed as soon as possible, while others are less important and don't require any action at all. Cortex Cloud applies vulnerability policies to findings to prioritize
them and create issues for the ones that are most critical to remediate.
Vulnerability issues
Cortex Cloud creates a vulnerability issue when a specific instance of a vulnerability in your environment matches a vulnerability policy. Each issue has a
priority, an assignee, and a progress status associated with it. Issues also provide contextual information about the asset on which the issue is found,
exploitability, and other information required for remediation and mitigation.
Abstract
Visualize your most pressing risks, changes to risk over time, and remediation progress on the Vulnerability Management dashboard.
Vulnerability management analysts and managers can use the Vulnerability Management dashboard to visualize their most pressing risks, changes to risk over
time, and remediation progress.
Navigate to Dashboards & Reports and select Vulnerability Management from the dropdown menu.
Abstract
A vulnerability policy defines the action you want to take for a specific set of vulnerability findings.
A vulnerability policy defines the action you want to take for a specific set of vulnerability findings that match your policy criteria. Cortex Cloud provides a set of
predefined vulnerability policies based on CVSS severity, EPSS severity, and vulnerabilities confirmed through Attack Surface Testing. You can also create
custom policies based on your unique business requirements. Custom policies allow you to focus on the risks that matter most to your organization. Some
examples of custom vulnerability policies include the following:
A policy that creates issues with a severity of low for findings that appear on dev servers
A policy that specifies not to create issues for findings on assets in the asset group Leased to customers
A policy that creates issues with a severity of critical for vulnerabilities that appear on the CISA KEV list and are in the asset group called Production
Servers, regardless of CVSS score
A policy that prevents an image containing code with a CVE that has an EPSS score greater than 90% from being deployed to a Kubernetes cluster
Each time a new vulnerability finding is discovered, the system compares that finding to your vulnerability policies to determine whether one of the policies is a
match. Vulnerability policies have an evaluation order, which means the system starts by evaluating the finding against the first policy. If it does not match, the
second policy is evaluated for a match. As soon as a finding matches a policy, no further policies are evaluated for that finding.
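This first-match evaluation can be sketched as an ordered list of condition/action pairs; the conditions below are hypothetical stand-ins for the filter criteria you configure in the product.

```python
def evaluate_finding(finding, policies):
    """First-match evaluation as described above: policies are checked in
    order and the first whose condition matches decides the action."""
    for condition, action in policies:
        if condition(finding):
            return action
    return None  # no policy matched this finding

# Hypothetical ordered policies: a specific rule first, a generic one after it.
policies = [
    (lambda f: f["epss"] > 0.9, "create_critical_issue"),
    (lambda f: f["env"] == "dev", "create_low_issue"),
]
# A high-EPSS finding on a dev server hits the first policy only.
assert evaluate_finding({"epss": 0.95, "env": "dev"}, policies) == "create_critical_issue"
assert evaluate_finding({"epss": 0.10, "env": "dev"}, policies) == "create_low_issue"
```

Placing the more specific high-EPSS policy first matters: swapping the order would let the generic dev-server policy capture the critical finding.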
The following sections describe the elements that make up a vulnerability policy:
Vulnerability policy conditions and scope define the specific set of findings that a policy applies to. You define the conditions by configuring a filter with criteria
for including and excluding findings. You define scope by creating one or more asset groups and adding assets to those groups in the Assets view. Once the
asset groups are created, you can select one or more of them in the policy creation process; this limits the scope of that policy to only the assets in the
chosen asset groups.
Policy actions
Policy actions are the actions the policy performs automatically on vulnerability findings that match the policy conditions and scope. There are two types of
policy actions: issue creation and prevention.
Issue creation actions either create an issue and set the issue severity for matching findings, or ignore matching findings and do not create an issue.
Prevention actions
Prevention actions prevent vulnerabilities from being introduced into your systems by failing a build or blocking deployment. Available actions are described in
the table below.
Kubernetes pod actions:
Block new deployments: New deployments are blocked by the Kubernetes Admission Controller when vulnerabilities matching the policy conditions are
detected in an image. This requires that the agent be installed and activated.
Do nothing: No action is taken for matching findings on Kubernetes clusters with the Kubernetes Admission Controller activated.
Build actions:
Fail the build: Fails the build in your CI/CD system when an attempt is made to check in code that includes a vulnerability that matches the policy
conditions. This requires that the agent be installed and activated on your CI/CD system.
Do nothing: No action is taken for matching findings from code repository assets where the agent is activated.
Policy order
The order of policies in the policy list is important. Policies are executed in order from top to bottom, and the first policy that matches a finding determines the
action on that finding. After that first match, no other policies are evaluated. We recommend placing your most important and most specific policies toward the
top of the list and wider-reaching, more generic policies towards the bottom of the policy list.
Policy 0 is the Globally Ignored CVEs, Assets, and Asset Groups policy. It includes a list of CVEs and assets for which Cortex Cloud will not create vulnerability
issues. You can update the Globally Ignored CVEs, Assets, and Asset Groups policy by adding or removing CVEs, asset groups, and assets, but you cannot
move the policy down the list to change its order.
Before creating a policy, be sure to review the information in the Vulnerability Policies section.
3. Add a Policy Name and, optionally, a Description, and then click Next.
4. Set the policy conditions by creating a query that defines the specific findings for which the policy will create issues. Your policy can specify which
findings to include and which to exclude.
Preview the list of findings that match your policy. If the results look correct, click Next.
5. Define the policy scope by selecting one or more asset groups from the dropdown menu. If you don't choose an asset group, the policy will apply to all
assets.
If you want to create a new asset group, click Create New Asset Group to open the Asset Groups page in a new browser tab. Click + Add Group and
follow the instructions in the wizard. After you've created the new asset group, go back to your original tab and finish creating your policy with the new
asset group.
Click Next.
6. Choose the action that will be executed on the findings that match the policy. If you select Create an issue for each matching finding, you must also
select the issue severity that will be applied to those issues. You can base the severity of the issue on the severity of the underlying CVE by selecting
Use Default CVE Severity in the dropdown menu.
7. Click Done.
The policy wizard will close, and you will be redirected back to the Vulnerability Policies page.
By default, new policies are added to the bottom of the policy list. To move a policy up or down in the list, click and hold the arrows in the Name column and
drag the policy to the desired position in the list.
We recommend placing wider-reaching, more generic policies towards the bottom of the policy list, and more specific policies towards the top of the list.
Click Save.
6.4.2.2 | Update the Ignored CVEs, Asset Groups, and Assets policy
Policy number 0 in the policy list is the Ignored CVEs, Asset Groups, and Assets policy. This policy contains a list of vulnerabilities and assets for which Cortex
Cloud will not create vulnerability issues. Findings will still be created for these vulnerabilities and assets, and you can review those on the Vulnerabilities and
Vulnerable Assets pages. You can update the Ignored CVEs, Asset Groups, and Assets policy at any time by using the following steps to add or remove
assets, asset groups, and CVEs.
2. The first policy in the policy list is the Ignored CVEs, Asset Groups, and Assets policy. Click on that policy to open the policy wizard.
3. Add or remove vulnerabilities, asset groups, and assets as needed. Click Next.
To add CVEs, asset groups, or assets, use the search bar in each section to find the value you are looking for, and select it to add it to the list.
To remove CVEs, asset groups, or assets, click the X to the right of each item in the list.
4. Review the Results Preview to see the list of findings that will not generate issues. If the list looks correct, click Done.
2. Select either the Issue Creation or Prevention tab, depending on the type of policy you want to modify.
3. Click on the name of the policy in the policy list to open the policy wizard. You can also right-click anywhere in the row and select Edit.
2. Select either the Issue Creation or Prevention tab, depending on the type of policy you want to modify.
3. Right-click anywhere in the row for that policy and select Enable or Disable.
Cortex Cloud provides several ways to view and track vulnerability data so you can monitor, investigate, and remediate vulnerabilities in your environment.
Abstract
The Vulnerabilities page displays all your vulnerabilities grouped by CVE or other vulnerability ID.
The Vulnerabilities page displays all your vulnerabilities grouped by CVE or other vulnerability ID. This view shows you how prevalent each vulnerability is in
your environment. The Vulnerabilities page includes key information about each vulnerability, with links to the related lists of instances (also called findings),
related issues, and impacted assets.
Abstract
The Vulnerability Issues page displays all your vulnerability issues along with critical vulnerability intelligence and context.
The Vulnerability Issues page displays all vulnerability issues along with critical vulnerability intelligence and context so you can assign an issue to an owner,
investigate, remediate, and track progress. Click on an issue in the table to display the issue details.
Abstract
The All Vulnerability Findings page displays every instance of every vulnerability that was discovered in your environment.
A vulnerability finding is a specific instance of a vulnerability that was discovered in your environment. The All Vulnerability Findings page lists every instance of
every vulnerability that was discovered in your environment.
Go to Posture Management → Vulnerability Management → Vulnerability Issues and click the All Vulnerability Findings button.
Abstract
The Vulnerable Assets page displays all assets with a vulnerability finding.
The Vulnerable Assets page displays all assets with a vulnerability finding. This view enables you to prioritize vulnerabilities by asset and asset type and focus
on assets most critical to fix. The Vulnerable Assets list provides links to the findings and issues for each asset. Click on an asset in the table to see the asset
details.
Abstract
Vulnerability Intelligence is an in-product, real-time feed that provides vulnerability data and threat intelligence from a variety of certified upstream sources.
Vulnerability Intelligence is a real-time feed that contains vulnerability data and threat intelligence from a variety of certified upstream sources. This feed
continuously pulls data from known vulnerability databases, official vendor feeds, and commercial providers to provide the most accurate vulnerability
detection results.
CVE metadata, such as description, impact, severity, and CVSS v2/v3 scores
Exploit intelligence, such as exploit availability, maturity, exploitability, and EPSS scores
2. (Optional) Click a row in the Vulnerability Intelligence table to view detailed information about the vulnerability.
The details page has an Overview tab, which provides information about the vulnerability and an Affected Software tab, which shows information about
all the software packages impacted by the vulnerability.
Abstract
Customize CVSS scores and CVSS severities in the platform to align your risk management approach with your organizational context and priorities.
In some situations, you might decide that a specific vulnerability poses a different level of risk to your environment than what is reflected in the original CVSS
score or CVSS severity. In Cortex Cloud you can override the CVSS score or severity within the platform. Customizing CVSS scores and severities enables you
to align your risk management approach with your unique context and priorities.
When a CVSS score or severity is recast, the change is applied platform-wide, updating both existing and new vulnerability findings. This ensures consistency
in how vulnerabilities are assessed and managed across the organization. After the CVSS score or severity is updated, the system automatically updates all
affected findings within about one hour.
You can view both the original CVSS score and severity and the new values on the vulnerability details page in Vulnerability Intelligence.
2. Use the filters to find the vulnerability in the Vulnerability Intelligence table.
3. Click in the row for the vulnerability to open the vulnerability details panel.
4. Click the Options icon in the upper right corner and select Override Severity or CVSS.
5. Enter the new severity and score, and then click Save.
Perform these steps to display the complete list of vulnerabilities with overridden CVSS severities and CVSS scores.
2. Click the Show Overridden CVSS button in the upper right corner.
You can also use the filter Severity Source Contains Custom Override to display the list of vulnerabilities with overrides.
Cortex compliance provides a snapshot of your overall compliance posture for various compliance standards.
Cortex enables you to determine asset vulnerabilities and risk by checking whether assets adhere to industry standards or your organization's best practices
for compliance.
You can view all compliance related details under Posture Management → Compliance in the tenant.
The following steps describe the flow for evaluating asset compliance.
Step 1. Decide which compliance standard to use. Choose compliance standards from the compliance Catalog
Step 2. Create a compliance assessment. Use an assessment profile to run compliance checks on your assets
Step 3. Review the results. View and manage compliance assessment reports
The compliance catalogs provide a list of available compliance standards and controls.
Cortex provides lists of available standards and controls in the Standards and Controls catalogs under Posture Management → Compliance → Catalogs.
Standards are guidelines that organizations follow in order to comply with industry best practices and regulations, as well as internal organizational policies
and procedures. They improve security and quality in operational practices.
Standards consist of controls, which are measures related to the standard that ensure compliance and mitigate risks. Controls are built from one or more rules,
the specific checks that run on an asset. Controls can be grouped into categories, for example RBAC and Pod security.
The Standards and Controls catalogs include built-in industry standards and controls and custom organizational standards and controls.
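The standard → control → rule hierarchy described above can be modeled as a simple data structure. The names below are illustrative assumptions, not actual catalog entries:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    rule_id: str       # the specific check that runs on an asset
    description: str

@dataclass
class Control:
    name: str
    category: str      # e.g. "RBAC" or "Pod security"
    rules: list[Rule] = field(default_factory=list)

@dataclass
class Standard:
    name: str
    controls: list[Control] = field(default_factory=list)

# A standard consists of controls; each control is built from one or more rules.
pod_security = Control(
    "Restrict privileged pods", "Pod security",
    [Rule("k8s-no-privileged", "Containers must not run privileged")],
)
example_standard = Standard("Example Kubernetes standard", [pod_security])
print(len(example_standard.controls[0].rules))  # 1
```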
Abstract
Review the list of all the built-in and custom compliance standards to monitor and audit your organization’s performance.
The Standards Catalog page shows cards of the available standards and their details, including:
Links to the controls, assessment profiles, and labels associated with the standard. Clicking these links opens pages with more details.
You can use a built-in industry standard, create a custom standard, or edit a custom standard. A custom standard can be either a copy of a built-in standard or
a custom standard created from scratch.
Cortex provides built-in, industry-approved regulatory compliance standards, for example GDPR. These standards cannot be edited or deleted, but you can
duplicate them to create a custom standard.
You can create a custom compliance standard that is tailored to your own business needs and organizational policies.
Standard name
Description (optional)
Labels (optional)
3. Click Next.
You can use the filter to search for a specific control. For more information about choosing a control, see Controls catalog.
5. Click Create.
You can edit a copy of a built-in industry standard or edit an existing custom standard. You can also delete a custom standard.
1. In the Standards catalog, click on the built-in standard you want to edit and click Save as new.
To edit a custom standard, click on the custom standard and click Edit.
Description (optional)
Labels (optional)
3. Click Next.
4. Under Controls, assign one or more controls to the compliance standard. For more information about choosing a control, see Controls catalog.
5. Click Create.
Abstract
Review the list of all the built-in and custom compliance controls to monitor and audit your organization’s performance.
The Controls Catalog page shows a list of the available controls and their details, including:
Clicking a control opens a side panel that displays all the control details in the Overview tab, and the list of rules associated with the control in the Rules tab.
You can use a built-in control, create a custom control, or edit a custom control.
Cortex Cloud provides built-in controls that cannot be edited or deleted. When you create or edit a custom standard, you can add built-in controls to it.
You can create a new control that is tailored to your own business needs, standards, and organizational policies to use in a custom standard.
Control name
Description (optional)
Category
3. Click Create.
You can edit a copy of a built-in control or edit an existing custom control. You can also delete a custom control.
1. In the Controls catalog, click on the built-in control you want to edit and click Save as new.
To edit a custom control, click on the custom control and click Edit.
2. Click Next.
Control name
Description
Category
Sub category
4. Click Save.
5. If the control does not already contain a rule, assign a custom detection rule to the control as follows.
When creating or editing a custom control, you can assign a custom detection rule to one or more custom controls to tailor compliance checks to your
organization's needs.
NOTE:
Only custom detection rules (not built-in) can be assigned to custom controls.
2. Search for the rule you want to add and then click it.
5. In the Edit Custom Detection Rule pane Compliance Controls field, click Add.
6. In the Controls table, use the filter to find the control you want to assign the rule to.
The Compliance Controls number increases by one (the number of custom controls the custom detection rule is assigned to).
The compliance Assessment Profiles are configurations that define which standard to run on which asset group.
An assessment profile runs scans on asset groups to check whether the assets adhere to a specific standard.
To create a new assessment profile, select a compliance standard and one or more asset groups you want to run it on.
1. Under Posture Management → Compliance → Assessment Profiles, click Create New Assessment.
Profile name
Description (optional)
1. Enter one or more report email recipients, pressing Enter after each entry.
3. Click Next.
6. Click Next.
The assessment profile evaluates the compliance posture and generates a report at the optionally defined cadence, and sends it to the defined emails.
Create a compliance assessment report based on a Cortex compliance standard for immediate viewing or download, or schedule recurring reports to continue
monitoring compliance over time.
A compliance assessment provides you with a consolidated view of your organization's compliance with a selected standard. Compliance status is
automatically updated in the Assessments results page for you to view.
You can generate PDF or CSV reports and optionally receive them via email when you configure your assessment profile. You can also view a list of
compliance reports and download them from the Reports page.
Compliance score
The compliance score is calculated for each assessment profile. The score is based on the number of assets that passed or failed the rules in the standard,
and is represented as the percentage of controls that passed.
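A score of this shape (percentage of passed checks) can be computed as in the following sketch. The exact weighting Cortex applies is not documented here, so this is an assumption-level illustration only:

```python
def compliance_score(results: dict[str, bool]) -> float:
    """Percentage of checks that passed, where `results` maps a
    (rule, asset) check name to True (passed) or False (failed)."""
    if not results:
        return 0.0
    passed = sum(1 for ok in results.values() if ok)
    return round(100.0 * passed / len(results), 1)

# Example: 3 of 4 checks passed -> 75.0 (hypothetical check names)
checks = {
    "iam-mfa/admin-user": True,
    "s3-encryption/logs-bucket": True,
    "s3-public-access/logs-bucket": False,
    "rbac-least-priv/cluster-1": True,
}
print(compliance_score(checks))  # 75.0
```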
7.4.1 | Assessments
Abstract
The Assessment page shows the latest compliance assessment profile results. It provides an up-to-date, high-level compliance view.
The Assessment by Score widget, showing the score for each profile assessment (red, orange, or green)
The Assessment by Label widget, showing the score for each label (for example, AWS or Azure)
You can right-click a specific assessment profile and select View Profile Report, which opens the report generated by the assessment profile. The report
contains two tabs, Controls and Assets. You can also access this page by hovering over the end of the row and selecting the view arrow.
Controls tab
Compliance Score widget: Displays the overall compliance score for the assessment profile and when it was last checked.
Controls by Status widget: A pie chart indicating which controls passed, failed, or were not assessed for a specific asset group.
Controls by Severity widget: A pie chart indicating control severity level for an asset group. Possible values:
Critical
High
Medium
Low
Informational
Table showing controls and their rules grouped by category: Displays rules grouped by controls and categories, including:
Score: The category/control/rule score.
Asset status: Each number links to the Asset tab, filtered by control/rule with the status.
Issues: Links to the Issues table in a new tab, filtered for relevant issues.
Clicking the row for a specific control opens the Control Details side panel that shows information about the control in the Overview tab and the Rules tab.
Overview tab:
General Details: Includes the standards, category, sub-category, creation date, and automation status associated with the control.
Standard Mitigation Action: A predefined measure or step to address and reduce risk related to the control.
Assessment Results: Includes the asset group, linked issues, and linked findings.
Rules tab: Shows the following information about the rules in the control.
Rule name
Rule ID
Type
Severity
Clicking the row for a specific rule opens the Rule Details side panel.
General Details: Rule name, rule ID, type, severity, and scanned asset categories.
Remediation steps: Actions from the standards provider or from custom controls to correct or resolve asset non-compliance identified during the
assessment.
Assessment Results: Includes the asset group, linked issues, and linked findings.
Assets tab
Compliance Score widget: Displays the overall compliance score for the compliance assessment and when it was last checked.
Assets by Status widget: A pie chart indicating which assets in an asset group passed all rules, failed one or more rules, or were not assessed.
Table showing assets: The distinct checks run for every asset covered by the assessment profile. Every row in the table represents a rule per asset for this standard.
Asset type: For example, storage bucket, endpoint, VM instance, human identity.
Clicking the row for a specific asset opens a side panel showing asset details organized under the following tabs:
Overview
SBOM
Access
Vulnerabilities
View Control Side Panel: Opens the Control Details side panel.
View Rule Side Panel: Opens the Rule Details side panel.
7.4.2 | Reports
Abstract
The Posture Management → Compliance → Results → Reports page shows a table listing compliance assessment report files.
Standard name
Assessment profile
Asset group
Score
Controls status
Evaluation time
You can download a report by right-clicking a report file in the table and selecting Export to PDF or Export to CSV. You can optionally delete reports.
You can also generate PDF or CSV reports and optionally receive them via email when you configure your assessment profile. For more information, see Use
an assessment profile to run compliance checks on your assets.
Exported reports include the file type, the information included (for example, asset group and standard details), and the file retention period after report generation.
ASPM groups assets within your environment such as repositories, CI pipelines, container images, and cloud runtime assets, allowing for focused monitoring
and safeguarding of high-priority assets. This enables you to identify, prioritize, and remediate critical issues that could impact key business systems and
critical infrastructure, ensuring continuous monitoring and maintenance of an application's security posture and effectively reducing the risk of vulnerabilities
and breaches.
Applications: Insights into the SDLC of your critical business and infrastructure applications
Code to cloud: Visualize and understand the relationship between your source code and deployed cloud resources, enabling you to identify and
prioritize risks associated with your deployments
CI/CD systems: Monitor and analyze the security configurations and activities within your CI/CD pipelines to identify and mitigate risks introduced
during the build and deployment processes
Third party ingestion: Integrate security findings from third-party scanners and security tools to gain a centralized view of your security posture and
correlate findings across your development lifecycle
Supply Chain security: Secure your SDLC by gaining visibility into and controlling third-party dependencies, open-source components, build artifacts,
and CI/CD pipeline configurations and activities. Identify and mitigate risks introduced throughout the build and deployment processes, ensuring a
secure software delivery
Contextual risk prioritization and proactive detection: Prioritize remediation efforts based on a data-driven risk assessment that combines code-level
vulnerabilities, runtime behaviors, and infrastructure configurations. Proactively detect critical issues such as exposed application secrets, IaC
misconfigurations, SCA CVE vulnerabilities, package operational risks, and license miscompliance, all within the context of your application's architecture
and potential impact on business operations. This allows you to focus on the most impactful threats and address them before they are exploited,
ensuring a robust security posture
Effective prevention: Enforce security policies and prevent security risks from impacting your applications
Actionable remediation: Improve your security posture with actionable remediation guidance for identified security risks. The platform offers automated
remediation for IaC misconfigurations and CVE vulnerabilities, in addition to clear steps for manual fixes for all categories of detected risks
To access application inventories, under Inventory, select All Assets → select an option under Application.
The All Applications asset inventory provides visibility into all applications and their interconnected assets generated throughout your software development
lifecycle (SDLC), serving as a centralized repository for application inventory management. Additionally, the interface details the risks detected in your
applications, allowing you to prioritize, manage, and mitigate potential threats based on business or infrastructure criticality.
To access the All Applications asset inventory, under Inventory, select All Assets → All Applications (under Assets).
The All Applications asset inventory includes a dashboard and a table listing applications. The dashboard includes the following widgets:
Provider: A grouping of applications based on their primary provider, either VCS (version control systems) or Cloud Provider
Class: Grouping of applications according to industry standards. For example, Compute, Network, and Storage
Category: Grouping of applications based on their primary function or purpose. The two main categories are Infrastructure and Business
Read more...
Field Description
Cases Breakdown The categories of cases detected in the application, including a count of each type of case
Critical Cases Cases that require immediate attention to ensure timely mitigation and minimize potential damage
Issues Breakdown The categories of issues detected in the application, including a count of each type of issue
Critical Issues Issues that require immediate attention to ensure timely mitigation and minimize potential damage
First Observed The date and time when the application was first detected during a scan
Last Observed The date and time when the application was last detected during a scan
Click on an asset in the inventory table to open a detailed Asset card, which provides additional, more in-depth information about the asset. The information is
organized into tabs, including an Overview tab (default display) which provides highlights and a general summary, while contextual tabs focus on particular
properties of the asset. Additionally, the card includes details about detected risks, allowing you to further explore them directly from the asset inventory. You
can also perform actions on the asset using the Actions menu.
Asset summary
Displayed at the top of the side panel, the asset summary provides concise details of the asset's key attributes, including the name of the application and its
assigned categories.
Overview
Highlights include:
Critical issues: An aggregation of critical issues across the associated assets of the application
Publicly exposed (internet exposed image): Whether the application or any associated asset is exposed to the internet
Repository visibility: Whether a repository associated with the application is exposed to the internet
Asset details, including Asset Id, Asset Types and Asset Groups associated with the application
Application risks:
Risk summary: The number of risks associated with the application assets, grouped by category (cases, issues, and findings) and severity
level. For more information about risks, refer to Cortex Cloud Application Security code scanners
Risk Score: A value representing the overall security risk of an application, based on various underlying metrics. This helps in assessing and
prioritizing the application's security posture and potential vulnerabilities
Business or Infrastructure Criticality: As defined when creating the application. See How to build an application for more information
Application owners: includes the business or infrastructure owner, such as product, DevOps, and development teams
Configuration
The Configuration tab displays an inventory of IaC misconfigurations across all application assets:
Severity level (icon): Indicates the level of severity of the IaC misconfiguration
Asset Name: The name of the IaC resource in which the misconfiguration occurred
Assigned To: The person or team responsible for addressing the misconfiguration
Vulnerabilities
The Vulnerabilities tab displays an inventory of SCA CVE vulnerabilities across all application assets.
CVSS Score: The Common Vulnerability Scoring System score that quantifies the severity of the vulnerability
Assigned To: The person or team responsible for addressing the vulnerability
Package Integrity
The Package Inventory tab includes an inventory of package operational risk issues and package license miscompliance issues, offering a comprehensive
view of the package's overall health and compliance.
Severity level (icon): Indicates the level of severity of the package license miscompliance
License Name: The name of the license associated with the package. This indicates the specific license agreement that is potentially being violated
Asset Name: The name of the asset that uses the package with the license miscompliance. This identifies where the license issue occurs
Branch: The branch of the codebase where the asset with the license issue is located
Asset Name: The name of the asset that uses the package with the package operational risk
Branch: The branch of the codebase where the asset with the package operational risk is located
Assigned To: The person or team responsible for addressing the package operational risk
Creation Date: The date when the package operational risk was initially detected
Code Weaknesses
The Code Weakness tab provides an inventory of ingested SAST (Static Application Security Testing) CWEs (Common Weakness Enumerations) identified
within the application and its associated assets. Each CWE is listed with its corresponding severity level, allowing you to prioritize remediation efforts based on
the potential impact on the application's security posture.
Severity level (icon): Indicates the level of severity of the CWE issue
Assigned To: The person or team responsible for addressing the CWE issue
Creation Date: The date when the CWE issue was detected
For more information about third party SAST ingestion, refer to Overview.
Secrets
The Secrets tab displays an inventory of exposed Secrets across all applications.
Severity level (icon): Indicates the level of severity of the exposed Secrets
Assigned To: The person or team responsible for addressing the Secrets
Creation Date: The date when the Secrets were initially detected
Application Topology
The Application Topology tab visually maps the interconnected assets of the application, providing a representation of their relationships and their position
within the software development lifecycle (SDLC). You can view the topology either as a visual representation or as an asset inventory by selecting the Graph
(default) or Inventory tabs respectively. The application inventory provides these details of the application assets:
Provider: The version control systems associated with the application, such as GitHub, GitLab, and Bitbucket
Repository Org.: Repository management entities such as Organizations for GitHub, Groups for GitLab, and Workspaces for Bitbucket
CI Instances: The application components providing continuous integration and delivery processes:
Instance: The specific instance within the CI system that is responsible for executing the integration and delivery processes for the application
CI File Path: The file path or location within the repository where the continuous integration process is initiated. This path specifies the code or
configuration files that trigger the CI pipeline
Repository: The location where the application code and related files are stored and managed
CI Instance: CI tools such as Jenkins and CircleCI used to execute the application continuous integration process
Provider: The version control system that offers the CI/CD services or platform being used for integration and delivery
CI/CD pipelines: Define the sequence of steps and actions necessary to build, test, and deploy application changes automatically. Pipelines
include GitLab CI, GitHub Actions, Azure Pipelines and more
Registries: Manages the application artifacts generated from the source code, such as container images, binaries, libraries, and other deployable assets
Right-click on an asset in an inventory table to access the Actions menu, where you can perform the following actions:
Open in new tab: Opens the description tab of the asset for detailed analysis of the issue
View asset data: Opens a new pop-up window displaying the data retrieved for the asset during the most recent scan in either JSON (default) or tree
view. This raw data provides a comprehensive and unformatted view of the asset's properties and attributes as they were initially ingested
Show/hide rows: Select a value in a row and filter the entire inventory to show or hide assets based on the selected attribute
The Business Application asset inventory provides visibility into all business applications and their interconnected assets generated throughout your software
development lifecycle (SDLC), serving as a centralized repository for business application inventory management. Additionally, the interface details the risks
detected in your business applications, allowing you to prioritize, manage, and mitigate potential threats based on business or infrastructure criticality.
To access the Business Application asset inventory, under Inventory, select All Assets → Business Application (under Assets).
The Business Application asset inventory includes a dashboard with a widget of all issues detected in the application by severity level and a table including a
list of applications.
Read more...
Field Description
Risk Represents the overall assessed risk level for the application
Business Criticality The importance of the application to the business as defined when creating the application
Compliance The compliance categories assigned to the application as defined when creating the application
Click on an asset in the inventory table to open a detailed Asset card, which provides additional, more in-depth information about the asset. The information is
organized into tabs, including an Overview tab (default display) which provides highlights and a general summary, while contextual tabs focus on particular
properties of the asset. Additionally, the card includes details about detected risks, allowing you to further explore them directly from the asset inventory. You
can also perform actions on the asset using the Actions menu.
The Business Application expanded asset details are identical to the All Applications expanded asset details, except that they are scoped to business
application assets and risks. For more information about expanded asset details, refer to All Applications expanded asset details.
The Infrastructure Application asset inventory provides visibility into all Infrastructure applications and their interconnected assets generated throughout your
software development lifecycle (SDLC), serving as a centralized repository for infrastructure application inventory management. Additionally, the interface
details the risks detected in your infrastructure applications, allowing you to prioritize, manage, and mitigate potential threats based on business or
infrastructure criticality.
To access the Infrastructure Application asset inventory, under Inventory, select All Assets → Infrastructure (under Assets).
The Infrastructure Application asset inventory includes a dashboard with a widget of all issues detected in the application by severity level and a table
including a list of applications.
Read more...
Field Description
Risk Represents the overall assessed risk level for the application
Business Criticality The importance of the application to the business as defined when creating the application
Compliance The compliance categories assigned to the application as defined when creating the application
Click on an asset in the inventory table to open a detailed Asset card, which provides additional, more in-depth information about the asset. The information is
organized into tabs, including an Overview tab (default display) which provides highlights and a general summary, while contextual tabs focus on particular
properties of the asset. Additionally, the card includes details about detected risks, allowing you to further explore them directly from the asset inventory. You
can also perform actions on the asset using the Actions menu.
The Infrastructure application expanded asset details are identical to the All Applications expanded asset details, except that they are scoped to infrastructure
application assets and risks. For more information about expanded asset details, refer to All Applications expanded asset details.
Business Application: Select this option for applications related to business processes
Description: A description of the application. Provides context for users interacting with the application
Business criticality (required): The level of importance of the application to your organization. This helps prioritize resources and attention
based on the application's impact to your business objectives
Business owner: The individual or team responsible for the application from a business perspective
Compliance: Select the specific compliance standards the application must adhere to from the menu
b. Click Create.
5. On the Applications page: Add assets to your application, starting with either code or cloud assets:
1. Select a version control system (VCS) from the list that is displayed.
2. Choose one or more of the following from their respective dropdown lists (multiple selections allowed): a specific VCS instance, an
organization, or a repository.
NOTE:
These filters are represented by icons displayed on the Code pane after selecting a VCS.
After connecting your VCS, Cortex Cloud automatically identifies and associates all build-time (such as source code, build scripts,
Dockerfiles, CI tools), deploy (such as Kubernetes manifests, Helm charts, Terraform scripts), and runtime assets (such as running
containers, virtual machines, cloud instances).
1. Select an option:
2. Select an instance of your connected provider from the list that is displayed.
3. Optional: Select Kubernetes Namespace, Kubernetes Cluster, VPC, Organization or Resource tag to automatically populate the application
with runtime assets associated with those entities. These filters are represented by an icon displayed under Run (after selecting a cloud
provider):
After connecting your cloud service provider, Cortex Cloud automatically identifies, associates and displays all Code (VCS repositories),
build-time (such as source code, build scripts, Dockerfiles, CI/CD pipelines) and deploy assets (such as Kubernetes manifests, Helm
charts, Terraform scripts).
The application is displayed on both the All Applications and its dedicated asset page (business or infrastructure).
NOTE:
To edit application assets, click the Clear All icon before clicking Finish. This clears all application data, allowing you to restart the application building process
from the beginning.
Application asset inventories provide comprehensive coverage of your application assets. The All Applications inventory offers a consolidated view of all
application assets detected in your environment. Additionally, dedicated asset inventories are available for Business and Infrastructure applications,
providing more focused views that allow you to analyze and manage assets specific to those domains. Refer to Manage application assets for more
information
Dedicated Application tabs under Cortex Cloud Application Security asset categories (such as the Application tab under IaC resource assets - see
Applications)
By ingesting Static Application Security Testing (SAST) findings directly from third-party sources, Cortex Cloud Application Security expands its security
coverage, allowing you to gain visibility into and effectively manage SAST violations across your organization. The platform enriches the findings with context,
and provides suggested remediation steps to help you quickly address identified code weaknesses. These capabilities, combined with Cortex Cloud
Application Security's native IaC, SCA and Secrets scanner management and other domains, provide a comprehensive security management platform for your
applications.
Cortex Cloud Application Security provides a dedicated findings inventory for the analysis and auditing of ingested SAST data. Cortex Cloud Application
Security analyzes and enriches these findings, elevating them to issues when necessary, allowing for efficient contextualization, prioritization and remediation
of SAST code violations.
NOTE:
SAST CWE violations refer to security vulnerabilities identified by SAST tools that align with the Common Weakness Enumeration (CWE) list. Refer to
the MITRE CWE website for more information about CWEs
PREREQUISITE:
Before you begin, integrate your third party data sources to ingest SAST data findings. Refer to Ingest third party static application security testing (SAST)
data for more information. Supported third parties and data ingest formats include:
Veracode
SonarQube
Semgrep
SARIF
NOTE:
It is recommended to use supported third-party vendor integrations for data ingestion over manual SARIF file uploads. Native integrations provide automated
synchronization of periodic scan data and increased data precision.
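For manual uploads, SARIF 2.1.0 is a standard JSON format defined by OASIS. The following sketch builds a minimal, hypothetical SARIF log with one SAST finding; the tool name, rule, file path, and message are illustrative assumptions, not output of any specific scanner:

```python
import json

# Minimal SARIF 2.1.0 log with one hypothetical SAST finding.
sarif_log = {
    "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
    "version": "2.1.0",
    "runs": [
        {
            "tool": {"driver": {
                "name": "example-sast",  # hypothetical scanner name
                "rules": [{"id": "CWE-79", "name": "CrossSiteScripting"}],
            }},
            "results": [
                {
                    "ruleId": "CWE-79",
                    "level": "error",
                    "message": {"text": "Unsanitized input rendered to HTML."},
                    "locations": [{
                        "physicalLocation": {
                            "artifactLocation": {"uri": "src/views.py"},
                            "region": {"startLine": 42},
                        }
                    }],
                }
            ],
        }
    ],
}

# Serialize to a file-ready JSON string for upload.
sarif_json = json.dumps(sarif_log, indent=2)
print(sarif_json[:40])
```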
Cortex Cloud Application Security default policies enrich and categorize ingested Critical and High SAST findings detected in your organization's environment
as issues (also known as Code Weaknesses). Issues represent the smallest unit for remediating SAST-identified CWEs.
NOTE:
Users can customize policies to define which findings are categorized as issues.
To access SAST code violation issues, under Modules, select Application Security → Issues → Code Weaknesses.
TIP:
You can also view SAST CWE issues in dedicated tabs under other sections when available:
On the Code Weaknesses tab under the Repositories asset inventory. Refer to Expanded repository asset information for more information
Under the All Code asset inventory: Select an asset from the table → Code Weaknesses
In the Application asset inventory: navigate to Inventory → All Assets → Applications → select an option from the Applications menu → select an
application from the inventory → Code Weaknesses tab
In Cases and Issues, perform a query. Select Issues → AppSec Issues (under the All Domains menu) → SAST Scanner (as the Detection Method
value)
The SAST code weakness issues inventory includes the following fields:
Read more...
Field/Attribute Description
Severity: Severity level of the CWE issue (such as Critical, High, Medium, Low)
Name: Short, descriptive name of the CWE issue (such as "SQL Injection," "Cross-Site Scripting")
CWE(s): Common weakness enumeration (CWE) identifiers associated with the issue
Asset Name: Name of the repository affected by the CWE issue (such as library name, file name)
Language: Programming language in which the CWE issue was detected (such as Java, Python, JavaScript)
Branch: The specific branch or version of the code where the CWE issue was detected
File Path: Path to the file or location within the code where the CWE issue was detected
Prioritization Labels: Labels or tags assigned to the CWE issue to aid in prioritization and triaging
Data Source: The third-party data source for the code weakness issue
Selecting an issue in the table opens a card with tabs providing additional information about the issue, including suggested remediation.
Summary
A summary of the code weakness, including the severity level, the CWE identifier, and the type of engine that detected the weakness.
Overview
Timestamps: Provides the date the issue was created and last updated
Assignee: Assign the CWE issue to the appropriate team member for remediation using the dropdown menu
Affected Assets: Identifies the version control system and repository containing the CWE
Evidence:
Details and the location in the codebase of the code containing the CWE, including vulnerability classifications (such as OWASP), specific code lines, and functions
Commit details: Includes the commit hash, committer, and the assigned user responsible for remediation
Weakness Details: The CWE identifier with a link to the weakness in the MITRE database
Actions
The Actions tab displays suggested steps to mitigate the CWE issue.
SAST CWE findings are based on ingested third-party data (such as Semgrep). Findings are potential security vulnerabilities identified within your source code
based on common weakness enumerations (CWEs). These insights help you assess and analyze the security posture of your applications by identifying
weaknesses in your codebase.
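To make the concept concrete, the sketch below shows the kind of pattern a SAST scanner typically flags as CWE-89 (SQL Injection), together with its parameterized-query remediation. The function names and table are illustrative only, not Cortex Cloud output.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Flagged pattern (CWE-89): untrusted input concatenated into SQL
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Remediation: parameterized query; the driver escapes the input
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The injected input matches every row via the unsafe query...
print(find_user_unsafe(conn, "x' OR '1'='1"))   # [(1,)]
# ...but matches nothing when parameterized
print(find_user_safe(conn, "x' OR '1'='1"))     # []
```

The same weakness class applies to any string-built query, regardless of language or database driver.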
NOTE:
Findings on the Cortex platform are not intended for direct action; rather, they represent data collected by the platform. They must be promoted to issues to
enable mitigation efforts to secure your codebase.
To access code weakness findings, navigate to code weakness issues (see SAST code weaknesses (CWEs)) and click the Findings tab.
Field/Attribute Description
Name Short, descriptive name of the CWE finding (such as "SQL Injection," "Cross-Site Scripting")
CWE(s) CWE identifier(s) associated with the finding (such as CWE-79, CWE-119)
OWASP Categories Relevant OWASP Top 10 categories associated with the finding (categories can be drawn from different years)
Language Programming language in which the CWE finding was detected (such as Java, Python, JavaScript)
Branch The specific branch or version of the code where the CWE finding was detected
File Path Path to the file or location within the code where the CWE finding was detected
Git User Username of the Git user who last modified the file containing the finding
Overview: Includes when the finding was last updated, the category associated with the finding, and the name and link to the asset where the finding
was detected
Details: The location of the finding, the third party data source that detected the finding, the CWE category, the initial hash and commit, and rule ID
Integrates Application Security findings across the code and build phases of the software development lifecycle (SDLC) by connecting to various
VCS platforms, CI tools, and data from third-party scanners
Code to Cloud visibility: Map your code to your cloud resources. This allows you to understand how your code translates into running cloud
infrastructure and workflows, and to understand the impact of risks, enabling effective prioritization and reducing subsequent remediation efforts
Native scanners:
Software Composition Analysis (SCA) scans: Gain visibility into your application's open-source dependencies. SCA identifies vulnerabilities,
package integrity risks, and license non-compliance, enabling you to prioritize and remediate SCA vulnerabilities
Infrastructure as Code (IaC) scans: Secure your cloud infrastructure before deployment. IaC scanning analyzes your Terraform, CloudFormation,
ARM templates, and other IaC configurations to identify misconfigurations, security best practice violations, and compliance issues, preventing
costly mistakes in production
Secrets detection scans: Prevent accidental exposure of sensitive credentials. The Cortex Cloud Application Security secrets detection engine
automatically scans your codebase, configuration files, and other artifacts to identify API keys, passwords, certificates, and other secrets, allowing
you to revoke compromised credentials and protect sensitive data
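As an illustration of what an IaC scan flags before deployment, consider a hypothetical Terraform resource with an unrestricted ingress rule. The resource name and values are placeholders, not output from Cortex Cloud:

```hcl
# Hypothetical Terraform snippet; an IaC scan would flag the ingress
# rule below because it exposes SSH (port 22) to the entire internet.
resource "aws_security_group" "example" {
  name = "example-sg"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # misconfiguration: restrict to known CIDRs
  }
}
```

Catching this in the pull request is far cheaper than remediating the exposed instance after deployment.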
Enforcement
Policies: Define and enforce security standards. Create custom policies to codify your organization's security requirements and ensure compliance
across all your applications and infrastructure. These policies drive consistent security practices
Rules: Provide granular control over security checks. Define specific rules to detect particular vulnerabilities, misconfigurations, or other security
issues. Customize rules to meet your organization's unique needs and compliance requirements, enabling highly targeted and precise security
checks
Remediation
Shift-left security: Enables developers to own security from the beginning of the SDLC by integrating security tools into their IDE and CLI workflow,
reducing the burden on security teams and accelerating remediation
Automated remediation: Accelerate remediation workflows by automatically generating pull requests and providing developers with the necessary
context to fix issues quickly
DevSecOps Integration: Provides a framework for accelerating software development by embedding security throughout your SDLC. This results in a
stronger security posture. Scan management, a component of this integration, offers centralized scan management with support for periodic branch
scans, CI pipeline integrations, and pull request scans. This functionality provides clear visibility into the engineering ecosystem and infrastructure,
enabling you to gain insights into application security posture, identify vulnerabilities early in the development lifecycle, and respond proactively to
threats
Cortex Cloud Application Security provides native code scanning and integrations with third-party scanners to identify security vulnerabilities and
misconfigurations within your code, ensuring comprehensive Application Security software development lifecycle (SDLC) coverage.
IaC misconfiguration scanner: Detects incorrect or insecure settings within the code that defines your cloud infrastructure. For more information, refer to
IaC scans
Secrets scanner: Detects sensitive data embedded in code, such as API keys, encryption keys, OAuth tokens, certificates, PEM files, passwords, and
passphrases. These secrets, when exposed, can compromise the security of your cloud infrastructure and applications. For more information, refer to
Secrets scans
SCA scanners: Provide software composition analysis (SCA) capabilities, identifying Common Vulnerabilities and Exposures (CVE) vulnerabilities within
your application's software components and dependencies, package operational risks, and license non-compliance issues. For more information, refer to
Software Composition Analysis (SCA) scans
Static Application Security Testing (SAST) ingested data: Cortex Cloud Application Security supports SAST data ingestion, enriching it and providing context.
This allows you to analyze, prioritize and remediate SAST code weaknesses detected by third parties on the Cortex Cloud platform. For more information on
third-party data ingestion for SAST scans, refer to Third-party findings ingestion and issue management
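At its core, a secrets scanner matches well-known credential patterns (combined with entropy checks and many more rules in practice). The single-rule sketch below is illustrative only; the regex and sample value (AWS's documented example access key ID) are assumptions, not the engine Cortex Cloud ships:

```python
import re

# Illustrative rule only: AWS access key IDs start with "AKIA" followed
# by 16 uppercase alphanumeric characters. Real scanners combine many
# such rules with entropy analysis.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_secrets(text):
    """Return (line_number, match) pairs for suspected secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for match in AWS_KEY_RE.finditer(line):
            hits.append((lineno, match.group()))
    return hits

sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"\nregion = "us-east-1"\n'
print(scan_for_secrets(sample))  # [(1, 'AKIAIOSFODNN7EXAMPLE')]
```

Once a secret is detected, the remediation is always twofold: remove it from the codebase and revoke the credential itself, since Git history preserves the exposure.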
Cortex Cloud Application Security scans generate comprehensive information about assets and code anomalies (findings) in your software development
lifecycle (SDLC). This data is organized into the following categories.
The Cortex Cloud Application Security asset inventory provides a comprehensive view of code assets detected in scans across your engineering environment,
generating IaC Resources, Software Packages and Repositories asset inventories. These inventories provide details and insights, enabling you to understand
the composition and context of your Application Security assets.
NOTE:
Application and CI Pipelines asset inventories are tracked outside Cortex Cloud Application Security. Application assets fall under Application Security Posture
Management (ASPM), while CI Pipelines assets are part of CI Security.
Dedicated inventories provide detailed information about detected findings, allowing you to analyze and assess your code environment. Findings detected
during scans cannot be mitigated directly; to mitigate anomalies, findings must first be promoted to issues.
Issues are the fundamental unit for remediating code violations detected during scans. Issues are derived from findings, which are enriched by Cortex Cloud
Application Security with additional context. Only when mitigation is required are these enriched findings classified as Issues.
Dedicated inventories provide detailed information, allowing you to analyze, assess and mitigate your code issues. You can prioritize issues for effective
security management. By focusing on the most critical threats, you can optimize resource allocation, minimize risks, and improve your overall security posture.
You can view and remediate dedicated scan category issues (such as IaC misconfigurations and Secrets issues) through the Issues section of the Application
Security module. Additionally, you can explore issues within the context of Assets (Repositories, IaC, software packages) or scan types (branch periodic scans
and PR scans).
Open fix pull request: Supported for IaC and CVE vulnerability issue remediation
Change severity: Adjust the severity level of an issue/finding based on its potential impact. Values: Critical, High, Medium, Low
Change status: Update the status of an issue/finding. Values: New, Known issue, Duplicate issue, False positive, In progress
Change assignee: Modify the user or identity assigned to remediate the issue
9.1.4 | Concepts
This section provides explanations of key terms and ideas within the Cortex Cloud Application Security framework.
Read more...
Concept Description
Application Security The practices and tools used to protect applications from external threats and ensure their confidentiality, integrity, and availability. This includes identifying and addressing security vulnerabilities in the application code, as well as ensuring that security controls are implemented throughout the application development lifecycle
Static Application Security Testing (SAST) Security testing that analyzes an application's source code, bytecode, or binary code to identify vulnerabilities and weaknesses. This type of testing is performed early in the development lifecycle, allowing developers to detect and remediate security flaws before the application is deployed
Infrastructure as Code (IaC) IaC manages and provisions infrastructure (such as servers and storage) through code instead of manual configuration, using files to define and automate the setup of infrastructure components
IaC resource A defined entity of infrastructure that is managed and provisioned through code
IaC resource tags Allow you to manage and organize cloud resources, and enable traceability of a resource throughout the SDLC
Yor trace tag A unique identifier automatically generated by the Yor tool to establish a traceable link between an IaC resource defined in code and its corresponding runtime instance in the cloud
IaC misconfigurations Inaccuracies or deviations in IaC definition files that can lead to security vulnerabilities
Software Composition Analysis (SCA) SCA analyzes an application's code to identify and assess the risks associated with its open-source dependencies, including vulnerabilities, operational risks, and license compliance issues
Direct dependency A package that your application explicitly depends on and includes in its code
Secret Sensitive information, such as API keys, encryption keys, OAuth tokens, certificates, passwords, and passphrases, that enables applications to securely communicate with other services
CI/CD pipeline A set of practices and tools used to automate the building, testing, and deployment of software applications, facilitating the software development process
CI/CD risks Risks in the CI/CD pipeline detected in the organization’s VCS, CI, and artifacts, as well as cross-system risks
Supply Chain The organizations and processes involved in the creation, development, and delivery of software
SBOM (Software Bill of Materials) A detailed inventory of all components (including open-source and third-party) used in a software application
Cortex Cloud Application Security supports the following technologies and frameworks.
Go Go Modules go.mod ✔️ ✔️
npm-shrinkwrap.json
yarn yarn.lock ✔️ ✔️
Bower bower.json ✔️ ✔️
Pipfile Pipfile.lock ✔️ ✔️
Limitation: Cortex Cloud Application Security SCA frameworks do not scan development or test dependencies, such as devDependencies in JavaScript.
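This distinction matters when interpreting scan results: only runtime dependencies are in scope. A minimal sketch of the idea, using a hypothetical JavaScript package.json manifest (the package names and versions are placeholders):

```python
import json

# Hypothetical package.json: SCA scanning considers "dependencies"
# but skips "devDependencies" such as test frameworks.
manifest = json.loads("""
{
  "dependencies":    { "express": "4.18.2" },
  "devDependencies": { "jest": "29.7.0" }
}
""")

def runtime_dependencies(pkg):
    """Return only the dependencies an SCA scan would consider."""
    return dict(pkg.get("dependencies", {}))

print(runtime_dependencies(manifest))  # {'express': '4.18.2'}
```

A vulnerability in a dev-only package (jest above) would therefore not appear in SCA results, even though the package is present in the repository.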
Cortex Cloud Application Security capabilities support a wide range of Cloud DevSecOps systems integrations. After integrating with your systems, out-of-the-
box policies are automatically applied, enabling immediate scanning and detection of security issues, ensuring comprehensive security checks for your cloud
infrastructure. Additionally, you can onboard third-party data findings to further strengthen your security posture.
PREREQUISITE:
Grant the Cortex Cloud user performing the onboarding of third-party data sources view/edit permissions for data sources in Cortex Cloud, regardless of
the role assigned to the user
Third-party user permissions (for example, permissions in GitHub) are specified in the dedicated onboarding guide
1. Select Settings → Data Sources (under Data Collection) → + Add Data Source.
2. Enter a specific data source in the search bar to quickly locate the required item (and jump to step 4 below) or click Show More.
3. Click Code Repositories for a list of version control systems and third-party data sources.
For third-party ingestion, refer to Ingest third party static application security testing (SAST) data
While Cortex Cloud Application Security provides guidance during integration and explains the steps involved when you are redirected to third-party version
control systems (such as GitHub SaaS, GitLab SaaS, and so on), Cortex Cloud does not assume responsibility for changes or variations in these platform
processes. Always refer to the official documentation of the third party to ensure you are following their most current and precise instructions.
Manage instances to align with evolving requirements and ensure they remain current.
TIP:
To quickly find your data source integration, use the search bar.
Edit instance: Redirects to the Select Repositories step of the integration wizard, where you can modify configurations for the selected instance.
For more details, refer to the relevant integration guide
Delete instance: When confirmed, deletes the instance, including data from previous scans
Connect Cortex Cloud with your version control systems (VCS) to gain comprehensive visibility into, and monitor, the systems, technologies, configurations,
and pipelines that make up your VCS platform.
These integrations trigger both periodic scans and scans on pull requests (PRs) via a webhook, enabling security scans to identify and remediate
Infrastructure-as-Code (IaC) misconfigurations, Software Composition Analysis (SCA) vulnerabilities, exposed secrets and license non-compliance in your VCS
environment. Scan results are displayed directly in PR comments and reports, allowing you to analyze, prioritize and fix issues as soon as they are detected.
NOTE:
Cortex Cloud Application Security (which includes IaC, SCA and Secrets scanning), is an add-on to a license (such as Posture Security) that must be
purchased separately.
Cortex Cloud Application Security currently supports the following VCS data source integrations:
AWS CodeCommit
Azure Repos
Bitbucket Cloud
GitHub Cloud
GitHub Server
GitLab SaaS
GitLab self-managed
VCS data sources are listed in the Cortex data source catalog.
1. Navigate to Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
TIP:
Navigate to Settings → Data Sources (under Data Collections) → + Data Source → enter your VCS data source in the search bar.
2. Select a data source and follow the instructions in its integration wizard to complete the integration process.
NOTE:
1. Access How to onboard a VCS data source above, hover over a data source and click Connect another instance.
2. On the Data Sources page, select the menu in a Data Source row, and click + Add New Instance.
To view the connectivity status of instances of a data source and their repositories:
Additional details about a data source instance, including its status and the number of connected repositories, are displayed.
NOTE:
Additionally, when browsing the Data Source catalog on the Add Data Source page, you can hover over a data source and select the Instances Configured
link to navigate to the detailed view of that instance.
You can manage VCS data source instances. Hover over an instance and right-click to access the following actions:
2. Hover over a data source and click View Details to see a list of the connected instances of the data source.
Details: View details of the data source instance, including a list of connected repositories and organization, connectivity status, last scan date,
and when initially connected.
Edit instance: Opens the Select Repositories step of the integration wizard, allowing you to edit connected repositories. You can also edit the
instance configuration by navigating back to the previous step of the wizard and modifying relevant details
Remove a connected repository: Right-click on a repository in the list, and click Remove Repository
Integrate Cortex Cloud Application Security with your AWS CodeCommit version control system (VCS) to enable security scans for exposed secrets,
infrastructure-as-code (IaC) misconfigurations, vulnerabilities, package operational risks, and license compliance issues in your repositories. This integration
allows you to analyze, prioritize, and resolve detected issues efficiently.
During integration, Cortex Cloud Application Security provides you with a CloudFormation (CFN) template to deploy in your AWS account. Once deployed,
the CFN template establishes the following components in your environment:
A Simple Notification Service (SNS) topic with an HTTP subscription to the Cortex Cloud Application Security webhook URL
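The shape of that component could look like the CloudFormation fragment below. This is a sketch only: the actual template is generated for you during onboarding, and the resource names and webhook URL here are placeholders.

```yaml
# Sketch only -- the real template is generated during onboarding.
# Resource names and the endpoint URL are placeholders.
Resources:
  AppSecEventsTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: cortex-appsec-codecommit-events
  AppSecWebhookSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      TopicArn: !Ref AppSecEventsTopic
      Protocol: https
      Endpoint: https://example.invalid/cortex/webhook  # placeholder webhook URL
```

CodeCommit repository events published to the topic are then delivered to the subscription endpoint, which is how Cortex Cloud learns about pushes and pull requests.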
PREREQUISITE:
Read more...
codecommit:GetFolder: Required to view the contents of a specified folder in a repository from the CodeCommit console
codecommit:GetFile: Required to view the encoded content of an individual file and its metadata in a repository from the CodeCommit console
codecommit:PostCommentReply: Required to create a reply to a comment on a comparison between commits or on a pull request
codecommit:GetTree: Required to view the contents of a specified tree in a repository from the CodeCommit console. This is an IAM policy
permission only, not an API action that you can call
codecommit:BatchGetPullRequests: Required to return information about one or more pull requests in a repository. This is an IAM policy
permission only, not an API action that you can call
codecommit:GetCommentsForComparedCommit: Required to return information about comments made on the comparison between two
commits in a repository
codecommit:PostCommentForComparedCommit: Required to create a comment on the comparison between two commits in a repository
codecommit:ListPullRequests: Required to return information about the pull requests for a repository
codecommit:DeleteCommentContent: Required to delete the content of a comment made on a change, file, or commit in a repository.
Comments cannot be deleted, but the content of a comment can be removed if the user has this permission
codecommit:PutFile: Required to add a new or modified file to a repository from the CodeCommit console, CodeCommit API, or the AWS CLI
codecommit:GetApprovalRuleTemplate: Required to return information about an approval rule template in an Amazon Web Services account
Create an SNS topic for each required region if the customer’s Cortex Cloud account and stack are in different regions. This ensures compliance with
CloudFormation constraints, which mandate that SNS events for creations, deletions, and other actions must reside in the same region as the stack
NOTE:
In the context of AWS, a 'stack' refers to a collection of AWS resources that are created, updated, and deleted together as a single unit. This allows
for the management of related resources as a cohesive unit, making it easier to provision and manage complex infrastructure deployments.
a. Select Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
c. To create an additional AWS CodeCommit instance: Hover over the AWS CodeCommit card in the catalog and click Connect Another.
a. Click the CloudFormation link in the wizard or copy and send the link to your administrator.
NOTE:
The Resource Name Prefix and ExternalID fields are pre-populated. You can modify the Resource Name Prefix, but DO NOT change
the ExternalID!
On the Cortex Cloud console: On Data Sources, select Code Providers → AWS CodeCommit → View more and confirm that the status of your
integrated AWS CodeCommit instance is 'Connected'.
On AWS: Open CloudFormation → Stacks. Verify that the integration is displayed with a success status.
NOTE:
To create an additional AWS CodeCommit instance: Hover over the AWS CodeCommit card in the catalog and click Connect Another.
Integrate Cortex Cloud Application Security with your Azure Repos version control system (VCS) to enable security scans for exposed secrets, infrastructure-
as-code (IaC) misconfigurations, vulnerabilities, package operational risks and license compliance issues in your repositories. This integration allows you to
analyze, prioritize, and resolve detected issues efficiently.
Multi-token integration
Cortex Cloud Application Security supports multiple Azure Repos accounts for a single tenant using multiple OAuth user tokens, without having to change any
permission settings in Azure Repos. You can connect multiple organizations from the same Azure Repos account (using a single VCS user token), or use
multiple tokens to connect multiple organizations, regardless of whether they belong to the same Azure Repos account. This capability increases your
organization’s readiness and scale.
PREREQUISITE:
Read more...
Permission Description
Organization Owner Grants full administrative control over the Azure Repos organization
Repository Administrator In order to scan pull requests (PRs), the user performing the integration
must have administrative privileges for the repositories. This enables
Cortex Cloud to set up subscription webhooks for the selected
repositories
Scope Description
Identity (read) [vso.identity] This permission grants read access to identity-related information or
configurations within Azure DevOps. It allows the user to view details
about users, groups, or other identity-related entities
Build (read) [vso.build] This permission grants read access to information related to builds in
Azure DevOps. It allows the user to view details about build pipelines,
build definitions, and build execution status
Packaging (read) [vso.packaging] This permission grants read access to information related to package
management in Azure DevOps. It allows the user to view details about
packages, feeds, and package versions stored in Azure Artifacts
Extensions (read) [vso.extension] This permission grants read access to information related to extensions
in Azure DevOps. It allows the user to view details about installed
extensions, extension configurations, and extension marketplace
Release (read) [vso.release] This permission grants read access to information related to release
pipelines in Azure DevOps. It allows the user to view details about
release definitions, release environments, and release execution status
Project and team (read) [vso.project] This permission grants read access to information related to projects
and teams in Azure DevOps. It allows the user to view details about
projects, teams, team membership, and project settings
Graph (read) [vso.graph] This permission grants read access to the Azure DevOps Graph API. It
allows the user to query and retrieve information about users, groups,
and other entities using the Graph API
User profile (write) [vso.profile_write] This permission grants write access to the user’s profile information in
Azure DevOps. It allows the user to update their own profile details such
as display name, email address, and profile picture
Work items (read and write) [vso.work_write] This permission grants read and write access to work items in Azure
DevOps. It allows the user to view, create, update, and delete work
items such as user stories, bugs, tasks, and epics
Code (read and write) [vso.code_write] This permission grants read and write access to source code
repositories in Azure DevOps. It allows the user to view, create, modify,
and delete source code files, branches, and pull requests
Task Groups (read, create) [vso.taskgroups_write] This permission grants read and create access to task groups in Azure
DevOps. It allows the user to view existing task groups and create new
ones for use in pipelines
Scope Description
Code (status) [vso.code_status] This permission grants access to the status of source code repositories
in Azure DevOps. It allows the user to view the status of commits,
branches, and pull requests, including build and test status
Create an egress path to establish the designated route for outbound data transmission from Cortex Cloud to third party services. For more information
about configuring egress paths, refer to Egress configurations
a. Select Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
b. Enable Third-party application access via OAuth to configure integration for both single organization and multiple organizations using a single user
token.
CAUTION:
c. Navigate to Project Settings → Settings → General, and set Limit job authorization scope to current project for non-release pipelines to Off.
This step ensures that Cortex Cloud Application Security has access to your Azure repositories.
NOTE:
For information on Cortex Cloud Application Security access to all organizations associated with your user token, refer to the Azure Third party
application access via OAuth documentation.
a. Click Authorize.
Choose from repository list and select repositories from the list
c. Click Save.
4. Verify integration and confirm that your integrated Azure Repos instance has a status of Connected.
NOTE:
To create an additional Azure Repos instance: Hover over the Azure Repos card in the catalog and click Connect Another.
Subscribed events
Below is a comprehensive list of events to which Cortex Cloud Application Security is subscribed. These events encompass various actions and changes
occurring within your Azure Repos environment that trigger notifications and integrations with Cortex Cloud Application Security.
Read more...
Repositories — —
Organizations — —
Integrate Cortex Cloud Application Security with your Bitbucket Cloud version control system (VCS) to enable security scans for exposed secrets,
infrastructure-as-code (IaC) misconfigurations, vulnerabilities, package operational risks, and license compliance issues in your repositories. This integration
allows you to analyze, prioritize, and resolve detected issues efficiently.
PREREQUISITE:
Read more...
NOTE:
For write access, go to Bitbucket > Repository Settings and grant the user write access to the relevant repositories.
Read more...
Permission levels
Organization Owner: Grants full administrative control over the Bitbucket Cloud organization
Repository Administrator: To scan pull requests (PRs), the user performing the integration must have administrative privileges for the
repositories. This enables Cortex Cloud Application Security to set up subscription webhooks for the selected repositories. Additionally,
these permissions allow the user to retrieve a comprehensive list of all available repositories
Scopes
project: Provides access to view the project or projects. This scope implies the repository scope, giving read access to all the repositories
in a project or projects
Administrator repository permissions: In order to scan pull requests (PRs), the user performing the integration must have administrative
privileges for the repositories. This enables Cortex Cloud to set up subscription webhooks for the selected repositories
repository: Provides read access to a repository or repositories. Note that this scope does not give access to a repository’s pull requests.
Includes 'access to the repo’s source code', 'clone over HTTPS', 'access the file browsing API', 'download zip archives of the repo’s
contents', 'the ability to view and use the issue tracker on any repo (created issues, comment, vote, etc)', 'the ability to view and use the
wiki on any repo (create/edit pages)'
repository:write: Provides write (not admin) access to a repository or repositories. No distinction is made between public and private
repositories. This scope implicitly grants the repository scope, which does not need to be requested separately. This scope alone does not
give access to the pull requests API. Includes 'push access over HTTPS' and 'fork repos'
pullrequest: Provides read access to pull requests. This scope implies the repository scope, giving read access to the pull request’s
destination repository. Includes 'see and list pull requests', 'create and resolve tasks' and 'comment on pull requests'
pullrequest:write: Implicitly grants the pullrequest scope and adds the ability to create, merge and decline pull requests. This scope also
implicitly grants the repository:write scope, giving write access to the pull request’s destination repository. This is necessary to allow
merging. Includes 'merge pull requests', 'decline pull requests', 'create pull requests' and 'approve pull requests'
issue: The ability to interact with issue trackers the way non-repo members can. This scope doesn’t implicitly grant any other scopes and
doesn’t give implicit access to the repository. Includes 'view, list and search issues', 'create new issues', 'comment on issues', 'watch issues'
and 'vote for issues'
issue:write: This scope implicitly grants the issue scope and adds the ability to transition and delete issues. This scope doesn’t implicitly
grant any other scopes and doesn’t give implicit access to the repository. Includes 'transition issues' and 'delete issues'
webhook: Gives access to webhooks. This scope is required for any webhook-related operation.
This scope gives read access to existing webhook subscriptions on all resources the authorization mechanism can access, without needing
further scopes. For example, a client can list all existing webhook subscriptions on a repository. The repository scope is not required.
Existing webhook subscriptions for the issue tracker on a repo can be retrieved without the issue scope. All that is required is the webhook
scope.
To create webhooks, the client will need read access to the resource. For example, for issue:created, the client will need both the
webhook and the issue scopes. Includes 'list webhook subscriptions on any accessible repository, user, team, or snippet' and
'create/update/delete webhook subscriptions'
snippet: Provides read access to snippets. No distinction is made between public and private snippets (public snippets are accessible
without any form of authentication). Includes 'view any snippet' and 'create snippet comments'
email: Ability to see the user’s primary email address. This should make it easier to use Bitbucket Cloud as a login provider for apps or
external applications
user-related APIs: Gives read-only access to the user’s account information. Note that this doesn’t include any ability to change any
of the data. This scope allows you to view the user’s: email addresses, language, location, website, full name, SSH keys, user groups
workspace-related APIs: Grants access to view the workspace’s: users, user permissions, projects
pipeline: Gives read-only access to pipelines, steps, deployment environments and variables
pipeline:write: Gives write access to pipelines. This scope allows a user to: stop pipelines, rerun failed pipelines, resume halted pipelines
and trigger manual pipelines
For more information on Bitbucket Cloud permissions refer to the Bitbucket Authentication methods documentation.
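The read behavior of the webhook scope described above can be illustrated with the Bitbucket Cloud 2.0 REST API. The following is a minimal sketch; the workspace, repository slug, and token values are placeholders, and the curl call is commented out so you can substitute real credentials before running it.

```shell
# Placeholder values -- substitute your own workspace, repository, and token
WORKSPACE="my-workspace"
REPO_SLUG="my-repo"
ACCESS_TOKEN="your-oauth-token"

# With only the webhook scope, a client can list existing webhook
# subscriptions on a repository; the repository scope is not required.
HOOKS_URL="https://api.bitbucket.org/2.0/repositories/${WORKSPACE}/${REPO_SLUG}/hooks"
echo "GET ${HOOKS_URL}"

# Uncomment to run against your Bitbucket Cloud workspace:
# curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" "${HOOKS_URL}"
```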
Create an egress path to establish the designated route for outbound data transmission from Cortex Cloud to third party services. For more information
about configuring egress paths, refer to Egress configurations
Log in to Bitbucket Cloud with the correct user credentials before integrating with Cortex Cloud Application Security, as Cortex Cloud Application
Security uses OAuth for authorizing access.
a. Select Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
You are redirected to Bitbucket Cloud to authorize Cortex Cloud Application Security access.
2. Authorize Cortex Cloud Application Security on Bitbucket Cloud: Review the requested permissions and then select Grant access.
You are redirected to the Select Repositories step of the integration wizard.
Choose from repository list: select the required repositories from the displayed list
4. Select Save to confirm the repository selection and then Close on the final step of the wizard.
NOTE:
Ensure that you receive the Instance Successfully Created message on this step, indicating successful instance creation.
5. Verify integration and confirm that your integrated Bitbucket Cloud instance has a status of Connected.
To create an additional Bitbucket Cloud instance: Hover over the Bitbucket Cloud card in the catalog and click Connect Another.
Subscribed events
Below is a comprehensive list of events to which Cortex Cloud Application Security is subscribed. These events encompass various actions and changes
occurring within your Bitbucket Cloud environment that trigger notifications and integrations with Cortex Cloud Application Security.
Read more...
repo:push: This event is triggered whenever a push operation occurs within a repository, indicating that new commits have been added or existing
commits have been updated
repo:fork: This event occurs when a repository is forked, creating a copy of the original repository within the same or a different workspace
repo:updated: This event is triggered when there are updates or changes made to the repository settings or configuration
repo:commit_comment_created: This event occurs when a new comment is created on a commit within the repository
repo:commit_status_created: This event is triggered when a new status or check is created for a commit within the repository
repo:commit_status_updated: This event occurs when the status or check of a commit within the repository is updated
issue:created: This event is triggered when a new issue is created within the repository
issue:comment_created: This event occurs when a new comment is added to an existing issue within the repository
issue:updated: This event is triggered when an existing issue within the repository is updated or modified
pullrequest:created: This event occurs when a new pull request is created within the repository
pullrequest:updated: This event is triggered when an existing pull request within the repository is updated or modified
pullrequest:fulfilled: This event occurs when a pull request is fulfilled or merged into the target branch
pullrequest:rejected: This event is triggered when a pull request is rejected or closed without being merged
Integrate Cortex Cloud Application Security with your Bitbucket Data Center version control system (VCS) to enable security scans for exposed secrets,
infrastructure-as-code (IaC) misconfigurations, vulnerabilities, package operational risks, and license compliance issues in your repositories. This integration
allows you to analyze, prioritize, and resolve detected issues efficiently.
PREREQUISITE:
Read more...
Authorize the user integrating Cortex Cloud Application Security with your Bitbucket Data Center instances with the following permissions:
Create an egress path to establish the designated route for outbound data transmission from Cortex Cloud to third party services. For more information
about configuring egress paths, refer to Egress configurations
a. Navigate to Bitbucket Server → Manage account → Account settings → Personal access tokens.
NOTE:
By default, the permissions of the access token are set according to your current access level. It is essential to define two levels of
permissions, Project and Repository. Repository permissions inherit from Project permissions, so Repository permissions must
match or exceed Project permissions
Providing read and write permissions to the necessary repositories enables Cortex Cloud Application Security to copy files for scanning and
access repository settings. This enables automated responses to pull requests, including creating fix PRs and adding comments
NOTE:
For additional security, it is recommended to set an expiry date. The expiry date of a token cannot be changed after it is created. You can
see the expiry dates for all your tokens on Profile picture → Manage account → Personal access tokens.
e. Click Create.
IMPORTANT:
Always refer to the Bitbucket documentation for information relating to creating a PAT.
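Before completing the wizard, the new PAT can optionally be smoke-tested against the Bitbucket Data Center REST API. This is a hedged sketch: the base URL and token below are placeholders, and the curl call is commented out.

```shell
# Placeholder values -- substitute your Bitbucket Data Center host and PAT
BITBUCKET_BASE="https://bitbucket.example.com"
BITBUCKET_PAT="your-personal-access-token"

# Listing repositories exercises the token's read permission
REPOS_URL="${BITBUCKET_BASE}/rest/api/1.0/repos"
echo "GET ${REPOS_URL}"

# Uncomment to verify the PAT (expects HTTP 200 with a page of repositories):
# curl -s -H "Authorization: Bearer ${BITBUCKET_PAT}" "${REPOS_URL}"
```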
a. Select Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
c. Enter your domain in the Configure Domain step of the wizard and click Next.
d. Paste the Bitbucket PAT generated in step 1 above in the provided field, and click Next.
Choose from repository list: select the required repositories from the displayed list
Click Save.
NOTE:
3. Verify integration and confirm that your integrated Bitbucket Data Center instance has a status of Connected.
a. On Data Sources, search for Bitbucket Data Center in the search bar.
c. Verify that the status of your Bitbucket Data Center instance is Connected.
To create an additional Bitbucket Data Center instance: Hover over the Bitbucket Data Center card in the catalog and click Connect Another.
To manage Bitbucket Data Center integrations, refer to Manage data source instances.
{
"token": "new token"
}
3. Select the required instance from the list and retrieve the cas_connector_id from the URL.
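The token rotation above can be sketched as an API call using the retrieved cas_connector_id. Note that the endpoint path in this sketch is hypothetical, shown for illustration only; consult the Cortex Cloud API reference for the actual route. The curl call is commented out.

```shell
# Placeholder value -- the cas_connector_id retrieved from the instance URL
CAS_CONNECTOR_ID="your-cas-connector-id"

# Request body replacing the stored token, as shown above
BODY='{"token": "new token"}'
echo "$BODY"

# Hypothetical endpoint path -- substitute the documented route:
# curl -X PUT -H "Content-Type: application/json" -d "$BODY" \
#   "https://api.example.com/appsec/connectors/${CAS_CONNECTOR_ID}"
```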
Subscribed events
Below is a comprehensive list of events to which Cortex Cloud Application Security is subscribed. These events encompass various actions and changes
occurring within your Bitbucket Data Center environment that trigger notifications and integrations with Cortex Cloud Application Security.
Read more...
pr:merged: This event occurs when a pull request is successfully merged into the repository
pr:updated: This event happens when the reviewer list for a pull request is updated
repo:refs_changed: This event happens when references in the repository are changed
pr:needs_work: This event happens when a reviewer marks a pull request as needing work
Integrate Cortex Cloud Application Security with your GitHub SaaS version control system (VCS) to enable security scans for exposed secrets, infrastructure-
as-code (IaC) misconfigurations, vulnerabilities, package operational risks, and license compliance issues in your repositories. This integration allows you to
analyze, prioritize, and resolve detected issues efficiently.
PREREQUISITE:
Authorize the user integrating Cortex Cloud Application Security with your GitHub SaaS instances with the following permissions:
Read access to Dependabot alerts, actions, actions variables, administration, deployments, discussions, metadata, packages, repository hooks,
secret scanning alerts, secrets, and security events
Read and write access to checks, code, commit statuses, issues, and pull requests
NOTE:
In contrast to GitLab SaaS, GitLab self-managed, and Azure Repos, there is no individual record of each token used for authentication on the
integrations page. However, Cortex Cloud Application Security retains and uses these tokens for necessary actions. Removing an integration
will delete all associated tokens.
Create an egress path to establish the designated route for outbound data transmission from Cortex Cloud to third party services. For more information
about configuring egress paths, refer to Egress configurations
a. Select Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
c. Click Authorize on the Configure account step of the GitHub SaaS onboarding wizard.
You are redirected to your GitHub SaaS account in order to install and authorize Cortex AppSec, the GitHub App that handles the Cortex
Cloud Application Security functionality.
You are redirected to the Select Repositories step of the GitHub SaaS installation wizard on the console.
Refer to the GitHub documentation for more on authorizing and installing GitHub SaaS Apps.
Choose from repository list: select the required repositories from the displayed list
b. Click Save.
4. Verify integration: On Data Sources, select Code Providers → GitHub SaaS → View more and confirm that your integrated GitHub SaaS
instance has a status of Connected.
NOTE:
To create an additional GitHub SaaS instance: Hover over the GitHub SaaS card in the catalog and click Connect Another.
Subscribed events
Below is a comprehensive list of events to which Cortex Cloud Application Security is subscribed. These events encompass various actions and changes
occurring within your GitHub SaaS environment that trigger notifications and integrations with Cortex Cloud Application Security.
Read more...
Pull request: Represents actions related to pull requests, including assignment, enabling or disabling auto merge, closing, conversion to draft,
demilestoning, dequeuing, editing, enqueuing, labeling, locking, milestone assignment, opening, readiness for review, reopening, removal of
review requests, request for review, synchronization, unassignment, unlabeling, and unlocking
Pull request review comment: Indicates the creation, editing, or deletion of a comment on a pull request’s diff
Integrate Cortex Cloud Application Security with your GitHub Server version control system (VCS) to enable security scans for exposed secrets, infrastructure-
as-code (IaC) misconfigurations, vulnerabilities, package operational risks, and license compliance issues in your repositories. This integration allows you to
analyze, prioritize, and resolve detected issues efficiently.
PREREQUISITE:
Read more...
Permissions
Administrator repository permissions: In order to scan pull requests (PRs), the user performing the integration must have administrative
privileges for the repositories. This enables Cortex Cloud to set up subscription webhooks for the selected repositories
Scopes
repo: Grants full access to public and private repositories, including read and write access to code, commit statuses, repository invitations,
collaborators, deployment statuses, and the capability to subscribe the repository to receive new webhook notifications or events
NOTE:
In addition to repository-related resources, the repo scope also grants access to manage organization-owned resources, including
projects, invitations, team memberships, and webhooks. This scope also grants the ability to manage projects owned by users
read:repo_hook: Grants read and ping access to hooks in public or private repositories
read:org: Provides read-only access to organization membership, organization projects, and team membership
workflow: Provides the ability to add and update GitHub Actions workflow files. Workflow files can be committed without this scope if the
same file (with both the same path and contents) exists on another branch in the same repository. Workflow files can expose
GITHUB_TOKEN, which may have a different set of scopes. For more information, refer to the GitHub Actions automatic token
authentication documentation
admin:org_hook: Grants read, write, ping, and delete access to organization hooks. Note: OAuth tokens will only be able to perform these
actions on organization hooks created by the OAuth app. Personal access tokens will only be able to perform these actions on organization
hooks created by a user
Create an egress path to establish the designated route for outbound data transmission from Cortex Cloud to third party services. For more information
about configuring egress paths, refer to Egress configurations
a. Select Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
c. Enter your domain in the Configure Domain step of the wizard and click Register.
NOTE:
The domain is the hostname associated with your GitHub Server instance.
You are redirected to your GitHub Server instance to register Cortex AppSec as an OAuth application. Additionally, the Register OAUTH App step
of the integration wizard is displayed.
d. Copy the Application Name, Homepage URL and Authorization Callback URL values from their respective fields.
2. On the Register a new OAuth application screen of the GitHub Server console:
c. Once created, copy and save the Client ID and Client Secret values for the new Cortex AppSec application.
a. Select Next on the Register OAUTH App step of the integration wizard.
b. Paste the Client ID and Client Secret values copied in step 2c above, and click Authorize.
c. Under Selection Options of the Select Repositories step of the wizard, choose the repositories to be connected to the instance:
Choose from repository list: select the required repositories from the displayed list
d. Click Save.
NOTE:
Ensure that you receive the Instance Successfully Created message on this step, indicating successful instance creation.
4. Verify integration: On Data Sources, select Code Providers → GitHub Server, and confirm the status is 'Connected'.
To create an additional GitHub Server instance: Hover over the GitHub Server card in the catalog and click Connect Another.
Subscribed events
The following list describes events that Cortex Cloud Application Security monitors on your GitHub Server, covering actions and changes that trigger
notifications and integrations.
Integrate Cortex Cloud Application Security with your GitLab SaaS version control system (VCS) to enable security scans for exposed secrets, infrastructure-
as-code (IaC) misconfigurations, vulnerabilities, package operational risks, and license compliance issues in your repositories. This integration allows you to
analyze, prioritize, and resolve detected issues efficiently.
PREREQUISITE:
Read more...
Authorize the user integrating Cortex Cloud Application Security with your GitLab SaaS instances with the following permissions:
Maintainer permissions. Grants sufficient permissions to configure external integrations, manage repository access, and adjust CI/CD settings
api: Grants full read and write access to the API, including all groups and projects, as well as permissions to interact with the container registry,
the dependency proxy, and the package registry
Administrator repository permissions: In order to scan pull requests (PRs), the user performing the integration must have administrative
privileges for the repositories. This enables Cortex Cloud Application Security to set up subscription webhooks for the selected repositories
Create an egress path to establish the designated route for outbound data transmission from Cortex Cloud to third party services. For more information
about configuring egress paths, refer to Egress configurations
a. Select Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
c. Click Authorize on the Configure account step of the GitLab SaaS onboarding wizard.
You are redirected to your GitLab SaaS account in order to install and authorize Cortex AppSec, the GitLab App application handling the Cortex
Cloud Application Security functionality.
2. On GitLab SaaS: Review the requested permissions and click Authorize Cortex AppSec.
You are redirected to the Select Repositories step of the installation wizard on the console.
b. Click Save.
A repository can only be integrated with a single instance. The first instance that connects with the repository will be the one that the repository is
assigned to. This means that if multiple integrations attempt to connect to the same repository, only the first integration to establish the connection
will be associated with that repository.
4. Verify integration and confirm that your integrated GitLab SaaS instance has a status of Connected.
To create an additional GitLab SaaS instance: Hover over the GitLab SaaS card in the catalog and click Connect Another.
Subscribed events
Below is a comprehensive list of events to which Cortex Cloud Application Security is subscribed. These events encompass various actions and changes
occurring within your GitLab SaaS environment that trigger notifications and integrations with Cortex Cloud Application Security:
Read more...
Projects
merge_requests_events: This event is triggered when merge or pull requests are created, updated, merged, closed, or have changes made to them
push_events: This event occurs whenever code changes are pushed to a repository, indicating new commits being added to the version control history
tag_push_events: This event is triggered when new tags are pushed to a repository
note_events: This event is generated when comments or notes are added to various objects within GitLab, such as issues, merge requests, or commits
issues_events: This event is triggered when issues are created, updated, closed, or have changes made to them
confidential_issues_events: Similar to issues_events, but specifically for confidential issues that are restricted to certain users or groups
job_events: This event occurs when jobs defined in CI/CD pipelines are created, updated, started, finished, or have changes made to them
pipeline_events: This event is generated when pipelines are created, updated, started, finished, or have changes made to them
wiki_page_events: This event occurs when changes are made to wiki pages within GitLab, including creation, updates, and deletions
deployment_events: This event is triggered when deployments are created, updated, started, finished, or have changes made to them
releases_events: This event occurs when releases are created, updated, published, or have changes made to them
Groups
subgroup_events: This event is specific to GitLab groups and occurs when changes are made to subgroups within a group hierarchy
Integrate Cortex Cloud Application Security with your GitLab self-managed version control system (VCS) to enable security scans for exposed secrets,
infrastructure-as-code (IaC) misconfigurations, vulnerabilities, package operational risks, and license compliance issues in your repositories. This integration
allows you to analyze, prioritize, and resolve detected issues efficiently.
PREREQUISITE:
Authorize the user integrating Cortex Cloud Application Security with your GitLab self-managed instances with the following permissions:
Maintainer permissions. Grants sufficient permissions to configure external integrations, manage repository access, and adjust CI/CD settings
api: Grants full read and write access to the API, including all groups and projects, as well as permissions to interact with the container registry,
the dependency proxy, and the package registry
Administrator repository permissions: In order to scan pull requests (PRs), the user performing the integration must have administrative
privileges for the repositories. This enables Cortex Cloud Application Security to set up subscription webhooks for the selected repositories
Create an egress path to establish the designated route for outbound data transmission from Cortex Cloud to third party services. For more information
about configuring egress paths, refer to Egress configurations
a. Select Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
c. Enter your domain in the Configure Domain step of the wizard and click Register.
NOTE:
The domain is the hostname associated with your GitLab self-managed instance.
You are redirected to your GitLab self-managed instance to register Cortex AppSec as an application. Additionally, the Register OAUTH App step of
the integration wizard is displayed.
d. Copy the Application Name, Homepage URL and Authorization Callback URL values from their respective fields.
d. Once created, copy and save the generated Application ID and Secret values for the new Cortex AppSec application.
b. Paste the GitLab self-managed Application ID and Secret values copied in step 2d above and click Next.
c. Under Selection Options of the Select Repositories step of the wizard, choose the repositories to be connected to the instance:
d. Click Save.
NOTE:
Ensure that you receive the Instance Successfully Created message on this step, indicating successful instance creation.
4. Verify integration and confirm that your integrated GitLab self-managed instance has a status of Connected.
To create an additional GitLab self-managed instance: Hover over the GitLab self-managed card in the catalog and click Connect Another.
Subscribed events
Below is a comprehensive list of events to which Cortex Cloud Application Security is subscribed. These events encompass various actions and changes
occurring within your GitLab self-managed environment that trigger notifications and integrations with Cortex Cloud Application Security.
Read more...
Projects
Groups
By ingesting SAST findings from supported third-party vendors as well as tools that support SARIF output, Cortex Cloud consolidates and enriches your
security vulnerability data within a single management platform, providing a deeper understanding of SAST violations through contextual enrichment.
SonarQube
Semgrep
SARIF
NOTE:
It is recommended to use supported third-party vendor integrations for data ingestion over manual SARIF file uploads. Native integrations provide
automated synchronization of periodic scan data and increased data precision
You can view and manage ingested SAST data on the Code Weaknesses issues page and tabs in Repositories and Applications asset inventories. For
information about managing ingested third-party data on the Cortex Cloud Application Security platform, refer to Third-party findings ingestion
and issue management.
9.3.2 | Semgrep
You can ingest SAST findings directly from Semgrep into Cortex Cloud Application Security. This allows you to use Cortex Cloud Application Security's analysis
and visualization tools to identify critical vulnerabilities, prioritize remediation efforts, and improve your application code security.
PREREQUISITE:
Permissions: Ensure you have Instance Administrator permissions. For more information about Instance Administrator permissions, refer to Dedicated
user roles for Cortex Cloud Application Security
Ensure that you have a connected version control system (VCS) and repositories
NOTE:
To create a Semgrep API token, in Semgrep, navigate to Settings → Tokens → API tokens.
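Before running the wizard, you can optionally sanity-check the new token against the Semgrep API. This is a minimal sketch; the token value is a placeholder, and the curl call is commented out.

```shell
# Placeholder token -- substitute the API token created above
SEMGREP_APP_TOKEN="your-semgrep-api-token"

# Listing deployments is a lightweight way to confirm the token works
DEPLOYMENTS_URL="https://semgrep.dev/api/v1/deployments"
echo "GET ${DEPLOYMENTS_URL}"

# Uncomment to verify (expects a JSON list of deployments):
# curl -s -H "Authorization: Bearer ${SEMGREP_APP_TOKEN}" "${DEPLOYMENTS_URL}"
```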
1. Select Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
TIP:
3. On the Configure Integration step of the integration wizard: Provide your Semgrep API Secrets value and click Next.
a. Select an option:
Accept the displayed mapping as detected by Cortex Cloud Application Security. This does not require any action on your part
Manually configure mapping if Cortex Cloud Application Security could not match a project to a repository: Select Set in the Cortex Cloud
Application Security Repository column, and select a repository from the list that is displayed
Manually modify mapping: Click Replace next to the existing mapped Cortex Cloud Application Security repository. This will open an option
to select a different repository from the displayed list, allowing you to update the mapping
NOTE:
Mapping establishes relationships between Semgrep projects and Cortex Cloud Application Security code repositories,
simplifying access management and enabling risk analysis at the repository level, including displaying findings on the tenant
b. Click Save.
5. Verify integration and confirm that your integrated Semgrep instance has a status of Connected.
You can view SAST code weaknesses generated from ingested Semgrep findings:
On the dedicated Code Weaknesses page under Cortex Cloud Application Security Issues
9.3.3 | SonarQube
You can ingest SAST findings directly from SonarQube into Cortex Cloud Application Security. This allows you to use Cortex Cloud Application Security's
analysis and visualization tools to identify critical vulnerabilities, prioritize remediation efforts, and improve your application code security.
PREREQUISITE:
Permissions: Ensure you have System Admin, AppSec Admin or GRBAC permissions. For more information on AppSec Admin permissions, refer to
Dedicated user roles for Cortex Cloud Application Security
Ensure that you have a connected version control system (VCS) and repositories
Generate and copy a SonarQube API token. Ensure to assign Web API scope to the API token. Refer to the SonarQube documentation for more
information
Create an egress path to establish the designated route for outbound data transmission from Cortex Cloud to third party services. For more information
about configuring egress paths, refer to Egress configurations
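The SonarQube API token from the prerequisites can optionally be verified before starting the wizard. A minimal sketch, assuming a standard SonarQube setup (host and token values are placeholders; the curl call is commented out):

```shell
# Placeholder values -- substitute your SonarQube host and token
SONAR_HOST="https://sonarqube.example.com"
SONAR_TOKEN="your-sonarqube-token"

# SonarQube accepts the token as the username with an empty password
VALIDATE_URL="${SONAR_HOST}/api/authentication/validate"
echo "GET ${VALIDATE_URL}"

# Uncomment to verify (expects {"valid":true}):
# curl -s -u "${SONAR_TOKEN}:" "${VALIDATE_URL}"
```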
NOTE:
1. Select Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
2. Select SonarQube under the ‘3rd Party Ingestion’ section in the catalog.
URL and Port: Provide the URL of your SonarQube instance. Port is optional
Organization: The SonarQube organization to be associated with the data ingestion. Required for SonarQube Cloud
b. Click Accept.
a. Select an option:
Accept the displayed mapping as detected by Cortex Cloud Application Security. This does not require any action on your part
Manually configure mapping if Cortex Cloud Application Security could not match a project to a repository: Select Set in the Cortex Cloud
Application Security Repository column, and select a repository from the list that is displayed
Manually modify mapping: Click Replace next to the existing mapped Cortex Cloud Application Security repository. This will open an option
to select a different repository from the displayed list, allowing you to update the mapping
NOTE:
Mapping establishes relationships between SonarQube Applications and Cortex Cloud Application Security code repositories, simplifying
access management and enabling risk analysis at the repository level, including displaying findings on the tenant
5. Select Close on the Status step of the wizard to complete the integration, initiating an automatic ingestion of data from the integrated SonarQube
projects.
NOTE:
Verify that the Connector Created Successfully message is displayed on the page.
6. Verify integration and confirm that your integrated SonarQube instance has a status of Connected.
On the dedicated Code Weaknesses page under Cortex Cloud Application Security Issues
9.3.4 | Veracode
You can ingest SAST findings directly from Veracode into Cortex Cloud Application Security. This allows you to use Cortex Cloud Application Security's
analysis and visualization tools to identify critical vulnerabilities, prioritize remediation efforts, and improve your application code security.
PREREQUISITE:
Cortex Cloud: Instance Admin, AppSec Admin or GRBAC permissions. For more information on AppSec Admin permissions, refer to Dedicated
user roles for Cortex Cloud Application Security
Ensure that you have a connected version control system (VCS) and repositories
Generate and copy a Veracode access key. The access key includes a key ID and secret
Create an egress path to establish the designated route for outbound data transmission from Cortex Cloud to third party services. For more information
about configuring egress paths, refer to Egress configurations
1. Select Settings → Data Sources (under Data Collections) → + Data Source → Show More → Code Repositories.
2. Select Veracode under the ‘3rd Party Ingestion’ section in the catalog.
Enter the Veracode key ID and secret from step 1b into their respective fields
b. Click Authorize.
The Select Applications step of the integration wizard is displayed, including a list of Veracode applications automatically mapped to
Cortex Cloud Application Security repositories.
Manually map Veracode applications to Cortex Cloud Application Security repositories: Click on a Cortex Cloud Application Security repository
and select the required repository
NOTE:
This is the recommended option to ensure complete coverage and successful operation of all features.
Only selected applications, and then select the applications from the menu
b. Click Next.
a. Select an option:
Accept the displayed mapping as detected by Cortex Cloud Application Security. This does not require any action on your part
Manually configure mapping if Cortex Cloud Application Security could not match a project to a repository: Select Set in the Cortex Cloud
Application Security Repository column, and select a repository from the list that is displayed
Manually modify mapping: Click Replace next to the existing mapped Cortex Cloud repository. This will open an option to select a different
repository from the displayed list, allowing you to update the mapping
NOTE:
Mapping establishes relationships between Veracode projects and Cortex Cloud Application Security code repositories, simplifying access
management and enabling risk analysis at the repository level, including displaying findings on the tenant
b. Click Next.
6. Select Done on the Status step of the wizard to complete the integration, initiating an automatic ingestion of data from the integrated Veracode projects.
NOTE:
Verify that the Connector Created Successfully message is displayed on the page.
7. Verify integration and confirm that your integrated Veracode instance has a status of Connected.
Limitations
Currently, Veracode SAST ingestion supports Veracode periodic and CLI scans. Pull Request scans and other types are not supported
History, deduplication and DevEx features such as PR comments, IDE, CLI and enforcement are not supported
You can view SAST code weaknesses generated from ingested Veracode findings:
On the dedicated Code Weaknesses page under Cortex Cloud Application Security Issues
Manage connections
Upload SAST data from third-party tools (for vendors that support SARIF output) to the Cortex Cloud platform using SARIF (Static Analysis Results Interchange
Format) to analyze and remediate SAST issues directly within the platform. This feature is particularly useful for vendors that are not directly supported by
Cortex Cloud. When SARIF files are uploaded, they are parsed to create code findings. These findings can be elevated to issues either manually or
automatically, depending on the configured policies.
Supported SARIF versions include 2.0 and 2.1, accepting .zip, .json, and .sarif file formats.
You can upload SARIF files using either Cortex Cloud or via an API.
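For reference, a minimal SARIF 2.1.0 document has the shape below. Real scanners emit their own results, so this skeleton is illustrative only (the tool name is a placeholder). The sketch writes the file and checks that it parses as JSON before upload:

```shell
# Write a minimal, illustrative SARIF 2.1.0 skeleton
cat > minimal.sarif <<'EOF'
{
  "version": "2.1.0",
  "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
  "runs": [
    {
      "tool": { "driver": { "name": "example-scanner", "rules": [] } },
      "results": []
    }
  ]
}
EOF

# Confirm the file is well-formed JSON before uploading
python3 -m json.tool minimal.sarif > /dev/null && echo "valid JSON"
```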
PREREQUISITE:
Grant the user ingesting the SARIF data Instance Administrator privileges
Onboard the repository into the system before SARIF findings for that repository can be uploaded
Only upload findings relevant to the repository, excluding unrelated data such as CVEs
Create an egress path to establish the designated route for outbound data transmission from Cortex Cloud to third party services. For more information
about configuring egress paths, refer to Egress configurations
2. Select a repository, then right-click and select Actions → Ingest SARIF from the side panel.
4. Browse and select the SARIF file from the open file dialog.
SARIF files can be uploaded programmatically via the API, among other uses, for CI/CD automation. This is a two-step process: first, obtain a pre-signed URL,
then upload the file to the URL.
curl --fail --upload-file output.sarif \
  -H "Content-Type: application/octet-stream" \
  "${{ url from step 1 }}"
Parameter Description
Cortex API Token: An API token for the tenant is required. It must be a Standard key, not an Advanced key. For more information about API keys, refer to Get started with Cortex XSIAM APIs.
x-xdr-auth-id: Authorization ID. To retrieve the API key authorization ID, select Settings → Configurations → API Keys (under the Configurations section).
Parameter Description
repositoryId: The asset ID of the target repository for SARIF ingestion. To retrieve the
repository ID,
2. Retrieve the Asset ID from the Asset ID column of the repository you
wish to map.
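The two-step upload can be sketched end to end in Python using the parameters described above. This is a hedged sketch, not the documented API: the pre-signed-URL route and the "url" response field are assumptions for illustration; consult the API reference for the actual endpoint.

```python
import json
import urllib.request

def build_headers(api_token: str, auth_id: str) -> dict:
    """Headers required for Cortex API calls: a Standard API key plus its ID."""
    return {
        "Authorization": api_token,   # Standard (not Advanced) API key
        "x-xdr-auth-id": auth_id,     # authorization ID of that key
        "Content-Type": "application/json",
    }

def upload_sarif(api_base: str, api_token: str, auth_id: str,
                 repository_id: str, sarif_path: str) -> None:
    # Step 1: obtain a pre-signed upload URL for the target repository.
    # The route and the "url" response field are illustrative assumptions.
    body = json.dumps({"repositoryId": repository_id}).encode()
    req = urllib.request.Request(f"{api_base}/sarif/upload_url", data=body,
                                 headers=build_headers(api_token, auth_id))
    with urllib.request.urlopen(req) as resp:
        presigned_url = json.load(resp)["url"]

    # Step 2: upload the SARIF file to the pre-signed URL (equivalent to
    # the curl --upload-file example, which issues a PUT).
    with open(sarif_path, "rb") as f:
        put = urllib.request.Request(
            presigned_url, data=f.read(), method="PUT",
            headers={"Content-Type": "application/octet-stream"})
    urllib.request.urlopen(put)
```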
IMPORTANT:
Set the parameter secrets described in the table above as repository secrets in your secret management system for your CI pipelines
For GitHub only, set the necessary repository secrets in the Secrets and Variables section of your repository settings: Navigate to Settings → Secrets
and Variables (under the Security section) → Actions → New repository secret
Example 22.
This example describes a GitHub Actions pipeline configured to scan every push to the 'main' or 'master' branch, subsequently sending those results to
Cortex Cloud.
name: SARIF Upload
on:
  push:
    branches:
      - main
      - master
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2
You can view SAST code weaknesses generated from ingested SARIF findings:
On the dedicated Code Weaknesses page under Cortex Cloud Application Security Issues
You can modify third-party integration configurations, including remapping connected repositories and replacing expired API keys.
2. Search for a connected third-party data source (such as Veracode, SonarQube, or Semgrep).
Reselect Applications: Redirects to the Select Application step of the integration wizard, allowing you to manage selected applications
Change Mapping: Redirects to the Select Application step of the wizard, allowing you to manage mapping
Delete Application: Deletes the application. Mapped repositories will be deleted accordingly. This option is available only if ‘All current and future
applications’ is not selected
Identify issues requiring immediate attention: Quickly surface and prioritize critical security vulnerabilities and risks
Enhance the security posture of your repositories: Gain insights into your organization's overall application security health and pinpoint areas for
improvement
Efficiently address failed build/PR blocks: Monitor and troubleshoot security issues that arise from failed builds and pull requests
Under Dashboard & Reports, select Dashboard → Application Security in the page dropdown menu
Dashboard controls
Time Range: By default, dashboards display data for the last 30 days. You can customize the time frame by clicking Select and choosing one of the exposed
options (24h, last 7/30 days), or by clicking Custom, selecting start and end dates from the provided calendars, and applying the changes
NOTE:
To adjust the timeframe of the data displayed in a widget, select a desired value from the widget's menu. This sets the time range for that widget
individually, overriding the timeframe of the entire dashboard, whether default or custom.
The Cortex Cloud Application Security dashboard provides an overview of applications and their associated repositories, CI pipelines, and third-party vendors.
It displays the location of each asset within the system, such as the type of version control system hosting each repository (such as GitLab, GitHub) or the type
of tool running the pipeline (such as GitHub Actions), the total count of each asset type, and a detailed breakdown of assets across the system.
Top Repositories at Risk: Identifies and ranks the top repositories in your environment at risk based on the number of high and critical issues identified in
the repositories. Selecting a repository opens the repository description card locally without requiring a redirect to the repository assets page. Refer to
Expanded repository asset information for more information about repository assets
Top IaC Issues: Displays the most critical IaC issues identified in IaC resources. Selecting an issue or clicking Show All redirects to the IaC
Misconfigurations issues page. Refer to Overview for more information about IaC misconfiguration issues
Top CVE Issues: Displays the most critical CVE vulnerabilities identified in software packages. Selecting an issue or clicking Show All redirects to the
Vulnerabilities issues page. Refer to CVE vulnerability issues for more information about CVE vulnerabilities issues
Top Secrets Issues: Displays the most critical exposed secrets identified in your software environment. Selecting an issue or clicking Show All redirects to
the Secrets issues page. Refer to Overview for more information about Secrets issues
Scan Overview: Provides a summary of the status (passed, blocked) for the types of scans (periodic, PR and CI). Click on a status to redirect to the
corresponding scan type management page. Refer to Overview for more information about scan management
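The Top Repositories at Risk ranking described above reduces to a sort on the combined count of high- and critical-severity issues per repository. A sketch with illustrative field names (the widget's exact scoring is internal):

```python
def top_repositories_at_risk(repos: list[dict], limit: int = 5) -> list[str]:
    """Rank repositories by their combined critical + high issue counts."""
    ranked = sorted(
        repos,
        key=lambda r: r["critical_issues"] + r["high_issues"],
        reverse=True,  # most at-risk repositories first
    )
    return [r["name"] for r in ranked[:limit]]
```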
The Cortex Cloud Application Security asset inventory provides a comprehensive view of assets detected in scans across your engineering environment. This
includes code (Secrets, Infrastructure as Code (IaC), Software Composition Analysis (SCA)) and repository inventories. These inventories provide details and
insights, enabling you to understand the composition and context of your AppSec assets.
NOTE:
Application inventories are managed under Application Security Posture Management (ASPM). Refer to Manage application assets for more information
Use cases
Asset inventory: Maintain a comprehensive inventory of all detected assets, including their metadata and relationships to other assets. This
provides a centralized view of all components within the environment
Code to cloud mapping: A graphical representation of the SDLC, highlighting the asset's location within it. This visualization allows for a clear
understanding of the asset's journey and its relationship to other components
Application path to production: Trace the asset's path through the application lifecycle, from its origin in code repositories to its deployment. This
includes identifying all intermediate stages and dependencies
Security risks
Secrets exposure: Identify Secrets (such as API keys, passwords, certificates) in an asset, and provide details such as severity, when discovered,
and the assignee
Infrastructure as Code (IaC) Misconfigurations: Identify misconfigurations associated with the IaC asset configuration, and provide details such as
severity, location, when created and the assignee
Software Composition Analysis (SCA) vulnerabilities: Identify known vulnerabilities in open-source packages associated with an asset, including
details such as severity, the CVE issue, CVSS score, when discovered and the assignee
License miscompliance: Identify and detail license miscompliance issues within packages associated with an asset, including severity, license
category (such as strong copyleft), location, when discovered, and the assignee
Package Integrity Issues: Identify and detail any package operational issues in packages associated with the asset, including severity, location,
when created and the assignee
Code Weaknesses: Identify and detail code violations associated with the asset, including severity, location, when created and the assignee
All Code: A consolidated inventory of all code assets in your software development lifecycle (SDLC)
Software Packages: An inventory of software packages in your SDLC. Refer to Software packages for more information
IaC Resources: An inventory of IaC assets in your SDLC. Refer to Infrastructure-as-Code (IaC) resources for more information
Repositories: An inventory of repositories in your SDLC. Refer to Repositories for more information
CI Pipeline: An inventory of CI pipelines in your SDLC. Refer to CI/CD pipelines as assets for more information
Data presentation
Asset details are displayed on the platform user interface (UI) across two levels: An initial inventory overview displays all assets in a category. Clicking a row in
the inventory expands a description card, providing an overview with additional details about the asset, as well as dedicated tabs that allow you to further
investigate the asset. The description card also includes information about any issues and findings associated with the asset.
9.5.2 | Repositories
The Repository asset inventory provides comprehensive visibility of all your repositories integrated with Cortex Cloud Application Security, providing detailed
information and insights into repository artifacts, configurations, and dependencies. You can directly access issues, and findings related to repository assets
from the Repository assets page, allowing you to prioritize and remediate them without having to navigate to a separate remediation section.
To access repository assets, under Inventory, select All Assets → Repositories (under Code).
Repository dashboard
Providers: A breakdown showing the connected version control providers (such as GitHub and GitLab) and the number of repositories found in each
provider
Privacy State: A breakdown showing the distribution between public and private repositories and the number of repositories in each category
The following table describes the Repository properties displayed in the inventory table.
Field/Attribute Description
Name The name of the repository in the version control system (VCS).
Repository Provider The VCS platform hosting the repository, such as GitHub, GitLab, or Bitbucket
Repository Organization The organizational structure (such as project, team, platform) that contains and manages this repository.
Repository Owners The individuals or teams responsible for maintaining and managing the repository
Importance Score The Repository Importance score is a numerical score representing the overall quality or risk associated
with the repository, based on various factors such as codebase characteristics and path-to-production
environments
Application Name The name of the application to which the repository belongs, indicating it is part of the application's assets.
Open Cases The number of active security or compliance cases linked to the repository
Open Issues The total count of unresolved issues detected across all scan types in the repository
Data Sources —
Last Scan Date The date when the repository was last scanned
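The Importance Score in the table above combines multiple factors into a single number. The real model and its factors are internal to the platform; the weighted-sum sketch below is purely illustrative of how such a score could be composed:

```python
def importance_score(factors: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Purely illustrative: combine normalized factor values (0-1) by
    weight into a 0-100 score. Factor names and weights are hypothetical."""
    total_weight = sum(weights.values())
    raw = sum(weights[k] * factors.get(k, 0.0) for k in weights)
    return round(100 * raw / total_weight, 1)
```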
Click on an asset in the inventory table to open a detailed Asset card, which provides additional, more in-depth information about the asset. The information is
organized into tabs, including an Overview tab (default display) which provides highlights and a general summary, while contextual tabs focus on particular
properties of the asset. Additionally, the card includes details about detected risks, allowing you to further explore them directly from the asset inventory. You
can also perform actions on the asset using the Actions menu.
The repository asset summary, displayed at the top of the card, provides concise details about the repository, including the name and type of the repository,
and the version control system to which it belongs.
Overview
Highlights provide key security and operational insights related to the repository, including:
Critical/High issues: An aggregation of critical and high issues discovered within the repository assets across all scan types (IaC, Secrets, SCA) as well
as ingested third-party SAST findings
Risk summary: The number of vulnerabilities associated with the repository, grouped by category (cases, issues, and findings) and their severity. For more
information about risks, refer to Cortex Cloud Application Security code scanners
Visibility timeline: When the repository issues were first and last detected
Asset details, including Asset Id, Asset Types and Asset Groups associated with the repository
Applications: Lists the applications that include the repository as part of their defined assets or configurations.
Repository details: Provides information about the repository itself. This includes the version control system hosting the repository (e.g., GitHub, GitLab,
Bitbucket), the programming languages used within the repository, its visibility setting (public or private), whether it's archived, the timestamp of the last
commit, and a list of owners
Scan information: A list of scans conducted on the repository, including all scan types (IaC, Secrets and so on). Scan details include the name of the
scanner used, the specific branch of the package that was scanned, the timestamp of the last scan, and the overall status of the scan (such as
completed or failed).
Code to Cloud
Code to Cloud visually represents the software development lifecycle (SDLC), focusing on the repository's role in the path to production. The graph describes
the links between the repository node, pipeline, image, and cluster.
Developers write code and commit changes to a version control system like Git, storing it in a central code repository
When code changes are pushed to the repository, an automated pipeline is triggered, handling build, automated testing, and packaging code into a
deployable artifact
The built and tested application is packaged into a container image, which is then pushed to a container registry like Docker Hub or a cloud-specific
registry
The container image is pulled from the registry and deployed to a container orchestration platform like Kubernetes, managing running application
instances (containers) for scalability, availability, and resource management
The container orchestration platform runs on top of your chosen cloud infrastructure, such as AWS, Azure, or GCP
Applications
The Application tab provides an overview of the application associated with this repository, including a graphical representation of its path to production, which
incorporates the repository's role within the workflow.
Vulnerabilities
The Vulnerabilities tab provides a list of vulnerabilities identified within the repository in your environment. Each vulnerability includes details regarding its
severity level, associated CVE identifier, CVSS score, initial detection date, and assigned team member or group responsible for remediation.
CVSS Score: The Common Vulnerability Scoring System score that quantifies the severity of the vulnerability
Assigned To: The person or team responsible for addressing the vulnerability
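The CVSS score maps to a qualitative severity level via fixed bands. The ranges below follow the CVSS v3.x specification; the platform may apply its own contextual adjustments on top:

```python
def cvss_severity(score: float) -> str:
    """Qualitative severity rating per the CVSS v3.x specification."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"  # 9.0 - 10.0
```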
Configurations
The Configurations tab displays an inventory of IaC misconfigurations across all repository assets:
Severity level (icon): Indicates the level of severity of the IaC misconfiguration
Asset Name: The name of the IaC resource in which the misconfiguration occurred
Assigned To: The person or team responsible for addressing the vulnerability
Secrets
The Secrets tab displays an inventory of Secrets detected within the repository.
Severity level (icon): Indicates the level of severity of the exposed Secrets
Assigned To: The person or team responsible for addressing the Secrets
Creation Date: The date when the Secrets were initially detected
Package Integrity
The Package Integrity tab provides details about the popularity and maintenance of packages identified within the repository. It also includes an inventory of
package operational risk issues and package license issues, offering a comprehensive view of the package's overall health and compliance.
Severity level (icon): Indicates the level of severity of the package license miscompliance
License Name: The name of the license associated with the package. This indicates the specific license agreement that is potentially being violated
Asset Name: The name of the asset that uses the package with the license miscompliance. This identifies where the license issue occurs
Branch: The branch of the codebase where the asset with the license issue is located
Asset Name: The name of the asset that uses the package with the package operational risk
Branch: The branch of the codebase where the asset with the package operational risk is located
Assigned To: The person or team responsible for addressing the package operational risk
Creation Date: The date when the package operational risk was initially detected
Code Weaknesses
The Code Weakness tab provides an inventory of ingested SAST (Static Application Security Testing) CWEs (Common Weakness Enumerations) identified
within the repository. Each CWE is listed with its corresponding severity level, allowing you to prioritize remediation efforts based on the potential impact on the repository.
Severity level (icon): Indicates the level of severity of the CWE issue
Assigned To: The person or team responsible for addressing the CWE issue
Creation Date: The date when the CWE issue was detected
For more information about third-party SAST ingestion, refer to Overview.
Copy link to asset: Duplicate the link or URL associated with the asset, allowing you to easily share or reference the asset by providing its direct link to
others
View asset data: Display asset data. Formats: JSON, Tree View
Copy text to clipboard: Duplicate selected text for easy pasting elsewhere
Copy entire row: Duplicate the entire row of data for easy pasting elsewhere
Show/hide rows with [Asset_Name]: Show/hide rows matching the [asset name] of the selected row
Ingest SARIF: Allows you to upload a file to ingest third-party SARIF data
To create an SBOM:
3. Configure the following settings from the Export SBOM dialog box:
a. Level: Select the scope of data to include in the SBOM. Options: Repository, Organization (downloads the SBOM for the entire VCS
organization associated with the repository)
b. Format: Select the output format for the SBOM. Options:
The IaC resource asset inventory provides a centralized view of all infrastructure-as-code (IaC) resources and their details across your environments. The
platform enables efficient tracking and management of your IaC resources, ensuring compliance with security and governance standards. You can directly
access IaC misconfiguration issues and findings within the IaC assets inventory, allowing you to prioritize and remediate them without having to navigate to a
separate remediation section.
To access IaC assets, under Inventory, select All Assets → Code → IaC Resources.
Cloud Providers: A breakdown showing the connected cloud providers (such as AWS and GCP) and the number of IaC resources found in each provider
Frameworks: A breakdown showing the connected frameworks (such as Terraform and Kubernetes) and the number of IaC resources found in each
framework
The following table describes the IaC resource properties displayed in the inventory table.
Property Description
Name The unique identifier for the IaC resource within the system
Resource type The type of IaC resource, such as a virtual machine, storage account, or network security group
Asset Class The classification of the IaC resource, such as compute, storage, network, or security
Framework The IaC framework used to define and manage this resource, such as Terraform or CloudFormation
Repository The version control system repository where the IaC code for this resource is stored
Branch The specific branch of the repository where the IaC asset is located
Provider File Path The relative path to the IaC provider file within the repository that defines this resource
Cloud Provider The cloud provider where this IaC resource is deployed, such as AWS, Azure, or GCP
Deployed State The current deployment status of the IaC resource, such as "Deployed," or "Drifted"
Application Name The application to which this IaC resource belongs, if applicable.
Open Cases The number of open security cases related to this IaC resource.
Open Issues The number of open security issues related to this IaC resource.
Data Sources The sources from which data about this IaC resource is collected, such as cloud provider APIs or
configuration files.
First Seen The date and time when this IaC resource was first discovered by the system.
Click on an asset in the inventory table to open a detailed Asset card, which provides additional, more in-depth information about the asset. The information is
organized into tabs, including an Overview tab (default display) which provides highlights and a general summary, while contextual tabs focus on particular
properties of the asset. Additionally, the card includes details about detected risks, allowing you to further explore them directly from the asset inventory. You
can also perform actions on the asset using the Actions menu.
The IaC resource asset summary, displayed at the top of the card, provides concise details about the IaC resource, such as its name (such as
aws_s3_bucket.website), type (such as S3 bucket), and the associated framework (such as Terraform).
Overview
Highlights include:
Critical/High issues: An aggregation of critical and high issues associated with the IaC resource
Internet Exposed Runtime Asset: Indicates whether the IaC resource, when deployed, results in a runtime asset (such as a container) that is directly
accessible from the public internet
Deployed: Indicates whether the IaC resource has been deployed and is currently active within your cloud environment or infrastructure
New: Indicates whether the IaC resource was created during the past 30 days
Sensitive Data in Runtime: Indicates whether the IaC resource, when deployed, handles or stores sensitive data within its runtime environment
Risk summary: The number of risks associated with the IaC resource, grouped by category (cases, issues, and findings) and their severity level. For more
information about IaC scans, refer to Overview
Asset details, including Asset Id, Asset Types and Asset Groups associated with the IaC resource
Visibility timeline: When the IaC resource was first and last detected
Applications: Lists the applications that include this IaC resource as part of their defined assets or configuration
Source Control Information: Displays the origin and location of the IaC resource's code. It includes the specific provider hosting the code (such as
GitHub, GitLab), the repository and branch where the code resides, and the exact file path to the resource's definition
Deployment and Provisioning: Provides details of where and how the IaC resource is deployed and manifested within your cloud environment. It lists the
cloud assets provisioned based on the IaC resource, and specifies the cloud platform where these assets are deployed (such as AWS, Azure, Google
Cloud)
Version History and Metadata: Provides insights into the resource's development history, authorship, and associated metadata. It includes a list of
contributors who have modified the IaC code, any user-defined tags associated with the resource, and details about the initial and most recent commits
related to this IaC definition
Scan information: A list of scans conducted on the IaC resource. Scan details include the name of the scanner used, the specific branch of the package
that was scanned, the timestamp of the last scan, and the overall status of the scan (such as completed or failed)
Code
The Code tab provides two code snippets to help you understand the asset's context within your IaC environment:
Asset definition: This snippet shows the exact code block and file path where the asset is defined within your IaC file
Resource location: This snippet displays the directory path and file name where the asset's resource code resides within your IaC repository
Code to Cloud
The Code to Cloud tab visually represents the software development lifecycle (SDLC), which includes the selected IaC resource in the path to production. The
graph describes the links between the repository node hosting the IaC resource, the traced runtime resource, and any associated issues.
Developers write code and commit changes to a version control system like Git, storing it in a central code repository
When code changes are pushed to the repository, an automated pipeline is triggered, handling build, automated testing, and packaging code into a
deployable artifact
The built and tested application is packaged into a container image, which is then pushed to a container registry like Docker Hub or a cloud-specific
registry
The container image is pulled from the registry and deployed to a container orchestration platform like Kubernetes, managing running application
instances (containers) for scalability, availability, and resource management
The container orchestration platform runs on top of your chosen cloud infrastructure, such as AWS, Azure, or GCP
Applications
Configuration
The Configuration tab displays an inventory of IaC misconfigurations across all IaC assets.
Severity level (icon): Indicates the level of severity of the IaC misconfiguration
Asset Name: The name of the IaC resource in which the misconfiguration occurred
Assigned To: The person or team responsible for addressing the vulnerability
Secrets
Severity level (icon): Indicates the level of severity of the exposed Secrets
Assigned To: The person or team responsible for addressing the Secrets
Creation Date: The date when the Secrets were initially detected
Right-click on an asset in an inventory table to access the Actions menu, where you can perform the following actions:
Open in new tab: Opens the description tab of the asset for detailed analysis of the issue
View asset data: Opens a new pop-up window displaying the data retrieved for the asset during the most recent scan in either JSON (default) or tree
view. This raw data provides a comprehensive and unformatted view of the asset's properties and attributes as they were initially ingested
Show/hide rows: Select a value in a row to filter the entire inventory, showing or hiding assets based on the selected attribute
The Software Packages asset inventory provides a centralized view of all open source software packages and their details across your environments. The
platform enables efficient tracking and management of your software package assets, ensuring compliance with security and governance standards. You can
directly access package vulnerabilities, operational risks and license misconfiguration cases, issues, and findings within the Software Packages asset
inventory, allowing you to prioritize and remediate them without having to navigate to a separate remediation section.
To access OSS package assets, under Inventory, select All Assets → Software Packages (under Code).
Dependency Types: A breakdown showing the number of direct and transitive (indirect) software packages
The following table describes the software package properties displayed in the inventory table.
Property Description
Data Sources The sources used to collect data about the software packages
Licenses The legal permissions or rights granted for the use of the OSS package. For more information about
licenses, refer to Open source software license categories
Dependency Type Whether the package is a direct (a package that is explicitly listed as a requirement) or transitive (a
package that is indirectly required by another package) dependency
Providers The platform (such as GitHub) hosting and supplying the OSS package
File Path The location of the OSS package file in your environment
Repository The centralized storage location where the OSS package is stored and managed
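The direct/transitive distinction described above is a reachability question over the dependency graph: packages the project lists explicitly are direct, and everything else they pull in is transitive. A sketch with hypothetical package names:

```python
def classify_dependencies(direct: set[str],
                          requires: dict[str, set[str]]) -> dict[str, str]:
    """Label every reachable package as 'direct' or 'transitive'.
    `direct` holds packages the project lists explicitly; `requires`
    maps each package to the packages it depends on."""
    labels = {pkg: "direct" for pkg in direct}
    stack = list(direct)
    while stack:
        for dep in requires.get(stack.pop(), set()):
            if dep not in labels:  # first label wins; direct stays direct
                labels[dep] = "transitive"
                stack.append(dep)
    return labels
```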
Click on an asset in the inventory table to open a detailed Asset card, which provides additional, more in-depth information about the asset. The information is
organized into tabs, including an Overview tab (default display) which provides highlights and a general summary, while contextual tabs focus on particular
properties of the asset. Additionally, the card includes details about detected risks, allowing you to further explore them directly from the asset inventory. You
can also perform actions on the asset using the Actions menu.
The software package asset summary, displayed at the top of the card, provides concise details about the asset's key attributes, including the package name
and originating package manager.
Overview
The Overview tab summarizes software package highlights and properties.
Highlights include:
New: Indicates whether the package was first detected in your environment during the past 30 days
Root: Indicates whether this package is the top-level package within its dependency tree, meaning it is not a dependency of any other package within
the project
Deprecated: Whether the package was officially deprecated by its maintainers. This indicates that it is no longer recommended for use and could
potentially include security risks
Risk summary: The number of vulnerabilities associated with the package, grouped by category (cases, issues, and findings) and their severity. For more
information about OSS package vulnerabilities, refer to Overview
Visibility timeline: When the package vulnerabilities were first and last detected
Asset details, including Asset Id, Asset Types and Asset Groups associated with the package
Applications: Lists the applications that include this package as part of their defined assets or configurations. See Application below for more details
Package source: Includes details about the package origin, including its provider (such as PyPI, npm), the repository and branch hosting the package
source code, the path to its primary file, the license under which it's distributed, its dependency type (direct or transitive), and a list of contributors
Scan information: A list of Software Composition Analysis (SCA) scans conducted on the package. Scan details include the name of the scanner used,
the specific branch of the package that was scanned, the timestamp of the last scan, and the overall status of the scan (such as completed or failed).
Code
The Code tab identifies the package's location within your codebase by providing the repository, file path, and specific line number. Additionally, it presents the
package's dependency tree, viewable as either a graph or a hierarchical list, presenting its relationships with other components.
Code to Cloud
The Code to Cloud tab visually represents the software development lifecycle (SDLC), focusing on the package's role in the path to production. The graph
describes the links between the repository node hosting the package, the pipeline, image, and cluster.
Developers write code and commit changes to a version control system like Git, storing it in a central code repository
When code changes are pushed to the repository, an automated pipeline is triggered, handling build, automated testing, and packaging code into a
deployable artifact
The built and tested application is packaged into a container image, which is then pushed to a container registry like Docker Hub or a cloud-specific
registry
The container image is pulled from the registry and deployed to a container orchestration platform like Kubernetes, managing running application
instances (containers) for scalability, availability, and resource management
The container orchestration platform runs on top of your chosen cloud infrastructure, such as AWS, Azure, or GCP
Application
The Application tab provides an overview of the application associated with this package, including a graphical representation of its path to production, which
incorporates the package's role within the workflow.
Vulnerabilities
The Vulnerabilities tab provides a list of vulnerabilities identified within the package in your environment. Each vulnerability includes details regarding its
severity level, associated CVE identifier, CVSS score, initial detection date, and assigned team member or group responsible for remediation.
CVSS Score: The Common Vulnerability Scoring System score that quantifies the severity of the vulnerability
Assigned To: The person or team responsible for addressing the vulnerability
The Package Integrity tab includes details about package operational risks and license issues detected in software packages.
Severity level (icon): Indicates the level of severity of the package license miscompliance
License Name: The name of the license associated with the package. This indicates the specific license agreement that is potentially being violated
Asset Name: The name of the asset that uses the package with the license miscompliance. This identifies where the license issue occurs
Branch: The branch of the codebase where the asset with the license issue is located
Asset Name: The name of the asset that uses the package with the package operational risk
Branch: The branch of the codebase where the asset with the package operational risk is located
Assigned To: The person or team responsible for addressing the package operational risk
Creation Date: The date when the package operational risk was initially detected
Right-click on an asset in an inventory table to access the Actions menu, where you can perform the following actions:
Open in new tab: Opens the description tab of the asset for detailed analysis of the issue
View asset data: Opens a new pop-up window displaying the data retrieved for the asset during the most recent scan in either JSON (default) or tree
view. This raw data provides a comprehensive and unformatted view of the asset's properties and attributes as they were initially ingested
Show/hide rows: Select a value in a row to filter the entire inventory, showing or hiding assets based on the selected attribute
The CI/CD pipeline asset inventory provides a centralized view of all CI/CD pipeline assets across your environments, enabling efficient tracking and
management. You can access and analyze pipeline details, properties, and insights, including deployment status (whether the pipeline is deployed, active, or
new) and a summary of findings and issues associated with the pipeline. This allows you to assess the security and operational status of your pipelines. You
can also review the pipeline code, the deployment, and the applications containing the pipeline in dedicated spaces.
To access CI/CD pipelines assets, under Inventory, select All Assets → Code → CI/CD Pipelines.
The dashboard includes a widget detailing the CI pipeline providers. You can filter by provider.
The following table describes the CI/CD Pipeline asset properties displayed in the inventory table.
Read more...
Property Description
Provider The service that supplied the pipeline, such as GitHub Actions or Jenkins
Repository Name The code repository which stores the source code, pipeline configurations,
and related assets used for the CI/CD process
CI File Path The specific location or directory path where the CI configuration file is
stored
Click on an asset in the inventory table to open a detailed Asset card, which provides additional, more in-depth information about the asset. The information is
organized into tabs, including an Overview tab (default display) which provides highlights and a general summary, while contextual tabs focus on particular
properties of the asset. Additionally, the card includes details about detected risks, allowing you to further explore them directly from the asset inventory. You
can also perform actions on the asset using the Actions menu.
The CI/CD Pipeline asset summary, displayed at the top of the card, provides concise details about the CI/CD pipeline assets, such as its name, the platform
used (for example GitHub Actions) and specific pipeline configurations.
Overview
The Overview tab summarizes CI/CD pipeline asset highlights and properties.
Highlights include:
Critical/High issues: An aggregation of critical and high issues associated with the CI/CD pipeline asset
Deployed: Indicates whether the CI/CD pipeline asset has been deployed and is currently active within your cloud environment or infrastructure
New: Indicates whether the CI/CD pipeline asset was created during the past 30 days
Active: Indicates whether the CI/CD pipeline asset is active and processing tasks
Risk summary: The number of risks associated with the pipeline asset, grouped by category (cases, issues, and findings) and severity level
Asset details, including Asset Id, Asset Types and Asset Groups associated with the CI/CD pipelines asset
Visibility timeline: When the CI/CD pipeline assets were first and last detected
Applications: Lists the applications that include this pipeline asset as part of their defined assets or configuration
CI/CD configuration:
Provider: The platform or service that hosts and manages the CI/CD pipeline, such as Jenkins and GitHub Actions
CI File Repository: The location or repository where the configuration files for the CI/CD pipeline are stored
CI Instance: The specific instance or environment where the CI/CD pipeline is executed
Last Job Execution: The most recent execution of a job within the CI/CD pipeline
Contributors: The individuals or entities who have made contributions to the CI/CD pipeline. This information supports collaboration within the CI/CD
pipeline's development process
Code to Cloud
The Code to Cloud tab visually represents the software development lifecycle (SDLC), which includes the selected CI/CD pipeline in the path to production.
The Applications tab provides an overview of the applications associated with this CI/CD pipeline, including the application risk score, business criticality,
business owners, and path to production. The path to production provides a graphical representation of the application software development lifecycle, including
the CI/CD pipeline role within the workflow.
Right-click on an asset in an inventory table to access the Actions menu, where you can perform the following actions:
Open in new tab: Opens the description tab of the asset for detailed analysis of the issue
View asset data: Opens a new pop-up window displaying the data retrieved for the asset during the most recent scan in either JSON (default) or tree
view. This raw data provides a comprehensive and unformatted view of the asset's properties and attributes as they were initially ingested
Show/hide rows: Select a value in a row to filter the entire inventory, showing or hiding assets based on the selected attribute
You can manage the Instances configured for a Data Source on the Data Sources page. You can edit, delete, enable, or disable instances, and refresh log
data.
2. Find an instance by clicking on a Data Source name or using the Search field.
3. In the row for the instance name, take the required action:
Action Instructions
b. In Edit Data Source, you can update the values in the Connect and Collect sections. The options under
Recommended Content are view only.
If you delete all the instances for a Data Source, the Data Source is not listed on the Data Sources page.
You can add a new data source with the Data Source Onboarder. The Onboarder installs the data source, sets up an instance, configures playbooks and
scripts, and other recommended content. The Onboarder offers default (customizable) options, and displays all configured content in a summary screen at the
end of the process.
To add a new instance for an integrated data source, click the menu in the right corner of an existing data source and select + Add New Instance. Then skip to Step 4.
Hovering over a data source displays information about the data source and its integrations. Data sources that are already integrated are highlighted
green and show Connect Another Instance. To see details of existing integrations, click on the number of integrations.
The data sources are drawn from the Marketplace, Custom Collectors, and integrations. If you search for a data source and see No Data Sources Found,
click Try searching the Marketplace to view the Marketplace page prefiltered for your search. If there are no available options in the Marketplace, you
can use one of the Custom Collectors to build your own.
NOTE:
If a data source contains multiple integrations, the integration configured as the default integration will be used by the Data Onboarder. The default
integration of the content pack is indicated in each content pack's documentation. The other integrations are available for configuration in the
Automation and Feed Integrations page after installing the content pack.
When adding XDR data sources, the Data Source Onboarder is not available; however, you can still enable the data source. Cortex Cloud then
creates an instance and lists it on the Data Sources page.
4. In the New Data Source window, complete the mandatory fields in the Connect section.
For more information about the fields, click the question mark icon.
5. (Optional) Under Collect, select Fetched alerts and complete the fields.
The items in this section are content specific. Some options are view only and others are customizable. Click on each option for more information:
You can select the Playbooks and Scripts that you want to enable. By default, recommended options are selected. Any unselected content is
added as disabled content. Depending on the selected playbook, some scripts are mandatory.
NOTE:
If you are adding a new instance to an existing data source, these options are View only.
You can adjust the view only options on the relevant page in the system, for example Correlations, Playbooks, or Scripts.
Cortex Cloud automatically installs content packs with required dependencies and updates any pre-installed optional content packs. You can also
select additional content packs with optional dependencies to be configured during connection.
If the test fails, you can Run Test & Download Debug Log to debug the error.
If errors occurred during the test, you can click See Details and Back to Edit to revise your configuration. For advanced configuration, click an item
to open a new window to the relevant page in the system (for example, Correlations or Playbooks) filtered by the configuration.
Abstract
You can manage the cloud instances configured for a CSP on the Data Sources page. You can check the status, edit, delete, enable, or disable instances,
and initiate discovery scan.
2. Find the cloud instance by clicking on the CSP name or using the Search field.
3. In the row for the cloud instance, click View Details. The Cloud Instances page is displayed, filtered by the CSP you selected.
4. In the Cloud Instances page, you can filter the results by any heading and value.
5. Click on an instance name to open the details pane for that instance.
Action Instructions
Discover Now To initiate a discovery scan, in the row for the cloud instance, right-click
and select Discover Now. Alternatively, in the details pane, click the
more options icon and select Discover Now.
Enable/Disable In the row for the cloud instance, right-click and select Enable or
Disable. Alternatively, in the details pane, click the more options icon
and select Enable or Disable.
Delete In the row for the cloud instance, right-click and select Delete.
Alternatively, in the details pane, click the more options icon and select
Delete.
Create a new instance Click New Instance and select the type of CSP of which you want to
create a new instance. Follow the onboarding wizard to define its
settings.
Action Instructions
Edit configuration In the row for the cloud instance, right-click and select Configuration.
Alternatively, in the details pane, click the edit button. Follow the
onboarding wizard to edit the cloud instance's settings. You must
execute the updated template in the CSP environment for the
configuration changes to be applied.
Abstract
You can manage the Kubernetes Connector instances on the Data Sources page. You can check the status, edit or delete Kubernetes Connector instances.
2. Find the Kubernetes instance by clicking on the Kubernetes name or using the Search field.
3. In the row for the Kubernetes instance, click View Details. The Kubernetes Connectors page is displayed with all deployed Kubernetes Connectors. To
view all Kubernetes clusters, including ones that are not yet deployed, go to the Kubernetes Connectivity Management page.
4. In the Kubernetes Connectors page, click on a cluster name to open the details pane for that instance.
5. You can perform the following actions on each Kubernetes Connector instance:
Action Instructions
Open Cluster Details In the details pane, click the more options icon and select Open Cluster
Details. The Asset Card for that Kubernetes cluster is displayed.
Edit Connector In the row for the Kubernetes instance, right-click and select Edit.
Alternatively, in the details pane, click the more options icon and select
Edit Connector. In Edit Kubernetes Connector, enter a name for the
installer. You can edit the namespace for the connector, the scan
cadence, and the version of the connector you want to install. You must
execute the updated template in the Kubernetes environment for the
configuration changes to be applied.
Delete Connector In the row for the Kubernetes instance, right-click and select Delete.
Alternatively, in the details pane, click the more options icon and select
Delete Connector. To remove the connector, you must manually run
Kubernetes commands to delete the resources in the Kubernetes
environment. The commands are listed here.
Navigate to Settings → Data Sources and find the Kubernetes instances by clicking on the Kubernetes name or using the Search field. In the Kubernetes
Connectors page, click Kubernetes Connectivity Management to view all detected Kubernetes clusters. Here, you can check if a cluster is connected, view the
status, and see the connector version. When a new version of the Kubernetes Connector is available, you can update it here.
Abstract
You can troubleshoot errors on cloud instances by drilling down on an instance from the Data Sources page.
To help you to troubleshoot errors on a cloud instance, Cortex Cloud provides the following visibility and drilldown options:
A breakdown of the security capabilities enabled on an instance, detailing the status of each capability along with any open errors or issues.
Additional XQL drill down options to query the history of error and recovery events for each security capability.
Under Cloud Service Provider, review the status of the instances that were onboarded for the service provider. If the status shows Warning or Error, hover
over the service provider and click View Details.
2. On the Cloud Instances page review the list of instances that were onboarded and their overall status. The status is displayed as follows:
Warning: The connector is enabled and has minor issues. For example, some accounts or capabilities are in warning or error status.
Error: The connector is enabled and has substantial errors. For example, an authentication failure, an outpost failure, major permissions issues, or
(for organization level accounts) the majority of the accounts in the instance are in error status.
3. To understand why an instance is showing a Warning or Error status, click on the instance name.
The cloud instance panel provides a breakdown of the security capabilities and the accounts onboarded on the instance. Review the information in the
following sections:
Section Context
Header Displays the overall status of the instance and the following information about the account, as specified during onboarding:
Scope of the instance: The number of accounts onboarded on the instance and their status. See the Accounts section for
more information about the individual accounts and the type of account (single account or organization).
Scan mode: Cloud Scan or Outpost. For accounts using Outpost, information is displayed about the status of the Outpost
account and the account ID.
Security Capabilities Displays a breakdown of the security capabilities enabled on the instance and their individual statuses. Click on any item that
shows a warning or error status to see the open errors and issues that contributed to the status:
Errors are factual objects that are automatically created when problems occur, and provide insight into the current status of
the capability. For example, if a permission is missing, an error is displayed. Browse and filter the errors to better
understand and resolve the problem.
Issues are actionable objects that are triggered when detected problems exceed defined thresholds. Issues are
manageable, trackable, and provide remediation suggestions and automations.
The issues displayed in the panel are open issues that are specifically related to the selected connector with the selected
capability in the observed scope (single account or organization). Click an issue to start investigating it.
Accounts Lists the accounts that are onboarded on the instance and their individual status.
If multiple accounts are onboarded on the instance, click on each account to filter the page information by account, and drill down to the security capability statuses for each account.
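The error and issue objects described in the Security Capabilities section can be sketched as a simple threshold model: errors accumulate as factual records, and an issue is raised for a capability once its open error count exceeds a threshold. The record shape and threshold value below are illustrative assumptions, not product behavior.

```python
from collections import Counter

def issues_from_errors(errors, threshold=3):
    """Return the capabilities whose open error count exceeds the threshold.

    `errors` is a list of dicts with a "capability" key (illustrative shape);
    the default threshold of 3 is an assumption for the sketch.
    """
    counts = Counter(e["capability"] for e in errors)
    return sorted(cap for cap, n in counts.items() if n > threshold)
```

For example, four open DSPM errors against a threshold of three would raise a DSPM issue, while two Outpost errors would remain visible only as errors.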
4. If the instance shows an Outpost error, go to the All Outposts page and find the outpost account that is being used by this instance. Right click the
Outpost account to view the open errors and issues for the account.
5. If the account shows Permission errors, use the side panel to check which permissions are missing. You can also Edit the instance to redeploy the cloud
setup template, which should normally resolve the error.
This dataset records error and recovery events for the security capabilities in cloud instances. By querying this dataset you can see information about
when the error started, the prevalence of the error, and whether there is a recurrence pattern. See the specific field descriptions and query examples for
each security capability.
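The recurrence analysis described above can be sketched in Python, assuming you have exported the error-event timestamps (epoch seconds) for one capability from the dataset; the export format here is an assumption, not a documented API.

```python
# Hedged sketch: detect whether exported error events recur on a roughly
# daily cadence, which would suggest a scheduled operation is failing.
def recurs_daily(timestamps, tolerance=3600):
    """Return True if consecutive error events are about 24 hours apart
    (within `tolerance` seconds), suggesting a recurring failure pattern."""
    day = 86400
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return bool(gaps) and all(abs(g - day) <= tolerance for g in gaps)
```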
Errors related to collection of audit logs in the cloud instance are recorded in the collection_auditing dataset. For more information, see Audit
logs fields and query examples.
7. Set up correlation rules to trigger issues when errors occur in cloud security capabilities. See the following examples.
You can review Outpost entries in the cloud_health_auditing dataset to see Outpost activity over time, or to search for errors on specific accounts.
Outpost entries are added to the dataset as follows:
An error occurred on an Outpost account that disabled or prevented an operation. This is audited as Error.
An exceptional condition occurred on an Outpost account that might cause problems if not resolved. This is audited as Warning.
Field Description
Resource ID Outpost ID
Capability Outpost
Error Details about the error. For informational entries this is blank.
Identify Outpost errors in the eu-west-3 region:
dataset = cloud_health_auditing | filter capability = "Outpost" and classification = "Error" and region = "eu-west-3"
See all entries (error, warning, and recovery) for Outpost_1 on cloud account Account_A:
dataset = cloud_health_auditing | filter capability = "Outpost" and resource_id = "Outpost_1" and account = "Account_A"
You can review Permissions entries in the cloud_health_auditing dataset to see Permissions activity over time, or to search for errors on specific
accounts. Permissions entries are added to the dataset as follows:
An exceptional condition occurred that might cause problems if not resolved. This is audited as Warning.
Field Description
Account Name of the account where the event occurred, or All accounts.
Capability Permissions
You can review Discovery engine entries in the cloud_health_auditing dataset to see Discovery activity over time, or to search for errors on specific
accounts. Discovery entries are added to the dataset as follows:
An exceptional condition occurred that might cause problems if not resolved. This is audited as Warning.
The following table describes the fields for Discovery engine entries:
Field Description
Account Name of the account where the event occurred, or All accounts
Capability Discovery
Identify API exec errors on the Discovery engine for all accounts on the AWS_1 connector:
dataset = cloud_health_auditing | filter capability = "Discovery" and connector = "AWS_1" and classification = "Error"
See all Discovery engine activity on connector AWS_1 for Account_A in the af-south-1 region:
dataset = cloud_health_auditing | filter capability = "Discovery" and connector = "AWS_1" and account = "accountA" and region = "af-south-1"
You can review ADS entries in the cloud_health_auditing dataset to see ADS activity over time, or to search for errors on specific accounts. ADS entries
are added to the dataset as follows:
The asset or Host was excluded from the scan. This is audited as Excluded.
Field Description
Resource ID Asset ID
Capability ADS
Error Details about the error. For informational entries this is blank.
Identify failed ADS scans on connector "a8df43e848dd42778ae7efd5a706a0fc" for EC2 assets at the asset scope level, filtered by region (northamerica-northeast2-a):
dataset = cloud_health_auditing | filter capability = "ADS" and classification = "failed" and connector = "a8df43e848dd42778ae7efd5a706a0fc" and type = "EC2_INSTANCE" and scope = "Asset" and region = "northamerica-northeast2-a"
See all ADS scans (failed and successful) on connector "a8df43e848dd42778ae7efd5a706a0fc" for EC2 assets belonging to Account_A:
dataset = cloud_health_auditing | filter capability = "ADS" and connector = "a8df43e848dd42778ae7efd5a706a0fc" and type = "EC2" and account = "Account_A"
You can review DSPM entries in the cloud_health_auditing dataset to see DSPM activity over time, or to search for errors on specific accounts. DSPM
entries are added to the dataset as follows:
Field Description
Resource ID Asset ID
Capability DSPM
Error Details about the error. For informational entries this is blank.
Identify failed DSPM scans on the AWS_1 connector for S3 asset types, filtered by region (ap-east-1):
dataset = cloud_health_auditing | filter capability = "DSPM" and classification = "Error" and connector = "AWS_1" and type = "S3_BUCKET" and region = "ap-east-1"
See all DSPM scans (failed and successful) on the AWS_1 connector, for all scanned assets on Account_A:
dataset = cloud_health_auditing | filter capability = "DSPM" and account = "Account_A" and connector = "AWS_1"
You can review Registry scanning entries in the cloud_health_auditing dataset to see Registry scanning activity over time, or to search for errors on
specific accounts. Registry scanning entries are added to the dataset as follows:
The following table describes the fields for Registry scanning entries:
Field Description
Resource ID Asset ID
Capability Registry
Error Details about the error. For informational entries this is blank
Identify failed Registry scans on the GCP_1 connector:
dataset = cloud_health_auditing | filter capability = "Registry" and classification = "error" and connector = "GCP_1"
Review all registry scans (failed and successful) on connector GCP_1 for asset Asset_A:
dataset = cloud_health_auditing | filter capability = "Registry" and connector = "GCP_1" and resource_id = "Asset_A"
You can review Audit logs entries in the collection_auditing dataset. Querying this dataset can help you see the connectivity changes of an instance
over time, the escalation or recovery of the connectivity status, and the error, warning, and informational messages related to status changes. For more
information about this dataset, see Verify collector connectivity.
The following table describes the fields for Audit logs entries:
Field Description
Identify Audit Logs collection errors on the AWS_1 instance:
dataset = collection_auditing | filter instance = "AWS_1" and log_type = "Audit Logs" and classification = "Error"
The following examples show how to set up correlation rules to trigger Health Collection issues when errors occur on a specific security capability.
Example XQL:
dataset = cloud_health_auditing | filter capability = "DSPM" and classification = "Error" and type = "AWS_S3" and scope = "Asset" and connector = "AWS_1"
Field Value
Severity Medium
Category Collection
In this example, a correlation rule will trigger a Health Collection issue if an error is recorded on account Outpost_A in the eu-west-3 region.
Example XQL:
dataset = cloud_health_auditing | filter capability = "Outpost" and account = "Outpost_A" and region = "eu-west-3" and classification = "Error"
Field Value
Severity Medium
Category Collection
The Cortex Cloud Application Security IaC scanner employs a graph-based framework to assess the security posture of cloud infrastructure configurations. By
modeling resources as interconnected nodes and edges, Cortex Cloud Application Security evaluates the relationships between them, providing context that
enables the detection of complex misconfigurations. This approach ensures accurate enforcement of security and compliance policies.
Cortex Cloud Application Security IaC scans create a comprehensive inventory of all IaC resources in your environment. For more information, refer to
Infrastructure-as-Code (IaC) resources.
IaC misconfigurations detected during these scans are displayed as IaC findings for analysis and investigation. Cortex Cloud Application Security applies
context and prioritizes findings to create IaC issues, which are the smallest unit of IaC misconfiguration that can be remediated. You can remediate these
misconfigurations automatically through Fix Pull Requests or manually by applying suggested code fixes. Dedicated inventories are provided for
misconfiguration issues and findings, enabling detailed analysis of each category.
All Critical and High IaC misconfiguration findings detected in an organization's environment are categorized as issues, which represent the smallest unit for
remediating IaC resource misconfigurations. Where applicable, both manual and automated fixes are provided to resolve these issues.
To access IaC misconfiguration issues, under Modules, select Application Security → Issues → IaC misconfiguration.
TIP:
You can also view IaC misconfiguration issues in dedicated tabs under other sections when available:
On the Configuration tab under Repository assets. Refer to Expanded repository asset information for more information
Under the All Code asset inventory: Select an asset from the table → click the Configuration tab
In Application asset inventories, when an application includes an IaC asset that includes a detected misconfiguration: navigate to Inventory → All Assets
→ Applications → select an option from the Applications menu → select an application from the inventory → Configurations
Under Cases and Issues, perform a query. Select Issues → AppSec Issues (under the All Domains menu) → IaC Scanner (as the Detection Method
value)
Read more...
Field/Attribute Description
Severity Severity level of the misconfiguration (such as Critical, High, Medium, Low)
Name Name of the IaC misconfiguration (such as "Missing security group rule,"
"Insecure S3 bucket policy")
Asset Name Name of the resource affected by the misconfiguration (such as AWS S3
bucket name, EC2 instance ID)
Repository Name of the repository that stores the IaC code in which the
misconfiguration issue was detected
Branch The specific branch or version of the IaC code where the misconfiguration
exists
File Path Path to the IaC file where the misconfiguration is located
Framework The IaC framework used to provision the resource with the misconfiguration
(such as Terraform, CloudFormation, Ansible)
Field/Attribute Description
Prioritization Labels Labels or tags assigned to the misconfiguration to aid in prioritization and
triaging
Selecting an issue in the table opens a card with tabs that provide additional information about the IaC misconfiguration issue, including suggested remediation.
Overview
Description: Provides a summary of the IaC misconfiguration and its potential impact
Affected Assets: Identifies the version control system, repository, and specific code lines containing the IaC misconfiguration
Evidence: Details and location in the codebase of the code containing the IaC misconfiguration
Linked Cases: The cases that the IaC misconfiguration issue is associated with
Issue Details: Includes the detection rule that flagged the issue, the subcategory (the category of IaC findings that the rule generates, such as 'General',
'IAM', 'Networking') and traced issues (a code issue that has a corresponding runtime issue; for example, a misconfiguration in a Terraform template that
matches a runtime CSPM asset with the same issue)
Commit details: Includes the commit hash, committer, and the assigned user responsible for remediation
Timestamps: Provides the date the issue was created and last updated
Remediation: Suggested manual remediation steps to address the IaC misconfiguration. See below for detailed information
Actions
The Actions tab displays the code lines in which the IaC misconfiguration occurs and presents the recommended code change directly below the violating
code. You can remediate the issue by clicking Open Fix Pull Request, which generates a pull request (PR) with the proposed resolution, or by fixing it manually.
You can take the following actions to address and manage IaC misconfiguration issues:
Change Status: Modify the status of the issue. Values: New, In Progress, Resolved
Change Severity: Modify the severity level of the issue. Values: Critical, High, Medium, Low
Change Assignee: Change the user or identity assigned to address the issue
Copy text to clipboard: Duplicate selected text for easy pasting elsewhere
Copy entire row: Duplicate the entire row of data for easy pasting elsewhere
Copy issue URL: Duplicate the URL associated with the issue, to share or reference the issue
Show/hide rows with the [severity level]: Show/hide rows matching the [severity level] of the selected row
Open Fix PR: Create a Pull Request (PR) with a suggested fix to remediate the issue
IaC misconfiguration scans produce findings, which are potential security risks in your Infrastructure-as-Code (IaC) definitions. These insights help assess and
analyze the security posture of your IaC assets.
NOTE:
Findings cannot be mitigated. They must be promoted to issues to enable remediation efforts to secure your infrastructure.
To access IaC misconfiguration findings, under Modules, select Application Security → Issues → click the Findings tab.
Read more...
Field/Attribute Description
Asset Name The name of the asset or resource associated with the finding
Branch The version control branch where the finding was detected
File Path The file path or location within the repository where the finding is located
Run ID The unique identifier assigned to the particular scan run that generated the
finding
Git User The username or identity of the Git user who committed or made changes
to the code associated with the finding
Data Sources The sources from which the finding was generated or collected
Created The date and time when the finding was created or generated. It provides a
timestamp for when the finding was initially logged or recorded
Overview: Includes when the finding was last updated, the category associated with the finding, and the asset where the finding was detected
Details: Provides detailed information, including the location (data source, repository, branch, and file path), as well as the first hash, commit date,
Run ID, and the ID of the rule that detected the finding. Additionally, the finding provides the violating code and suggests a manual fix to mitigate
the misconfiguration when available
Cortex Cloud Application Security Secrets scans identify sensitive data embedded in code, such as API keys, encryption keys, OAuth tokens, certificates, PEM
files, passwords, and pass-phrases. These secrets, when exposed, can compromise the security of your infrastructure and applications. Secrets scans
generate comprehensive Secrets issues and findings inventories, which are displayed in dedicated inventory tables for analysis, investigation, prioritization,
and remediation.
Supported file types: Cortex Cloud Application Security scans for secrets in any plaintext files that are not encrypted, not compressed (for example, .zip
files), and not compiled (for example, .jar files). Additionally, entropy findings use keywords to reduce noise; a keyword must appear on the same line as the
high-entropy string for it to be flagged.
Entropy Analysis: Cortex Cloud Application Security provides signatures that analyze the randomness of strings within the file. Highly random strings, often
referred to as "high entropy," can be indicative of a potential secret. To reduce false positives, Cortex Cloud Application Security considers specific keywords
that might be associated with secrets alongside the randomness of the data for better accuracy.
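The entropy heuristic described above can be sketched in a few lines of Python. This is an illustrative model only: the threshold, minimum token length, and keyword list are assumptions for the sketch, not the product's actual rule set.

```python
import math
from collections import Counter

# Illustrative keyword list -- NOT the product's actual rule set.
KEYWORDS = ("secret", "token", "api_key", "password", "passphrase")

def shannon_entropy(s: str) -> float:
    """Shannon entropy of s in bits per character."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

def looks_like_secret(line: str, threshold: float = 4.0) -> bool:
    """Flag a line only when a keyword appears on the same line as a
    long, high-entropy token (mirrors the keyword rule described above)."""
    if not any(k in line.lower() for k in KEYWORDS):
        return False
    tokens = line.replace('"', " ").replace("=", " ").split()
    return any(len(t) >= 16 and shannon_entropy(t) > threshold for t in tokens)
```

A uniform string like `"aaaa"` scores 0 bits per character, while a random-looking 22-character key scores above 4, which is why the keyword-plus-entropy combination cuts false positives.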
Valid secrets exposed during scans, and classified as Critical or High by Cortex Cloud Application Security policies, are categorized as issues, which
represent the smallest unit for remediating these exposed Secrets. Manual fix suggestions are provided to resolve these issues.
TIP:
You can also view Secrets issues in dedicated tabs under other sections when available:
On the Secrets tab under Repositories. Refer to Expanded repository asset information for more information
Under the All Code inventory: Select an asset from the table → Secrets
In Application asset inventories: navigate to Inventory → All Assets → Application → select an option from the Application menu → select an item from
the inventory → Secrets
In Cases and Issues, perform a query: select Issues → AppSec Issues (under the All Domains menu) → Secrets (as the Detection Method value)
Field/Attribute Description
Severity Severity level of the secret exposure (such as Critical, High, Medium, Low)
Asset Name Name of the resource or asset where the secret was exposed
Branch The specific branch or version of the code where the secret exposure was
detected
File Path Path to the file or location within the code where the secret was exposed
Prioritization Labels Labels or tags assigned to the secret exposure to aid in prioritization and
triage
Selecting an issue in the table opens a card with tabs providing additional information about the issue, including suggested remediation.
Overview
Affected Assets: Identifies the assets, such as the version control system, repository, and specific code lines, that are affected by the exposed secret
Linked Cases: The cases that the exposed secret issue is associated with
Commit details: Includes the commit hash, committer, and the assigned user responsible for remediation
Removal Commit Details: For Git history findings only; details when the secret was removed from the repository
Issue Details: Includes the rule that detected the exposed secret and risk factors
Timestamps: Provides the date the issue was created and last updated
Actions
The Action tab provides steps for manually fixing the exposed secret.
You can take the following actions to address and manage Secrets exposure issues:
Change Status: Modify the status of the issue. Values: New, In Progress, Resolved
Change Severity: Modify the severity level of the issue. Values: Critical, High, Medium, Low
Change Assignee: Change the user or identity assigned to address the issue
Copy text to clipboard: Duplicate selected text for easy pasting elsewhere
Copy entire row: Duplicate the entire row of data for easy pasting elsewhere
Copy issue URL: Duplicate the URL associated with the issue, to share or reference the issue
Show/hide rows with the [severity level]: Show/hide rows matching the [severity level] of the selected row
Secrets findings uncover sensitive information, like credentials or API keys, that may be exposed within your assets. These insights help assess and analyze
the potential exposure of secrets in your environment.
NOTE:
Findings are informational and, as such, are not directly mitigable. Remediation is performed on issues derived from findings.
To access Secrets findings, navigate to Secrets issues (see Secrets issues) and click the Findings tab.
Field/Attribute Description
Asset Name The name of the asset or resource associated with the finding
Prioritization Labels Indicates validation status and whether the secret was found only in
historical data
Branch The version control branch where the finding was detected
File Path The file path or location within the repository where the finding is located
Run ID The unique identifier assigned to the particular scan run that generated the
finding
Git User The username or identity of the Git user who committed or made changes
to the code associated with the finding
Data Sources The sources from which the finding was generated or collected
Created The date and time when the finding was created or generated. It provides a
timestamp for when the finding was initially logged or recorded
Overview: Includes when the finding was last updated, the category associated with the finding, and the asset where the finding was detected
Details: Includes the location in which the finding was detected, the ID of the rule that detected the finding, the first hash, first commit date, prioritization
labels, and the validation status
Violating Code: The specific location in the codebase where the exposed secret is located
Cortex Cloud Application Security Software Composition Analysis (SCA) scans provide a comprehensive inventory of all software component dependencies
within your applications. By analyzing these components, the scanner identifies and reports potential vulnerabilities and licensing issues, helping mitigate risks
and ensure compliance. The scan results are presented in detailed inventory tables for easy review, prioritization, and remediation:
CVE Vulnerabilities inventories list vulnerabilities detected in open-source packages, mapped to CVEs. Suggested mitigation is provided through the
CVE Vulnerabilities issues inventory. Refer to CVE vulnerability issues for more information
License inventories provide details of license miscompliance in software packages within your environment. Refer to License miscompliance issues for
more information
Package Integrity inventories provide details of the operational risk and potential impact of each package in your codebase. Suggested mitigation is
provided through the package operational risk issues inventory. Refer to Package integrity issues for more information
NOTE:
Currently, support for SCA is limited to static analysis, meaning that only direct dependencies are scanned. However, if lock files are present, support is
extended to include the analysis of transitive dependencies as well.
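The distinction the note draws can be illustrated with a short Python sketch that compares an npm manifest against its lock file. The sample manifest and lock data are hypothetical; npm's package-lock v2/v3 format keys resolved packages by their node_modules path, with the empty key representing the root project.

```python
def direct_dependencies(package_json: dict) -> set[str]:
    """Direct dependencies declared in a package.json (the static-analysis view)."""
    return set(package_json.get("dependencies", {})) | set(
        package_json.get("devDependencies", {})
    )

def all_dependencies(lock: dict) -> set[str]:
    """Every package resolved in a package-lock.json (v2/v3 'packages' map),
    i.e. direct plus transitive dependencies."""
    return {
        path.rsplit("node_modules/", 1)[-1]
        for path in lock.get("packages", {})
        if path  # the "" key is the root project itself
    }

# Hypothetical sample data for illustration.
manifest = {"dependencies": {"express": "^4.18.0"}}
lock = {
    "packages": {
        "": {},
        "node_modules/express": {"version": "4.18.2"},
        "node_modules/body-parser": {"version": "1.20.1"},  # transitive
    }
}
transitive = all_dependencies(lock) - direct_dependencies(manifest)
```

With only the manifest, the scan sees `express`; with the lock file, `body-parser` (a transitive dependency) becomes visible as well.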
The Software Packages asset inventory provides an overview of all open-source software packages detected across your organization's connected
repositories. Refer to Software packages for more information
All Critical and High SCA findings in software packages within an organization's environment are categorized as SCA vulnerability issues, which represent the
smallest unit for remediating SCA issues. Where applicable, both manual and automated fixes are provided to resolve these issues.
To access CVE vulnerability issues, under Modules, select Application Security → Issues → Vulnerabilities.
TIP:
You can also view SCA vulnerabilities in dedicated tabs under other sections when available:
On the Vulnerabilities tab under Repositories. Refer to Expanded repository asset information for more information
Under the All Code inventory: Select an asset from the table → Vulnerabilities
On the Applications inventory: Navigate to Inventory → All Assets → Applications → select an option from the Applications menu → select an
application from the inventory → Vulnerabilities
In Cases and Issues, perform a query: select Issues → AppSec Issues (under the All Domains menu) → AppSec CVE Scanner (as the Detection
Method value)
Field/Attribute Description
CVSS Score Common Vulnerability Scoring System score representing the severity of the
vulnerability
Package Manager The package management system in which the CVE was detected
Branch The specific branch or version of the code where the vulnerability was
detected
File Path Path to the file or location within the code where the vulnerability was
detected
Domain Provider The source or origin of the software component (such as third-party library,
open-source community)
Selecting an issue in the table opens a card with tabs providing additional information about the issue, including suggested remediation.
Overview
The Overview tab provides general details of the SCA vulnerability issue:
Affected Assets: Identifies the assets such as the version control system, package manager and repository hosting the package, that are affected by the
CVE vulnerability
Evidence Details and location in the codebase of the package containing the CVE vulnerability issue
Linked Cases: The cases that the CVE vulnerability issue is associated with
Timestamps: Provides the date the vulnerability was created and last updated
Remediation: Suggested remediation steps to address the vulnerability. See Actions for detailed information
Actions
The Actions tab displays details about the current package including the count and severity of detected vulnerabilities. It suggests potential fixes, such as
package upgrades, and highlights their impact on the vulnerabilities. You can remediate the issue either by clicking Open Fix Pull Request to generate a pull
request (PR) with the proposed resolution, or apply the fix manually.
Change Status: Modify the status of the issue. Values: New, In Progress, Resolved
Change Severity: Modify the severity level of the issue. Values: Critical, High, Medium, Low
Change Assignee: Change the user or identity assigned to address the issue
Copy text to clipboard: Duplicate selected text for easy pasting elsewhere
Copy entire row: Duplicate the entire row of data for easy pasting elsewhere
Copy issue URL: Duplicate the URL associated with the issue, to share or reference the issue
Show/hide rows with the [severity level]: Show/hide rows matching the [severity level] of the selected row
CVE vulnerabilities are specific, known security threats identified in your assets based on the Common Vulnerabilities and Exposures (CVE) system, providing
insights into potential risks.
NOTE:
CVE vulnerability findings cannot be mitigated. They must be promoted to issues to enable remediation efforts to secure your software packages.
To access CVE findings, under Modules, select Application Security → Issues → Vulnerabilities → click the Findings tab.
Field/Attribute Description
CVE Common Vulnerabilities and Exposures identifier associated with the finding
CVSS Score Common Vulnerability Scoring System score representing the severity of the
vulnerability
Asset Name Name of the asset affected by the finding (such as application name, library
name)
Package Manager The package management system used (such as npm, Maven, pip)
Branch The specific branch or version of the code where the vulnerability was
detected
File Path Path to the file or location within the code where the vulnerability was
detected
Domain Provider The source or origin of the software component (such as third-party library
repository, open-source community)
Data Source Source of the finding information (the version control system)
Overview: Includes when the finding was last updated, the category associated with the finding, and the asset where the finding was detected
Details: Details of the location and package manager in which the finding was located, the Run ID (the scan that detected the finding), first hash and first
commit date
CVE Information: Includes the CVE ID, description, CVSS score and severity level, the fix version, risk factors, and a link to the package vendor's
website.
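The CVSS score shown with each finding maps to a qualitative severity level. The bands below follow the standard ratings published in the CVSS v3.x specification; the scanner's own severity mapping may apply additional context on top of the raw score.

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating,
    using the standard bands from the CVSS v3.1 specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

For example, a CVE with a base score of 9.8 lands in the Critical band, which is why such findings are promoted to issues by default.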
Package Integrity (also referred to as package operational risk) assesses the operational risk and potential impact of each package in your codebase by
examining package maintainer and popularity factors, and other relevant metrics. This analysis results in open-source package operational risk severity levels
being categorized into High, Medium and Low. By prioritizing risks based on these categories, you can effectively focus remediation efforts on the most critical
issues.
The following table defines the operational risk metrics used to assess open-source packages.
Metric Property
Maintainer Level: Indicates the level of maintenance based on various computed factors
— Versions
— Last release
— Last commit
— Created
— Open issues
Popularity Level: Indicates the level of popularity based on various computed factors
— Weekly downloads
— Number of stars
— Number of forks
— Contributors
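To illustrate how maintainer and popularity metrics like those above might roll up into a severity level, here is a toy Python sketch. The weights and cutoffs are invented for illustration and are not the product's actual scoring model.

```python
from datetime import date

def operational_risk(last_release: date, weekly_downloads: int,
                     contributors: int, today: date) -> str:
    """Toy operational-risk rating built from a few of the metrics listed
    above. Weights and cutoffs are illustrative assumptions only."""
    risk = 0
    if (today - last_release).days > 2 * 365:   # stale maintenance
        risk += 2
    if weekly_downloads < 1_000:                # low popularity
        risk += 2
    if contributors < 3:                        # bus-factor concern
        risk += 1
    if risk >= 4:
        return "High"
    if risk >= 2:
        return "Medium"
    return "Low"
```

An abandoned, rarely downloaded single-maintainer package would score High under this toy model, while an actively released, widely used package would score Low.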
All Critical and High package integrity vulnerabilities detected in an organization's environment are categorized as issues, which represent the smallest unit
for remediation. Manual fix suggestions are provided to resolve these issues.
TIP:
You can also view package integrity issues in dedicated tabs under other sections when available:
On the Package Integrity tab under Repositories. Refer to Expanded repository asset information for more information
Under the All Code inventory: Select an asset from the table → Package Integrity
In Application asset inventories: navigate to Inventory → All Assets → Applications → select an option from the Application menu → select an
application from the inventory → Package Integrity
In Cases and Issues, perform a query: select Issues → AppSec Issues (under the All Domains menu) → Package Operational Risk (as the Detection
Method value)
Field/Attribute Description
Severity Severity level of the package integrity issue (such as Critical, High, Medium,
Low)
Asset Name Name of the asset affected by the package integrity issue
Package Manager The package management system used (such as npm, Maven, pip), in
which the package with the issue was detected
Branch The specific branch of the code where the package was used
Domain Provider —
Data Source —
Created Timestamp of when the package integrity issue was first detected
Assignee The individual or identity responsible for addressing the package integrity
issue
Selecting an issue in the table opens a card with tabs providing additional information about the issue, including suggested remediation.
Overview
The Overview tab provides general details of the package integrity issue:
Affected Assets: Identifies the assets, such as the package manager, repository, and specific code lines, that are affected by the package operational
risk
Evidence: Details and location in the codebase of the package containing the operational risk issue
Linked Cases: The cases that the package operational risk issue is associated with
Commit details: Includes the commit hash, committer, and the assigned user responsible for remediation
Timestamps: Provides the date the issue was created and last updated
Remediation: Suggested manual remediation steps to address the operational risk. See Actions for more information
Actions
The Actions tab provides suggested steps to address package operational risk issues.
Change Status: Modify the status of the issue. Values: New, In Progress, Resolved
Change Severity: Modify the severity level of the issue. Values: Critical, High, Medium, Low
Change Assignee: Change the user or identity assigned to address the issue
Copy text to clipboard: Duplicate selected text for easy pasting elsewhere
Copy entire row: Duplicate the entire row of data for easy pasting elsewhere
Copy issue URL: Duplicate the URL associated with the issue, to share or reference the issue
Show/hide rows with the [severity level]: Show/hide rows matching the [severity level] of the selected row
Cortex Cloud Application Security scans produce package integrity findings, which are potential package operational risks within your software packages.
These insights help assess and analyze the security posture of your software supply chain.
NOTE:
Findings cannot be mitigated. They must be promoted to issues to enable remediation efforts to secure your software packages.
To access package integrity findings, navigate to Package Integrity issues (see Package integrity issues), and click the Findings tab.
Field/Attribute Description
Data Source The version control system hosting the repository which includes the
package in which the finding was detected
Package Manager The package manager hosting the package in which the finding was
detected
Repository The repository hosting the package in which the finding was detected
Overview: Includes when the finding was last updated, the category associated with the finding, and the asset where the finding was detected
Details: Details of the location and package manager in which the finding was located, first hash and first commit date
All license miscompliance findings detected in software packages within an organization's environment are defined as issues, which represent the smallest unit
for remediating these violations. Manual fix suggestions are provided to help resolve these issues.
To access license miscompliance issues, under Modules, select Application Security → Issues → Licenses.
TIP:
You can also view license miscompliance issues in dedicated tabs under other sections when available:
In Application asset inventories: navigate to Inventory → All Assets → Application → select an option from the Application menu → select an item from
the inventory → Licenses
On the Licenses tab under Repositories. Refer to Expanded repository asset information for more information
Under the All Code inventory: Select an asset from the table → Licenses
In Cases and Issues, perform a query: select Issues → AppSec Issues (under the All Domains menu) → CAS License Scanner (as the Detection
Method value)
Field/Attribute Definition
Severity The severity level assigned to a detected license compliance issue (such
as high, medium, low). This indicates the potential risk associated with the
miscompliance
Name The specific license under which the software package is distributed (e.g.,
MIT, GPL, Apache 2.0).
Asset Name The name of the asset (such as application, service) that uses this license
Package Manager The package manager used to install or manage the software package
that includes the license (such as npm, pip, Maven)
Domain Provider The source or provider of the software package that includes the license
License Category A categorization of the license based on its type (such as strong copyleft)
Data Source The source from which the license information was obtained (such as a
specific scanner)
Assignee The individual or team responsible for addressing the license compliance
issue
Created The date and time when the license compliance issue was first detected or
recorded
Selecting an issue in the table opens a card with tabs providing additional information about the issue.
Description: Provides a summary of the license miscompliance and its potential impact
Affected Assets: Identifies the assets, such as the version control system, package manager, and repository, that are affected by the license miscompliance
issue
Evidence: Details and location in the codebase of the package containing the license miscompliance issue
Linked Cases: The cases that the license miscompliance issue is associated with
License Details: Includes the miscompliance category (non-permissive, strong copyleft, weak copyleft) and whether the license is OSI approved
Timestamps: Provides the date the license miscompliance was created and last updated
Remediation: Suggested remediation steps to address the vulnerability. See Actions below for detailed information
Actions
The Actions tab displays suggestions to fix the license miscompliance, including contacting your legal team for further investigation.
Change Status: Modify the status of the issue. Values: New, In Progress, Resolved
Change Severity: Modify the severity level of the issue. Values: Critical, High, Medium, Low
Change Assignee: Change the user or identity assigned to address the issue
Copy text to clipboard: Duplicate selected text for easy pasting elsewhere
Copy entire row: Duplicate the entire row of data for easy pasting elsewhere
Copy issue URL: Duplicate the URL associated with the issue, to share or reference the issue
Show/hide rows with the [severity level]: Show/hide rows matching the [severity level] of the selected row
License miscompliance findings are potential licensing violations in your open-source software packages. These findings allow you to assess and analyze
your package license compliance. Promoting these findings to issues allows you to address license miscompliance, ensuring compliance with licensing
requirements and maintaining the integrity of your software supply chain.
To access license miscompliance findings, under Modules, select Application Security → Issues → Licenses → click the Findings tab.
Field/Attribute Definition
License The specific license under which the software package is distributed (e.g.,
MIT, GPL, Apache 2.0).
Asset Name The name of the asset (such as application, service) that uses this license
Package Manager The package manager used to install or manage the software package
that includes the license (such as npm, pip, Maven)
Repository The repository hosting the code in which the license miscompliance was
detected
Branch The repository branch in which the license miscompliance was detected
License Category A categorization of the license based on its type (such as strong copyleft)
Data Source The version control system hosting the repository with the license
miscompliance
Created The date and time when the license compliance issue was first detected or
recorded
Selecting a finding from the table opens a side panel providing additional details about the license miscompliance, including the location with the file path,
whether OSI approved, first hash and last commit.
Overview: Includes when the finding was last updated, the category associated with the finding, and the asset where the finding was detected
Details: Contains the license location in the codebase, the package manager hosting the license, the license category, OSI approval status, first hash,
and commit date
Open source software licenses define the terms under which open source software can be used, modified, and distributed. In Cortex Cloud Application
Security, licenses are scanned as part of the SCA vulnerability scan for open source packages. All Critical, High and Medium license miscompliance detected
in open source software packages within an organization's environment are defined as issues. This enables structured vulnerability management and focused
remediation. Where applicable, manual and automated fixes are provided.
Cortex Cloud Application Security offers three types of default license categories out of the box, providing comprehensive coverage for managing license
compliance within your environment:
Within each license type, SPDX identifiers are organized and sorted based on their characteristics and attributes. SPDX Identifiers are unique codes assigned
to software licenses by the Software Package Data Exchange (SPDX) project. These identifiers are used to categorize and accurately identify software
packages distributed under various types of licenses, including strong copyleft, weak copyleft, and non-permissive licenses. By associating each software
package with a specific SPDX Identifier, it becomes easier to track and manage license compliance across different licensing policies, ensuring that the
correct license type is identified and adhered to.
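As a sketch of how SPDX identifiers drive categorization, the following maps a handful of the identifiers from the lists below to their license category. The mapping is a small illustrative subset, not the full policy set.

```python
# Small subset of the SPDX identifier lists documented below,
# mapped to the license categories used for policy matching.
LICENSE_CATEGORIES = {
    "GPL-3.0": "strong copyleft",
    "AGPL-3.0": "strong copyleft",
    "LGPL-3.0": "weak copyleft",
    "MPL-2.0": "weak copyleft",
    "BUSL-1.1": "non-permissive",
    "MS-LPL": "non-permissive",
}

def license_category(spdx_id: str) -> str:
    """Return the policy category for an SPDX identifier, or 'permissive'
    for anything not on the restricted lists (e.g. MIT, Apache-2.0)."""
    return LICENSE_CATEGORIES.get(spdx_id, "permissive")
```

A package declaring `GPL-3.0` would match the strong copyleft policy, while one declaring `MIT` would fall through to permissive and raise no miscompliance issue under these defaults.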
Non-permissive licenses
Non-permissive license policies identify software packages distributed under non-permissive or restrictive licenses. These licenses restrict how you can use,
modify, and distribute the software. They may limit your ability to integrate the software into certain projects or require you to purchase a commercial license for
specific uses.
The following list displays supported SPDX identifiers for non-permissive licenses.
BUSL-1.1
C-UDA-1.0
CC-BY-NC-3.0-DE
CC-BY-NC-ND-3.0-DE
CC-BY-NC-ND-3.0-IGO
CC-BY-NC-SA-2.0-DE
CC-BY-NC-SA-2.0-FR
CC-BY-NC-SA-2.0-UK
CC-BY-NC-SA-3.0-DE
CC-BY-NC-SA-3.0-IGO
CC-BY-ND-3.0-DE
Hippocratic-2.1
JPL-image
MS-LPL
NCGL-UK-2.0
PolyForm-Noncommercial-1.0.0
Strong copyleft license policies identify software packages distributed under strong copyleft licenses, such as the GNU General Public License (GPL). These
licenses require derivative works to be distributed under the same copyleft license terms as the original work. This ensures broader access and modification
rights.
The following list displays supported SPDX identifiers for strong copyleft licenses.
AGPL-1.0
AGPL-1.0-only
AGPL-1.0-or-later
AGPL-2.0
AGPL-3.0
AGPL-3.0-only
AGPL-3.0-or-later
Arphic-1999
CERN-OHL-S-2.0
copyleft-next-0.3.0
copyleft-next-0.3.1
GPL-2.0
GPL-3.0
Linux-man-pages-copyleft
OpenPBS-2.3
Weak copyleft license policies identify software packages distributed under weak copyleft licenses. These licenses permit combining code with other
licenses, including proprietary licenses, without mandating the entire derivative work to be released under the same copyleft license.
The following list displays supported SPDX identifiers for weak copyleft licenses.
Artistic-1.0
Artistic-2.0
APSL
CAL-1.0-Combined-Work-Exception
CC-BY-SA-2.0-UK
CC-BY-SA-2.1-JP
CC-BY-SA-3.0-AT
CC-BY-SA-3.0-DE
CC-BY-SA-4.0
CDDL-1.0
CDLA-Sharing-1.0
CERN-OHL-W-2.0
CPOL-1.02
EPL-1.0
EPL-2.0
eCos-2.0
EUPL-3.0
FDK-AAC
LGPL-2.0
LGPL-2.1
LGPL-3.0
MPL-1.1
MPL-2.0
MS-RL
OSL-3.0
QPL-1.0-INRIA-2004
Sendmail-8.23
SimPL-2.0
TAPR-OHL-1.0
TPL-1.0
By ingesting Static Application Security Testing (SAST) findings directly from third-party sources, Cortex Cloud Application Security expands its security
coverage, allowing you to gain visibility into and effectively manage SAST violations across your organization. The platform enriches the findings with context,
and provides suggested remediation steps to help you quickly address identified code weaknesses. These capabilities, combined with Cortex Cloud
Application Security's native IaC, SCA and Secrets scanner management and other domains, provide a comprehensive security management platform for your
applications.
Cortex Cloud Application Security provides a dedicated findings inventory for the analysis and auditing of ingested SAST data. Cortex Cloud Application
Security analyzes and enriches these findings, elevating them to issues when necessary, allowing for efficient contextualization, prioritization and remediation
of SAST code violations.
NOTE:
SAST CWE violations refer to security vulnerabilities identified by SAST tools that align with the Common Weakness Enumeration (CWE) list. Refer to
the MITRE CWE website for more information about CWEs
PREREQUISITE:
Before you begin, integrate your third party data sources to ingest SAST data findings. Refer to Ingest third party static application security testing (SAST)
data for more information. Supported third parties and data ingest formats include:
Veracode
SonarQube
Semgrep
SARIF
NOTE:
It is recommended to use supported third-party vendor integrations for data ingestion over manual SARIF file uploads. Native integrations provide automated
synchronization of periodic scan data and increased data precision.
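As an illustration of what SARIF ingestion involves, this sketch pulls the rule, level, and location out of a minimal SARIF 2.1.0 log. It handles only the common result shape; real logs can omit locations or nest them differently.

```python
import json

def sarif_findings(sarif_text: str) -> list[dict]:
    """Extract rule, level, file, and line from a SARIF 2.1.0 log.
    Minimal sketch covering the standard result/physicalLocation shape."""
    doc = json.loads(sarif_text)
    findings = []
    for run in doc.get("runs", []):
        for result in run.get("results", []):
            loc = (result.get("locations") or [{}])[0].get(
                "physicalLocation", {})
            findings.append({
                "rule": result.get("ruleId"),
                "level": result.get("level", "warning"),
                "file": loc.get("artifactLocation", {}).get("uri"),
                "line": loc.get("region", {}).get("startLine"),
            })
    return findings
```

Feeding this a log with a single `error`-level result for rule `CWE-79` at `app.js:42` yields one finding dictionary carrying exactly those fields, which is the kind of normalized record the platform then enriches and categorizes.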
Cortex Cloud Application Security default policies enrich and categorize ingested Critical and High SAST findings detected in your organization's environment
as issues (also known as Code Weaknesses). Issues represent the smallest unit for remediating SAST-identified CWEs.
NOTE:
Users can customize policies to define which findings are categorized as issues.
To access SAST code violation issues, under Modules, select Application Security → Issues → Code Weaknesses.
TIP:
You can also view SAST CWE issues in dedicated tabs under other sections when available:
On the Code Weaknesses tab under the Repositories asset inventory. Refer to Expanded repository asset information for more information
Under the All Code asset inventory: Select an asset from the table → Code Weaknesses
In the Application asset inventory: navigate to Inventory → All Assets → Applications → select an option from the Applications menu → select an
application from the inventory → Code Weaknesses tab
In Cases and Issues, perform a query: select Issues → AppSec Issues (under the All Domains menu) → SAST Scanner (as the Detection Method
value)
The SAST code weakness issues inventory includes the following fields:
Field/Attribute Description
Severity Severity level of the CWE issue (such as Critical, High, Medium, Low)
Name Short, descriptive name of the CWE issue (such as "SQL Injection," "Cross-
Site Scripting")
CWE(s) Common weakness enumeration (CWE) identifiers associated with the issue
Asset Name Name of the repository affected by the CWE issue (such as library name,
file name)
Language Programming language in which the CWE issue was detected (such as
Java, Python, JavaScript)
Branch The specific branch or version of the code where the CWE issue was
detected
File Path Path to the file or location within the code where the CWE issue was
detected
Prioritization Labels Labels or tags assigned to the CWE issue to aid in prioritization and triaging
Data Source The third-party data source for the code weakness issue
Selecting an issue in the table opens a card with tabs providing additional information about the issue, including suggested remediation.
Summary
A summary of the code weakness, including the severity level, the CWE identifier, and the type of engine that detected the weakness.
Overview
Timestamps: Provides the date the issue was created and last updated
Assignee: Assign the CWE issue to the appropriate team member for remediation using the dropdown menu
Affected Assets: Identifies the version control system and repository containing the CWE
Evidence:
Details and the location in the codebase of the code containing the CWE, including vulnerability classifications (such as OWASP), specific code
lines, and functions
Commit details: Includes the commit hash, committer, and the assigned user responsible for remediation
Weakness Details: The CWE identifier with a link to the weakness in the MITRE database
Actions
The Actions tab displays suggested steps to mitigate the CWE issue.
SAST CWE findings are based on data ingested from third-party tools (such as Semgrep). Findings are potential security vulnerabilities identified within your source code
based on common weakness enumerations (CWEs). These insights help assess and analyze the security posture of your applications by identifying
weaknesses in your codebase.
NOTE:
Findings on the Cortex platform are not intended for direct action; rather, they represent data collected by the platform. They must be promoted to issues to enable mitigation efforts to secure your codebase.
To access code weakness findings, navigate to code weakness issues (see SAST code weaknesses (CWEs)) and click the Findings tab.
Field/Attribute Description
Name Short, descriptive name of the CWE finding (such as "SQL Injection," "Cross-Site Scripting")
CWE(s) CWE identifier(s) associated with the finding (such as CWE-79, CWE-119)
OWASP Categories Relevant OWASP Top 10 categories associated with the finding (categories can be from different years)
Language Programming language in which the CWE finding was detected (such as Java, Python, JavaScript)
Branch The specific branch or version of the code where the CWE finding was detected
File Path Path to the file or location in the code where the CWE finding was detected
Git User Username of the Git user who last modified the file containing the finding
Overview: Includes when the finding was last updated, the category associated with the finding, and the name and link to the asset where the finding
was detected
Details: The location of the finding, the third-party data source that detected the finding, the CWE category, the initial hash and commit, and rule ID
A Cortex Cloud Application Security policy defines how a system should respond to application security threats. It includes conditions that trigger the policy,
the scope of its application, and the actions to be taken when these conditions are met. When a policy detects a threat, it generates an issue for remediation.
Cortex Cloud Application Security provides out-of-the-box policies. In addition, you can create custom policies tailored to your specific business or infrastructure requirements. Out-of-the-box policies cannot be modified directly. However, you can create a custom policy by cloning an existing one and modifying the clone according to your requirements. Refer to Manage Cortex Cloud Application Security policies for more information.
To access Cortex Cloud Application Security policies, under Modules, select Cortex Cloud Application Security → AppSec Policies (under Policy Management).
The Cortex Cloud Application Security policies inventory includes both out-of-the-box and custom policies. The following list describes the policy
fields/properties displayed in the inventory table. Details are provided for properties that require explanation.
Field/Attribute Description
Scan Type The type of scanner used for enforcing the AppSec policy, such as the Secrets and IaC scanners
Actions Actions to take when the policy detects its target risk
Scope The type of assets to be evaluated by the policy. See for more information about policy scope
Trigger Trigger types that define when the condition will be evaluated. Options include Periodic scan, Pull Request scan, and CI scan
Last Triggered The last time that the policy was triggered
Modification Time The timestamp of the most recent change to the policy
Open Issues The number of issues detected by the policy that remain unresolved
Selecting a policy opens a side panel where you can review its configuration (scan type, trigger, conditions, and actions), creator and modifier details, and the
last modification time.
Create custom Cortex Cloud Application Security policies to enable tailored security checks across your engineering environments and workflows.
1. Under Modules, select Application Security → AppSec Policies (under Policy Management).
3. Provide a policy name (required) and description on the General step of the wizard that is displayed, and click Next.
4. Define the policy conditions on the Define Conditions step of the wizard.
Scan type: Choose the scan types that the policy will apply to. Options include: Select All, Secrets, IaC Misconfigurations, Vulnerabilities,
Licenses and Operational Risk
Conditions: Define the specific criteria that trigger the policy. Click Add Filters to configure a condition. You can combine multiple conditions
to create complex rules for when the policy should be applied. For example, selecting multiple scan types will display all relevant filters for
all chosen types. Refer to Cortex Cloud Application Security policy condition attributes for more information about the conditions that apply
to the different scanner types
Developer Suppressions: Select Ignore to disregard developer suppressions during scans, or Allow to consider them
b. Click Next.
5. Define the policy scope on the Define Scope step of the wizard.
a. Define the scope of the assets to be evaluated by the policy. Values include 'Repositories', 'Applications', and 'Asset Groups'.
NOTE:
You can define the scope using filters. If no filters are defined, the policy applies to all asset types (repositories, applications, and asset groups)
b. Click Next.
NOTE:
You can filter the scope step of the wizard by Type (the asset type), name, category, provider, organization, business criticality and business owner.
6. Define actions to be taken when a policy is triggered in the Define Action step of the wizard.
a. Specify which actions to take when the policy detects its target risk:
Create a new issue: Create an issue if policy conditions within the selected scope are met
Block:
Block a CI run if the policy conditions within the selected scope are met. Available when 'CI scan' is selected as the evaluation method
Block a pull request (PR) if the policy conditions within the selected scope are met. Available when 'PR scan' is selected as the
evaluation method
Report:
Enable reporting via CLI if policy conditions within the selected scope are met. Available when CI scan is selected as the evaluation
method
Enable reporting of a pull request (PR) comment if policy conditions within the selected scope are met. Available when PR scan is
selected as the evaluation method
b. Click Create.
You are redirected to the Policies page, which displays the newly created policy.
Cortex Cloud Application Security policy condition attributes define the criteria evaluated by the policy to determine if the data within the selected scope triggers a rule. The following tables describe policy condition attributes, sorted by Cortex Cloud Application Security scanner type.
Secrets scanner attributes: Severity, Detection rules
Vulnerabilities scanner attributes: Severity, Risk factors, CVE ID, CWE, CVSS score, EPSS, Package name, Package version, and Comparison (equal to, not equal to, greater than, less than)
IaC scanner attributes: Severity, Detection rules, IaC tag
License scanner attributes: Severity, Detection rules, License type, Maintained, Package name, Package version, and Comparison (equal to, not equal to, greater than, less than)
Manage your custom Cortex Cloud Application Security policies to maintain an effective application security posture and adapt your security rules to evolving
threats and requirements.
To manage policies, right-click on a policy in the table or select a policy and then select the menu in the side panel. The following actions are available:
Edit policy: Redirects to the policy wizard, allowing you to modify the policy
Duplicate policy: Clone OOTB policies as templates for creating custom policies. When this option is selected, the policy wizard is displayed with the
original policy configurations, allowing you to modify them as required
NOTE:
The duplicated policy will include the word "clone" in its name and must be renamed.
Disable policy: Deactivate the policy without deleting it. Future scans will not trigger the policy, but existing issues detected by the policy will persist. Bulk
actions are supported, allowing you to disable multiple policies simultaneously
Delete policy: Permanently remove the policy from your environment. Issues detected by the policy will persist. Bulk deletions are supported
Cortex Cloud Application Security detection rules ("rules") are designed to detect security threats within your application security environment, which includes
the various components, configurations, and interactions within your application that can potentially introduce vulnerabilities or pose risks to its security. Cortex
Cloud Application Security rules identify and flag issues based on predefined criteria, ensuring that potential threats are proactively detected and addressed to
enhance the overall security posture of your application.
Cortex Cloud Application Security detection rules cover a wide range of security best practices, inspired by compliance frameworks such as PCI, GDPR, ISO
27001:2013, and NIST, as well as additional best practices beyond regulatory requirements.
In addition to default rules, you can create custom rules tailored to your specific security requirements.
NOTE:
Out-of-the-box rules cannot be modified directly. However, you can create a custom rule by cloning an existing one and modifying the clone according to your requirements. Refer to Manage Cortex Cloud Application Security custom rules for more information.
These user permissions are required for Cortex Cloud Application Security rules:
Users with Rules View/Edit permissions can create and modify detection rules
To access Cortex Cloud Application Security rules, under Modules, select Application Security → Application Security Rules (under Policy Management).
The Cortex Cloud Application Security rules inventory includes both out-of-the-box and custom rules. The following list describes rules fields/properties
displayed in the inventory table. By default, rules are displayed according to severity and then alphabetically. Details are provided for properties that require
explanation.
Field/Property Description
Scanner The type of security scanner used to detect violations of this rule
Last modified The date and time when the rule was most recently updated
Framework The framework or language that the detection rule applies to (for example, GitHub, Terraform, JavaScript)
Selecting a rule opens a card with additional details, including an Issues tab, which displays issues detected by that rule.
To filter rules relating to 'Secrets', select the filter icon → Scanner (from the Select field) → Secrets (from the Value field).
To view custom rules only, select Mode from the Select field, not equals as the operator, and Out-of-the-box as the value
Sort rules according to their attributes, such as issue severity, to prioritize remediation efforts
Creating custom Cortex Cloud Application Security detection rules allows you to tailor your security measures to address specific and unique threats to your organization that are not covered by default rules. Custom rules run across branch periodic, PR, and CI scans.
Use the custom rule builder to create rules from scratch or clone and customize existing rules, enabling you to tailor them to meet your specific requirements
effectively.
Secrets scans
IaC scans. Supported frameworks include 'Terraform', 'TFPlan' (with automatic application of Terraform custom rules), 'CloudFormation', 'Kubernetes', 'Bicep', 'Helm', 'Kustomize', and 'ARM'. These scans also apply to serverless deployments
1. Under Modules select Application Security → AppSec Rules (under Policy Management) → Add Rule.
Impact: Describes the potential impact of a detected violation. This description is displayed in Issues and PR Comments
Severity (required): Determines the priority level assigned to findings identified by the rule
Scanner (required): The type of scanner to be used to detect issues based on the rule
Subcategory (required): Refines the scope of the rule. Values include General, IAM, Monitoring, Networking, Public, Storage, Compute,
Kubernetes, Logging, and AI/Machine Learning
Framework: The framework or language that the rule is designed to apply to, such as Terraform, CloudFormation and ARM
Labels: Assign tags to rules to help categorize, filter, and organize them for easier identification and management
b. Click Next.
d. Click Done.
NOTE:
You can leverage YAML templates to create complex rules tailored to specific compliance or security requirements. Cortex Cloud Application Security rules
support attribute-based and connection-state rules.
The following YAML attributes are used to define the properties of the rules.
definition: Contains the logic and conditions for the rule, including attributes, operators, and resource connections
cond_type: Represents the condition type for applying the rule. Options: attributes, connection, filter, resource
attribute: Refers to the specific attribute or property of the cloud resource being evaluated
value: Represents the value that the attribute of the cloud resource should meet for the rule condition
Attribute-based rules
Attributes define resource property configurations. The YAML syntax for attribute configurations aligns with the framework targeted by the rule, such as
Terraform, to define the desired resource state. Cortex Cloud Application Security IaC rules identify and flag any resource that deviates from this defined state
as a violation.
Contain the specified attribute values. For example, if a rule states that the region attribute must be us-west-2, then a resource will only pass this part
of the rule if it includes the region attribute, and the value of that attribute is us-west-2
Match the attribute's presence. For example, if a rule states "the encryptionEnabled attribute must be present," then a resource will only pass if it
includes the encryptionEnabled attribute, regardless of its value
Match the attribute's absence. For example, if a rule says "the publicAccessAllowed attribute must be absent," then a resource will only pass if it does not include the publicAccessAllowed attribute
In this example, the attribute check defines the compliant state as an automated_snapshot_retention_period that is not 0; any aws_redshift_cluster resource where the retention period is 0 is flagged.
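The example referenced above can be sketched using the same YAML rule syntax as the later examples in this section. The metadata values below (name, guidelines) are illustrative assumptions, not taken from a built-in rule:

```yaml
metadata:
  name: "Ensure Redshift clusters retain automated snapshots"  # illustrative
  guidelines: "Set automated_snapshot_retention_period to a non-zero value"  # illustrative
  category: "storage"
  severity: "medium"
scope:
  provider: "aws"
definition:
  # Compliant state: the retention period is not 0; resources that
  # deviate from this state (a period of 0) are flagged as violations
  cond_type: "attribute"
  resource_types:
    - "aws_redshift_cluster"
  attribute: "automated_snapshot_retention_period"
  operator: "not_equals"
  value: "0"
```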
Supported Operators: Attribute operators apply differently based on the scan type:
For Secrets scans: The regex operator is applied implicitly. Even if regex is not explicitly defined, pattern matching is applied automatically. For example, in the following secret rule, regex is implicitly applied:
cond_type: "secrets"
value:
  - "[A-Za-z0-9]{8,20}"
  - "my-super-secret-password-regex"
The table below explains how to use attributes with matching keys and values.
Operators Values
Equals equals
Exists exists
Any any
Contains contains
Within within
Subset subset
Intersects intersects
Nesting connection condition types within a NOT block is not currently supported. The following example displays an unsupported 'NOT' block for connection
condition types.
Example 31.
definition:
  not:
    cond_type: "connection"
    resource_types:
      - "aws_elb"
    connected_resource_types:
      - "aws_security_group"
    operator: "exists"
Operators within this system support advanced attribute targeting through JSONPath expressions. To apply an operator to a JSONPath result, prefix the operator with jsonpath_. This allows for flexible and precise data extraction and comparison. For example: jsonpath_length_equals or jsonpath_length_exists.
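As a sketch of the jsonpath_ prefix described above, the following rule fragment applies length_equals to a JSONPath result. The resource type, attribute path, and value here are illustrative assumptions, not taken from a built-in rule:

```yaml
definition:
  cond_type: "attribute"
  resource_types:
    - "aws_iam_policy"  # illustrative resource type
  # The jsonpath_ prefix tells the engine to evaluate the attribute as a
  # JSONPath expression and apply the operator to the expression's result
  attribute: "Statement[?(@.Effect == 'Deny')]"  # illustrative JSONPath
  operator: "jsonpath_length_equals"
  value: 1
```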
Connection-based rules
Connection state in a rule defines whether resources of different types are connected or disconnected. This helps enforce security controls and architectural
constraints by specifying allowed or prohibited relationships between resources.
In this example, aws_lb and aws_elb must be connected to aws_security_group or aws_default_security_group to be compliant.
definition:
  cond_type: "connection"
  resource_types:
    - "aws_elb"
    - "aws_lb"
  connected_resource_types:
    - "aws_security_group"
    - "aws_default_security_group"
  operator: "exists"
Key Type Values
resource_types collection of strings Use either all or [included resource type from list]
connected_resource_types collection of strings Use either all or [included resource type from list]
Operators Values
Exists exists
A rule can include layers of defined attributes, connection state, or both. To define the relationship between them, use AND/OR logical operators. You can
customize the attributes, connection state, or both across multiple layers.
Example 33.
In this example, the attribute property is evaluated using OR logic to enforce compliance checks for ensuring all AWS databases have a backup policy.
metadata:
  name: "Ensure all AWS databases have Backup Policy"
  guidelines: "In case of non-compliant resource - add a backup policy configuration for the resource"
  category: "storage"
  severity: "medium"
scope:
  provider: "aws"
definition:
  or:
    - cond_type: "attribute"
      resource_types:
        - "aws_rds_cluster"
        - "aws_db_instance"
      attribute: "backup_retention_period"
      operator: "not_exists"
    - cond_type: "attribute"
      resource_types:
        - "aws_rds_cluster"
        - "aws_db_instance"
      attribute: "backup_retention_period"
      operator: "not_equals"
      value: "0"
    - cond_type: "attribute"
      resource_types:
        - "aws_redshift_cluster"
      attribute: "automated_snapshot_retention_period"
      operator: "not_equals"
      value: "0"
    - cond_type: "attribute"
      resource_types:
        - "aws_dynamodb_table"
      attribute: "point_in_time_recovery"
      operator: "not_equals"
      value: "false"
    - cond_type: "attribute"
      resource_types:
Example 34.
In this example, both AND and OR logical operators are used to evaluate attribute and connection state properties in order to enforce compliance checks for ensuring that all Application Load Balancers (ALBs) are only connected to HTTPS listeners.
metadata:
  name: "Ensure all ALBs are connected only to HTTPS listeners"
  guidelines: "In case of non-compliant resource - change the definition of the listener/listener_rule protocol value into HTTPS"
  category: "networking"
  severity: "high"
scope:
  provider: "aws"
definition:
  and:
    - cond_type: "filter"
      value:
        - "aws_lb"
      attribute: "resource_type"
      operator: "within"
    - cond_type: "attribute"
      resource_types:
        - "aws_lb"
      attribute: "load_balancer_type"
      operator: "equals"
      value: "application"
    - or:
        - cond_type: "connection"
          resource_types:
            - "aws_lb"
          connected_resource_types:
            - "aws_lb_listener"
          operator: "not_exists"
        - and:
            - cond_type: "connection"
              resource_types:
                - "aws_lb"
              connected_resource_types:
                - "aws_lb_listener"
              operator: "exists"
            - cond_type: "attribute"
              resource_types:
                - "aws_lb_listener"
              attribute: "certificate_arn"
              operator: "exists"
            - cond_type: "attribute"
              resource_types:
                - "aws_lb_listener"
              attribute: "ssl_policy"
              operator: "exists"
            - cond_type: "attribute"
              resource_types:
                - "aws_lb_listener"
              attribute: "protocol"
              operator: "equals"
              value: "HTTPS"
            - or:
                - cond_type: "attribute"
                  resource_types:
                    - "aws_lb_listener"
                  attribute: "default_action.redirect.protocol"
                  operator: "equals"
                  value: "HTTPS"
                - cond_type: "attribute"
                  resource_types:
                    - "aws_lb_listener"
                  attribute: "default_action.redirect.protocol"
                  operator: "not_exists"
    - or:
        - cond_type: "connection"
          resource_types:
            - "aws_lb_listener_rule"
          connected_resource_types:
            - "aws_lb_listener"
          operator: "not_exists"
        - and:
            - cond_type: "connection"
              resource_types:
                - "aws_lb_listener_rule"
              connected_resource_types:
                - "aws_lb_listener"
              operator: "exists"
            - or:
                - cond_type: "attribute"
                  resource_types:
                    - "aws_lb_listener_rule"
                  attribute: "default_action.redirect.protocol"
                  operator: "equals"
Example 35.
In this example, OR logic is applied to custom secrets defined as part of a policy aiming to enforce security measures by restricting the addition of certain types
of secrets.
metadata:
  name: "My Secret"
  guidelines: "Don't add secrets"
  category: "secrets"
  severity: "high"
definition:
  cond_type: "secrets"
  value:
    - "[A-Za-z0-9]{8,}"
    - "my-super-secret-password-regex"
You can manage Cortex Cloud Application Security detection rules to customize and optimize your security configurations according to your specific needs and preferences. On the AppSec Rules inventory, right-click a rule or click to open the side panel, and select an option:
Edit: Opens the Edit Rule wizard, allowing you to manage existing rules
Duplicate: Opens the selected rule in a New Rule dialog box, allowing you to save a copy of the rule. This allows you to customize default rules according to your requirements
Cortex Cloud Application Security provides dedicated scan inventories for each scan type: periodic branch scans, pull request (PR) scans and CI scans
(utilizing Cortex CLI capabilities). You can view a comprehensive list of all scans within each inventory, with detailed information for each scan, including
scan health, status and scope, as well as a breakdown of issues detected during a scan. This enables you to easily track, analyze and manage your Cortex
Cloud Application Security scans, assess scan results, and gain detailed insights into identified vulnerabilities.
Branch Periodic Scanning: Scans code branches on a schedule to identify vulnerabilities early in development. For more information about branch
periodic scans, refer to Branch periodic scans
Pull Request Scans: Scans code changes within pull requests to prevent the introduction of new vulnerabilities. For more information about pull
request scans, refer to Pull Request scans
CI Scans: Scans the Continuous Integration (CI) pipeline to detect vulnerabilities before deployment. For more information about CI scans, refer to
CI scans
Scan details for periodic, PR, and CI runs are presented on the Cortex Cloud console across three levels of granularity: an inventory table providing a list of scans, a side panel providing general scan details including a high-level breakdown of the findings and issues detected during the scan, and an expanded description card providing detailed information about the issues generated from these scans.
NOTE:
While scans provide a comprehensive inventory of all issues detected during a scan, dedicated inventories are also maintained for specific scan types for more granular management. For more information, refer to IaC scans, Secrets scans, and Software Composition Analysis (SCA) scans.
Branch periodic scans are automated checks that assess the security posture of applications and infrastructure. These scans run at regular intervals using
supported Cortex Cloud Application Security scanners to identify vulnerabilities and weaknesses. You can analyze the scans directly from a dedicated
inventory table, which displays branch periodic scan details, including code context, scan date, health, detected findings, and generated issues.
Under Modules select Application Security → Branch Periodic Scanning (under Scans).
Error
Partially: Indicates that the scan execution resulted in a partial completion, with some scan modules (for example IaC) succeeding and others
(such as Secrets) failing
Completed
Click on a scan to open a side panel displaying general details about the scan, as well as a breakdown of the severity and scan type of issues and findings
detected during the scan. Expand the side panel for a detailed list of detected issues and their severity levels.
NOTE:
The inventory table displays scan issues for visibility only; remediation is not available here. To resolve issues, navigate to the dedicated issue type inventory,
where you can manage and remediate them.
Pull Request (PR) scans are initiated by events triggered by version control systems such as GitHub, GitLab, Bitbucket and Azure Repos, or via webhooks.
These scans are run on default or non-default branches containing open PRs or Merge Requests (MR) from your integrated repositories. The scan results are
based on default enforcement thresholds. You can analyze PR scans directly from a dedicated inventory table, which displays PR scan details, including code
context, scan date, health, status, detected findings, and generated issues.
Under Modules select Application Security → Pull Request Scans (under Scans).
Error
Partially: Indicates that the scan execution resulted in a partial completion, with some scan modules (for example IaC) succeeding and others
(such as Secrets) failing
Completed
Click on a scan to open a side panel displaying general details about the scan, as well as a breakdown of the severity and scan type of issues and findings
detected during the scan. Expand the side panel for a detailed list of detected issues and their severity levels.
NOTE:
The inventory table displays scan issues for visibility only; remediation is not available here. To resolve issues, navigate to the dedicated issue type inventory,
where you can manage and remediate them.
9.12.4 | CI scans
CI scans allow you to scan and detect exposed secrets, misconfigurations in your infrastructure-as-code (IaC) files, vulnerabilities in your software composition
analysis (SCA) packages, and license non-compliance in your CI pipelines. You can analyze the scans directly from a dedicated inventory table, which
displays CI scan details, including scan health, status, scan scope, scan date, and a unique scan ID for tracking. For deeper analysis, click on a scan in the
table to open a side panel containing general scan details and a breakdown of detected issues and findings by scan type and severity. You can expand the
details panel to view a list of all detected issues during a scan with their respective severity levels.
CI scan inventory
Scan ID: The unique identifier assigned to the scan, allowing you to track and manage the results of that specific scan
Error
Partially: Indicates that the scan execution resulted in a partial completion, with some scan modules (for example IaC) succeeding and others
(such as Secrets) failing
Completed
CI status: Displays the health status of the scan. Values: 'Passed', 'Blocked', 'In Progress'
Click on a scan to open a side panel displaying general details about the scan, as well as a breakdown of the severity and scan type of issues and findings
detected during the scan. Expand the side panel for a detailed list of detected issues and their severity levels.
NOTE:
The inventory table displays scan issues for visibility only; remediation is not available here. To resolve issues, navigate to the dedicated issue type inventory,
where you can manage and remediate them.
Learn more about managing your datasets and understanding your overall data storage and period-based retention.
The Dataset Management page enables you to manage your datasets and understand your overall data storage duration for different retention periods and
datasets based on your hot and cold storage licenses, and retention add-ons that extend your storage. You can view details about your Cortex Cloud licenses
and retention add-ons by selecting Settings → Cortex Cloud License. For more information on license retention and the defaults provided per license, see
License retention in Cortex Cloud.
IMPORTANT:
Cortex Cloud enforces retention on all log-type datasets excluding Host Inventory, Vulnerability Assessment, Metrics, and Users.
Your current hot and cold storage licenses, including the default license retention and any additional retention add-ons that extend storage, are listed in the Hot Storage License and Cold Storage License sections of the Dataset Management page. Whenever you extend your license retention with add-ons for hot storage or cold storage, depending on your requirements, the add-ons are listed.
NOTE:
Cold storage, in addition to a cold storage license, requires compute units (CU) to run cold storage queries. For more information on CU, see Manage
compute units. For information on the CU add-on license, see Understand the Cortex Cloud license plan.
You can expand your license retention to include flexible Hot Storage based retention to help accommodate varying storage requirements for different
retention periods and datasets. This add-on license is available to purchase based on your storage requirements for a minimum of 1,000 GB. If this license is
purchased, an Additional Storage subheading in the Hot Storage License section is displayed on the Dataset Management page with a bar indicating how
much of the storage is used.
NOTE:
Only datasets that are already handled as part of the GB license are supported for this license. In addition, the retention configuration is only available
in Cortex Cloud, as opposed to the public APIs.
On any dataset configured to use Additional Hot Storage, you can edit the retention period. This enables you to view the current retention details for hot and cold storage and configure the retention, including the amount of flexible hot storage-based retention designated for a dataset and the priority for the dataset's hot storage. When the storage limit is exceeded, the priority determines which data is most critical to preserve.
2. In the Datasets table, right-click any dataset designated with flexible hot storage, and select Edit Retention Plan.
Additional hot storage: Set the amount of flexible hot storage-based retention designated for this dataset in months, where a month is calculated as
31 days.
Hot Storage Priority: Select the priority designated for this dataset's hot storage as either Low, Medium, or High. This is used when the storage limit is exceeded. Data is first deleted from the lowest to the highest priority, and then from the oldest to the latest timestamp.
4. Click Save.
Datasets table
For each dataset listed in the table, the following information is available:
NOTE:
Certain fields are exposed and hidden by default. An asterisk (*) is beside every field that is exposed by default.
Datasets include dataset permission enforcement in the Cortex Query Language (XQL), Query Center, and XQL Widgets. For example, to view or access any of the endpoints and host_inventory datasets, you need role-based access control (RBAC) permissions to the Endpoint Administration and Host Inventory views. Managed Security Services Provider (MSSP) administration permissions are not enforced on child tenants, but only on the MSSP tenant.
Field Description
*TYPE Displays the type of dataset based on the method used to upload the data. The possible values include: Correlation, Lookup, Raw,
Snapshot, System, and User. For more information on each dataset type, see What are datasets?.
*LOG UPDATE TYPE  Event logs are either updated continuously (Logs) or the current state is updated periodically (State), as detailed in the LAST UPDATED column.
*LAST UPDATED  Last time the data in the dataset logs was updated.
IMPORTANT:
This column is updated once a day. Therefore, if the dataset was created or updated by the target or lookup flows, the LAST UPDATED value can lag up to a day behind the time the queries or reports were actually run.
*ADDITIONAL STORAGE  Amount of flexible hot storage-based retention designated for this dataset in months, where a month is calculated as 31 days.
*TOTAL DAYS STORED  Actual number of days that the data is stored in the Cortex Cloud tenant, which comprises the HOT RANGE plus the COLD RANGE.
*HOT RANGE  Details the exact period of the Hot Storage from the start date to the end date.
*COLD RANGE  Details the exact period of the Cold Storage from the start date to the end date.
*TOTAL SIZE STORED  Actual size of the data that is stored in the Cortex Cloud tenant. This number depends on the events stored in the hot storage. For the xdr_data dataset, the first 31 days of storage are included with your license and are therefore not counted in the TOTAL SIZE STORED number.
*ADDITIONAL SIZE STORED  Actual size, in GB, of the additional flexible hot storage data that is stored in the Cortex Cloud tenant. This number depends on the events stored in the hot storage.
*AVERAGE DAILY SIZE  Average daily amount stored in the Cortex Cloud tenant. This number depends on the events stored in the hot storage.
*HOT STORAGE PRIORITY  Indicates the priority set for the dataset's hot storage: Low, Medium, or High. This priority is used when the storage limit is exceeded: data is deleted first by priority, from lowest to highest, and then by timestamp, from oldest to latest.
*TOTAL EVENTS  Total number of events/logs that are stored in the Cortex Cloud tenant. This number depends on the events stored in the hot storage.
*AVERAGE EVENT SIZE  Average size of a single event in the dataset (TOTAL SIZE STORED divided by TOTAL EVENTS). This number depends on the events stored in the hot storage.
*TTL For lookup datasets, displays the value of the time to live (TTL) configured for when lookup entries expire and are removed
automatically from the dataset. The possible values are:
Custom: Lookup entries expire according to a set number of days, hours, and minutes. The maximum number of days is 99999.
For more information, see Set time to live for lookup datasets.
DEFAULT QUERY TARGET  Details whether the dataset is configured as your default query target in XQL Search, so that when you write your queries you do not need to define a dataset. By default, only the xdr_data dataset is configured as the DEFAULT QUERY TARGET, and this field is set to Yes. All other datasets have this field set to No. When setting multiple default datasets, your query does not need to mention any of the dataset names, and Cortex Cloud queries the default datasets using a join.
TOTAL HOT RETENTION  Total hot storage retention configured for the dataset in months, where a month is calculated as 31 days.
TOTAL COLD RETENTION  Total cold storage retention configured for the dataset in months, where a month is calculated as 31 days.
Dataset views
Cortex Cloud supports creating dataset views in the Dataset Management page to enhance data efficiency and security. Dataset views provide a virtual
representation of data from one or more datasets, based on the Cortex Query Language (XQL) query defined, and provide multiple benefits, such as joining
datasets into logical subsets through defined queries, manipulating data without altering underlying datasets, and segregating data for specific user needs or
access privileges through the Role-based access control (RBAC) settings.
Once a dataset view is created, you can edit or delete the dataset view by right-clicking the dataset view in the Dataset Views table. A dataset view can only
be deleted if there are no other dependencies. For example, if a Correlation Rule is based on a dataset view, you wouldn't be able to delete the dataset view
until you removed the dataset view from the XQL query of the Correlation Rule.
Cortex Cloud logs entries for events related to creating, editing, and deleting datasets or dataset views. These monitored activities are available to view in the
datasets and dataset views audit logs in the Management Audit Logs. For more information, see Monitor datasets and dataset views activity.
When building an XQL query to define a dataset view, the query is built in the same way as creating a query through the Query Builder. However, it's
important to be aware of the following points that are specific to dataset view queries:
RT Correlation Rules
Query Library
Presets
Only the following XQL stages are supported when building a dataset view query:
alter
dedup
fields
filter
join
replacenull
union
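Combining only the supported stages above, a dataset view query might look like the following sketch. The event type and field names are illustrative assumptions, not values taken from this guide:

```xql
dataset = xdr_data
| filter event_type = ENUM.NETWORK
| fields agent_hostname, action_local_ip, action_remote_ip
| dedup agent_hostname
```

This sketch filters the xdr_data dataset to network events, keeps three fields, and deduplicates on host name, producing a narrower logical subset that can be exposed as a view.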
Once the dataset view is created, it is listed as an available dataset when building your XQL queries as long as you have the necessary permissions to
access the dataset view in the Role-based access control (RBAC) settings.
The query must contain no errors, including using only supported stages, to run; otherwise, the Run button remains disabled.
6. Click Save.
NOTE:
You'll only be able to save the dataset view if the query contains no errors; otherwise, the Save button is disabled.
Once the dataset view is created, you can now control user access permissions through Role-based access control (RBAC).
Dataset views access permissions
LICENSE TYPE:
Managing Roles requires an Account Admin or Instance Administrator role. For more information, see Predefined user roles.
Access permissions for dataset views are configured in the same way that you set dataset access permissions for any dataset through user roles in Cortex
Cloud Access Management. Cortex Cloud uses role-based access control (RBAC) to manage roles with specific permissions for controlling user access.
RBAC helps manage access to Cortex Cloud components and datasets, so that users, based on their roles, are granted minimal access required to
accomplish their tasks. Once the user role is configured to access these dataset views, you can assign the user role to the designated users or user
groups that you want to access these dataset views.
2. Configure a user role with the dataset views that you want users to access.
a. Select Roles.
To create a new role to assign the dataset views, click New Role, and set a Role Name and Description (optional).
To edit an existing user role with these dataset views, right-click the relevant user role, and select Edit Role.
To create a new role based on an existing role, right-click the relevant user role, select Save As New Role, and set a Role Name and
Description (optional).
c. Under Datasets, you have two options for setting the Cortex Query Language (XQL) dataset access permissions for the user role:
Set the user role with access to all XQL datasets by disabling the Enable dataset access management toggle.
Set the user role with limited access to certain XQL datasets by selecting the Enable dataset access management toggle and selecting the
datasets under the different dataset category headings.
d. Scroll down to Dataset View and select the particular dataset views that you want assigned to this user role.
e. Click Save.
3. Assign the user role with the dataset views configured to the designated users or user groups. For more information, see Assign a user to a role.
For each dataset view listed in the table, information is available. Here are descriptions of the columns that may require further explanation:
Field Description
SOURCE QUERY Displays the query used to create the dataset view.
IS VALID Details whether the query for the dataset view is still valid or not.
RELATED TABLES Details the other datasets that are related to this dataset view.
Abstract
Cortex Cloud runs every Cortex Query Language (XQL) query against a dataset. A dataset is a collection of column:value sets. If you do not specify a dataset
in your query, Cortex Cloud runs the query against the default datasets configured, which is by default xdr_data for a dataset query. The xdr_data dataset
contains all of the endpoint and network data that Cortex Cloud collects. For a Cortex Data Model (XDM) query, unless specific datasets are specified, a query
will run against all mapped datasets. You can always change the default datasets using the set to default option. You can also upload datasets as a CSV, TSV,
or JSON file that contains the data you are interested in querying. These uploaded datasets are called lookup datasets.
It's also possible to create dataset views, which provide a virtual representation of data from one or more datasets, based on the Cortex Query Language
(XQL) query defined. Dataset views enhance data efficiency and security. For example, by segregating data for specific user needs or access privileges
through the Role-based access control (RBAC) settings. For more information, see Dataset views.
Set a dataset as default, which enables you to query the datasets without specifying them in the query.
Name a specific dataset at the beginning of your query with the dataset stage command.
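For example, a query that names its dataset explicitly with the dataset stage might look like the following sketch (the field names and enum value are illustrative assumptions):

```xql
dataset = xdr_data
| filter agent_os_type = ENUM.AGENT_OS_WINDOWS
| fields agent_hostname, agent_version
| limit 100
```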
Dataset types
The type of dataset is based on the method used to upload the data. The possible types include:
Lookup: A dataset containing key-value pairs that can be used as a reference to correlate to events. For example, a user list with corresponding access
privileges. You can import or create a lookup dataset, and then reference the values for a certain key, run queries and take action. For more information,
see Lookup datasets.
Raw: Any dataset where Palo Alto Networks (PANW) data is ingested out-of-the-box, or where third-party data is ingested using a configured dedicated collector.
Snapshot: A dataset that contains only the last successful snapshot of the data, such as Workday or ServiceNow CMDB tables.
User: If saved by a query using the target command, the Type can be either User or Lookup.
Datasets in XQL
IMPORTANT:
By default, forensic datasets are not included in XQL query results, unless the dataset query is explicitly defined to use a forensic dataset.
Cortex Query Language (XQL) supports using different languages for dataset and field names. In addition, when setting up your XQL query, it is important to
keep in mind the following:
The dataset formats supported are dependent on the data retention offerings available in Cortex Cloud according to whether you want to query hot
storage or cold storage.
Hot Storage queries are performed on a dataset using the format dataset = <dataset name>. This is the default option.
dataset = xdr_data
Cold Storage queries are performed using the format cold_dataset = <dataset name>.
cold_dataset = xdr_data
Refresh times for datasets: all Cortex Cloud system datasets, which are created out-of-the-box, are continuously ingested in near real time as
the data comes in, except for the following:
Forensics datasets: The Forensics data is not configured to be updated by default. When you enable a collection in the Agent Settings profile, the
data is collected only once unless you specify an interval. If you specify an interval, the data is collected every <interval> number of hours with
the minimum being 12.
Query against a dataset by selecting it with the dataset command when you create an XQL query. For more information, see Create XQL query.
After your query runs, you can always save your query results as a dataset by using the target stage command.
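As a sketch, a query that saves its results to a dataset with the target stage might look like the following. The filter values and the target dataset name are illustrative assumptions, and the exact target stage options (such as append or overwrite behavior) are described in the XQL Language Reference Guide:

```xql
dataset = xdr_data
| filter action_process_image_name = "powershell.exe"
| fields agent_hostname, action_process_image_command_line
| target type = dataset powershell_activity
```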
You can manage your datasets and dataset views in Cortex Cloud from the Settings → Configurations → Data Management → Dataset Management page.
Below are some of the main tasks available for all dataset types by right-clicking a particular dataset or dataset view listed in either the Datasets or Dataset
Views table. Only tasks that need further explanation are explained below. Datasets and dataset views can only be deleted if there are no other dependencies.
For example, if a Correlation Rule is based on a dataset or dataset view, you wouldn't be able to delete the dataset or dataset view until you removed the
dataset view from the XQL query of the Correlation Rule.
For more information on tasks specific to lookup datasets, see Lookup datasets.
View Schema
Select View Schema to view the schema information for every field found in the dataset or dataset view result set in the Schema tab after running the query in
XQL. Each system field in the schema is written with an underscore (_) before the name of the field in the FIELD NAME column in the table.
Set as default
Select Set as default to query the dataset without having to specify it in your queries in XQL by typing dataset = <name of dataset>. Once configured,
the DEFAULT QUERY TARGET column entry for this dataset is set to Yes in the Datasets table. By default, this option is not available when right-clicking the
xdr_data dataset, because it is the only dataset configured as the DEFAULT QUERY TARGET and contains all of the endpoint and network data that
Cortex Cloud collects. Once you Set as default another dataset, you can always remove it by right-clicking the dataset and selecting Remove from defaults.
When setting multiple default datasets, your query does not need to mention any of the dataset names, and Cortex Cloud queries the default datasets using a
join. This option is only relevant for datasets.
Select Copy text to clipboard to copy the name of the dataset or dataset view to your clipboard.
Abstract
Learn more about lookup datasets to correlate data from a data source with events in your environment.
Lookup datasets enable you to correlate data from a data source you provide with the events in your environment. For example, you can create a lookup with a
list of high-value assets, terminated employees, or service accounts in your environment. Use lookups in your search, detection rules, and threat hunting.
Lookups are stored as name-value pairs and are cached for optimal query performance and low latency.
Lookup tables support low-frequency changes of up to 1200 modifications per day. Changes are implemented whenever a lookup dataset is edited. Only
one person can edit the file at a time; concurrent editing is not supported.
Investigate threats and respond to cases quickly with the rapid import of IP addresses, file hashes, and other data from CSV files. After you import the
data, use lookup name-value pairs for joins and filters in threat hunting and general queries.
Import business data as a lookup. For example, import user lists with privileged system access, or terminated employees. Then, use the lookup to create
allow lists and blocklists to detect or prevent those users from logging in to the network.
Create allow lists to suppress issues from a group of users, such as users from authorized IP addresses that perform tasks that would normally trigger
the issue. Prevent benign events from becoming issues.
Enrich event data. Use lookups to enrich your event data with name-value combinations derived from external data sources.
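As an illustrative sketch of the enrichment use case, a lookup can be joined to event data. The lookup name, alias, and field names below are assumptions, and the exact join stage syntax is described in the XQL Language Reference Guide:

```xql
dataset = xdr_data
| join type = left (dataset = high_value_assets) as hva hva.hostname = agent_hostname
| fields agent_hostname, agent_ip_addresses, hva.owner
```

Here, each event row is enriched with the owner recorded for the matching host in the high_value_assets lookup.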
You can import or create a lookup dataset, and then reference the values for a certain key, run queries and take action. Lookup datasets are created by any of
the following methods:
Manual upload from a CSV, TSV, or JSON file to Cortex Cloud from the Dataset Management page. For more information, see Import a lookup dataset.
Query results are saved to a lookup dataset. If saved using the target stage, the Type can be either User or Lookup. For more information, see the
target stage in the XQL Language Reference Guide.
After a lookup dataset is imported, you can always edit the dataset to update the data manually by right-clicking the dataset and selecting Edit.
NOTE:
A lookup dataset can only be deleted if there are no other dependencies. For example, if a Correlation Rule is based on a lookup dataset, you wouldn't be
able to delete the lookup dataset until you removed the dataset from the XQL query of the Correlation Rule.
Abstract
Learn more about importing data from an external file to create or update a lookup dataset in Cortex Cloud.
You can import data from CSV, TSV, or JSON files into Cortex Cloud to create or update lookup datasets.
PREREQUISITE:
The maximum size for the total data to be imported into a lookup dataset is 30 MB.
Field names can contain characters from different languages, special characters, numbers (0-9), and underscores (_).
Field names can't contain duplicate names, white spaces, or carriage returns.
The file must not contain byte arrays (binary data), as they can't be uploaded.
Each line in the JSON file must represent one JSON object. Ensure no brackets enclose the objects at the top level.
Example 36.
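A valid JSON file for import might look like the following sketch, with one object per line and no enclosing top-level brackets (the field names and values are illustrative):

```json
{"user": "jsmith", "department": "marketing", "privileged": true}
{"user": "alee", "department": "finance", "privileged": false}
```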
2. Browse to your CSV, TSV, or JSON file. You can only upload a TSV file if it has a .tsv file extension.
3. (Optional) Under Name, type a new name for the target dataset.
By default, Cortex Cloud uses the name of the original file as the dataset name. You can change this name to something that will be more meaningful for
your users when they query the dataset. For example, if the original file name is mrkdptusrsnov23.json, you can save the dataset as
marketing_dept_users_Nov_2023.
Dataset names can contain special characters from different languages, numbers (0-9) and underscores (_). You can create dataset names using
uppercase characters, but in queries, dataset names are always treated as if they are lowercase.
IMPORTANT:
The name of a dataset created from a TSV file must always include the extension. For example, if the original file name is mrkdptusrsnov23.tsv,
you can save the dataset with the name marketing_dept_users_Nov_2023.tsv.
4. (Optional) Select Replace the existing data in the dataset to overwrite the data in an existing lookup dataset with the contents of the new file.
6. After receiving a notification reporting that the upload succeeded, click Refresh to view the dataset in your list of datasets.
If the upload fails for any reason, you'll receive a notification in the Notification Center.
Abstract
You can only download a JSON file for a lookup dataset, where the Type is set to Lookup on the Dataset Management page. This option is not available for any
other dataset type.
When you download a lookup dataset with field names in a foreign language, the downloaded JSON file displays the fields as COL_<randomstring> as
opposed to returning the fields in the foreign language as expected.
2. In the Datasets table, right-click the lookup dataset that you want to download as a JSON file, and select Download.
Abstract
Learn more about setting the time to live (TTL) for lookup datasets in Cortex Cloud.
You can specify when lookup entries expire and are removed automatically from the lookup dataset by configuring the time to live (TTL). The time period of the
TTL interval is based on when the data was last updated. The default is forever and the entries never expire. You can also configure a specific time according
to the days, hours, and minutes. Expired elements are removed from the lookup dataset by a scheduled job that runs every five minutes.
3. Select one of the following to configure when lookup dataset entries expire and are removed:
Custom: Lookup entries expire according to a set number of days, hours, and minutes. The maximum number of days is 99999.
4. Click Save.
The TTL column in the Datasets table is updated with the changes and these changes are applied immediately on all existing lookup entries.
Abstract
Learn more about the monitored Cortex Cloud datasets and dataset views activities.
Cortex Cloud logs entries for monitored activities related to datasets and dataset views. Cortex Cloud stores the logs for 365 days. To view the datasets
and dataset views audit logs, select Settings → Management Audit Logs.
You can customize your view of the logs by adding or removing filters to the Management Audit Logs table. You can also filter the page result to narrow down
your search. The following table describes the default and optional fields that you can view in the Cortex Cloud Management Audit Logs table:
NOTE:
Certain fields are exposed and hidden by default. An asterisk (*) is beside every field that is exposed by default.
Field Description
Host Name* This field is not applicable for datasets and dataset views logs.
Reason This field is not applicable for datasets and dataset views logs.
Severity  The severity assigned to the log: Critical, High, Medium, Low, or Informational
Type* and Sub-Type* Additional classifications of dataset and dataset view logs (Type and Sub-Type):
Datasets:
Create Dataset
Delete Dataset
Update Dataset
Dataset Views:
Learn more about managing and tracking your compute units usage for API, Apps, and Cold Storage XQL queries.
Cortex Cloud uses compute units (CU) for these types of queries:
API Queries: When running Cortex Query Language (XQL) queries on your data sources using APIs, each XQL query API consumes CU based on the
timeframe, complexity, and number of API response results.
Apps: The Notebooks instance consumes 1000 CU each day and BigQuery queries consume CU based on the timeframe, complexity, and number of
results. Apps is charged daily at 00:00 UTC.
Cold Storage Queries: Cold Storage is a data retention offering for cheaper storage usually for long-term compliance needs with limited search options.
You can perform queries on Cold Storage data using the following dataset formats:
For historical data imported into cold storage queries: cold_dataset = archive_<dataset name>.
NOTE:
For more information, see Import historical data into cold storage.
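For example, a sketch of a query against historical data imported into cold storage, following the archive_<dataset name> format above (the dataset and field names are illustrative assumptions):

```xql
cold_dataset = archive_firewall_logs
| filter action_remote_ip = "203.0.113.10"
| fields action_local_ip, action_remote_ip
```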
CU consumption is based on the timeframe, complexity, and number of Cold Storage response results of each XQL Cold Storage query.
When you query Cold Storage data, the rewarmed data is saved in a temporary hot storage cache that is available for subsequent queries on the same
time-range at no additional cost. The rewarmed data is available in the cache for 24 hours and on each re-query the cached data is extended for 24
hours, for up to 7 days.
NOTE:
The CU consumption of cold storage queries is based on the number of days in the query time frame. For example, when querying 1 hour of a specific
day, the CU of querying this entire day are consumed. When querying 1 hour that extends past 2 days, such as from 23:50 to 00:50 of the following day,
the CU of querying these two days are consumed.
Abstract
Learn more about how compute units (CU) work according to your license, and the available options after reaching your quota.
Cortex Cloud provides a free daily quota of compute units (CU) allocated according to your license size. Queries called without enough quota will fail. To
expand your investigation capabilities, you can purchase additional CU by enabling the Compute Unit add-on.
You can configure the daily consumption limit for your compute units according to your organizational needs and change it when needed. For example, you
can set a lower limit on a daily basis, and during an incident investigation you can change it to a higher limit that enables you to consume more compute units.
Your unused compute unit balance cannot be transferred from one licensing period to the next.
To gauge how many CU you require, Cortex Cloud provides a 30-day free trial period with 1/12 of your allocated annual CU quota to run XQL API and Cold
Storage queries. You can then track the cost of each XQL API and Cold Storage query response on the Compute Units Usage page. In addition, Cortex
Cloud sends a notification when the Compute Units add-on has reached your daily threshold.
NOTE:
To enable the add-on, select Settings → Configurations → Cortex Cloud License → Addons tile, and select the Compute Unit tile and Enable.
2. In Annual Usage in Compute Units, monitor the number of free compute units per license year, the number of purchased compute units per license year,
and the ratio of used compute units to your yearly total compute units.
If you have Edit permissions for Public APIs, you can customize the Daily limit to cater to your needs.
Divide annual quota evenly: Total annual compute units divided by 365.
No limit
Custom: Configure a daily amount that is equal to or greater than your daily average calculated over a year (annual total/365). Use only integers.
For Managed Security tenants, the values calculated are the total daily usage of parent and child tenants.
3. In Compute Units Usage, view the compute unit usage over the past 30 days or over the past 12 months.
Compute Units Usage over the Last 30 Days: Hover over each bar to view the total number of units used each day. The daily compute units are
calculated at 00:00 UTC time. The red line represents your daily limit for that day. If you change the daily limit a few times on a specific day, the
displayed limit is the last number you configured on that day. Select a bar to display in the Compute Unit Usage table the list of queries executed
on the selected day.
Compute Units Usage over the Last 12 Months: Hover over each bar to view the total number of compute units used each month. The dotted gray
line represents your average annual limit per month. You can use the 12 month display to plan how many compute units you need in the next
licensing period.
4. In the Compute Units Usage table, investigate all the queries that were executed on your tenant. You can filter and sort according to the following fields.
Timestamp
For Notebooks and BQ queries: date and time the query is charged.
Compute Unit Usage: Displays how many compute units were used to execute the query.
Tenant: Appears only in a Managed Security tenant. Displays which tenant executed an API query or Cold Storage query.
In the Compute Units Usage table, locate an XQL API or Cold Storage query, right-click and select Show Results.
The query is displayed in the query field of the Query Builder where you can view the query results. For more information, see How to build XQL queries.
You can add a new data source with the Data Source Onboarder. The Onboarder installs the data source, sets up an instance, configures playbooks and
scripts, and other recommended content. The Onboarder offers default (customizable) options, and displays all configured content in a summary screen at the
end of the process.
To + Add New Instance for an integrated data source, click the menu in the right corner of an existing data source, and then skip to Step 4.
Hovering over a data source displays information about the data source and its integrations. Data sources that are already integrated are highlighted
green and show Connect Another Instance. To see details of existing integrations, click on the number of integrations.
The data sources are drawn from the Marketplace, Custom Collectors, and integrations. If you search for a data source and No Data Sources Found is
displayed, click Try searching the Marketplace to view the Marketplace page prefiltered for your search. If there are no available options in the Marketplace,
you can use one of the Custom Collectors to build your own.
NOTE:
If a data source contains multiple integrations, the integration configured as the default integration will be used by the Data Onboarder. The default
integration of the content pack is indicated in each content pack's documentation. The other integrations are available for configuration in the
Automation and Feed Integrations page after installing the content pack.
When adding XDR data sources, the Data Source Onboarder is not available; however, you can still enable the data source. Cortex Cloud then
creates an instance and lists it on the Data Sources page.
4. In the New Data Source window, complete the mandatory fields in the Connect section.
For more information about the fields, click the question mark icon.
5. (Optional) Under Collect, select Fetched alerts and complete the fields.
The items in this section are content specific. Some options are view only and others are customizable. Click on each option for more information:
You can select the Playbooks and Scripts that you want to enable. By default, recommended options are selected. Any unselected content is
added as disabled content. Depending on the selected playbook, some scripts are mandatory.
NOTE:
If you are adding a new instance to an existing data source, these options are View only.
You can adjust the view only options on the relevant page in the system, for example Correlations, Playbooks, or Scripts.
Cortex Cloud automatically installs content packs with required dependencies and updates any pre-installed optional content packs. You can also
Select additional content packs with optional dependencies to be configured during connection.
If the test fails, you can Run Test & Download Debug Log to debug the error.
If errors occurred during the test, you can click See Details and Back to Edit to revise your configuration. For advanced configuration, click on an item
to open a new window to the relevant page in the system (for example, Correlations or Playbooks) filtered by the configuration.
You can manage the Instances configured for a Data Source on the Data Sources page. You can edit, delete, enable, or disable instances, and refresh log
data.
2. Find an instance by clicking on a Data Source name or using the Search field.
3. In the row for the instance name, take the required action:
Action Instructions
b. In Edit Data Source, you can update the values in the Connect and Collect sections. The options under
Recommended Content are view only.
If you delete all the instances for a Data Source, the Data Source is not listed on the Data Sources page.
Abstract
You can manage the cloud instances configured for a CSP on the Data Sources page. You can check the status, edit, delete, enable, or disable instances,
and initiate discovery scan.
2. Find the cloud instance by clicking on the CSP name or using the Search field.
3. In the row for the cloud instance, click View Details. The Cloud Instances page is displayed, filtered by the CSP you selected.
4. In the Cloud Instances page, you can filter the results by any heading and value.
5. Click on an instance name to open the details pane for that instance.
Action Instructions
Discover Now To initiate a discovery scan, in the row for the cloud instance, right-click
and select Discover Now. Alternatively, in the details pane, click the
more options icon and select Discover Now.
Enable/Disable In the row for the cloud instance, right-click and select Enable or
Disable. Alternatively, in the details pane, click the more options icon
and select Enable or Disable.
Delete In the row for the cloud instance, right-click and select Delete.
Alternatively, in the details pane, click the more options icon and select
Delete.
Create a new instance Click New Instance and select the type of CSP for which you want to
create a new instance. Follow the onboarding wizard to define its
settings.
Edit configuration In the row for the cloud instance, right-click and select Configuration.
Alternatively, in the details pane, click the edit button. Follow the
onboarding wizard to edit the cloud instance's settings. You must
execute the updated template in the CSP environment for the
configuration changes to be applied.
Abstract
You can manage the Kubernetes Connector instances on the Data Sources page. You can check the status of, edit, or delete Kubernetes Connector instances.
2. Find the Kubernetes instance by clicking on the Kubernetes name or using the Search field.
3. In the row for the Kubernetes instance, click View Details. The Kubernetes Connectors page is displayed with all deployed Kubernetes Connectors. To
view all Kubernetes clusters, including ones that are not yet deployed, go to the Kubernetes Connectivity Management page.
4. In the Kubernetes Connectors page, click on a cluster name to open the details pane for that instance.
5. You can perform the following actions on each Kubernetes Connector instance:
Action Instructions
Open Cluster Details In the details pane, click the more options icon and select Open Cluster
Details. The Asset Card for that Kubernetes cluster is displayed.
Edit Connector In the row for the Kubernetes instance, right-click and select Edit.
Alternatively, in the details pane, click the more options icon and select
Edit Connector. In Edit Kubernetes Connector, enter a name for the
installer. You can edit the namespace for the connector, the scan
cadence, and the version of the connector you want to install. You must
execute the updated template in the Kubernetes environment for the
configuration changes to be applied.
Delete Connector In the row for the Kubernetes instance, right-click and select Delete.
Alternatively, in the details pane, click the more options icon and select
Delete Connector. To remove the connector, you must manually run
Kubernetes commands to delete the resources in the Kubernetes
environment. The commands are listed here.
Navigate to Settings → Data Sources and find the Kubernetes instances by clicking on the Kubernetes name or using the Search field. In the Kubernetes
Connectors page, click Kubernetes Connectivity Management to view all detected Kubernetes clusters. Here, you can check if a cluster is connected, view the
status, and see the connector version. When a new version of the Kubernetes Connector is available, you can update it here.
Abstract
You can troubleshoot errors on cloud instances by drilling down on an instance from the Data Sources page.
To help you to troubleshoot errors on a cloud instance, Cortex Cloud provides the following visibility and drilldown options:
A breakdown of the security capabilities enabled on an instance, detailing the status of each capability along with any open errors or issues.
Additional XQL drill down options to query the history of error and recovery events for each security capability.
1. Under Cloud Service Provider, review the status of the instances that were onboarded for the service provider. If the status shows Warning or Error, hover
over the service provider and click View Details.
2. On the Cloud Instances page, review the list of instances that were onboarded and their overall status. The status is displayed as follows:
Warning: The connector is enabled and has minor issues. For example, some accounts or capabilities are in warning or error status.
Error: The connector is enabled and has substantial errors. For example, an authentication failure, an outpost failure, major permissions issues, or
(for organization level accounts) the majority of the accounts in the instance are in error status.
3. To understand why an instance is showing a Warning or Error status, click on the instance name.
The cloud instance panel provides a breakdown of the security capabilities and the accounts onboarded on the instance. Review the information in the
following sections:
Section Context
Header Displays the overall status of the instance and the following information about the account, as specified during onboarding:
Scope of the instance: The number of accounts onboarded on the instance and their status. See the Accounts section for
more information about the individual accounts and the type of account (single account or organization).
Scan mode: Cloud Scan or Outpost. For accounts using Outpost, information is displayed about the status of the Outpost
account and the account ID.
Security Capabilities Displays a breakdown of the security capabilities enabled on the instance and their individual statuses. Click on any item that
shows a warning or error status to see the open errors and issues that contributed to the status:
Errors are factual objects that are automatically created when problems occur, and provide insight into the current status of
the capability. For example, if a permission is missing, an error is displayed. Browse and filter the errors to better
understand and resolve the problem.
Issues are actionable objects that are triggered when detected problems exceed defined thresholds. Issues are
manageable, trackable, and provide remediation suggestions and automations.
The issues displayed in the panel are open issues that are specifically related to the selected connector with the selected
capability in the observed scope (single account or organization). Click an issue to start investigating it.
Accounts Lists the accounts that are onboarded on the instance and their individual status.
If multiple accounts are onboarded on the instance, click on each account to filter the page information by account and drill down to the security capability statuses for each account.
4. If the instance shows an Outpost error, go to the All Outposts page and find the outpost account that is being used by this instance. Right click the
Outpost account to view the open errors and issues for the account.
5. If the account shows Permission errors, use the side panel to check which permissions are missing. You can also Edit the instance to redeploy the cloud
setup template, which should normally resolve the error.
6. Query the cloud_health_auditing dataset. This dataset records error and recovery events for the security capabilities in cloud instances. By querying this
dataset you can see when an error started, how prevalent it is, and whether there is a recurrence pattern. See the field descriptions and query examples for
each security capability.
NOTE:
Errors related to collection of audit logs in the cloud instance are recorded in the collection_auditing dataset. For more information, see Audit
logs fields and query examples.
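For example, to review the history of errors for one capability on a single account and see how far back they recur, a query of the following shape can be used (Account_A is a placeholder account name, and the standard _time field is assumed to be present in the dataset):
dataset = cloud_health_auditing | filter capability = "Discovery" and classification = "Error" and account = "Account_A" | sort desc _time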
7. Set up correlation rules to trigger issues when errors occur in cloud security capabilities. See the following examples.
You can review Outpost entries in the cloud_health_auditing dataset to see Outpost activity over time, or to search for errors on specific accounts.
Outpost entries are added to the dataset as follows:
An error occurred on an Outpost account that disabled or prevented an operation. This is audited as Error.
An exceptional condition occurred on an Outpost account that might cause problems if not resolved. This is audited as Warning.
Field Description
Resource ID Outpost ID
Capability Outpost
Error Details about the error. For informational entries this is blank.
Identify Outpost errors in the eu-west-3 region:
dataset = cloud_health_auditing | filter capability = "Outpost" and classification = "Error" and region = "eu-west-3"
See all entries (error, warning, and recovery) for Outpost_1 on cloud account Account_A:
dataset = cloud_health_auditing | filter capability = "Outpost" and resource_id = "Outpost_1" and account = "Account_A"
You can review Permissions entries in the cloud_health_auditing dataset to see Permissions activity over time, or to search for errors on specific
accounts. Permissions entries are added to the dataset as follows:
An exceptional condition occurred that might cause problems if not resolved. This is audited as Warning.
Field Description
Account Name of the account where the event occurred, or All accounts.
Capability Permissions
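No query example is shown above for Permissions entries; as a sketch, warnings for a specific connector could be surfaced with a filter of the same shape as the other capabilities (AWS_1 is a placeholder connector name):
dataset = cloud_health_auditing | filter capability = "Permissions" and classification = "Warning" and connector = "AWS_1"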
You can review Discovery engine entries in the cloud_health_auditing dataset to see Discovery activity over time, or to search for errors on specific
accounts. Discovery entries are added to the dataset as follows:
An exceptional condition occurred that might cause problems if not resolved. This is audited as Warning.
The following table describes the fields for Discovery engine entries:
Field Description
Account Name of the account where the event occurred, or All accounts
Capability Discovery
Identify API exec errors on the Discovery engine for all accounts on the AWS_1 connector:
dataset = cloud_health_auditing | filter capability = "Discovery" and connector = "AWS_1" and classification = "Error"
See all Discovery engine activity on connector AWS_1 for Account_A in the af-south-1 region:
dataset = cloud_health_auditing | filter capability = "Discovery" and connector = "AWS_1" and account = "Account_A" and region = "af-south-1"
You can review ADS entries in the cloud_health_auditing dataset to see ADS activity over time, or to search for errors on specific accounts. ADS entries
are added to the dataset as follows:
The asset or host was excluded from the scan. This is audited as Excluded.
Field Description
Resource ID Asset ID
Capability ADS
Error Details about the error. For informational entries this is blank.
Identify failed ADS scans on connector "a8df43e848dd42778ae7efd5a706a0fc" for EC2 assets at the asset scope level, filtered by region (northamerica-northeast2-a):
dataset = cloud_health_auditing | filter capability = "ADS" and classification = "failed" and connector = "a8df43e848dd42778ae7efd5a706a0fc" and type = "EC2_INSTANCE" and scope = "Asset" and region = "northamerica-northeast2-a"
See all ADS scans (failed and successful) on connector "a8df43e848dd42778ae7efd5a706a0fc" for EC2 assets belonging to Account_A:
dataset = cloud_health_auditing | filter capability = "ADS" and connector = "a8df43e848dd42778ae7efd5a706a0fc" and type = "EC2" and account = "Account_A"
You can review DSPM entries in the cloud_health_auditing dataset to see DSPM activity over time, or to search for errors on specific accounts. DSPM
entries are added to the dataset as follows:
Field Description
Resource ID Asset ID
Capability DSPM
Error Details about the error. For informational entries this is blank.
Identify failed DSPM scans on the AWS_1 connector for S3 asset types, filtered by region (ap-east-1):
dataset = cloud_health_auditing | filter capability = "DSPM" and classification = "Error" and connector = "AWS_1" and type = "S3_BUCKET" and region = "ap-east-1"
See all DSPM scans (failed and successful) on the AWS_1 connector, for all scanned assets on Account_A:
dataset = cloud_health_auditing | filter capability = "DSPM" and account = "Account_A" and connector = "AWS_1"
You can review Registry scanning entries in the cloud_health_auditing dataset to see Registry scanning activity over time, or to search for errors on
specific accounts. Registry scanning entries are added to the dataset as follows:
The following table describes the fields for Registry scanning entries:
Field Description
Resource ID Asset ID
Capability Registry
Error Details about the error. For informational entries this is blank.
Identify failed Registry scans on connector GCP_1:
dataset = cloud_health_auditing | filter capability = "Registry" and classification = "error" and connector = "GCP_1"
Review all registry scans (failed and successful) on connector GCP_1 for asset Asset_A:
dataset = cloud_health_auditing | filter capability = "Registry" and connector = "GCP_1" and resource_id = "Asset_A"
You can review Audit logs entries in the collection_auditing dataset. Querying this dataset can help you see the connectivity changes of an instance
over time, the escalation or recovery of the connectivity status, and the error, warning, and informational messages related to status changes. For more
information about this dataset, see Verify collector connectivity.
The following table describes the fields for Audit logs entries:
Field Description
Identify Audit logs collection errors on instance AWS_1:
dataset = collection_auditing | filter instance = "AWS_1" and log_type = "Audit Logs" and classification = "Error"
The following examples show how to set up correlation rules to trigger Health Collection issues when errors occur on a specific security capability.
In this example, a correlation rule will trigger a Health Collection issue if a DSPM scan fails on an AWS_S3 asset on the AWS_1 connector.
Example XQL:
dataset = cloud_health_auditing | filter capability = "DSPM" and classification = "Error" and type = "AWS_S3" and scope = "Asset" and connector = "AWS_1"
Field Value
Severity Medium
Category Collection
In this example, a correlation rule will trigger a Health Collection issue if an error is recorded on account Outpost_A in the eu-west-3 region.
Example XQL:
dataset = cloud_health_auditing | filter capability = "Outpost" and account = "Outpost_A" and region = "eu-west-3" and classification = "Error"
Field Value
Severity Medium
Category Collection