11th Generation Intel® Core™ Processor
Datasheet, Volume 1 of 2
Revision 010
You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel
products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted
which includes subject matter disclosed herein.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel
product specifications and roadmaps.
All product plans and roadmaps are subject to change without notice.
The products described may contain design defects or errors known as errata which may cause the product to deviate from
published specifications. Current characterized errata are available on request.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service
activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your
system manufacturer or retailer or learn more at intel.com.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for
a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or
usage in trade.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other
names and brands may be claimed as the property of others.
Contents
1 Introduction ............................................................................................................ 11
1.1 Processor Volatility Statement............................................................................. 13
1.2 Package Support ............................................................................................... 13
1.3 Supported Technologies ..................................................................................... 14
1.3.1 API Support (Windows*) ......................................................................... 15
1.4 Power Management Support ............................................................................... 15
1.4.1 Processor Core Power Management........................................................... 15
1.4.2 System Power Management ..................................................................... 15
1.4.3 Memory Controller Power Management...................................................... 16
1.4.4 Processor Graphics Power Management ..................................................... 16
1.4.4.1 Memory Power Savings Technologies ........................................... 16
1.4.4.2 Display Power Savings Technologies ............................................ 16
1.4.4.3 Graphics Core Power Savings Technologies................................... 16
1.5 Thermal Management Support ............................................................................ 16
1.6 Ball-out Information .......................................................................................... 17
1.7 Processor Testability .......................................................................................... 17
1.8 Operating Systems Support ................................................................................ 17
1.9 Terminology and Special Marks ........................................................................... 17
1.10 Related Documents ........................................................................................... 21
2 Technologies ........................................................................................................... 22
2.1 Platform Environmental Control Interface (PECI) ................................................... 22
2.1.1 PECI Bus Architecture ............................................................................. 22
2.2 Intel® Virtualization Technology (Intel® VT) .......................................................... 24
2.2.1 Intel® Virtualization Technology (Intel® VT) for Intel® 64 and Intel® Architecture
(Intel® VT-X) ......................................................................................... 24
2.2.2 Intel® Virtualization Technology (Intel® VT) for Directed I/O (Intel® VT-d) .... 27
2.2.3 Intel® APIC Virtualization Technology (Intel® APICv) .................................. 29
2.3 Security Technologies ........................................................................................ 30
2.3.1 Intel® Trusted Execution Technology (Intel® TXT) ...................................... 30
2.3.2 Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI) ........ 31
2.3.3 Perform Carry-Less Multiplication Quad Word Instruction (PCLMULQDQ) ........ 32
2.3.4 Intel® Secure Key .................................................................................. 32
2.3.5 Execute Disable Bit................................................................................. 32
2.3.6 Boot Guard Technology ........................................................................... 33
2.3.7 Intel® Supervisor Mode Execution Protection (SMEP) .................................. 33
2.3.8 Intel® Supervisor Mode Access Protection (SMAP) ...................................... 33
2.3.9 Intel® Software Guard Extensions (Intel® SGX).......................................... 33
2.3.10 Intel® Secure Hash Algorithm Extensions (Intel® SHA Extensions) ................ 35
2.3.11 User Mode Instruction Prevention (UMIP)................................................... 35
2.3.12 Read Processor ID (RDPID)...................................................................... 35
2.3.13 Total Memory Encryption (Intel® TME) ...................................................... 36
2.3.14 Control-flow Enforcement Technology (Intel® CET) ..................................... 36
2.3.14.1 Shadow Stack .......................................................................... 36
2.3.14.2 Indirect Branch Tracking ............................................................ 37
2.3.15 KeyLocker Technology ............................................................................ 37
2.3.16 Devil’s Gate Rock (DGR).......................................................................... 37
2.4 Power and Performance Technologies................................................................... 38
2.4.1 Intel® Smart Cache Technology ............................................................... 38
2.4.2 IA Core Level 1 and Level 2 Caches .......................................................... 38
2.4.3 Intel® Turbo Boost Max Technology 3.0 .................................................... 39
2.4.4 Power Aware Interrupt Routing (PAIR).......................................................39
2.4.5 Intel® Hyper-Threading Technology (Intel® HT Technology) .........................39
2.4.6 Intel® Turbo Boost Technology 2.0............................................................40
2.4.6.1 Intel® Turbo Boost Technology 2.0 Power Monitoring .....................40
2.4.6.2 Intel® Turbo Boost Technology 2.0 Power Control ..........................40
2.4.6.3 Intel® Turbo Boost Technology 2.0 Frequency ...............................40
2.4.7 Enhanced Intel SpeedStep® Technology ....................................................41
2.4.8 Intel® Speed Shift Technology ..................................................................41
2.4.9 Intel® Advanced Vector Extensions 2 (Intel® AVX2) ....................................41
2.4.10 Advanced Vector Extensions 512 Bit (Intel® AVX-512).................................42
2.4.11 Intel® 64 Architecture x2APIC ..................................................................43
2.4.12 Intel® Dynamic Tuning Technology (DTT) ..................................................44
2.4.13 Intel® GNA 2.0 (GMM and Neural Network Accelerator)................................45
2.4.14 Cache Line Write Back (CLWB) .................................................................45
2.4.15 Ring Interconnect ...................................................................................45
2.5 Intel® Image Processing Unit (Intel® IPU6)...........................................................46
2.5.1 Platform Imaging Infrastructure................................................................46
2.5.2 Intel® Image Processing Unit (Intel® IPU6)................................................46
2.6 Debug Technologies ...........................................................................................47
2.6.1 Intel® Processor Trace ............................................................................47
2.6.2 Platform CrashLog ..................................................................................47
2.6.3 Telemetry Aggregator .............................................................................47
2.7 Clock Topology ..................................................................................................49
2.7.1 Integrated Reference Clock PLL ................................................................49
2.8 Intel Volume Management Device (VMD) Technology..............................................49
2.8.1 Intel Volume Management Device Technology Objective...............................49
2.8.2 Intel Volume Management Device Technology ............................................50
2.8.3 Key Features..........................................................................................51
2.9 Deprecated Technologies ....................................................................................51
3 Power Management .................................................................................................52
3.1 Advanced Configuration and Power Interface (ACPI) States Supported ......................55
3.2 Processor IA Core Power Management ..................................................................56
3.2.1 OS/HW Controlled P-states ......................................................................57
3.2.1.1 Enhanced Intel SpeedStep® Technology .......................................57
3.2.1.2 Intel® Speed Shift Technology ....................................................57
3.2.2 Low-Power Idle States.............................................................................57
3.2.3 Requesting the Low-Power Idle States .......................................................58
3.2.4 Processor IA Core C-State Rules ...............................................................58
3.2.5 Package C-States ...................................................................................59
3.2.6 Package C-States and Display Resolutions..................................................62
3.3 Processor Graphics Power Management ................................................................63
3.3.1 Memory Power Savings Technologies.........................................................63
3.3.1.1 Intel® Rapid Memory Power Management (Intel® RMPM)................63
3.3.2 Display Power Savings Technologies ..........................................................64
3.3.2.1 Intel® Seamless Display Refresh Rate Switching Technology (Intel®
SDRRS Technology) with eDP* Port .............................................64
3.3.2.2 Intel® Automatic Display Brightness ............................................64
3.3.2.3 Smooth Brightness ....................................................................64
3.3.2.4 Intel® Display Power Saving Technology (Intel® DPST) 6.3.............64
3.3.2.5 Panel Self-Refresh 2 (PSR 2).......................................................65
3.3.2.6 Low-Power Single Display Pipe (LPSP) ..........................................65
3.3.2.7 Intel® Smart 2D Display Technology (Intel® S2DDT) .....................65
3.3.3 Processor Graphics Core Power Savings Technologies ..................................65
3.3.3.1 Intel® Graphics Dynamic Frequency.............................................65
3.3.3.2 Intel® Graphics Render Standby Technology (Intel® GRST) ............66
3.3.3.3 Dynamic FPS (DFPS) ................................................................. 66
3.4 System Agent Enhanced Intel SpeedStep® Technology ........................................... 66
3.5 Voltage Optimization.......................................................................................... 66
3.6 ROP (Rest Of Platform) PMIC .............................................................................. 66
3.7 PCI Express* Power Management ........................................................................ 67
4 Thermal Management .............................................................................................. 68
4.1 Processor Thermal Management .......................................................................... 68
4.1.1 Thermal Considerations........................................................................... 68
4.1.1.1 Package Power Control .............................................................. 69
4.1.1.2 Platform Power Control .............................................................. 70
4.1.1.3 Turbo Time Parameter (Tau) ...................................................... 71
4.1.2 Configurable TDP (cTDP) and Low-Power Mode........................................... 71
4.1.2.1 Configurable TDP ...................................................................... 71
4.1.2.2 Low-Power Mode ...................................................................... 72
4.1.3 Thermal Management Features ................................................................ 73
4.1.3.1 Adaptive Thermal Monitor .......................................................... 73
4.1.3.2 Digital Thermal Sensor .............................................................. 75
4.1.3.3 PROCHOT# Signal..................................................................... 76
4.1.3.4 PROCHOT Output Only............................................................... 77
4.1.3.5 Bi-Directional PROCHOT# .......................................................... 77
4.1.3.6 PROCHOT Demotion Algorithm.................................................... 77
4.1.3.7 Voltage Regulator Protection using PROCHOT# ............................. 78
4.1.3.8 Thermal Solution Design and PROCHOT# Behavior ........................ 78
4.1.3.9 Low-Power States and PROCHOT# Behavior ................................. 79
4.1.3.10 THRMTRIP# Signal.................................................................... 79
4.1.3.11 Critical Temperature Detection ................................................... 79
4.1.3.12 On-Demand Mode ..................................................................... 79
4.1.3.13 MSR Based On-Demand Mode..................................................... 79
4.1.3.14 I/O Emulation-Based On-Demand Mode ....................................... 80
4.1.4 Intel® Memory Thermal Management ........................................................ 80
4.2 Thermal and Power Specifications........................................................................ 81
5 Memory ................................................................................................................... 85
5.1 System Memory Interface .................................................................................. 85
5.1.1 Processor SKU Support Matrix .................................................................. 85
5.1.2 Supported Population.............................................................................. 86
5.1.3 Supported Memory Modules and Devices ................................................... 88
5.1.4 System Memory Timing Support............................................................... 89
5.1.5 System Memory Timing Support............................................................... 89
5.1.6 SAGV Points .......................................................................................... 90
5.1.7 Memory Controller (MC) .......................................................................... 90
5.1.8 System Memory Controller Organization Mode (DDR4) ................................ 91
5.1.9 System Memory Frequency...................................................................... 92
5.1.10 Technology Enhancements of Intel® Fast Memory Access (Intel® FMA).......... 92
5.1.11 Data Scrambling .................................................................................... 93
5.1.12 ECC DDR4 H-Matrix Syndrome Codes........................................................ 93
5.1.13 Data Swapping ...................................................................................... 94
5.1.14 DDR I/O Interleaving .............................................................................. 94
5.1.15 DRAM Clock Generation........................................................................... 95
5.1.16 DRAM Reference Voltage Generation ......................................................... 95
5.1.17 Data Swizzling ....................................................................................... 95
5.2 Integrated Memory Controller (IMC) Power Management ........................................ 95
5.2.1 Disabling Unused System Memory Outputs ................................................ 95
5.2.2 DRAM Power Management and Initialization ............................................... 96
5.2.2.1 Initialization Role of CKE ............................................................ 97
5.2.2.2 Conditional Self-Refresh............................................................. 97
5.2.2.3 Dynamic Power-Down ................................................................97
5.2.2.4 DRAM I/O Power Management ....................................................98
5.2.3 DDR Electrical Power Gating .....................................................................98
5.2.4 Power Training .......................................................................................98
6 USB-C* Sub System .................................................................................................99
6.1 USB-C Sub-System General Capabilities.............................................................. 100
6.2 USB™ 4 Router ............................................................................................... 101
6.2.1 USB 4 Host Router Implementation Capabilities ........................................ 101
6.3 USB-C Sub-system xHCI/xDCI Controllers........................................................... 102
6.3.1 USB 3 Controllers ................................................................................. 102
6.3.1.1 Extensible Host Controller Interface (xHCI) ................................. 102
6.3.1.2 Extensible Device Controller Interface (xDCI) .............................. 102
6.3.2 USB-C Sub-System PCIe Interface .......................................................... 103
6.4 USB-C Sub-System Display Interface.................................................................. 103
7 PCIe* Interface ..................................................................................................... 104
7.1 Processor PCI Express* Interface ....................................................................... 104
7.1.1 PCI Express* Support............................................................................ 104
7.1.2 PCI Express* Lane Polarity Inversion ....................................................... 106
7.1.3 PCI Express* Architecture ...................................................................... 106
7.1.4 PCI Express* Configuration Mechanism.................................................... 107
7.1.5 PCI Express* Equalization Methodology ................................................... 107
7.1.6 PCI Express* Hot-Plug........................................................................... 108
7.1.6.1 Presence Detection .................................................................. 108
7.1.6.2 SMI/SCI Generation................................................................. 108
8 Direct Media Interface (DMI) and On Package Interface (OPI) .............................. 109
8.1 Direct Media Interface (DMI) ............................................................................. 109
8.1.1 DMI Lane Reversal and Polarity Inversion................................................. 109
8.1.2 DMI Error Flow ..................................................................................... 110
8.1.3 DMI Link Down ..................................................................................... 110
8.2 On Package Interface (OPI)............................................................................... 110
8.2.1 OPI Support:........................................................................................ 110
8.2.2 Functional description: .......................................................................... 110
9 Graphics ................................................................................................................ 111
9.1 Processor Graphics........................................................................................... 111
9.1.1 Media Support (Intel® QuickSync and Clear Video Technology HD) .............. 111
9.1.1.1 Hardware Accelerated Video Decode .......................................... 111
9.1.1.2 Hardware Accelerated Video Encode........................................... 112
9.1.1.3 Hardware Accelerated Video Processing ...................................... 113
9.1.1.4 Hardware Accelerated Transcoding ............................................ 113
9.2 Platform Graphics Hardware Feature .................................................................. 114
9.2.1 Hybrid Graphics.................................................................................... 114
10 Display................................................................................................................... 115
10.1 Display Technologies Support ............................................................................ 115
10.2 Display Configuration ....................................................................................... 115
10.3 Display Features .............................................................................................. 117
10.3.1 General Capabilities .............................................................................. 117
10.3.2 Multiple Display Configurations ............................................................... 117
10.3.3 High-bandwidth Digital Content Protection (HDCP) .................................... 117
10.3.4 DisplayPort* ........................................................................................ 118
10.3.4.1 Multi-Stream Transport (MST)................................................... 118
10.3.5 High-Definition Multimedia Interface (HDMI*)........................................... 120
10.3.6 embedded DisplayPort* (eDP*) .............................................................. 121
10.3.7 MIPI* DSI ........................................................................................... 122
10.3.8 Integrated Audio .................................................................................. 122
10.3.9 DisplayPort* Input (DP-IN) .................................................................... 123
11 Camera/MIPI ........................................................................................................ 125
11.1 Camera Pipe Support ....................................................................................... 125
11.2 MIPI* CSI-2 Camera Interconnect ..................................................................... 125
11.2.1 Camera Control Logic............................................................................ 125
11.2.2 Camera Modules .................................................................................. 125
11.2.3 CSI-2 Lane Configuration ...................................................................... 126
12 Signal Description ................................................................................................. 129
12.1 System Memory Interface ................................................................................ 131
12.1.1 DDR4 Memory Interface ........................................................................ 131
12.1.2 LPDDR4x Memory Interface ................................................................... 134
12.2 PCIe4 Gen4 Interface Signals............................................................................ 135
12.3 Direct Media Interface (DMI) Signals.................................................................. 135
12.4 PCIe16 Gen4 Interface Signals .......................................................................... 136
12.5 Reset and Miscellaneous Signals ........................................................................ 136
12.6 Display Interfaces ........................................................................................... 137
12.6.1 Embedded DisplayPort* (eDP*) Signals ................................................... 137
12.6.2 MIPI DSI* Signals ................................................................................ 138
12.6.3 Digital Display Interface (DDI) Signals .................................................... 138
12.6.4 Digital Display Audio Signals .................................................................. 138
12.7 DP-IN Interface Signals.................................................................................... 139
12.8 USB Type-C Signals ......................................................................................... 139
12.9 MIPI* CSI-2 Interface Signals ........................................................................... 140
12.10 Processor Clocking Signals................................................................................ 141
12.11 Testability Signals ........................................................................................... 141
12.12 Error and Thermal Protection Signals ................................................................. 142
12.13 Processor Power Rails ..................................................................................... 142
12.14 Ground, Reserved and Non-Critical to Function (NCTF) Signals .............................. 143
12.15 Processor Internal Pull-Up / Pull-Down Terminations ............................................ 144
13 Electrical Specifications ......................................................................................... 145
13.1 Processor Power Rails ..................................................................................... 145
13.1.1 Power and Ground Pins ......................................................................... 145
13.1.2 Full Integrated Voltage Regulator (FIVR) ................................................. 145
13.1.3 VCC Voltage Identification (VID) ............................................................. 145
13.2 DC Specifications ............................................................................................ 146
13.2.1 Processor Power Rails DC Specifications .................................................. 146
13.2.1.1 VccIN DC Specifications............................................................ 146
13.2.1.2 VccIN_AUX DC Specifications.................................................... 148
13.2.1.3 VDD2 DC Specifications ........................................................... 150
13.2.1.4 VccST DC Specifications........................................................... 150
13.2.1.5 Vcc1P8A DC Specifications ....................................................... 152
13.2.1.6 DDR4 DC Specifications ........................................................... 152
13.2.1.7 LPDDR4x DC Specifications ...................................................... 154
13.2.1.8 PCI Express* Graphics (PEG) DC Specifications ........................... 155
13.2.1.9 Digital Display Interface (DDI) DC Specifications ......................... 155
13.2.1.10 embedded DisplayPort* (eDP*) DC Specification ......................... 156
13.2.1.11 MIPI* CSI-2 D-Phy Receiver DC Specifications ............................ 156
13.2.1.12 CMOS DC Specifications........................................................... 157
13.2.1.13 GTL and OD DC Specification.................................................... 157
13.2.1.14 PECI DC Characteristics ........................................................... 158
14 Package Mechanical Specifications ........................................................................ 161
14.1 Package Mechanical Attributes .......................................................................... 161
14.2 Package Loading and Die Pressure Specifications ................................................. 161
14.2.1 Package Loading Specifications ............................................................... 161
14.2.2 Die Pressure Specifications..................................................................... 162
14.3 Package Storage Specifications.......................................................................... 163
15 CPU And Device IDs ............................................................................................... 165
15.1 CPUID ............................................................................................................ 165
15.2 PCI Configuration Header.................................................................................. 165
Figures
1-1 11th Generation Intel® Core™ UP3/UP4/H35/UP3-Refresh/H35-Refresh Processor Lines
Platform Diagram ...................................................................................................12
1-2 H Processor Line Platform Diagram ...........................................................................13
2-1 Example for PECI Host-Clients Connection..................................................................23
2-2 Example for PECI EC Connection...............................................................................24
2-3 Device to Domain Mapping Structures .......................................................................28
2-4 Processor Cache Hierarchy .......................................................................................38
2-5 Processor Camera System .......................................................................................46
2-6 Telemetry Aggregator .............................................................................................48
3-1 UP3 and UP4 Processor Lines Power States ................................................................53
3-2 H Processor Line Power States..................................................................................54
3-3 Processor Package and IA Core C-States....................................................................55
3-4 Idle Power Management Breakdown of the Processor IA Cores ......................................57
3-5 Package C-State Entry and Exit ................................................................................60
4-1 Package Power Control ............................................................................................70
4-2 PROCHOT Demotion Signal Description ......................................................................78
5-1 Intel® DDR4 Flex Memory Technology Operations .......................................................91
5-2 DDR4 Interleave (IL) and Non-Interleave (NIL) Modes Mapping ....................................95
6-1 USB-C* Sub-system Block Diagram...........................................................................99
7-1 PCI Express* Related Register Structures in the Processor ......................................... 107
8-1 Example for DMI Lane Reversal Connection .............................................................. 109
10-1 Processor Display Architecture................................................................................ 116
10-2 DisplayPort* Overview .......................................................................................... 118
10-3 HDMI* Overview .................................................................................................. 120
10-4 MIPI* DSI Overview.............................................................................................. 122
10-5 DP-IN Block Diagram ............................................................................................ 124
13-1 Input Device Hysteresis ........................................................................................ 159
Tables
1-1 11th Generation Intel® Core™ Processor Lines ............................................................11
1-2 Operating Systems Support .....................................................................................17
1-3 Terminology...........................................................................................................17
1-4 Special marks ........................................................................................................20
3-1 System States........................................................................................................55
3-2 Integrated Memory Controller (IMC) States ................................................................56
3-3 G, S, and C Interface State Combinations ..................................................................56
3-4 Core C-states .........................................................................................................58
3-5 Package C-States ...................................................................................................60
3-6 Deepest Package C-State Available ...........................................................................63
3-7 TCSS Power State...................................................................................................63
3-8 Package C-States with PCIe* Link States Dependencies ...............................................67
4-1 Configurable TDP Modes.......................................................................................... 72
4-2 TDP Specifications .................................................................................................. 81
4-3 Package Turbo Specifications ................................................................................... 83
4-4 Junction Temperature Specifications ......................................................................... 84
5-1 DDR Support Matrix Table ....................................................................................... 85
5-2 Processor DDR Speed Support ................................................................................. 85
5-4 LPDDR4x Channels Population Rules ......................................................................... 86
5-5 DDR4 Memory Down Channels Population Rules ......................................................... 86
5-3 DDR Technology Support Matrix ............................................................................... 86
5-6 H DDR4 SoDIMM Population Configuration ................................................................. 87
5-7 Supported DDR4 SoDIMM Module Configurations ........................................................ 88
5-8 Supported DDR4 Memory Down Device Configurations ................................................ 88
5-9 Supported LPDDR4x x32 DRAMs Configurations.......................................................... 88
5-10 Supported LPDDR4x x64 DRAMs Configurations......................................................... 89
5-11 DDR4 System Memory Timing Support...................................................................... 89
5-12 LPDDR4x System Memory Timing Support ................................................................. 90
5-13 System Agent Enhanced Speed Steps (SA-GV) and Gear Mode Frequencies ................... 90
5-14 ECC H-Matrix Syndrome Codes ................................................................................ 93
5-15 Interleave (IL) and Non-Interleave (NIL) Modes Pin Mapping........................................ 94
6-1 USB-C* Port Configuration .................................................................................... 100
6-2 USB-C* Lanes Configuration .................................................................................. 101
6-3 USB-C* Non-Supported Lane Configuration ............................................................. 101
6-4 PCIe via USB4 Configuration .................................................................................. 103
7-1 PCI Express* 4 -lane Bifurcation and Lane Reversal Mapping...................................... 104
7-2 PCI Express* 16-lane Bifurcation and Lane Reversal Mapping ..................................... 104
7-3 PCI Express* Maximum Transfer Rates and Theoretical Bandwidth .............................. 106
9-1 Hardware Accelerated Video Decoding..................................................................... 111
9-2 Hardware Accelerated Video Encode ....................................................................... 112
10-1 Display Ports Availability and Link Rate ................................................................... 115
10-2 Display Resolutions and Link Bandwidth for Multi-Stream Transport Calculations........... 119
10-3 DisplayPort Maximum Resolution ............................................................................ 120
10-4 HDMI Maximum Resolution ................................................................................... 121
10-5 Embedded DisplayPort Maximum Resolution ............................................................ 121
10-6 MIPI* DSI Maximum Resolution ............................................................................ 122
10-7 Processor Supported Audio Formats over HDMI and DisplayPort*................................ 123
11-1 CSI-2 Lane Configuration for UP3-Processor Line ...................................................... 126
11-2 CSI-2 Lane Configuration for UP4-Processor Line ...................................................... 126
11-3 CSI-2 Lane Configuration for H-Processor Line ......................................................... 127
12-1 Signal Tables Terminology ..................................................................................... 129
12-2 DDR4 Memory Interface ........................................................................................ 131
12-3 LPDDR4x Memory Interface ................................................................................... 134
12-4 DMI Interface Signals ........................................................................................... 135
12-5 Processor Clocking Signals..................................................................................... 141
12-6 Processor Power Rails Signals ................................................................................ 142
12-7 Processor Pull-up Power Rails Signals...................................................................... 143
12-8 GND, RSVD, and NCTF Signals ............................................................................... 144
13-1 Processor VccIN Active and Idle Mode DC Voltage and Current Specifications ................ 146
13-2 VccIN_AUX Supply DC Voltage and Current Specifications.......................................... 148
13-3 Memory Controller (VDD2) Supply DC Voltage and Current Specifications .................... 150
13-4 Vcc Sustain (VccST) Supply DC Voltage and Current Specifications ............................. 150
13-5 Vcc Sustain Gated (VccSTG) Supply DC Voltage and Current Specifications .................. 151
13-6 Vcc1P8A Supply DC Voltage and Current Specifications ............................................. 152
13-7 DDR4 Signal Group DC Specifications...................................................................... 152
13-8 LPDDR4x Signal Group DC Specifications ................................................................. 154
13-9 ......................................................................................................................... 154
13-10 DSI HS Transmitter DC Specifications..................................................... 155
13-11 DSI LP Transmitter DC Specifications ..................................................... 155
13-12 Digital Display Interface Group DC Specifications (DP/HDMI) ...................... 156
13-13 DP-IN Group DC Specifications ............................................................... 156
13-14 CMOS Signal Group DC Specifications...................................................... 157
13-15 GTL Signal Group and Open Drain Signal Group DC Specifications .............. 157
13-16 PECI DC Electrical Limits........................................................................ 158
13-17 System Reference Clocks DC and AC Specifications ................................... 159
14-1 Package Mechanical Attributes................................................................................ 161
14-2 Package Loading Specifications............................................................................... 162
14-3 Package Loading Specifications............................................................................... 162
15-1 CPUID Format ...................................................................................................... 165
15-2 PCI Configuration Header....................................................................................... 166
15-3 Host Device ID (DID0) .......................................................................................... 166
15-4 Other Device ID UP3/UP4/H35/UP3-Refresh/H35-Refresh........................................... 166
15-5 Other Device IDs (H Processor Line)........................................................................ 167
15-6 Graphics Device ID (DID2)..................................................................................... 168
Revision History
Revision Number        Description        Revision Date
§§
1 Introduction
The following table describes the different 11th Generation Intel® Core™ processor
lines:
Processor Line                          Package    TDP     IA Cores    Graphics       Platform Type
UP4-Processor Line                      BGA1598    9 W     4 / 2       Up to 96 EU    1-Chip
UP3-Processor Line                      BGA1449    28 W    4 / 2       Up to 96 EU    1-Chip
UP3-Refresh-Processor Line              BGA1449    28 W    4 / 2       Up to 96 EU    1-Chip
UP3 Pentium/Celeron Processor Line      BGA1449    15 W    2           48 EU          1-Chip
H-Processor Line                        BGA1787    45 W    8 / 6       Up to 32 EU    2-Chip
Note:
1. Processor line offerings may change.
2. For additional TDP configurations, refer to Chapter 4, “Thermal Management”; an adjustment to the
base TDP is required to preserve the base frequency associated with the sustained long-term thermal
capability.
3. The TDP workload does not reflect I/O connectivity cases such as Thunderbolt™.
Figure 1-1. 11th Generation Intel® Core™ UP3/UP4/H35/UP3-Refresh/H35-Refresh
Processor Lines Platform Diagram
Figure 1-2. H Processor Line Platform Diagram
Not all processor interfaces and features are present in all processor lines. The
presence of the various interfaces and features is indicated within the relevant
sections and tables.
Note: Throughout this document, the 11th Generation Intel® Core™ processor may be
referred to as processor, and the Intel® 500 Series Chipset Family On-Package Platform
Controller Hub (LP) may be referred to as PCH.
Note: Powered down refers to the state in which all processor power rails are off.
• A 45.5 x 25 mm, Z=1.185 +/-0.096 mm (the height from the bottom of the BGA to
the top of the die), BGA package for 11th Generation Intel® Core™ UP3-Processor
Line.
• A 50 x 26.5 mm, Z= 1.325 +/- 0.103 mm (the height from the bottom of the BGA
to the top of the die), BGA package for H-Processor Line.
• Intel® GNA 2.0 (Intel® GMM and Neural Network Accelerator)
• Cache Line Write Back (CLWB)
• Intel® Image Processing Unit (Intel® IPU)
• Intel® Processor Trace
• Platform CrashLog
• Integrated Reference Clock PLL
Note: The availability of the features above may vary between different processor SKUs.
DirectX* extensions:
• PixelSync, Instant Access, Conservative Rasterization, Render Target Reads,
Floating-Point De-norms, Shared Virtual Memory, Floating-Point Atomics, MSAA
Sample-Indexing, Fast Sampling (Coarse LOD), Quilted Textures, GPU Enqueue
Kernels*, GPU Signal Processing Unit. Other enhancements include color
compression.
Refer to Section 3.2, “Processor IA Core Power Management” for more information.
1.4.3 Memory Controller Power Management
• Disabling Unused System Memory Outputs
• DRAM Power Management and Initialization
• Initialization Role of CKE
• Conditional Self-Refresh
• Dynamic Power Down
• DRAM I/O Power Management
• DDR Electrical Power Gating (EPG)
• Power Training
Refer to Section 5.2, “Integrated Memory Controller (IMC) Power Management” for
more information.
• Memory Thermal Throttling
• External Thermal Sensor (TS-on-DIMM and TS-on-Board)
• Render Thermal Throttling
• Fan Speed Control with DTS
• Intel® Turbo Boost Technology 2.0 Power Control
• Intel® Dynamic Tuning - Intel® Dynamic Platform and Thermal Framework (DPTF).
Table 1-3. Terminology (Sheet 2 of 4)
Term           Description
DDC            Digital Display Channel (refer to the PCH Datasheet (#631119) for more details)
D0ix-states    USB controller power states ranging from D0i0 to D0i3, where D0i0 is fully
               powered on and D0i3 is primarily powered off. Controlled by SW.
DP*            DisplayPort*
Intel® VT-d    Intel® Virtualization Technology (Intel® VT) for Directed I/O. Intel® VT-d is a
               hardware assist, under system software (Virtual Machine Manager or OS) control,
               for enabling I/O device virtualization. Intel® VT-d also brings robust security by
               providing protection from errant DMAs by using DMA remapping, a key feature of
               Intel® VT-d.
Table 1-3. Terminology (Sheet 3 of 4)
Term           Description
LPDDR4x        Low Power Double Data Rate SDRAM memory technology; the /x suffix denotes
               additional power savings.
LPM            Low Power Mode. The LPM frequency is less than or equal to the LFM frequency.
               The LPM TDP is lower than the LFM TDP, as the LPM configuration limits the
               processor to single-thread operation.
LTR            The Latency Tolerance Reporting (LTR) mechanism enables Endpoints to report
               their service latency requirements for Memory Reads and Writes to the Root
               Complex, so that power management policies for central platform resources (such
               as main memory, RC internal interconnects, and snoop resources) can be
               implemented to consider Endpoint service requirements.
MCP            Multi-Chip Package. Includes the processor and the PCH. In some SKUs, it might
               have additional On-Package Cache.
MFM            Minimum Frequency Mode. MFM is the minimum ratio supported by the processor
               and can be read from MSR CEh [55:48] (see the sketch following Table 1-4). For
               more information, refer to the appropriate BIOS specification.
PCH            Platform Controller Hub. The chipset with centralized platform capabilities,
               including the main I/O interfaces along with display connectivity, audio features,
               power management, manageability, security, and storage features. The PCH may
               also be referred to as “chipset”.
Processor Core The term “processor core” refers to the Si die itself, which can contain multiple
               execution cores. Each execution core has an instruction cache, data cache, and
               256-KB L2 cache. All execution cores share the LLC.
Rank           A unit of DRAM corresponding to four to eight devices in parallel, ignoring ECC.
               These devices are usually, but not always, mounted on a single side of a SoDIMM.
Table 1-3. Terminology (Sheet 4 of 4)
Term           Description
USB-R          The type of storage redirection used from AMT 11.0 onward. In contrast to IDE-R,
               which presents remote floppy or CD drives as though they were integrated into the
               host machine, USB-R presents remote drives as though they were connected via a
               USB port.
Table 1-4. Special Marks
Mark           Description
[]             Brackets sometimes follow a ball, pin, register, or bit name. These brackets enclose
               a range of numbers; for example, TCP[2:0]_TXRX_P[1:0] may refer to four USB-C*
               pins, or EAX[7:0] may indicate a range that is 8 bits in length.
0x000          Hexadecimal numbers are identified with an x in the number. All numbers are
               decimal (base 10) unless otherwise specified. Non-obvious binary numbers have a
               ‘b’ appended at the end of the number; for example, 0101b.
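The MFM entry in Table 1-3 notes that the minimum ratio can be read from MSR CEh [55:48]. The following is an illustrative sketch only, not part of this datasheet: it assumes the Linux msr driver (msr kernel module loaded, root privileges) and the 100 MHz base clock of these processors.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Read MSR 0xCE via the Linux msr driver and extract the Minimum Frequency
 * Mode ratio from bits [55:48], per the MFM terminology entry above. */
int main(void)
{
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

    uint64_t val;
    if (pread(fd, &val, sizeof(val), 0xCE) != sizeof(val)) {
        perror("pread");
        close(fd);
        return 1;
    }
    close(fd);

    unsigned mfm_ratio = (unsigned)((val >> 48) & 0xFF);
    /* Assumes the ratio multiplies a 100 MHz base clock. */
    printf("MFM ratio: %u (= %u MHz)\n", mfm_ratio, mfm_ratio * 100);
    return 0;
}
```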
1.10 Related Documents
Document                                                                        Document Number
Intel® 500 Series Chipset Family On-Package Platform Controller Hub Datasheet,
Volume 1 of 2                                                                   631119
Intel® 500 Series Chipset Family On-Package Platform Controller Hub Datasheet,
Volume 2 of 2                                                                   631120
11th Gen Intel® Core™ Processor Family for IoT Platforms - Datasheet Addendum   632133
Intel® 500 Series Chipset Family On-Package Platform Controller Hub (PCH)
Specification Update                                                            630747
§§
2 Technologies
The implementation of the features may vary between the processor SKUs.
Details on the different technologies of Intel processors and other relevant external
notes are located at the Intel technology web site: http://www.intel.com/technology/
Note: The last section of this chapter is dedicated to deprecated technologies. These
technologies are not supported in this processor but were supported in previous
generations.
2.1 Platform Environmental Control Interface (PECI)
Note: The PECI interface can be implemented using a single bi-directional I/O pin serial
interface or over the eSPI bus.
The idle state on the bus is ‘0’ (logical low, at a near-zero voltage level).
Figure 2-1. Example for PECI Host-Clients Connection
[Schematic: the PECI host drives the single-wire bus through nX-strength devices (Q1,
Q3) referenced to VCCST; each client drives at 1X strength (Q2); per-node bus
capacitance CPECI is kept below 10 pF; additional PECI clients share the same wire.]
Figure 2-2. Example for PECI EC Connection
[Schematic: the processor PECI pin connects to the embedded controller input through
a 43 Ohm series resistor; both ends are referenced to VCCST, and the embedded
controller compares the input against a VREF_CPU reference.]
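To make the bus traffic concrete, below is a minimal host-side sketch of a PECI GetTemp() transaction in C. The peci_xfer() primitive is hypothetical (stubbed here with a canned response); the command code 0x01, the default client address 0x30, and the signed 1/64 degree Celsius fixed-point response format follow the public PECI command definitions, but verify against the PECI chapter before relying on them.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical host-side transfer primitive. A real platform would route this
 * through its PECI host controller (for example, an EC or BMC driver stack),
 * which also handles address/write-length/read-length framing and the FCS. */
static int peci_xfer(uint8_t addr, const uint8_t *wr, int wr_len,
                     uint8_t *rd, int rd_len)
{
    (void)addr; (void)wr; (void)wr_len; (void)rd_len;
    /* Stub: fake a die temperature 20.5 degrees below TCC activation.
     * -20.5 C * 64 = -1312, returned LSB first. */
    int16_t raw = -1312;
    rd[0] = (uint8_t)(raw & 0xFF);
    rd[1] = (uint8_t)((raw >> 8) & 0xFF);
    return 0;
}

/* GetTemp() (command code 0x01) returns the die temperature as a negative
 * offset from the TCC activation point, encoded as a signed fixed-point
 * value in 1/64 degree Celsius units. */
static int peci_get_temp(uint8_t client_addr, double *offset_c)
{
    const uint8_t cmd = 0x01;
    uint8_t resp[2];

    if (peci_xfer(client_addr, &cmd, 1, resp, sizeof(resp)) != 0)
        return -1;

    int16_t raw = (int16_t)(resp[0] | ((uint16_t)resp[1] << 8));
    *offset_c = raw / 64.0;
    return 0;
}

int main(void)
{
    double offset;
    if (peci_get_temp(0x30, &offset) == 0)   /* 0x30: default CPU address */
        printf("Temperature offset from TCC activation: %.2f C\n", offset);
    return 0;
}
```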
2.2 Intel® Virtualization Technology (Intel® VT)
Intel® Virtualization Technology (Intel® VT) for Intel® 64 and Intel® Architecture
(Intel® VT-x) adds hardware support in the processor to improve virtualization
performance and robustness. Intel® Virtualization Technology for Directed I/O (Intel®
VT-d) extends Intel® VT-x by adding hardware-assisted support to improve I/O device
virtualization performance.
Intel® VT-x specifications and functional descriptions are included in the Intel® 64 and
IA-32 Architectures Software Developer’s Manual, Volume 3, available at:
http://www.intel.com/products/processor/manuals
The Intel® VT-d specification and other VT documents can be referenced at:
http://www.intel.com/content/www/us/en/virtualization/virtualization-technology/
intel-virtualization-technology.html
• Robust: VMMs no longer need to use para-virtualization or binary translation. This
means that VMMs will be able to run off-the-shelf operating systems and
applications without any special steps.
• Enhanced: Intel® VT enables VMMs to run 64-bit guest operating systems on IA
x86 processors.
• More Reliable: Due to the hardware support, VMMs can now be smaller, less
complex, and more efficient. This improves reliability and availability and reduces
the potential for software conflicts.
• More Secure: The use of hardware transitions in the VMM strengthens the
isolation of VMs and further prevents corruption of one VM from affecting others on
the same system.
The processor supports the following new Intel® VT-x features:
• Mode-based Execute Control for EPT (MBEC)
— A mode of EPT operation which enables different controls for the executability
of a Guest Physical Address (GPA) based on the guest-specified mode (user/
supervisor) of the linear address translating to the GPA. When the mode is
enabled, the executability of a GPA is defined by two bits in the EPT entry: one
bit for accesses to user pages and the other for accesses to supervisor pages.
— This mode requires changes in the VMCS and EPT entries. The VMCS includes a
“Mode-based execute control for EPT” bit which is used to enable/disable the
mode. An additional bit in the EPT entry is defined as “execute access for user-
mode linear addresses”; the original EPT execute-access bit is considered as
“execute access for supervisor-mode linear addresses”. If the “mode-based
execute control for EPT” VM-execution control is disabled, the additional bit is
ignored and the system works with one bit, that is, the original bit, for execute
control of both user and supervisor pages (see the EPT-entry sketch at the end
of this feature list).
— Behavioral changes span three areas:
• Access to GPA - If the “Mode-based execute control for EPT”
VM-execution control is 1, treatment of guest-physical accesses by
instruction fetches depends on the linear address from which an
instruction is being fetched:
a. If the translation of the linear address specifies user mode (the U/S bit
was set in every paging-structure entry used to translate the linear
address), the resulting guest-physical address is executable under EPT
only if the XU bit (at position 10) is set in every EPT paging-structure
entry used to translate the guest-physical address.
• VM Exit - The exit qualification due to an EPT violation clearly reports
whether the violation was due to a user-mode access or a supervisor-
mode access.
— Capability Querying: IA32_VMX_PROCBASED_CTLS2 has a bit to indicate the
capability; RDMSR can be used to read it and query whether the processor
supports the capability or not.
• Extended Page Table (EPT) Accessed and Dirty Bits
— EPT A/D bits enable VMMs to efficiently implement memory management and
page classification algorithms to optimize VM memory operations, such as de-
fragmentation, paging, live migration, and check-pointing. Without hardware
support for EPT A/D bits, VMMs may need to emulate A/D bits by marking EPT
paging-structures as not-present or read-only, and incur the overhead of EPT
page-fault VM exits and associated software processing.
• EPTP (EPT Pointer) Switching
— EPTP switching is a specific VM function. EPTP switching allows guest software
(in VMX non-root operation, supported by EPT) to request a different EPT
paging-structure hierarchy; that is, software in VMX non-root operation can
request a change of EPTP without a VM exit. The software can choose among a
set of potential EPTP values determined in advance by software in VMX root
operation (see the VMFUNC sketch following this list).
• Pause Loop Exiting
— Supports VMM schedulers seeking to determine when a virtual processor of a
multiprocessor virtual machine is not performing useful work. This situation
may occur when not all virtual processors of the virtual machine are currently
scheduled and when the virtual processor in question is in a loop involving the
PAUSE instruction. The feature allows detection of such loops and is thus
called PAUSE-loop exiting.
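Two of the features above lend themselves to short sketches. First, the MBEC encoding: the snippet below is illustrative and not taken from any Intel header; the read/write/supervisor-execute bit positions (0, 1, 2) follow the EPT entry format in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, and the XU bit at position 10 is the one cited in the MBEC description above.

```c
#include <stdint.h>

/* EPT entry permission bits (illustrative names). Bit 10 acts as the
 * user-mode execute (XU) bit only while the "mode-based execute control
 * for EPT" VM-execution control is enabled. */
#define EPT_READ        (1ULL << 0)
#define EPT_WRITE       (1ULL << 1)
#define EPT_EXEC_SUPER  (1ULL << 2)   /* execute for supervisor-mode linear addresses */
#define EPT_EXEC_USER   (1ULL << 10)  /* execute for user-mode linear addresses (XU)  */

/* Make a guest page executable only from user mode: with MBEC enabled,
 * supervisor-mode instruction fetches from this GPA cause EPT violations
 * while user-mode fetches are permitted. */
static inline uint64_t ept_user_exec_only(uint64_t entry)
{
    entry |= EPT_READ | EPT_EXEC_USER;
    entry &= ~(uint64_t)EPT_EXEC_SUPER;
    return entry;
}
```

Second, EPTP switching: a guest requests it with VMFUNC leaf 0, with EAX = 0 and ECX = the index into the EPTP list. A minimal guest-side sketch (GCC/Clang inline assembly), assuming the VMM has already populated the EPTP list and enabled the relevant VM-execution controls:

```c
#include <stdint.h>

/* Switch to the EPT hierarchy at 'index' in the VMM-prepared EPTP list.
 * Executed in VMX non-root operation; no VM exit occurs on success. */
static inline void eptp_switch(uint32_t index)
{
    __asm__ __volatile__("vmfunc"
                         :
                         : "a"(0u), "c"(index)  /* EAX = 0 selects EPTP switching */
                         : "memory");
}
```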
2.2.2 Intel® Virtualization Technology (Intel® VT) for Directed
I/O (Intel® VT-d)
Intel® VT-d Objectives
The key Intel® VT-d objectives are domain-based isolation and hardware-based
virtualization. A domain can be abstractly defined as an isolated environment in a
platform to which a subset of host physical memory is allocated. Intel® VT-d provides
accelerated I/O performance for a virtualization platform and provides software with
the following capabilities:
• I/O Device Assignment and Security: for flexibly assigning I/O devices to VMs
and extending the protection and isolation properties of VMs for I/O operations.
• DMA Remapping: for supporting independent address translations for Direct
Memory Accesses (DMA) from devices.
• Interrupt Remapping: for supporting isolation and routing of interrupts from
devices and external interrupt controllers to appropriate VMs.
• Reliability: for recording and reporting to system software DMA and interrupt
errors that may otherwise corrupt memory or impact VM isolation.
Figure 2-3. Device to Domain Mapping Structures
Intel® VT-d functionality, often referred to as an Intel® VT-d Engine, has typically been
implemented at or near a PCI Express* host bridge component of a computer system.
This might be in a chipset component or in the PCI Express functionality of a processor
with integrated I/O. When such a VT-d engine receives a PCI Express transaction
from a PCI Express bus, it uses the B/D/F number associated with the transaction to
search for an Intel® VT-d translation table; in doing so, it uses the B/D/F number to
traverse the data structure shown in the figure above. If it finds a valid Intel® VT-d
table in this data structure, it uses that table to translate the address provided on the
PCI Express bus. If it does not find a valid translation table for a given translation, an
Intel® VT-d fault results. If Intel® VT-d translation is required, the Intel® VT-d
engine performs an N-level table walk.
For more information, refer to the Intel® Virtualization Technology for Directed I/O
Architecture Specification at http://www.intel.com/content/dam/www/public/us/en/
documents/product-specifications/vt-directed-io-spec.pdf
Intel® VT-d Key Features
The processor supports the following new Intel® VT-d features:
• 4-level Intel® VT-d Page walk – both default Intel® VT-d engine, as well as the
Processor Graphics VT-d engine are upgraded to support 4-level Intel® VT-d tables
(adjusted guest address width of 48 bits)
• Intel® VT-d super-page – support of Intel® VT-d super-page (2 MB, 1 GB) for
default Intel® VT-d engine (that covers all devices except IGD)
The IGD Intel® VT-d engine does not support super-page; the BIOS should disable
super-page in the default Intel® VT-d engine when iGfx is enabled.
When APIC Virtualization is enabled, the processor emulates many accesses to the
APIC, tracks the state of the virtual APIC, and delivers virtual interrupts — all in VMX
non-root operation without a VM exit.
The following are the VM-execution controls relevant to APIC Virtualization and virtual
interrupts:
• Virtual-interrupt Delivery. This control enables the evaluation and delivery of
pending virtual interrupts. It also enables the emulation of writes (memory-
mapped or MSR-based, as enabled) to the APIC registers that control interrupt
prioritization.
• Use TPR Shadow. This control enables emulation of accesses to the APIC’s task-
priority register (TPR) via CR8 and, if enabled, via the memory-mapped or MSR-
based interfaces.
• Virtualize APIC Accesses. This control enables virtualization of memory-mapped
accesses to the APIC by causing VM exits on accesses to a VMM-specified APIC-
access page. Some of the other controls, if set, may cause some of these accesses
to be emulated rather than causing VM exits.
• Virtualize x2APIC Mode. This control enables virtualization of MSR-based
accesses to the APIC.
• APIC-register Virtualization. This control allows memory-mapped and MSR-
based reads of most APIC registers (as enabled) by satisfying them from the
virtual-APIC page. It directs memory-mapped writes to the APIC-access page to
the virtual-APIC page, following them by VM exits for VMM emulation.
• Process Posted Interrupts. This control allows software to post virtual interrupts
in a data structure and send a notification to another logical processor; upon
receipt of the notification, the target processor will process the posted interrupts by
copying them into the virtual-APIC page.
Note: Intel® APIC Virtualization Technology may not be available on all SKUs.
http://www.intel.com/products/processor/manuals
The Intel® TXT platform helps to provide the authenticity of the controlling
environment such that those wishing to rely on the platform can make an appropriate
trust decision. The Intel® TXT platform determines the identity of the controlling
environment by accurately measuring and verifying the controlling software.
Another aspect of the trust decision is the ability of the platform to resist attempts to
change the controlling environment. The Intel® TXT platform will resist attempts by
software processes to change the controlling environment or bypass the bounds set by
the controlling environment.
Intel® TXT is a set of extensions designed to provide a measured and controlled launch
of system software that will then establish a protected environment for itself and any
additional software that it may execute.
These extensions enhance two areas:
• The launching of the Measured Launched Environment (MLE).
• The protection of the MLE from potential corruption.
The enhanced platform provides these launch and control interfaces using Safer Mode
Extensions (SMX).
Intel® AES-NI consists of six Intel® SSE instructions. Four instructions, AESENC,
AESENCLAST, AESDEC, and AESDECLAST, facilitate high-performance AES encryption
and decryption. The other two, AESIMC and AESKEYGENASSIST, support the AES key
expansion procedure. Together, these instructions provide full hardware support for
AES, offering security, high performance, and flexibility.
This generation of the processor has increased the performance of the Intel® AES-NI
significantly compared to previous products.
The Intel® AES-NI specifications and functional descriptions are included in http://
www.intel.com/products/processor/manuals
Some possible usages of the RDRAND instruction include cryptographic key generation
as used in a variety of applications, including communication, digital signatures, secure
storage, and so on.
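A minimal, informative sketch of such a usage follows (compile with -mrdrnd); the retry loop reflects Intel's published guidance, since RDRAND can transiently fail with CF=0.

    /* Informative sketch: fill a 256-bit key buffer from RDRAND. */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long long word;
        uint64_t key[4];
        for (int i = 0; i < 4; i++) {
            int tries = 10;
            while (!_rdrand64_step(&word) && --tries)
                ;                      /* retry on transient underflow */
            if (!tries)
                return 1;              /* hardware entropy unavailable */
            key[i] = word;
        }
        printf("key[0]=%016llx\n", (unsigned long long)key[0]);
        return 0;
    }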
2.3.6 Boot Guard Technology
Boot Guard technology is a part of boot integrity protection technology. Boot Guard can
help protect the platform boot integrity by preventing the execution of unauthorized
boot blocks. With Boot Guard, platform manufacturers can create boot policies such
that invocation of an unauthorized (or untrusted) boot block will trigger the platform
protection per the manufacturer's defined policy.
With verification based in the hardware, Boot Guard extends the trust boundary of the
platform boot process down to the hardware level.
Benefits of this protection are that Boot Guard can help maintain platform integrity by
preventing re-purposing of the manufacturer’s hardware to run an unauthorized
software stack.
Note: Boot Guard availability may vary between the different SKUs.
http://www.intel.com/products/processor/manuals
Software Guard Extensions (SGX) architecture provides the capability to create isolated
execution environments named Enclaves that operate from a protected region of
memory.
Enclave code can be entered only via special new ISA instructions that jump into
predefined per-Enclave addresses. Data within an Enclave can only be accessed from
that same Enclave's code.
These security statements hold under all privilege levels, including supervisor mode
(ring 0), System Management Mode (SMM), and other Enclaves.
Intel® SGX features a memory encryption engine that both encrypts Enclave memory
and protects it from corruption and replay attacks.
Intel® SGX benefits over alternative Trusted Execution Environments (TEEs) are:
• Enclaves are written in C/C++ using industry-standard build tools.
• High processing power, as they run on the processor.
• Large amounts of memory are available, as well as non-volatile storage (such as disk
drives).
• Simple to maintain and debug using standard IDEs (Integrated Development
Environments).
• Scalable to a larger number of applications and vendors running concurrently.
• Dynamic memory allocation:
— Heap and thread-pool management
— On-demand stack growth
— Dynamic module/library loading
— Concurrency management in applications such as garbage collectors
— Write-protection of EPC pages (Enclave Page Cache - Enclave protected
memory) after initial relocation
— On-demand creation of code pages (JIT, encrypted code modules)
• Allow Launch Enclaves other than the one currently provided by Intel
• Maximum protected memory size has increased to 256 MB.
— Supports 64, 128 and 256 MB protected memory sizes.
• VMM Over-subscription. The VMM over-subscription mechanism allows a VMM to
make more resources available to virtual machines than what is actually available
on the platform. The initial Intel® SGX architecture was optimized for EPC
partitioning/ballooning model for VMMs, where a VMM assigns a static EPC partition
to each SGX guest OS without over-subscription and guests are free to manage
(i.e. oversubscribe) their own EPC partitions. The Intel® SGX EPC Over subscription
Extensions architecture provides a set of new instructions allowing VMMs to
efficiently oversubscribe EPC memory for its guest operating systems.
https://software.intel.com/en-us/sgx.
http://www.intel.com/products/processor/manuals
2.3.10 Intel® Secure Hash Algorithm Extensions (Intel® SHA
Extensions)
The Secure Hash Algorithm (SHA) is one of the most commonly employed
cryptographic algorithms. Primary usages of SHA include data integrity, message
authentication, digital signatures, and data de-duplication. As the pervasive use of
security solutions continues to grow, SHA can be seen in more applications now than
ever. The Intel® SHA Extensions are designed to improve the performance of these
compute-intensive algorithms on Intel® architecture-based processors.
The Intel® SHA Extensions are a family of seven instructions based on the Intel®
Streaming SIMD Extensions (Intel® SSE) that are used together to accelerate the
performance of processing SHA-1 and SHA-256 on Intel architecture-based processors.
Given the growing importance of SHA in our everyday computing devices, the new
instructions are designed to provide a needed boost of performance to hashing a single
buffer of data. The performance benefits will not only help improve responsiveness and
lower power consumption for a given application, but they may also enable developers
to adopt SHA in new applications to protect data while delivering to their user
experience goals. The instructions are defined in a way that simplifies their mapping
into the algorithm processing flow of most software libraries, thus enabling easier
development.
http://software.intel.com/en-us/articles/intel-sha-extensions
If the OS opts in to UMIP, the following instructions are enforced to run only in
supervisor mode (an informative user-mode sketch follows the list):
• SGDT - Store the GDTR register value
• SIDT - Store the IDTR register value
• SLDT - Store the LDTR register value
• SMSW - Store Machine Status Word
• STR - Store the TR register value
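The following informative user-mode sketch illustrates the effect: without UMIP, the SGDT below simply stores the 10-byte GDTR image; with CR4.UMIP set by the OS, it raises #GP (note that some operating systems may emulate these instructions for legacy software).

    /* Informative sketch: user-mode SGDT, blocked when UMIP is active. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        struct __attribute__((packed)) {
            uint16_t limit;
            uint64_t base;
        } gdtr;

        __asm__ volatile("sgdt %0" : "=m"(gdtr));  /* #GP under UMIP */
        printf("GDTR base=%llx limit=%x\n",
               (unsigned long long)gdtr.base, gdtr.limit);
        return 0;
    }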
http://www.intel.com/products/processor/manuals
TME encrypts memory accesses using the AES XTS algorithm with 128-bit keys. The
encryption key used for memory encryption is generated using a hardened random
number generator in the processor and is not exposed to software.
Data in memory and on the external memory buses is encrypted; it exists in plaintext
only inside the processor. This allows existing software to operate without any
modification while memory is protected using TME. TME does not protect memory from
modifications.
TME allows the BIOS to specify a physical address range to remain unencrypted.
Software running on a TME enabled system has full visibility into all portions of memory
that are configured to be unencrypted by reading a configuration register in the
processor.
https://software.intel.com/sites/default/files/managed/a5/16/Multi-Key-Total-Memory-Encryption-Spec.pdf
CET provides two components to defend against ROP/JOP-style control-flow
subversion attacks: a shadow stack and indirect branch tracking.
The shadow stack is protected from tampering through the page-table protections such
that regular store instructions cannot modify its contents. To provide this protection,
the page-table protections are extended with an additional attribute that marks pages
as “Shadow Stack” pages. When shadow stacks are enabled, control transfer
instructions/flows such as near call, far call, and calls to interrupt/exception handlers
store their return addresses to the shadow stack. The RET instruction pops the return
address from both stacks and compares them. If the return addresses from the two
stacks do not match, the processor signals a control protection exception (#CP). Stores
from instructions such as MOV, XSAVE, and so on are not allowed to the shadow stack.
The processor implements a state machine that tracks indirect JMP and CALL
instructions. When one of these instructions is seen, the state machine moves from the
IDLE to the WAIT_FOR_ENDBRANCH state. In the WAIT_FOR_ENDBRANCH state, the
next instruction in the program stream must be an ENDBRANCH. If an ENDBRANCH is
not seen, the processor causes a control protection fault (#CP); otherwise the state
machine moves back to the IDLE state.
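An informative sketch of a legal indirect-branch target follows. Compilers emit ENDBRANCH automatically under options such as -fcf-protection; the instruction executes as a NOP on parts without CET, and the inline form here is purely illustrative.

    /* Informative sketch: an indirect-call target beginning with
     * ENDBRANCH satisfies the WAIT_FOR_ENDBRANCH state described above. */
    void target(void)
    {
        __asm__ volatile("endbr64");   /* legal landing pad for IBT */
    }

    void dispatch(void (*fp)(void))
    {
        fp();   /* indirect CALL: next instruction must be ENDBRANCH */
    }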
https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf
Software can wrap its own key via the ENCODEKEY instruction and receive a handle.
The handle is used with the AES*KL instructions to perform encrypt and decrypt
operations. Once a handle is obtained, the software can delete the original key from
memory.
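An illustrative sketch of this flow is shown below. It assumes a toolchain exposing the Key Locker intrinsics in immintrin.h (_mm_encodekey128_u32 and _mm_aesenc128kl_u8, compiled with -mkl) and an OS/BIOS that has activated the feature; the key value is a placeholder.

    /* Illustrative Key Locker flow: wrap an AES-128 key into a handle,
     * erase the raw key, then encrypt through the handle. */
    #include <immintrin.h>

    int encrypt_block(__m128i *out, __m128i plaintext)
    {
        __m128i key = _mm_set1_epi8(0x42);   /* placeholder raw key */
        unsigned char handle[48];            /* 384-bit handle for AES-128 */

        _mm_encodekey128_u32(0, key, handle);  /* wrap the key */
        key = _mm_setzero_si128();             /* raw key can now be erased */

        /* The intrinsic returns ZF: nonzero means the handle was
         * rejected (for example, corrupted or revoked); 0 is success. */
        return _mm_aesenc128kl_u8(out, plaintext, handle);
    }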
Supervisor/user paging on the smaller Ring 0 portion will enforce access policy for all
the ring 3 code with regard to the SMM state save, MSR registers, IO ports and other
registers.
The Ring 0 portion can perform save/restore of register context to allow the Ring 3
section to make use of those registers without having access to the OS context or the
ability to modify the OS context.
The Ring 0 portion is signed and provided by Intel. This portion is attested by the
processor.
2.4 Power and Performance Technologies
2.4.1 Intel® Smart Cache Technology
The Intel® Smart Cache Technology is a shared Last Level Cache (LLC).
• The LLC is non-inclusive.
• The LLC may also be referred to as a 3rd level cache.
• The LLC is shared between all IA cores as well as the Processor Graphics.
• The 1st and 2nd level caches are not shared between physical cores and each
physical core has a separate set of caches.
• The size of the LLC is SKU specific with a maximum of 3 MB per physical core and is
a 12-way associative cache.
The 2nd level cache holds both data and instructions. It is also referred to as mid-level
cache or MLC.
The processor 2nd level cache size is 1.25 MB and is a 20-way non-inclusive associative
cache.
Figure 2-4. Processor Cache Hierarchy
Notes:
1. L1 Data cache (DCU) - 48 KB (per core)
2. L1 Instruction cache (IFU) - 32 KB (per core)
3. MLC - Mid Level Cache - 1.25 MB (per core)
To enable Intel® Turbo Boost Max Technology 3.0 (ITBMT 3.0), the processor exposes
individual core capabilities, including diverse maximum turbo frequencies.
An operating system that allows for varied per core frequency capability can then
maximize power savings and performance usage by assigning tasks to the faster cores,
especially on low core count workloads.
Processors enabled with these capabilities can also allow software (most commonly a
driver) to override the maximum per-core Turbo frequency limit and notify the
operating system via an interrupt mechanism.
For more information on the Intel® Turbo Boost Max 3.0 Technology, refer to http://
www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-
boost-max-technology.html
Intel® Turbo Boost Max 3.0 Technology is only supported by H/H35 processor lines.
This enhancement is most beneficial for high-interrupt scenarios like Gigabit LAN,
WLAN peripherals, and so on.
2.4.6 Intel® Turbo Boost Technology 2.0
The Intel® Turbo Boost Technology 2.0 allows the processor IA core/processor graphics
core to opportunistically and automatically run faster than the processor IA core base
frequency/processor graphics base frequency if it is operating below power,
temperature, and current limits. The Intel® Turbo Boost Technology 2.0 feature is
designed to increase the performance of both multi-threaded and single-threaded
workloads.
Compared with previous generation products, Intel® Turbo Boost Technology 2.0 will
increase the ratio of application power towards TDP and also allows power to rise
above TDP, as high as PL2, for short periods of time. Thus, thermal solutions and
platform cooling designed to less than the thermal design guidance might experience
thermal and performance issues, since more applications will tend to run at the
maximum power limit for significant periods of time.
Note: Intel® Turbo Boost Technology 2.0 may not be available on all SKUs.
Any of these factors can affect the maximum frequency for a given workload. If the
power, current, or thermal limit is reached, the processor will automatically reduce the
frequency to stay within its TDP limit. Turbo processor frequencies are only active if the
operating system is requesting the P0 state. For more information on P-states and
C-states, refer to the Power Management chapter.
2.4.7 Enhanced Intel SpeedStep® Technology
Enhanced Intel SpeedStep® Technology enables the OS to control and select P-states.
The following are the key features of Enhanced Intel SpeedStep® Technology:
• Multiple frequencies and voltage points for optimal performance and power
efficiency. These operating points are known as P-states.
• Frequency selection is software controlled by writing to processor MSRs. The
voltage is optimized based on the selected frequency and the number of active
processor IA cores.
— Once the voltage is established, the PLL locks on to the target frequency.
— All active processor IA cores share the same frequency and voltage. In a multi-
core processor, the highest frequency P-state requested among all active IA
cores is selected.
— Software-requested transitions are accepted at any time. If a previous
transition is in progress, the new transition is deferred until the previous
transition is completed.
• The processor controls voltage ramp rates internally to ensure glitch-free
transitions.
Note: Because there is low transition latency between P-states, a significant number of
transitions per second are possible.
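An informative sketch of this MSR interface follows, using the Linux msr driver (root required); in practice only the OS power manager performs such writes.

    /* Informative sketch: read the resolved P-state from
     * IA32_PERF_STATUS (0x198) and re-request it via IA32_PERF_CTL
     * (0x199). */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint64_t status, ctl;
        int fd = open("/dev/cpu/0/msr", O_RDWR);
        if (fd < 0 || pread(fd, &status, 8, 0x198) != 8) {
            perror("msr");
            return 1;
        }
        ctl = status & 0xffffULL;        /* current operating point */
        pwrite(fd, &ctl, 8, 0x199);      /* request it as the target */
        printf("performance state field: 0x%04llx\n",
               (unsigned long long)ctl);
        close(fd);
        return 0;
    }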
Intel® Advanced Vector Extensions (Intel® AVX) are designed to achieve higher
throughput for certain integer and floating-point operations. Due to varying processor
power characteristics, utilizing AVX instructions may cause a) parts to operate below
the base frequency, and b) some parts with Intel® Turbo Boost Technology 2.0 to not
achieve any or maximum turbo frequencies. Performance varies depending on
hardware, software, and system configuration; consult your system manufacturer for
more information.
Intel® Advanced Vector Extensions refers to Intel® AVX, Intel® AVX2 or Intel® AVX-
512.
Note: Intel® AVX and AVX2 Technologies may not be available on all SKUs.
Intel® AVX-512 instructions are important because they open up higher performance
capabilities for the most demanding computational tasks. Intel® AVX-512 instructions
offer the highest degree of compiler support by including an unprecedented level of
richness in the design of the instruction capabilities.
Intel® AVX-512 features include 32 vector registers, each 512 bits wide, and eight
dedicated mask registers. Intel® AVX-512 is a flexible instruction set that includes
support for broadcast, embedded masking to enable predication, embedded floating-
point rounding control, embedded floating-point fault suppression, scatter instructions,
high-speed math instructions, and compact representation of large displacement
values.
Intel® AVX-512 offers a level of compatibility with Intel® AVX which is stronger than
prior transitions to new widths for SIMD operations. Unlike Intel® SSE and Intel® AVX
which cannot be mixed without performance penalties, the mixing of Intel® AVX and
Intel® AVX-512 instructions is supported without penalty. Intel® AVX registers YMM0-
YMM15 map into Intel® AVX-512 registers ZMM0-ZMM15 (in x86-64 mode), very much
like Intel® SSE registers map into Intel® AVX registers. Therefore, in processors with
Intel® AVX-512 support, Intel® AVX and Intel® AVX2 instructions operate on the lower
128 or 256 bits of the first 16 ZMM registers.
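As a brief informative example of the embedded masking described above (compile with -mavx512f), the write-mask below predicates the addition so that masked-off lanes retain the values from src:

    #include <immintrin.h>

    /* Adds only the lower 8 of 16 FP32 lanes; the rest pass through
     * unchanged from src. */
    __m512 masked_add(__m512 src, __m512 a, __m512 b)
    {
        __mmask16 k = 0x00FF;
        return _mm512_mask_add_ps(src, k, a, b);
    }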
http://www.intel.com/products/processor/manuals
Intel® AVX-512 has multiple extensions, which CPUID has been enhanced to expose (a detection sketch follows the list):
• AVX512F (Foundation): expands most 32 bit and 64 bit based AVX instructions
with EVEX coding scheme to support 512 bit registers, operation masks, parameter
broadcasting, and embedded rounding and exception control
• AVX512CD (Conflict Detection): efficient conflict detection to allow more loops
to be vectorized
• AVX512BW (Byte and Word): extends AVX-512 to cover 8 bit and 16 bit integer
operations
• AVX512DQ (Doubleword and Quadword): extends AVX-512 to cover 32 bit and
64 bit integer operations
• AVX512VL (Vector Length): extends most AVX-512 operations to also operate
on XMM (128 bit) and YMM (256 bit) registers
• AVX512IFMA (Integer Fused Multiply-Add): fused multiply-add of integers
using 52 bit precision
• AVX512VBMI (Vector Byte Manipulation Instructions): adds vector byte
permutation instructions which were not present in AVX-512BW
• AVX512VBMI2 (Vector Byte Manipulation Instructions 2): adds byte/word
load, store and concatenation with shift
• VPOPCNTDQ: count of bits set to 1
• VPCLMULQDQ: carry-less multiplication of quadwords
• AVX-512VNNI (Vector Neural Network Instructions): vector instructions for
deep learning
• AVX512GFNI (Galois Field New Instructions): vector instructions for
calculating Galois Fields
• AVX512VAES (Vector AES instructions): vector instructions for AES coding
• AVX512BITALG (Bit Algorithms): byte/word bit manipulation instructions
expanding VPOPCNTDQ
• AVX512VP2INTERSECT: Compute Intersection Between DWORDS/QUADWORDS
to a Pair of Mask Registers
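An informative detection sketch for a few of these extensions follows, using CPUID leaf 7, sub-leaf 0, with bit positions per the Intel SDM.

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return 1;
        printf("AVX512F : %u\n", (ebx >> 16) & 1);
        printf("AVX512DQ: %u\n", (ebx >> 17) & 1);
        printf("AVX512CD: %u\n", (ebx >> 28) & 1);
        printf("AVX512BW: %u\n", (ebx >> 30) & 1);
        printf("AVX512VL: %u\n", (ebx >> 31) & 1);
        printf("VNNI    : %u\n", (ecx >> 11) & 1);
        return 0;
    }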
The key enhancements provided by the x2APIC architecture over xAPIC are the
following:
• Support for two modes of operation to provide backward compatibility and
extensibility for future platform innovations:
— In xAPIC compatibility mode, APIC registers are accessed through memory
mapped interface to a 4 KByte page, identical to the xAPIC architecture.
— In the x2APIC mode, APIC registers are accessed through the Model Specific
Register (MSR) interfaces. In this mode, the x2APIC architecture provides
significantly increased processor addressability and some enhancements on
interrupt delivery.
• Increased range of processor addressability in x2APIC mode:
— Physical xAPIC ID field increases from 8 bits to 32 bits, allowing for interrupt
processor addressability up to 4G-1 processors in physical destination mode. A
processor implementation of x2APIC architecture can support fewer than 32
bits in a software transparent fashion.
— Logical xAPIC ID field increases from 8 bits to 32 bits. The 32 bit logical x2APIC
ID is partitioned into two sub-fields – a 16 bit cluster ID and a 16 bit logical ID
within the cluster. Consequently, ((2^20) - 16) processors can be addressed in
logical destination mode. Processor implementations can support fewer than
16 bits in the cluster ID sub-field and logical ID sub-field in a software agnostic
fashion.
• More efficient MSR interface to access APIC registers:
— To enhance inter-processor and self-directed interrupt delivery as well as the
ability to virtualize the local APIC, the APIC register set can be accessed only
through MSR-based interfaces in x2APIC mode. The Memory Mapped IO
(MMIO) interface used by xAPIC is not supported in x2APIC mode.
• The semantics for accessing APIC registers have been revised to simplify the
programming of frequently-used APIC registers by system software. Specifically,
the software semantics for using the Interrupt Command Register (ICR) and End Of
Interrupt (EOI) registers have been modified to allow for more efficient delivery
and dispatching of interrupts.
• The x2APIC extensions are made available to system software by enabling the local
x2APIC unit in the “x2APIC” mode. To benefit from x2APIC capabilities, a new
operating system and a new BIOS are both needed, with special support for the
x2APIC mode.
• The x2APIC architecture provides backward compatibility with the xAPIC
architecture and is forward extensible for future Intel platform innovations.
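As an informative ring-0 sketch, enabling x2APIC mode amounts to setting both the global-enable flag (bit 11) and the x2APIC-enable flag (bit 10) in IA32_APIC_BASE (MSR 0x1B); APIC registers are then reached through MSRs 0x800-0x8FF rather than the 4-KByte MMIO page.

    #include <stdint.h>

    static inline uint64_t rdmsr(uint32_t idx)
    {
        uint32_t lo, hi;
        __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(idx));
        return ((uint64_t)hi << 32) | lo;
    }

    static inline void wrmsr(uint32_t idx, uint64_t v)
    {
        __asm__ volatile("wrmsr"
                         : : "c"(idx), "a"((uint32_t)v),
                             "d"((uint32_t)(v >> 32)));
    }

    void enable_x2apic(void)
    {
        uint64_t base = rdmsr(0x1B);                     /* IA32_APIC_BASE */
        wrmsr(0x1B, base | (1ULL << 11) | (1ULL << 10)); /* EN + EXTD */
    }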
http://www.intel.com/products/processor/manuals/.
2.4.13 Intel® GNA 2.0 (GMM and Neural Network Accelerator)
GNA stands for Gaussian Mixture Model and Neural Network Accelerator.
The GNA is used to process speech recognition without a user training sequence. The
GNA is designed to offload complex speech recognition tasks from the processor cores
and system memory, and to improve speech recognition accuracy. The GNA is designed
to compute millions of Gaussian probability density functions per second without
loading the processor cores while maintaining low power consumption.
http://www.intel.com/products/processor/manuals
The Ring shares frequency and voltage with the Last Level Cache (LLC).
The Ring's frequency changes dynamically, relative to both the processor core and
processor graphics frequencies.
IPU6 provides a complete high quality hardware accelerated pipeline, and is therefore
not dependent on algorithms running on the vector processors to provide the highest
quality output.
The UP4/UP3 processor lines have the most advanced IPU6; the H line has a lighter version of the IPU.
Intel® VTune™ Amplifier for Systems and the Intel® System Debugger are part of
Intel® System Studio (2015 and newer) product, which includes updates for the new
debug and trace features, including Intel® PT and Intel® Trace Hub.
An update to the Linux* performance utility, with support for Intel® PT, is available for
download at https://github.com/virtuoso/linux-perf/tree/intel_pt. It requires rebuilding
the kernel* and the perf utility.
Additionally, CrashLog enables the BIOS or the OS to collect data on failures with the
intent to collect and classify the data as well as analyze failure trends.
CrashLog is a mechanism to collect debug information into a single location and then
allow access to that data via multiple methods, including the BIOS and OS of the failing
system.
The Crash Data Detector notifies the Crash Data Requester of the error condition so
that the Crash Data Requester can collect Crash Data from several different IPs and/or
Crash Nodes and store the data to the Crash Data Storage (on-die SRAM) prior to the
reset.
After the system has rebooted, the Crash Data Collector reads the Crash Data from the
Crash Data Storage and makes the data available to software and/or to a central
server to track error frequency and trends.
• Standardized PCIe discovery solution that enables software to discover and
manage telemetry across products
• Standardized definitions for telemetry decode, including data type definitions
• Exposure of commonly used telemetry for power and performance debug including:
— P-State status, residency and counters
— C-State status, residency and counters
— Energy monitoring
— Device state monitoring (for example, PCIe L1)
— Interconnect/bus bandwidth counters
— Thermal monitoring
• Exposure of an SoC state snapshot for atomic monitoring of package power states,
uninterrupted by software reads.
The Telemetry Aggregator is also a companion to the CrashLog feature where data is
captured about the SoC at the point of a crash. These counters can provide insights
into the nature of the crash.
2.7 Clock Topology
The processor has 3 reference clocks that drive the various components within the
SoC:
• Processor reference clock or base clock (BCLK). 100 MHz with SSC.
• PCIe reference clock (PCICLK). 100 MHz with SSC.
• Fixed clock. 38.4 MHz without SSC (crystal clock).
By integrating the BCLK PLL into the processor die, a cleaner clock is achieved at a
lower power compared to the legacy PCH BCLK PLL solution.
The BCLK PLL has controls for RFI/EMI mitigations as well as Over-clocking capabilities.
In other words, the Operating System requires multiple PCIe devices to have multiple
driver instances, making volume management across multiple host bus adapters
(HBAs) and driver instances difficult.
Intel Volume Management technology requires BIOS and driver support, along with
memory and configuration space management.
A Volume Management Device (VMD) exposes a single device to the operating system,
which will load a single storage driver. The VMD resides in the processor's PCIe root
complex and it appears to the OS as a root bus integrated endpoint. In the processor,
the VMD is in a central location to manipulate access to storage devices which may be
attached directly to the processor or indirectly through the PCH. Instead of allowing
individual storage devices to be detected by the OS and therefore causing the OS to
load a separate driver instance for each, VMD provides configuration settings to allow
specific devices and root ports on the root bus to be invisible to the OS.
Access to these hidden target devices is provided by the VMD to the single, unified
driver.
2.8.3 Key Features
Supports MMIO mapped Configuration Space (CFGBAR):
• Supports MMIO Low
• Supports MMIO High
• Supports Register Lock or Restricted Access
• Supports Device Assign
• Function Assign
• MSI Remapping Disable
§§
3 Power Management
This chapter provides information on the following Power Management topics:
• Advanced Configuration and Power Interface (ACPI) States
• Processor IA Core Power Management
• Integrated Memory Controller (IMC) Power Management
• PCI Express* Power Management
• Direct Media Interface (DMI) Power Management
• Processor Graphics Power Management
Figure 3-1. UP3 and UP4 Processor Lines Power States
Figure 3-2. H Processor Line Power States
Figure 3-3. Processor Package and IA Core C-States
Notes:
1. PkgC2/C3 are Non-architectural: Software cannot request to enter these states
explicitly. These states are intermediate states between PkgC0 and PkgC6.
2. There are constraints that prevent the system from going deeper.
3. The “core state” relates to the core that is in the HIGHEST power state in the
package (the most active).
G0/S0/C0 — Full On: CPU operating. Individual devices may be shut down to save power. The different CPU operating levels are defined by Cx states.
G0/S0/Cx — Cx state: The CPU manages C-states by itself and can be in a low-power state.
Table 3-1. System States (Sheet 2 of 2)
G1/S3 — Suspend-To-RAM (STR): The system context is maintained in system DRAM, but power is shut to non-critical circuits. Memory is retained, and refreshes continue. All external clocks are shut off; the RTC clock and internal ring oscillator clocks are still toggling. In S3 (H only), the SLP_S3 signal stays asserted; SLP_S4 and SLP_S5 are inactive until a wake occurs.
G1/S4 — Suspend-To-Disk (STD): The context of the system is maintained on the disk. All power is then shut to the system except to the logic required to resume. Externally appears the same as S5 but may have different wake events. In S4, SLP_S3 and SLP_S4 both stay asserted and SLP_S5 is inactive until a wake occurs.
G2/S5 — Soft Off: System context not maintained. All power is shut except for the logic required to restart. A full boot is required when waking. Here, SLP_S3, SLP_S4, and SLP_S5 are all active until a wake occurs.
G3 — Mechanical OFF: System context not maintained. All power is shut except for the RTC. No “wake” events are possible because the system does not have any power. This state occurs if the user removes the batteries, turns off a mechanical switch, or if the system power supply is at a level that is insufficient to power the “waking” logic. When system power returns, the transition will depend on the state just prior to the entry to G3.
Pre-Charge Power Down — CKE de-asserted (not self-refresh) with all banks closed.
Active Power Down — CKE de-asserted (not self-refresh) with a minimum of one bank active.
G0/S0/C0 — Full On — On — Full On
G0/S0/C6/C7 — Deep Power Down — On — Deep Power Down
G1/S3 — Power off — Off — Off, except RTC (Suspend to RAM; S3 valid for H only)
3.2.1 OS/HW Controlled P-states
Caution: Long-term reliability cannot be assured unless all the Low-Power Idle States are
enabled.
While individual threads can request low-power C-states, power-saving actions only
take place once the processor IA core C-state is resolved. Processor IA core C-states
are automatically resolved by the processor. For thread and processor IA core
C-states, a transition to and from the C0 state is required before entering any other
C-state.
3.2.3 Requesting the Low-Power Idle States
The primary software interfaces for requesting low-power idle states are through the
MWAIT instruction with sub-state hints and the HLT instruction (for C1 and C1E).
However, the software may make C-state requests using the legacy method of I/O
reads from the ACPI-defined processor clock control registers, referred to as P_LVLx.
This method of requesting C-states provides legacy support for operating systems that
initiate C-state transitions using I/O reads.
For legacy operating systems, P_LVLx I/O reads are converted within the processor to
the equivalent MWAIT C-state request. Therefore, P_LVLx reads do not directly result in
I/O reads to the system. The feature, known as I/O MWAIT redirection, should be
enabled in the BIOS.
The BIOS can write to the C-state range field of the PMG_IO_CAPTURE MSR to restrict
the range of I/O addresses that are trapped and emulated as MWAIT-like functionality.
Any P_LVLx reads outside of this range do not cause an I/O redirection to a
MWAIT(Cx)-like request; they fall through like a normal I/O instruction.
When P_LVLx I/O instructions are used, MWAIT sub-states cannot be defined. The
MWAIT sub-state is always zero if I/O MWAIT redirection is used. By default, P_LVLx I/
O redirections enable the MWAIT 'break on EFLAGS.IF’ feature that triggers a wake up
on an interrupt, even if interrupts are masked by EFLAGS.IF.
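An informative ring-0 sketch of the MWAIT interface follows (MONITOR/MWAIT execute only at CPL 0; compile with -msse3). The hint value is illustrative, as the mapping of EAX hints to C6-C10 is model-specific and enumerated through CPUID leaf 5; ECX bit 0 requests the 'break on interrupt even if masked' behavior mentioned above.

    #include <pmmintrin.h>
    #include <stdint.h>

    static volatile uint64_t wake_flag;

    void idle_until_write(void)
    {
        _mm_monitor((const void *)&wake_flag, 0, 0); /* arm the monitor */
        if (!wake_flag)
            _mm_mwait(0x1,   /* ECX: wake even on masked interrupts */
                      0x20); /* EAX hint: example C-state/sub-state */
    }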
Table 3-4. Core C-states (Sheet 2 of 2)
C1E — MWAIT(C1E) — Core C1 + lowest frequency and voltage operating point (package in C0 state).
C6-C10 — MWAIT(C6/7/7s/C8/9/10) or I/O read=P_LVL3/4/5/6/7/8 — Processor IA cores flush their L1 instruction cache, L1 data cache, and L2 cache to the LLC shared cache; cores save their architectural state to an SRAM before reducing the IA core voltage, which, if possible, may also be reduced to 0 V. Core clocks are off. C7s is C7 with an additional PLL off.
In general, deeper C-states such as C6 or C7 have long latencies and higher energy
entry/exit costs. The resulting performance and energy penalties become significant
when the entry/exit frequency of a deeper C-state is high. Therefore, incorrect or
inefficient usage of deeper C-states has a negative impact on battery life and idle
power. To increase residency and improve battery life and idle power in deeper
C-states, the processor supports C-state auto-demotion.
C-State auto-demotion:
• C7/C6 to C1/C1E
The decision to demote a processor IA core from C6/C7 to C1/C1E is based on each
processor IA core’s immediate residency history. Upon each processor IA core C6/C7
request, the processor IA core C-state is demoted to C1 until a sufficient amount of
residency has been established. At that point, a processor IA core is allowed to go into
C6 or C7. If the interrupt rate experienced on a processor IA core is high and the
processor IA core is rarely in a deep C-state between such interrupts, the processor IA
core can be demoted to a C1 state.
The processor exits a package C-state when a break event is detected. Depending on
the type of break event, the processor does the following:
• If a processor IA core break event is received, the target processor IA core is
activated and the break event message is forwarded to the target processor IA
core.
— If the break event is not masked, the target processor IA core enters the
processor IA core C0 state and the processor enters package C0.
— If the break event is masked, the processor attempts to re-enter its previous
package state.
• If the break event was due to a memory access or snoop request,
— But the platform did not request to keep the processor in a higher package C-
state, the package returns to its previous C-state.
— And the platform requests a higher power C-state, the memory access or snoop
request is serviced and the package remains in the higher power C-state.
Table 3-5. Package C-States (Sheet 2 of 2)
PKG C3
Description: All cores in C6 or deeper + Processor Graphics in RC6; LLC may be flushed and turned off; memory in self-refresh; memory clock stopped. The processor will enter Package C3 when:
• All IA cores are in C6 or deeper + Processor Graphics cores are in RC6.
• The platform components/devices allow proper LTR for entering Package C3.
Dependencies: All processor IA cores in C6 or deeper. Processor Graphics in RC6. Memory in self-refresh, memory clock stopped. LLC may be flushed and turned off.
PKG C7
Description: Package C6 + if all IA cores requested C7, LLC ways may be flushed until it is cleared; if the entire LLC is flushed, voltage will be removed from the LLC. The processor will enter Package C7 when:
• All IA cores are in C7 or deeper + Processor Graphics cores are in RC6.
• The platform components/devices allow proper LTR for entering Package C7.
Dependencies: Package C6. If all IA cores requested C7, LLC ways may be flushed until it is cleared. If the entire LLC is flushed, voltage will be removed from the LLC.
Notes:
• Display in PSR is only available in a single embedded panel configuration where the panel supports the PSR feature.
• TCSS may enter its lowest power state (TC Cold) when no device is attached to any of the TCSS ports.
Package C-State Auto-Demotion
The processor may demote the package C-state to a shallower C-state; for example,
instead of going into Package C10, it will demote to Package C8 (and so on as
required). The processor's decision to demote the package C-state is based on the
required C-state latencies, entry/exit energy/power, and device LTRs.
Modern Standby
Modern Standby is a platform state. On display timeout, the OS requests the processor
to enter Package C10 and platform devices to enter RTD3 (or be disabled) in order to
attain low idle power. Modern Standby requires proper BIOS configuration (refer to the
BIOS specification in Section 1.10, “Related Documents”) and OS configuration.
C6DRAM
The C6DRAM feature saves the processor internal state at Package C6 and deeper to
DRAM instead of on-die SRAM.
When the processor state has been saved to DRAM, the dedicated save/restore SRAM
modules are power gated, enabling idle power savings. The SRAM modules operate on
the sustained voltage rail (VccST).
The memory region used for C6DRAM resides in the Processor Reserved Memory region
(PRMRR) which is encrypted and replay protected. The processor issues a Machine
Check exception (#MC) if the processor state has been corrupted.
Note: The availability of C6DRAM may vary between processor lines offers.
Note: Display resolution is not the only factor influencing the deepest Package C-state the
processor can get into. Device latencies, interrupt response latencies, and core C-states
are among other factors that influence the final package C-state the processor can
enter.
The following table lists display resolutions and the deepest available package C-state.
The display resolutions are examples using common values for blanking and pixel rate;
actual results will vary. The table shows the deepest possible Package C-state; system
workload, system idle, and AC or DC power also affect the deepest possible Package
C-state.
Notes:
1. All Deep states are with Display on.
2. The deepest C-state has variance, dependent on various parameters such as software and platform devices.
3. Relevant to all Processor lines.
3.3.2 Display Power Savings Technologies
Intel® DPST 6.3 provides improved power savings without adversely affecting
performance.
3.3.2.5 Panel Self-Refresh 2 (PSR 2)
The Panel Self-Refresh feature allows the Processor Graphics core to enter a low-power
state when the frame buffer content is not changing constantly. This feature is
available on panels capable of supporting Panel Self-Refresh; in addition, the eDP*
panel should be eDP 1.4 compliant. PSR 2 adds partial frame updates and requires an
eDP 1.4 compliant panel.
3.3.3.2 Intel® Graphics Render Standby Technology (Intel® GRST)
Intel® Graphics Render Standby Technology is a technique designed to optimize the
average power of the graphics part. The Graphics Render engine will be put in a sleep
state, or Render Standby (RS), during times of inactivity or basic video modes. While in
Render Standby state, the graphics part will place the VR (Voltage Regulator) into a low
voltage state. Hardware will save the render context to the allocated context buffer
when entering RS state and restore the render context upon exiting RS state.
Before changing the DDR data rate, the processor sets DDR to self-refresh and changes
the needed parameters. The DDR voltage remains stable and unchanged.
BIOS/MRC DDR training at maximum, mid and minimum frequencies sets I/O and
timing parameters.
Refer to Table 5-13, “System Agent Enhanced Speed Steps (SA-GV) and Gear Mode
Frequencies”.
3.7 PCI Express* Power Management
• Active power management support using L1 substates (L1.1, L1.2).
• L0s power state is not supported on 11th Generation Intel® Core™ processor
platform.
• All inputs and outputs disabled in L2/L3 Ready state.
• Processor PCIe* interface supports Hot-Plug.
Note: An increase in power consumption may be observed when PCI Express* ASPM
capabilities are disabled.
Table 3-8. Package C-States with PCIe* Link States Dependencies
L-State Description Package C-State
§§
4 Thermal Management
Caution: Thermal specifications given in this chapter are on the component and package level
and apply specifically to the processor. Operating the processor outside the specified
limits may result in permanent damage to the processor and potentially other
components in the system.
The processor integrates multiple processing IA cores, graphics cores, and, for some
SKUs, a PCH on a single package. This may result in power distribution differences
across the package, which should be considered when designing the thermal solution.
Intel® Turbo Boost Technology 2.0 allows processor IA cores to run faster than the base
frequency. It is invoked opportunistically and automatically as long as the processor is
conforming to its temperature, power delivery, and current control limits. When Intel®
Turbo Boost Technology 2.0 is enabled:
• Applications are expected to run closer to TDP more often as the processor will
attempt to maximize performance by taking advantage of estimated available
energy budget in the processor package.
• The processor may exceed the TDP for short durations to utilize any available
thermal capacitance within the thermal solution. The duration and time of such
operation can be limited by platform runtime configurable registers within the
processor.
• Graphics peak frequency operation is based on the assumption of only one of the
graphics domains (GT/GTx) being active. This definition is similar to the IA core
Turbo concept, where peak turbo frequency can be achieved when only one IA core
is active. Depending on the workload being applied and the distribution across the
graphics domains the user may not observe peak graphics frequency for a given
workload or benchmark.
• Thermal solutions and platform cooling that are designed to less than the thermal
design guidance may experience thermal and performance issues.
Note: Intel® Turbo Boost Technology 2.0 availability may vary between the different SKUs.
Notes:
1. Implementation of Intel® Turbo Boost Technology 2.0 only requires configuring
PL1, PL1 Tau, and PL2.
2. PL3 and PL4 are disabled by default.
Figure 4-1. Package Power Control
Note: Optional feature; default is disabled.
When the Psys signal is properly implemented, the system designer can utilize the
package power control settings of PsysPL1/Tau, PsysPL2, and PsysPL3 for additional
manageability to match the platform power delivery and platform thermal solution
limitations for Intel® Turbo Boost Technology 2.0. The operation of the PsysPL1/tau,
PsysPL2 and PsysPL3 are analogous to the processor power limits described.
• Platform Power Limit 1 (PsysPL1): A threshold for average platform power that
will not be exceeded; it is recommended to set it equal to the platform thermal
capability.
• Platform Power Limit 2 (PsysPL2): A threshold that if exceeded, the PsysPL2
rapid power limiting algorithms will attempt to limit the spikes above PsysPL2.
• Platform Power Limit 3 (PsysPL3): A threshold that if exceeded, the PsysPL3
rapid power limiting algorithms will attempt to limit the duty cycle of spikes above
PsysPL3 by reactively limiting frequency.
• PsysPL1 Tau: An averaging constant used for PsysPL1 exponential weighted
moving average (EWMA) power calculation.
• The Psys signal and associated power limits / Tau are optional for the system
designer and disabled by default.
• The Psys data will not include power consumption for charging.
• The Intel Dynamic Tuning driver (DTT/DPTF) is recommended for performance
improvement in mobile platforms. Dynamic Tuning is configured by system
manufacturers and dynamically optimizes processor power based on the current
platform thermal and power delivery conditions. Contact Intel representatives for
enabling details.
Note: Configurable TDP and Low-Power Mode technologies are not battery life improvement
technologies.
Note: Configurable TDP availability may vary between the different SKUs.
With cTDP (Configurable TDP), the processor is capable of altering the maximum
sustained power with an alternate processor IA core base frequency. Configurable TDP
allows operation in situations where extra cooling is available, or where a cooler and
quieter mode of operation is desired. The requirements for developing a non-driver
approach can be found in the appropriate processor Configurable TDP and LPM
Implementation Guide.
Table 4-1. Configurable TDP Modes
Mode Description
Base The average power dissipation and junction temperature operating condition limit,
specified in Table 4-2, “TDP Specifications” and Table 4-4, “Junction Temperature
Specifications” for the SKU Segment and Configuration, for which the processor is
validated during manufacturing when executing an associated Intel-specified high-
complexity workload at the processor IA core frequency corresponding to the
configuration and SKU.
TDP-Up The SKU-specific processor IA core frequency where manufacturing confirms logical
functionality within the set of operating condition limits specified for the SKU segment
and Configurable TDP-Up configuration in Table 4-2, “TDP Specifications” and Table 4-4,
“Junction Temperature Specifications”. The Configurable TDP-Up Frequency and
corresponding TDP is higher than the processor IA core Base Frequency and SKU
Segment Base TDP.
TDP-Down The processor IA core frequency where manufacturing confirms logical functionality
within the set of operating condition limits specified for the SKU segment and
Configurable TDP-Down configuration in Table 4-2, “TDP Specifications” and Table 4-4,
“Junction Temperature Specifications”. The Configurable TDP-Down Frequency and
corresponding TDP is lower than the processor IA core Base Frequency and SKU Segment
Base TDP.
In each mode, the Intel® Turbo Boost Technology 2.0 power limits are reprogrammed
along with a new OS controlled frequency range. The Intel Dynamic Tuning driver
assists in TDP operation by adjusting processor PL1 dynamically. The cTDP mode does
not change the maximum per-processor IA core turbo frequency.
Through the Dynamic tuning (DTT/DPTF) driver, LPM can be configured to use each of
the following methods to reduce active power:
• Restricting package power control limits and Intel® Turbo Boost Technology
availability
• Off-Lining processor IA core activity (Move processor traffic to a subset of cores)
• Placing a processor IA Core at LFM or LSF (Lowest Supported Frequency)
• Utilizing IA clock modulation
• LPM power, as listed in the TDP Specifications table, is defined at a point at which
the processor IA core operates at LSF, GT = RPn, and one IA core is active.
Minimum Frequency Mode (MFM) of operation, which is the Lowest Supported
Frequency (LSF) at the LFM voltage, has been made available for use under LPM for
further reduction in active power beyond LFM capability to enable cooler and quieter
modes of operation.
The Adaptive Thermal Monitor can be activated when the package temperature,
monitored by any Digital Thermal Sensor (DTS), meets its maximum operating
temperature. The maximum operating temperature implies maximum junction
temperature TjMAX.
Reaching the maximum operating temperature activates the Thermal Control Circuit
(TCC). When activated the TCC causes both the processor IA core and graphics core to
reduce frequency and voltage adaptively. The Adaptive Thermal Monitor will remain
active as long as the package temperature remains at its specified limit. Therefore, the
Adaptive Thermal Monitor will continue to reduce the package frequency and voltage
until the TCC is de-activated.
TjMAX is factory calibrated and is not user configurable. The default value is software
visible in the TEMPERATURE_TARGET (0x1A2) MSR, bits [23:16].
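An informative sketch for reading this field through the Linux msr driver follows (root required); it also decodes the TCC Activation Offset field, bits [29:24], which is discussed below.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint64_t v;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0 || pread(fd, &v, 8, 0x1A2) != 8) { /* TEMPERATURE_TARGET */
            perror("msr");
            return 1;
        }
        unsigned tjmax  = (v >> 16) & 0xFF;  /* bits [23:16] */
        unsigned offset = (v >> 24) & 0x3F;  /* bits [29:24] */
        printf("TjMAX=%u C, TCC activation=%u C\n", tjmax, tjmax - offset);
        close(fd);
        return 0;
    }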
The Adaptive Thermal Monitor does not require any additional hardware, software
drivers, or interrupt handling routines. It is not intended as a mechanism to maintain
processor thermal control to PL1 = TDP. The system design should provide a thermal
solution that can maintain normal operation when PL1 = TDP within the intended usage
range.
TCC Activation Offset can be set as an offset from TjMAX to lower the onset of TCC and
Adaptive Thermal Monitor. In addition, there is an optional time window (Tau) to
manage processor performance at the TCC Activation offset value via an EWMA
(Exponential Weighted Moving Average) of temperature.
TCC Activation Offset with Tau=0
If enabled, the offset should be set lower than any other passive protection, such as
ACPI _PSV trip points.
To manage the processor with the EWMA (Exponential Weighted Moving Average) of
temperature, an offset (in degrees Celsius) is written to the TEMPERATURE_TARGET
(0x1A2) MSR, bits [29:24], and the time window (Tau) is written to the
TEMPERATURE_TARGET (0x1A2) MSR, bits [6:0]. The offset value is subtracted from
the value found in bits [23:16]; the result is the temperature the processor manages
to.
The processor will manage to this average temperature by adjusting the frequency of
the various domains. The instantaneous Tj can briefly exceed the average temperature.
The magnitude and duration of the overshoot is managed by the time window value
(Tau).
Once the temperature has dropped below the trigger temperature, the operating
frequency and voltage will transition back to the normal system operating point.
Once a target frequency/bus ratio is resolved, the processor IA core will transition to
the new target automatically.
• On an upward operating point transition, the voltage transition precedes the
frequency transition.
• On a downward transition, the frequency transition precedes the voltage transition.
• The processor continues to execute instructions during voltage transitions;
however, it will halt instruction execution during frequency transitions.
If a processor load-based Enhanced Intel SpeedStep Technology/P-state transition
(through MSR write) is initiated while the Adaptive Thermal Monitor is active, there are
two possible outcomes:
• If the P-state target frequency is higher than the processor IA core optimized
target frequency, the P-state transition will be deferred until the thermal event has
been completed.
• If the P-state target frequency is lower than the processor IA core optimized target
frequency, the processor will transition to the P-state operating point.
If the frequency/voltage changes are unable to end an Adaptive Thermal Monitor event,
the Adaptive Thermal Monitor will utilize clock modulation. Clock modulation is done by
alternately turning the clocks off and on at a duty cycle (ratio between clock “on” time
and total time) specific to the processor. The duty cycle is factory configured to 25% on
and 75% off and cannot be modified. The period of the duty cycle is configured to 32
microseconds when the Adaptive Thermal Monitor is active. Cycle times are
independent of processor frequency. A small amount of hysteresis has been included to
prevent excessive clock modulation when the processor temperature is near its
maximum operating temperature. Once the temperature has dropped below the
maximum operating temperature, and the hysteresis timer has expired, the Adaptive
Thermal Monitor goes inactive and clock modulation ceases. Clock modulation is
automatically engaged as part of the Adaptive Thermal Monitor activation when the
frequency/voltage targets are at their minimum settings. Processor performance will be
decreased when clock modulation is active. Snooping and interrupt processing are
performed in the normal manner while the Adaptive Thermal Monitor is active.
Clock modulation will not be activated by the Package average temperature control
mechanism.
Achieving this is done by reducing IA and other subsystem agents' voltages and
frequencies in a gradual and coordinated manner that varies depending on the
dynamics of the situation. IA frequencies and voltages will be directed down as low as
LFM (Lowest Frequency Mode). Further restrictions are possible via the Thermal
Throttling point (TT1) under conditions where the thermal budget cannot be regained
fast enough through voltage and frequency reduction alone. TT1 keeps the processor
voltage and clock frequencies the same yet skips clock edges to produce effectively
slower clocking rates. This will effectively result in observed frequencies below LFM in
the Windows PERF monitor.
When the temperature is retrieved by the processor MSR, it is the instantaneous
temperature of the given DTS. When the temperature is retrieved using PECI, it is the
average of the highest DTS temperature in the package over a 256 ms time window.
Intel recommends using the PECI reported temperature for platform thermal control
that benefits from averaging, such as fan speed control. The average DTS temperature
may not be a good indicator of package Adaptive Thermal Monitor activation or rapid
increases in temperature that triggers the Out of Specification status bit within the
PACKAGE_THERM_STATUS (0x1B1) MSR and IA32_THERM_STATUS (0x19C) MSR.
Unlike traditional thermal devices, the DTS outputs a temperature relative to the
maximum supported operating temperature of the processor (TjMAX), regardless of TCC
activation offset. It is the responsibility of software to convert the relative temperature
to an absolute temperature. The absolute reference temperature is readable in the
TEMPERATURE_TARGET (0x1A2) MSR. The temperature returned by the DTS is an
implied negative integer indicating the relative offset from TjMAX. The DTS does not
report temperatures greater than TjMAX. The DTS-relative temperature readout directly
impacts the Adaptive Thermal Monitor trigger point. When a package DTS indicates
that it has reached the TCC activation (a reading of 0x0, except when the TCC
activation offset is changed), the TCC will activate and indicate an Adaptive Thermal
Monitor event. A TCC activation will lower both processor IA core and graphics core
frequency, voltage, or both. Changes to the temperature can be detected using two
programmable thresholds located in the processor thermal MSRs. These thresholds
have the capability of generating interrupts using the processor IA core's local APIC.
The error associated with DTS measurements will not exceed ±5 °C within the entire
operating range.
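A minimal C sketch of the software conversion described above, assuming a hypothetical privileged rdmsr() helper; the bit fields follow the Intel SDM definitions for these MSRs (TjMAX in bits 23:16 of TEMPERATURE_TARGET, Digital Readout in bits 22:16 of IA32_THERM_STATUS).

```c
#include <stdint.h>

/* Hypothetical privileged MSR read helper (for example, backed by the
 * Linux /dev/cpu/N/msr interface); not part of the datasheet. */
uint64_t rdmsr(uint32_t msr);

#define MSR_TEMPERATURE_TARGET 0x1A2
#define IA32_THERM_STATUS      0x19C

/* Convert the DTS-relative readout to an absolute temperature in C. */
int dts_absolute_celsius(void)
{
    uint32_t tj_max  = (rdmsr(MSR_TEMPERATURE_TARGET) >> 16) & 0xFF;
    uint32_t readout = (rdmsr(IA32_THERM_STATUS) >> 16) & 0x7F;

    /* The DTS reports an implied negative offset from TjMAX;
     * 0 means the TCC activation temperature has been reached. */
    return (int)tj_max - (int)readout;
}
```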
Digital Thermal Sensor based fan speed control (TFAN) is a recommended feature for
achieving optimal thermal performance. Intel recommends that full cooling capability
be reached at the TFAN temperature, before the DTS reading reaches TjMAX.
4.1.3.3.1 PROCHOT Input Only
The PROCHOT# signal should be set to input only by default. In this state, the
processor will only monitor PROCHOT# assertions and respond by setting the
maximum frequency to 10 kHz.
The following two features are enabled when PROCHOT is set to Input only:
• Fast PROCHOT: Responds to PROCHOT# within 10 µs of PROCHOT# pin assertion,
reducing the processor frequency by 50%.
• PROCHOT Demotion Algorithm: Designed to improve system performance
during multiple PROCHOT# assertions (refer to Section 4.1.3.6, “PROCHOT Demotion
Algorithm”).
The processor package will remain at the lowest supported P-state until the system de-
asserts PROCHOT#. The processor can be configured to generate an interrupt upon
assertion and de-assertion of the PROCHOT# signal.
no consecutive assertions are detected. The processor will raise the frequency if no
consecutive PROCHOT# assertion events occur. The PROCHOT demotion algorithm is
enabled only when PROCHOT# is configured as input.
4.1.3.9 Low-Power States and PROCHOT# Behavior
Depending on package power levels during package C-states, outbound PROCHOT#
may de-assert while the processor is idle as power is removed from the signal. Upon
wake up, if the processor is still hot, the PROCHOT# will re-assert, although typically
package idle state residency should resolve any thermal issues. The PECI interface is
fully operational during all C-states and it is expected that the platform continues to
manage processor IA core and package thermals even during idle states by regularly
polling for thermal data over PECI.
PECI can be implemented using either the single-bit bidirectional I/O pin or the eSPI
interface.
MSR. In this mode, the duty cycle can be programmed in either 12.5% or 6.25%
increments (discoverable using CPUID). Thermal throttling using this method
modulates each processor IA core's clock independently.
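As a sketch of this mechanism, the following C fragment programs on-demand clock modulation through the IA32_CLOCK_MODULATION MSR (0x19A), assuming the extended 6.25% encoding is reported by CPUID.06H:EAX[5] and that rdmsr()/wrmsr() are hypothetical privileged helpers. Because the MSR is per logical processor, each IA core can be modulated independently, matching the behavior described above.

```c
#include <stdint.h>

/* Hypothetical privileged MSR helpers; not part of the datasheet. */
uint64_t rdmsr(uint32_t msr);
void     wrmsr(uint32_t msr, uint64_t val);

#define IA32_CLOCK_MODULATION 0x19A

/* Request on-demand clock modulation at a duty cycle expressed in
 * 6.25% steps (1..15). Bit 4 enables modulation; bits [3:0] hold the
 * duty-cycle code when the extended (6.25%) encoding is available
 * per CPUID.06H:EAX[5]. */
void set_clock_modulation(unsigned sixteenths)
{
    uint64_t v = rdmsr(IA32_CLOCK_MODULATION);
    v &= ~0x1FULL;                       /* clear enable + duty field */
    v |= (1ULL << 4) | (sixteenths & 0xF);
    wrmsr(IA32_CLOCK_MODULATION, v);
}
```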
Punit firmware is responsible for aggregating DRAM temperature sources into per-
DIMM readings as well as an aggregated virtual 'max' sensor reading. At reset, the
MRC communicates the valid channels and ranks, as well as the DRAM type, to the MC.
At that time, Punit firmware sets up a valid channel and rank mask that is then used in
the thermal aggregation algorithm to produce a single maximum temperature.
4.2 Thermal and Power Specifications
The following notes apply to the tables below: Table 4-2, “TDP Specifications”,
Table 4-3, “Package Turbo Specifications”, and Table 4-4, “Junction Temperature
Specifications”.
1. The TDP and Configurable TDP values are the average power dissipation in the junction temperature
operating condition limit, for the SKU segment and configuration, for which the processor is validated
during manufacturing when executing an associated Intel-specified high-complexity workload at the
processor IA core frequency corresponding to the configuration and SKU.
2. The TDP workload may consist of a combination of processor IA core intensive and graphics core intensive
applications.
3. Can be modified at runtime by MSR writes, with MMIO, and with PECI commands.
4. 'Turbo Time Parameter' is a mathematical parameter (units of seconds) that controls the processor
turbo algorithm using a moving average of energy usage. Do not set the Turbo Time Parameter to a
value less than 0.1 seconds. Refer to Section 4.1.1.2, “Platform Power Control” for further
information.
5. The shown limit is a time-averaged power, based upon the Turbo Time Parameter. Absolute product
power may exceed the set limits for short durations or under virus or uncharacterized workloads.
6. The processor will be controlled to a specified power limit as described in Section 2.4.6.1, “Intel®
Turbo Boost Technology 2.0 Power Monitoring”. If the power value and/or 'Turbo Time Parameter' is
changed during runtime, it may take a short period of time (approximately 3 to 5 times the 'Turbo
Time Parameter') for the algorithm to settle at the new control limits.
7. This is a hardware default setting and not a behavioral characteristic of the part.
8. For controllable turbo workloads, the PL2 limit may be exceeded for up to 10 ms.
9. The LPM power level is an opportunistic power and is not a guaranteed value, as usages and
implementations may vary.
10. Power limits may vary depending on whether the product supports the 'TDP-up' and/or 'TDP-down' modes.
Default power limits can be found in the PKG_PWR_SKU MSR (614h).
11. The processor dies do not reach maximum sustained power simultaneously, since the sum of the two
dies' estimated power budget is controlled to be equal to or less than the package TDP (PL1) limit.
12. cTDP-down power is based on the GT2 equivalent graphics configuration. cTDP-down does not decrease
the number of active Processor Graphics EUs but relies on Power Budget Management (PL1) to
achieve the specified power level.
16. The hardware default is PL1 Tau = 1 s. By including the benefits available from power and thermal
management features, the recommendation is to use PL1 Tau = 28 s.
Table 4-2. TDP Specifications (Sheet 1 of 3, fragment)
H-Processor Line, 8-Core GT1, 45W (graphics core frequency 1.45 GHz; notes 1,9,10,15):
  Configurable TDP-Up: 3.3 GHz, 65 W (note 2)
  Base: 2.6 GHz, 45 W (note 1)
Table 4-2. TDP Specifications (Sheet 2 of 3)
Columns: Segment and Package | Cores, Graphics Configuration and TDP | Configuration |
Processor IA Core Frequency | Graphics Core Frequency | TDP [W] | Notes (from table above)

H-Processor Line, 8-Core GT1, 45W (graphics 1.45 GHz; notes 1,9,10,15):
  Base: 2.3 GHz up to 2.6 GHz, 45 W (note 1)
  Configurable TDP-Down: 1.9 GHz up to 2.1 GHz, 35 W (note 2)

H-Processor Line, 6-Core GT1, 45W (graphics 1.4 GHz up to 1.45 GHz; notes 1,9,10,15):
  Base: 2.6 GHz up to 3.2 GHz, 45 W (note 1)
  Configurable TDP-Down: 2.1 GHz up to 2.6 GHz, 35 W (note 2)

UP3-Processor Line, 4-Core GT2, 28W (graphics 1.3 GHz up to 1.35 GHz; notes 1,9,10,15):
  Base: 2.4 GHz up to 3.0 GHz, 28 W
  Configurable TDP-Down 1: 2.4 GHz up to 3.0 GHz, 15 W
  Configurable TDP-Down 2: 0.9 GHz up to 1.2 GHz, 12 W

UP3-Processor Line, 2-Core GT2, 28W (graphics 1.25 GHz; notes 1,9,10,15):
  Base: 3.0 GHz, 28 W
  Configurable TDP-Down 1: 2.2 GHz, 15 W
  Configurable TDP-Down 2: 1.7 GHz, 12 W

UP4-Processor Line, 4-Core GT2, 9W (graphics 1.1 GHz; notes 1,9,10,11,15):
  Base: 1.1 GHz up to 1.2 GHz, 9 W
  Configurable TDP-Up: 1.5 GHz up to 2.1 GHz, 15 W
  Configurable TDP-Down: 0.8 GHz up to 0.9 GHz, 7 W

UP4-Processor Line, 2-Core GT2, 9W (graphics 1.1 GHz; notes 1,9,10,11,15):
  Base: 1.8 GHz, 9 W
  Configurable TDP-Up: 2.5 GHz, 15 W
  Configurable TDP-Down: 1.5 GHz, 7 W

H35-Processor Line, 4-Core GT2, 35W (graphics 1.35 GHz; notes 1,9,10,15):
  Base: 3.1 GHz up to 3.3 GHz, 35 W (note 1)
  Configurable TDP-Down: 2.6 GHz up to 3.0 GHz, 28 W (note 1)

LFM: 0.4 GHz IA core, 0.1 GHz graphics, N/A
Table 4-2. TDP Specifications (Sheet 3 of 3)
Columns as in Sheet 2.

H35-Refresh Processor Line, 4-Core GT2, 35W (graphics 1.4 GHz; notes 1,9,10,11,15):
  Base: 3.4 GHz, 35 W
  Configurable TDP-Down: 2.9 GHz, 28 W

  Base: 2.9 GHz, 28 W
Table 4-3. Package Turbo Specifications (Sheet 1 of 2, fragment)
Columns: Segment and Package | Cores, Graphics Configuration and TDP | Parameter |
Minimum | Hardware Default Value | Max Value | MSR Recommendation | Units | Notes

  Power Limit 1 Time (PL1 Tau): 0.01 | 1 | 448 | 56 | s
  Power Limit 1 Time (PL1 Tau): 0.01 | 1 | 448 | 28 | s
  Power Limit 1 Time (PL1 Tau): 0.01 | 1 | 448 | 28 | s

UP3/UP3-Refresh Processor Line, 4/2-Core GT2, 28W (notes 3,4,5,6,7,8,14,16):
  Power Limit 1 (PL1): N/A | 28 | N/A | N/A | W
  Power Limit 2 (PL2): N/A | PL1*1.25 | N/A | N/A | W
  Power Limit 1 Time (PL1 Tau): 0.01 | 1 | 448 | 28 | s

UP3-Pentium/Celeron Processor Line, 2-Core GT2, 15W (notes 3,4,5,6,7,8,14,16):
  Power Limit 1 (PL1): N/A | 15 | N/A | N/A | W
  Power Limit 2 (PL2): N/A | PL1*1.25 | N/A | N/A | W
Table 4-3. Package Turbo Specifications (Sheet 2 of 2)
Columns: Segment and Package | Cores, Graphics Configuration and TDP | Parameter |
Minimum | Hardware Default Value | Max Value | MSR Recommendation | Units | Notes

UP4-Processor Line, 4/2-Core GT2, 9W (notes 3,4,5,6,7,8,14,16):
  Power Limit 1 Time (PL1 Tau): 0.01 | 1 | 448 | 28 | s
  Power Limit 1 (PL1): N/A | 9 | N/A | N/A | W
  Power Limit 2 (PL2): N/A | PL1*1.25 | N/A | N/A | W

H35/H35-Refresh Processor Line, 4-Core GT2, 35W (notes 3,4,5,6,7,8,14,16,17):
  Power Limit 1 Time (PL1 Tau): 0.01 | 1 | 448 | 28 | s
  Power Limit 1 (PL1): N/A | 35 | N/A | N/A | W
  Power Limit 2 (PL2): N/A | PL1*1.25 | N/A | N/A | W
Notes:
1. For the notes, refer to the first page of Chapter 4, “Thermal Management”.
2. There are no specifications for minimum/maximum PL1/PL2 values.
3. The hardware default is PL1 Tau = 1 s. By including the benefits available from power and thermal management features,
the recommendation is to use PL1 Tau = 28 s.
Table 4-4. Junction Temperature Specifications
Columns: Segment | Symbol | Parameter | Minimum [ºC] | Maximum [ºC] | Minimum | Maximum | Units | Notes

H-Processor Line BGA | Tj | Junction temperature limit | 0 | 100 | 0 | 100 | ºC | 1, 2
UP3/UP3-Refresh/H35/H35-Refresh Processor Line BGA | Tj | Junction temperature limit | 0 | 100 | 35 | 100 | ºC | 1, 2
UP4-Processor Line BGA | Tj | Junction temperature limit | 0 | 100 | 0 | 90 | ºC | 1, 2, 3
Notes:
1. The thermal solution needs to ensure that the processor temperature does not exceed the TDP Specification Temperature.
2. The processor junction temperature is monitored by Digital Temperature Sensors (DTS). For DTS accuracy, refer to Section
4.1.3.2.1, “Digital Thermal Sensor Accuracy (T_accuracy)”.
3. The UP4 specification requires compliance with the 90 ºC TDP specification temperature; TCC Offset = 10 and the Tau value
should be programmed into MSR 1A2h. The recommended TCC_Offset averaging Tau value is 5 s.
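Notes 3, 4, and 16 of the tables above describe runtime-adjustable power limits and the PL1 Tau time window. The following C sketch shows how PL1 and PL1 Tau could be programmed through the RAPL package power limit MSR, assuming hypothetical rdmsr()/wrmsr() helpers; the MSR addresses and field encodings follow the Intel SDM RAPL definitions rather than this datasheet.

```c
#include <stdint.h>
#include <math.h>

/* Hypothetical privileged MSR helpers; not part of the datasheet. */
uint64_t rdmsr(uint32_t msr);
void     wrmsr(uint32_t msr, uint64_t val);

#define MSR_RAPL_POWER_UNIT 0x606
#define MSR_PKG_POWER_LIMIT 0x610

/* Program PL1 (watts) and PL1 Tau (seconds), e.g. 28 W / 28 s for a
 * UP3 part. Per the SDM: PL1 value in bits [14:0] (power units),
 * enable in bit 15, time window in bits [23:17] encoded as
 * (1 + Y/4) * 2^Z time units. */
void set_pl1(double watts, double tau_s)
{
    uint64_t units = rdmsr(MSR_RAPL_POWER_UNIT);
    double pw_unit = 1.0 / (1 << (units & 0xF));         /* watts   */
    double tu_unit = 1.0 / (1 << ((units >> 16) & 0xF)); /* seconds */

    uint32_t pl1 = (uint32_t)(watts / pw_unit) & 0x7FFF;
    uint32_t z = (uint32_t)log2(tau_s / tu_unit);        /* exponent */
    uint32_t y = (uint32_t)(4.0 * (tau_s / tu_unit / (1 << z) - 1.0));

    uint64_t v = rdmsr(MSR_PKG_POWER_LIMIT);
    v &= ~0xFFFFFFULL;        /* clear PL1, enable, clamp, window */
    v |= pl1 | (1ULL << 15) | ((uint64_t)(z | (y << 5)) << 17);
    wrmsr(MSR_PKG_POWER_LIMIT, v);
}
```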
§§
5 Memory
Notes:
1. 1DPC refers to when only one DIMM slot per channel is routed.
2. 2DPC refers to when two DIMM slots per channel are routed and are fully populated or partially
populated with one DIMM only. 2DPC is supported in the H segment only.
3. Interleaved SoDIMM/MD placements, such as butterfly or back-to-back, are supported with the
Non-Interleave ballout mode on UP3.
4. LPDDR4x technology should be implemented homogeneously, meaning that all DRAM devices in the
system should be from the same vendor and have the same part number. Implementing a mix of
DRAM devices may cause serious signal integrity and functional issues.
5. DDR4 supports asymmetric channel memory capacity. For best IA and GFx performance, it is
recommended to use symmetric channel capacities.
6. There is no support for memory modules with different technologies or capacities on opposite
sides of the same memory module. If one side of a memory module is populated, the other side
is either identical or empty.
7. VDD2 is the Processor and DRAM voltage, and VDDQ is the DRAM voltage.
8. The H-processor supports ECC with Interleave ballout only. ECC with Non-Interleave ballout is not
supported.
9. On the H-processor, DDR4 DIMM0 (Rank[1:0]) must be populated if DIMM1 (Rank[3:2]) is
populated. The processor will not boot if DIMM1 (Rank[3:2]) is populated and DIMM0 (Rank[1:0])
is not populated.
10. Some SKUs may be configured to run at up to 2933 MT/s.
UP3-Refresh Processor Line: 3200 | 3200 | N/A | 4266
Note:
1. H-Processor DDR4 2DPC is supported when the channel is populated with the same
SoDIMM part number.
Table 5-3. DDR Technology Support Matrix
Columns: Technology | Form Factor | Ball count | Processor
Note:
1. Non-POR configuration only.

# of Chs | PKG Width | # of DQs | eight x16 sub-channel positions
4 | 64 | 64 | one x64 package
4 | 32 | 64 | two x32 packages
Note: A white blank means not populated.
Table 5-6. H DDR4 SoDIMM Population Configuration
Channel 0 Channel 1
Configuration
DIMM 0 DIMM 1 DIMM 0 DIMM 1
X X X X
X X X
X X X
2 DIMM per X X
channel
X X
X X
X
X
X
1 DIMM per X
channel
X X
Notes:
1. X means SoDIMM populated, a white blank means SoDIMM not populated, and a gray blank means no slot.
2. DDR4 DIMM0 (Rank[1:0]) must be populated as the default DIMM; DDR4 DIMM1 (Rank[3:2]) is optional. Populating
Slot1 (Ranks[3:2]) when Slot0 (Ranks[1:0]) is not populated may cause the system not to boot.
3. 2DPC requires that populated DIMMs have the same part number within each channel.
4. A DDR4 Memory Down channel must use Rank[0].
5.1.3 Supported Memory Modules and Devices
Table 5-7. Supported DDR4 SoDIMM Module Configurations
Raw # of # of Banks
DIMM DRAM Device DRAM # of # of Row/Col Page
Card DRAM Inside ECC
Capacity Technology Organization Ranks Address Bits Size
Version Devices DRAM
A 8 GB 8 Gb 1024M x 8 8 1 16/10 16 8K No
A 16 GB 16 Gb 2048M x 8 8 1 17/10 16 8K No
C 4 GB 8 Gb 512M x 16 4 1 16/10 8 8K No
C 8 GB 16 Gb 1024M x 16 4 1 17/10 8 8K No
E 16 GB 8 Gb 1024M x 8 16 2 16/10 16 8K No
E 32 GB 16 Gb 2048M x 8 16 2 17/10 16 8K No
Notes:
1. For SDP: 1Rx16 using 16 Gb die density, the maximum system capacity is 16 GB.
2. For DDP: 1Rx16 using 16 Gb die density, the maximum system capacity is 32 GB.
3. Pending DRAM sample availability.
4. Maximum system capacity refers to a system with two channels populated.
Table 5-9. Supported LPDDR4x x32 DRAMs Configurations (Sheet 2 of 2)
Columns: Maximum System Capacity (note 4) | PKG Type (note 2) | Die bits per Ch x PKG bits | Die Density | PKG Density | Ranks per PKG

8 GB  | DDP | 16x32 | 8 Gb  | 16 Gb | 1
16 GB | QDP | 16x32 | 8 Gb  | 32 Gb | 2
16 GB | DDP | 16x32 | 16 Gb | 32 Gb | 1
32 GB | QDP | 16x32 | 16 Gb | 64 Gb | 2
Notes:
1. x32 BGA devices are 200 balls.
2. DDP = Dual Die Package, QDP = Quad Die Package, ODP = Octal Die Package.
3. Each LPDDR4x channel includes two sub-channels.
4. Maximum system capacity refers to a system with all eight sub-channels populated.
5. Pending DRAM sample availability.
DDR4 | 3200 | 22 | 13.75 | 13.75 | 2 | 2N (notes 9-12, 14, 16, 18, 20)
Table 5-12. LPDDR4x System Memory Timing Support
Columns: DRAM Device | Transfer Rate (MT/s) | tCL (tCK) | tRCD (ns) | tRPpb (ns) | tRPab (ns) | WL (tCK) Set B

LPDDR4x | 4266 | 36 | 18 | 18 | 21 | 34
The 11th Generation Intel® Core™ processor adds support for a fourth SAGV point. A
fourth GV point allows Pcode to select a more optimal frequency so that the SA and
Qclk regions operate at a lower voltage/frequency while still providing the required BW.
Table 5-13. System Agent Enhanced Speed Steps (SA-GV) and Gear Mode Frequencies
Columns: Processor Line | DDR Technology | Maximum Rate [MT/s] | SAGV-LowBW | SAGV-MedBW | SAGV-HighBW | SAGV-High Performance

Notes:
1. The 11th Generation Intel® Core™ Processor supports dynamic gearing technology where the Memory Controller can run at a 1:1
(Gear-1, legacy mode) or 1:2 (Gear-2 mode) ratio of DRAM speed. The gear ratio is the ratio of DRAM speed to Memory
Controller clock. The MC channel width equals the DDR channel width multiplied by the gear ratio.
2. SA-GV operating points:
a. LowBW - low frequency point, minimum power point. Characterized by low power, low BW, and high latency. The
system will stay at this point during low to moderate BW consumption.
b. MedBW - this point is tuned for a balance between power and performance (BW demand). Characterized by
moderate power and latency, and moderate BW. The system will switch to this point only during IA performance
workloads, and only if this point can provide enough BW.
c. HighBW - maximum bandwidth point and minimum memory latency point. Characterized by high power, low
latency, and high BW. This point is intended for high GT and moderate-to-high IA BW.
d. High Performance - lowest latency point, low BW, and highest power.
3. Refer to Section 3.4, “System Agent Enhanced Intel SpeedStep® Technology” for more details.
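A small worked example of the gearing arithmetic from note 1, using DDR4-3200 as an illustrative transfer rate:

```c
#include <stdio.h>

/* Illustrative arithmetic only: the memory-controller clock is the
 * DRAM transfer rate divided by 2 (double data rate) and by the
 * gear ratio. */
int main(void)
{
    int transfer_mts = 3200;                 /* DDR4-3200 */
    for (int gear = 1; gear <= 2; gear++)
        printf("Gear-%d: DRAM clock %d MHz, MC clock %d MHz\n",
               gear, transfer_mts / 2, transfer_mts / 2 / gear);
    return 0;
}
```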
The two controllers are independent and have no means of communicating with each
other, so they need to be configured separately.
In a symmetric memory population, each controller views only half of the total physical
memory address space.
Both MCs support only one technology in a system (DDR4 or LPDDR4x); a mix of
technologies in one system is not allowed.
Single-Channel Mode
In this mode, all memory cycles are directed to a single channel. Single-Channel mode
is used when either the Channel A or Channel B DIMM connectors are populated in any
order, but not both.
The IMC supports Intel® Flex Memory Technology Mode. Memory is divided into a
symmetric and asymmetric zone. The symmetric zone starts at the lowest address in
each channel and is contiguous until the asymmetric zone begins or until the top
address of the channel with the smaller capacity is reached. In this mode, the system
runs with one zone of dual-channel mode and one zone of single-channel mode,
simultaneously, across the whole memory array.
Note: Channels A and B can be mapped for physical channel 0 and 1 respectively or vice
versa; however, channel A size should be greater or equal to channel B size.
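The zone arithmetic implied by this mode can be illustrated with a short sketch; the channel capacities below are illustrative, not taken from this datasheet:

```c
#include <stdio.h>

/* Intel Flex Memory arithmetic sketch: with channel capacities
 * A >= B, the dual-channel (interleaved) zone is 2*B and the
 * remainder of the larger channel (A - B) is accessed
 * single-channel. */
int main(void)
{
    unsigned a_gb = 16, b_gb = 8;            /* example population */
    printf("interleaved zone: %u GB\n", 2 * b_gb);
    printf("single-channel zone: %u GB\n", a_gb - b_gb);
    return 0;
}
```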
Figure: Intel® Flex Memory Technology operation. Below TOM, region B of each
controller is accessed with dual-channel interleaving, while region C (the remainder of
the larger module) is accessed non-interleaved. MC A and MC B can be configured to be
physical channels 0 or 1. B – the largest physical memory amount of the smaller size
memory module. C – the remaining physical memory amount of the larger size memory
module.
Dual-Channel Symmetric Mode (Interleaved Mode)
Dual-Channel Symmetric mode, also known as interleaved mode, provides maximum
performance on real world applications. Addresses are ping-ponged between the
channels after each cache line (64-byte boundary). If there are two requests, and the
second request is to an address on the opposite channel from the first, that request can
be sent before data from the first request has returned. If two consecutive cache lines
are requested, both may be retrieved simultaneously, since they are ensured to be on
opposite channels. Use Dual-Channel Symmetric mode when both Channel A and
Channel B DIMM connectors are populated in any order, with the total amount of
memory in each channel being the same.
When both channels are populated with the same memory capacity and the boundary
between the dual channel zone and the single channel zone is the top of memory, IMC
operates completely in Dual-Channel Symmetric mode.
Notes:
1. The DRAM device technology and width may vary from one channel to another.
Different memory sizes between channels are relevant to DDR4 only.
The memory controller has an advanced command scheduler where all pending
requests are examined simultaneously to determine the most efficient request to be
issued next. The most efficient request is picked from all pending requests and issued
to system memory Just-in-Time to make optimal use of Command Overlapping. Thus,
instead of having all memory access requests go individually through an arbitration
mechanism forcing requests to be executed one at a time, they can be started without
interfering with the current request allowing for concurrent issuing of requests. This
allows for optimized bandwidth and reduced latency while maintaining appropriate
command spacing to meet system memory protocol.
Command Overlap
Command Overlap allows the insertion of the DRAM commands between the Activate,
Pre-charge, and Read/Write commands normally used, as long as the inserted
commands do not affect the currently executing command. Multiple commands can be
issued in an overlapping manner, increasing the efficiency of system memory protocol.
Out-of-Order Scheduling
While leveraging the Just-in-Time Scheduling and Command Overlap enhancements,
the IMC continuously monitors pending requests to system memory for the best use of
bandwidth and reduction of latency. If there are multiple requests to the same open
page, these requests would be launched in a back to back manner to make optimum
use of the open memory page. This ability to reorder requests on the fly allows the IMC
to further reduce latency and increase bandwidth efficiency.
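The following toy C model illustrates the scheduling idea: among pending requests, prefer one that hits the currently open page, otherwise fall back to the oldest request. It is illustrative only; the real IMC weighs many additional constraints such as command spacing and bank state.

```c
#include <stdint.h>

/* Toy model of out-of-order request selection. */
struct req { uint32_t row; uint64_t age; };

int pick_next(const struct req *q, int n, uint32_t open_row)
{
    int best = 0;
    for (int i = 0; i < n; i++) {
        if (q[i].row == open_row)
            return i;                 /* page hit: issue back-to-back */
        if (q[i].age > q[best].age)
            best = i;                 /* otherwise oldest first */
    }
    return best;
}
```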
ECC syndrome decoding (four syndrome → bit pairs per row):
Syndrome Bit | Syndrome Bit | Syndrome Bit | Syndrome Bit
0 No Error
1 64 37 26 81 2 146 53
2 65 38 46 82 18 148 4
4 66 41 61 84 34 152 20
7 60 42 9 88 50 161 49
8 67 44 16 97 21 162 1
11 36 47 23 98 38 164 17
13 27 49 63 100 54 168 33
14 3 50 47 104 5 176 44
16 68 52 14 112 52 193 8
19 55 56 30 128 71 194 24
21 10 64 70 131 22 196 40
22 29 67 6 133 58 200 56
25 45 69 42 134 13 208 19
26 57 70 62 137 28 224 11
28 0 73 12 138 41 241 7
31 15 74 25 140 48 242 31
32 69 76 32 143 43 244 59
35 39 79 51 145 37 248 35
Notes:
1. All other syndrome values indicate unrecoverable error (more than one error).
2. This table is relevant only for H-Processor ECC supported SKUs.
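A sketch of how software could consume this table for single-bit correction, using only a few of the syndrome/bit pairs listed above for brevity:

```c
#include <stdint.h>

/* A subset of the syndrome -> failing bit position pairs from the
 * table above; the full table has one entry per correctable
 * syndrome, and any other value is uncorrectable. */
struct synd { uint8_t syndrome; uint8_t bit; };

static const struct synd table[] = {
    {1, 64}, {2, 65}, {4, 66}, {7, 60}, {8, 67}, {14, 3}, {28, 0},
};

/* Returns the failing bit position, -1 for no error, or -2 for an
 * unrecoverable (multi-bit) error. */
int decode_syndrome(uint8_t s)
{
    if (s == 0)
        return -1;                              /* no error */
    for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
        if (table[i].syndrome == s)
            return table[i].bit;
    return -2;                                  /* uncorrectable */
}
```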
5.1.13 Data Swapping
By default, the processor supports on-board data swapping as follows (for all
segments and DRAM technologies):
• Bit swapping is allowed within each byte for all DDR technologies.
• LPDDR4x: Byte swapping is allowed within each x16 sub-channel.
• LPDDR4x: The upper/lower four x16 sub-channels are to be connected to one x64
DRAM or two x32 DRAMs. Swapping between the four upper and four lower x16
sub-channels is not allowed.
• DDR4: Byte swapping is allowed within each x64 channel.
• ECC bit swapping is allowed within DDR4 ECC[7:0].
Table 5-15. Interleave (IL) and Non-Interleave (NIL) Modes Pin Mapping
IL (DDR4) NIL (DDR4) NIL (LPDDR4x)
Figure 5-2. DDR4 Interleave (IL) and Non-Interleave (NIL) Modes Mapping
• Reduced possible overshoot/undershoot signal quality issues seen by the processor
I/O buffer receivers caused by reflections from potentially unterminated
transmission lines.
When a given rank is not populated, the corresponding control signals (CLK_P/CLK_N/
CKE/ODT/CS) are not driven.
At reset, all rows should be assumed to be populated until it can be proven that they
are not. This is because when CKE is tri-stated with DRAMs present, the DRAMs are
not ensured to maintain data integrity. CKE tri-state should therefore be enabled by
the BIOS only where appropriate.
The CKE is one of the power-saving means. When CKE is off, the internal DDR clock is
disabled and the DDR power is reduced. The power-saving differs according to the
selected mode and the DDR type used. For more information, refer to the IDD table in
the DDR specification.
CKE is managed per rank. Each rank has an idle counter that starts counting as soon
as the rank has no accesses, beginning at the arrival of the last incoming transaction.
If the counter expires, the rank may enter power-down while no new transactions to
the rank arrive in the queues. Since the power-down decision is per rank, the IMC can
find many opportunities to power down ranks, even while running memory intensive
applications; the savings are significant (possibly a few Watts, according to the DDR
specification). This is especially significant when each channel is populated with more
ranks.
96 Datasheet, Volume 1 of 2
• In a system that tries to minimize power consumption, try using the deepest
power-down mode possible.
• In high-performance systems with dense packaging (that is, a constrained thermal
design), the power-down mode should be considered in order to reduce heating
and avoid DDR throttling.
The idle timer expiration count defines the number of DCLKs for which a rank must be
idle before it enters the selected power mode. The shorter this timer is set, the more
opportunities the IMC will have to put the DDR in power-down. There is no BIOS hook
to set this register; customers choosing to change its value can do so in the BIOS. For
experiments, this register can be modified in real time if the BIOS does not lock the
IMC registers.
When entering the S0 conditional self-refresh, the processor IA core flushes pending
cycles and then puts the SDRAM ranks that are not used by the processor graphics
into self-refresh. The CKE signals remain LOW so the SDRAM devices perform self-
refresh. The target behavior is to enter self-refresh for package C3 or deeper power
states as long as there are no memory requests to service.
The processor IA core controller can be configured to put the devices in active power
down (CKE de-assertion with open pages) or pre-charge power-down (CKE de-
assertion with all pages closed). Pre-charge power-down provides greater power
savings but has a bigger performance impact, since all pages will first be closed before
putting the devices in power-down mode.
If dynamic power-down is enabled, all ranks are powered up before doing a refresh
cycle and all ranks are powered down at the end of the refresh.
5.2.2.4 DRAM I/O Power Management
Unused signals should be disabled to save power and reduce electromagnetic
interference. This includes all signals associated with an unused memory channel.
Clocks, CKE, ODT, and CS signals are controlled per DIMM rank and will be powered
down for unused ranks.
The I/O buffer for an unused signal should be tri-stated (output driver disabled), the
input receiver (differential sense-amp) should be disabled. The input path should be
gated to prevent spurious results due to noise on the unused signals (typically handled
automatically when input receiver is disabled).
In C3 or deeper power state, the processor internally gates VDDQ and VDD2 for the
majority of the logic to reduce idle power while keeping all critical DDR pins such as
CKE and VREF in the appropriate state.
In C7 or deeper power state, the processor internally gates VCCSA for all non-critical
state to reduce idle power.
In C-state transitions, the DDR does not go through training mode and will restore the
previous training information.
§§
6 USB-C* Sub System
The USB-C sub-system supports the DPoC (DisplayPort over Type-C) protocols on all
processor lines. The USB-C sub-system can also be configured as native DisplayPort
or HDMI v2.0b interfaces; for more information, refer to Chapter 10, “Display”.
Note: If USB4 (20 Gbps) only solutions are implemented, Thunderbolt™ 3 compatibility as
defined by the USB4/USB-PD specifications and 15 W of bus power are still recommended.
Note: The USB-C sub-system supports two USB4 routers; each router can support up to two
Type-C ports.
TCP 3 N/A
Notes:
1. Supported on a Type-C or native connector.
2. HDMI v2.0b is supported only on a native connector.
3. USB 3 supported link rates:
a. USB 3 Gen 1x1 (5 Gbps)
b. USB 3 Gen 2x1 (10 Gbps)
4. USB4 operating link rates (including both rounded and non-rounded modes for Thunderbolt™ 3 compatibility):
a. USB 4 Gen 2x2 (20 Gbps)
b. USB 4 Gen 3x2 (40 Gbps)
c. 10.3125 Gbps, 20.625 Gbps - compatible with Thunderbolt™ 3 non-rounded modes.
5. The USB 2 interface is supported over the Type-C connector, sourced from the PCH.
6. The USB Type-A connector is not supported.
7. A port group is defined as two ports sharing the same USB4 router; each router supports up to two display interfaces.
8. A display interface can be connected directly to a DP/HDMI/Type-C port or through a USB4 router (tunneled) on a Type-C
connector.
9. If, within the same group, one port is configured as USB4 and the other as a DP/HDMI fixed connection, each port will
support a single display interface.
USB4 + USB4: Both lanes operate at Gen 2 (10G) or Gen 3 (20G) and also support non-
rounded frequencies (10.3125G / 20.625G) for TBT3 compatibility.
USB3 + DPx2 / DPx2 + USB3: Any of HBR3/HBR2/HBR1/RBR for DP, and USB 3.2 (10 Gbps).
DPx4: Both lanes at the same DP rate; no support for 2x DPx2 on a USB-C connector.
PCIe* Gen3/2/1: No PCIe* native support.
USB4 + any other protocol: No support for USB4 together with any other protocol.
USB4 controllers can be implemented in various systems such as PCs, laptops and
tablets, or devices such as storage, docks, displays, home entertainment, cameras,
computer peripherals, high end video editing systems, and any other PCIe based device
that can be used to extend system capabilities outside of the system's box.
The integrated connection maximum data rate is 20.625 Gbps per lane but supports
also 20.0 Gbps, 10.3125 Gbps, and 10.0 Gbps and is compatible with older
Thunderbolt™ device speeds.
If a device (for example, a USB3 mouse) is connected to the computer, the computer
will act as the host and the xHCI inside the CPU will be activated.
The xHCI controller supports link rates of up to USB 3.2 Gen 2x1 (10G).
The xDCI controller supports link rates of up to USB 3.2 Gen 1x1 (5G).
Note: These controllers are instantiated in the processor die as separate PCI functions
for the USB-C* capable ports.
§§
Table 7-1. PCI Express* 4-lane Bifurcation and Lane Reversal Mapping
Columns: Bifurcation | Link Width (0:6:0) | CFG Signals CFG[14] | Lanes 0 1 2 3

1x4          | x4 | 1 | 0 1 2 3
1x4 Reversed | x4 | 0 | 3 2 1 0

Note: PCIe 060 is a single x4 port without bifurcation capabilities; thus, bifurcation pin straps are not
applicable.
The H-processor x16 port supports the configurations shown in the following table:
Table 7-2. PCI Express* 16-lane Bifurcation and Lane Reversal Mapping
Columns: Bifurcation | Link Width (0:1:0 | 0:1:1 | 0:1:2) | CFG[6] CFG[5] CFG[2] | Lanes 0..15

2x8 (PCIe 010 + PCIe 011 controllers):
  x8 | x8 | N/A | 1 0 1 | 0 1 2 3 4 5 6 7 | 0 1 2 3 4 5 6 7
2x8 Reversed:
  x8 | x8 | N/A | 1 0 0 | 7 6 5 4 3 2 1 0 | 7 6 5 4 3 2 1 0
1x8+2x4 (PCIe 010 + PCIe 011 + PCIe 012 controllers):
  x8 | x4 | x4 | 0 0 1 | 0 1 2 3 4 5 6 7 | 0 1 2 3 | 0 1 2 3
1x8+2x4 Reversed:
  x8 | x4 | x4 | 0 0 0 | 3 2 1 0 | 3 2 1 0 | 7 6 5 4 3 2 1 0
Notes:
1. For CFG bus details, refer to Section 6.4.
2. Support is also provided for narrower widths and for devices with a lower number of lanes (for example, usage in a x4
configuration); however, further bifurcation is not supported.
3. If more than one device is connected, the device with the highest lane count should always be connected to the
lower lanes, as follows:
— Connect lane 0 of the 1st device to lane 0 (PCIe 010 controller).
— Connect lane 0 of the 2nd device to lane 8 (PCIe 011 controller).
— Connect lane 0 of the 3rd device to lane 12 (PCIe 012 controller).
For example:
a. When using 1x8 + 2x4, the 8-lane device should use lanes 0:7.
b. When using 1x4 + 1x2, the 4-lane device should use lanes 0:3, and the 2-lane device should use lanes 8:9.
c. When using 1x4 + 1x2 + 1x1, the 4-lane device should use lanes 0:3, the 2-lane device should use lanes 8:9,
and the 1-lane device should use lane 12.
4. For reversed lanes, for example: when using 1x8, the 8-lane device should use lanes 8:15, so lane 15 will be
connected to lane 0 of the device.
TC/VC Mapping - Allows mapping Traffic Classes to different internal virtual channels.
While default configuration may apply to most use cases, the use of certain Traffic Class
may impact performance. This capability should be enabled using the BIOS.
The following table summarizes the transfer rates and theoretical bandwidth of PCI
Express* link.
Table 7-3. PCI Express* Maximum Transfer Rates and Theoretical Bandwidth
Columns: PCI Express* Generation | Encoding | Maximum Transfer Rate [GT/s] |
Theoretical Bandwidth [GB/s]: x4 (all processor lines) | x8 (H-processor line) | x16 (H-processor line)

Note: Theoretical BW is the maximum BW during data streaming, without considering utilization factor and overheads.
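The theoretical bandwidth figures can be reproduced from the transfer rate, encoding efficiency, and lane count. The following C sketch applies the standard PCIe encodings (8b/10b for Gen1/Gen2, 128b/130b for Gen3/Gen4), ignoring protocol overhead and utilization as the note above states:

```c
#include <stdio.h>

/* Theoretical PCIe bandwidth: transfer rate x encoding efficiency
 * x lanes / 8 bits-per-byte. */
int main(void)
{
    double rate[] = {2.5, 5.0, 8.0, 16.0};       /* GT/s, Gen1..Gen4 */
    double eff[]  = {0.8, 0.8, 128.0 / 130, 128.0 / 130};
    int lanes[] = {4, 8, 16};

    for (int g = 0; g < 4; g++)
        for (int l = 0; l < 3; l++)
            printf("Gen%d x%-2d: %.2f GB/s\n", g + 1, lanes[l],
                   rate[g] * eff[g] * lanes[l] / 8.0);
    return 0;
}
```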
Note: The polarity inversion does not imply direction inversion or direction reversal; that is,
the Tx differential pair from one device must still connect to the Rx differential pair on
the receiving device, per the PCIe* Base Specification. Polarity Inversion is not the
same as “PCI Express* Controller Lane Reversal”.
The PCI Express* configuration uses standard mechanisms as defined in the PCI Plug-
and-Play specification. The processor PCI Express* port supports Gen 4 at 16 GT/s
using 128b/130b encoding.
The four lanes port can operate at 2.5 GT/s, 5 GT/s, 8 GT/s or 16 GT/s.
The PCI Express* architecture is specified in three layers – Transaction Layer, Data Link
Layer, and Physical Layer. Refer to the PCI Express Base Specification 4.0 for details of
PCI Express* architecture.
Figure: PCI Express* device structure - a PCI-compatible Host Bridge device (Device 0)
and a PCI-PCI Bridge representing the PCI Express* (PEG) root ports (Device 1),
alongside DMI.
DMI
When a module is removed (using the physical layer detection), the root port clears
SLSTS.PDS and sets SLSTS.PDC. If SLCTL.PDE and SLCTL.HPE are both set, the root
port will also generate an interrupt.
Additionally, a BIOS workaround for hot-plug can be supported by setting MPC.HPME.
When this bit is set, hot-plug events can cause SMI status bits in SMSCS to be set.
Supported hot-plug events and their corresponding SMSCS bits are:
When any of these bits are set, SMI# will be generated. These bits are set regardless of
whether interrupts or SCI are enabled for hot-plug events. The SMI# may occur
concurrently with an interrupt or SCI.
Notes:
• SMI refers to System Management Interrupt
• SLSTS - Slot Status
• SLCTL - Slot Control
§§
Direct Media Interface (DMI) connects the processor and the PCH.
Main characteristics:
• 8 lanes Gen 3 DMI support
• Reduced 4 lane DMI support
• DC coupling - no capacitors between the processor and the PCH
• PCH end-to-end lane reversal across the link
• Half-Swing support (low-power/low-voltage)
Downstream transactions that had been successfully transmitted across the link prior
to the link going down may be processed as normal. No completions from downstream,
non-posted transactions are returned upstream over the DMI link after a link down
event.
§§
The processor graphics architecture delivers a high dynamic range of scaling to
address segments spanning low power to high power, increased performance per watt,
and support for the next generation of APIs. The Xe scalable architecture is partitioned
by usage domains along Render/Geometry, Media, and Display. The architecture also
delivers very low-power video playback and next generation analytics and filters for
imaging-related applications. The new graphics architecture includes 3D compute
elements, a multi-format hardware-assisted decode/encode pipeline, and a Mid-Level
Cache (MLC) for superior high definition playback, video quality, and improved 3D
performance and media.
Note: HEVC and VP9 additionally support 10 bpc and YCbCr 4:2:2 or 4:4:4 profiles. Refer to
the support matrix below for additional details.
The HW decode is exposed by the graphics driver using the following APIs:
• Direct3D* 9 Video API (DXVA2)
• Direct3D11 Video API
• Intel Media SDK
• MFT (Media Foundation Transform) filters.
• Intel VA API
MPEG2: Main profile; Main/High level; up to 1080p
WMV9: Simple/Main/Advanced profile; Simple/Main/High level (Advanced: L3); up to 3840x3840
AVC/H264: Main/High profile; L5.2; 4:2:0 8-bit; up to 4K@60
HEVC/H265: Main 12, Main 422 10, Main 422 12, Main 444, Main 444 10, Main 444 12,
SCC main, SCC main 10, SCC main 444, SCC main 444 10 profiles; L6.2; up to
5K@60 / 8K@60 depending on profile
Expected performance:
• More than 16 simultaneous decode streams @ 1080p.
Note: Actual performance depends on the processor SKU, content bit rate, and memory
frequency. Hardware decode for H264 SVC is not supported.
The HW encode is exposed by the graphics driver using the following APIs:
• Intel® Media SDK
• MFT (Media Foundation Transform) filters
AVC/H264: High/Main profile; L5.1; up to 2160p (4K)
HEVC/H265: Main, Main10, Main 4:2:2 10, Main 4:4:4, Main 4:4:4 10 profiles; L5.1;
up to 4320p (8K), 16Kx4K at higher memory frequency
The HW video processing is exposed by the graphics driver using the following APIs:
• Direct3D* 9 Video API (DXVA2).
• Direct3D* 11 Video API.
• Intel® Media SDK.
• MFT (Media Foundation Transform) filters.
• Intel® CUI SDK.
• Intel VA API
Note: Not all features are supported by all the above APIs. Refer to the relevant
documentation for more details.
§§
MIPI DSI: MIPI* Display Serial Interface (DSI) Specification Version 1.3
eDP* up to HBR3; MIPI DSI up to 2.5 Gbps
DDI B: DP* up to HBR2; HDMI* up to 5.94 Gbps
Notes:
1. Dual low-power embedded panels are supported (each can be eDP and/or MIPI DSI).
a. PSR2 can be supported only on a single low-power display.
b. The highest package C-state supported for a dual embedded display configuration is PC8.
2. DDI - Digital Display Interface.
3. Each of the four TCP ports can be implemented as HDMI, DP, or DPoC (DisplayPort over Type-C).
4. DPIPx are DisplayPort* Rx ports referred to as DP-in ports; for more information, refer to the DP-IN section.
5. MIPI DSI is supported on the UP3 processor family but not fully validated.
Note: For port availability in each of the processor lines, refer to Table 10-1, “Display Ports
Availability and Link Rate”.
A DisplayPort* consists of a Main Link (four lanes), Auxiliary channel, and a Hot-Plug
Detect signal. The Main Link is a unidirectional, high-bandwidth, and low-latency
channel used for transport of isochronous data streams such as uncompressed video
and audio. The Auxiliary Channel (AUX CH) is a half-duplex bi-directional channel used
for link management and device control. The Hot-Plug Detect (HPD) signal serves as an
interrupt request from the sink device to the source device.
DisplayPort* supports DisplayPort* Alt mode over Type-C and DP tunneling via TBT.
Refer to Chapter 6, “USB-C* Sub System” for DisplayPort* Alt mode support.
Notes:
1. All of the above is related to a bit depth of 24.
2. The data rate for a given video mode can be calculated as: Data Rate = Pixel Frequency * Bit Depth.
3. The bandwidth requirements for a given video mode can be calculated as: Bandwidth = Data Rate * 1.25
(for 8b/10b coding overhead).
4. The link bandwidth depends on whether the standard uses reduced blanking.
If the standard is not reduced blanking, the expected bandwidth may be higher.
For more details, refer to VESA and Industry Standards and Guidelines for Computer Display Monitor
Timing (DMT), Version 1.0.
5. To calculate which resolutions can be supported in MST configurations, follow the guidelines below
(see also the sketch after these notes):
a. Identify the link bandwidth column according to the requested display resolution.
b. Sum the bandwidth for the two or three displays accordingly, and make sure the final result is
below 21.6 Gbps (for example, the 4-lane HBR2 bit rate).
For example:
a. Docking two displays: 3840x2160@60 Hz + 1920x1200@60 Hz = 16 + 4.62 = 20.62 Gbps
[Supported]
b. Docking three displays: 3840x2160@30 Hz + 3840x2160@30 Hz + 1920x1080@60 Hz = 7.88
+ 7.88 + 4.16 = 19.92 Gbps [Supported]
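The calculations in notes 2, 3, and 5 can be reproduced with a short sketch. The pixel clocks below are illustrative reduced-blanking values (not taken from this datasheet); the 21.6 Gbps budget corresponds to 4 lanes at HBR2:

```c
#include <stdio.h>

/* DisplayPort bandwidth sketch: data rate = pixel clock x bit depth;
 * link bandwidth = data rate x 1.25 (8b/10b overhead). The MST
 * budget check sums the displays against 21.6 Gbps. */
int main(void)
{
    double px_mhz[] = {533.25, 154.0};  /* 3840x2160@60, 1920x1200@60 */
    double budget_gbps = 21.6, total = 0.0;

    for (int i = 0; i < 2; i++) {
        double gbps = px_mhz[i] * 24 * 1.25 / 1000.0;   /* 24 bpp */
        total += gbps;
        printf("display %d: %.2f Gbps\n", i, gbps);
    }
    printf("total %.2f Gbps -> %s\n", total,
           total < budget_gbps ? "supported" : "not supported");
    return 0;
}
```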
DP* with DSC 5120x3200 120 Hz 30 bpp 5120x3200 120 Hz 30 bpp 5120x3200 120 Hz 30 bpp
7680x4320 60 Hz 30 bpp 7680x4320 60 Hz 24 bpp 7680x4320 60 Hz 30 bpp
Notes:
1. Maximum resolution is based on the implementation of 4 lanes at HBR3 link data rate.
2. bpp - bit per pixel.
3. Resolution support is subject to memory BW availability.
HDMI* includes three separate communications channels: TMDS, DDC, and the
optional CEC (consumer electronics control). CEC is not supported on the processor. As
shown in the following figure, the HDMI* cable carries four differential pairs that make
up the TMDS data and clock channels. These channels are used to carry video, audio,
and auxiliary data. In addition, HDMI carries a VESA DDC. The DDC is used by an
HDMI* Source to determine the capabilities and characteristics of the Sink.
Audio, video, and auxiliary (control/status) data is transmitted across the three TMDS
data channels. The video pixel clock is transmitted on the TMDS clock channel and is
used by the receiver for data recovery on the three data channels. The digital display
data signals driven natively through the PCH are AC-coupled and need level shifting to
convert the AC-coupled signals to HDMI*-compliant digital signals. The processor
HDMI* interface is designed in accordance with the High-Definition Multimedia
Interface specification.
Figure: HDMI* overview. The HDMI Source (processor HDMI Tx) connects to the HDMI
Sink (HDMI Rx) through TMDS Data Channels 0-2, the TMDS Clock Channel, Hot-Plug
Detect, the Display Data Channel (DDC), and the optional CEC line.
HDMI 1.4 4 Kx2 K 24-30 Hz 24 bpp 4 Kx2 K 24-30 Hz 24 bpp 4 Kx2 K 24-30 Hz 24 bpp
Notes:
1. bpp - bits per pixel.
2. Resolution support is subject to memory BW availability.
3. HDMI 2.1 can be supported using an LSPCON (DP1.4 to HDMI 2.1 protocol converter).
eDP* with DSC5 5120x3200 120 Hz 30 bpp 5120x3200 120 Hz 30 bpp 5120x3200 120 Hz 30 bpp
7680x4320 60 Hz 30 bpp 7680x4320 60 Hz 24 bpp 7680x4320 60 Hz 30 bpp
Notes:
1. Maximum resolution is based on the implementation of 4 lanes at the HBR3 link data rate.
2. PSR2 is supported for up to 5K resolutions.
3. bpp - bits per pixel.
4. Resolution support is subject to memory BW availability.
5. High resolutions are supported; validation depends on panel market availability.
Notes:
1. bpp - bit per pixel.
2. Resolution support is subject to memory BW availability.
The processor will continue to support Silent stream. A Silent stream is an integrated
audio feature that enables short audio streams, such as system events to be heard
over the HDMI* and DisplayPort* monitors. The processor supports silent streams over
the HDMI and DisplayPort interfaces at 44.1 kHz, 48 kHz, 88.2 kHz, 96 kHz, 176.4 kHz,
and 192 kHz sampling rates and silent multi-stream support.
Each stream transmitted from the discrete GPU towards the DP-IN receiver interface
can be internally routed to each of the USB-C* sub-system ports, as long as a Type-C
solution has been implemented:
• DPoC port - DisplayPort over Type-C.
• USB4 port - DisplayPort tunneled over USB4.
The DP-IN interface supports VESA* LTTPR (Link Training Tunable PHY Repeater).
Notes:
1. DP-IN is supported only in the H processor line.
2. DP-IN requires an external display source that supports VESA* LTTPR (Link Training
Tunable PHY Repeater). The following modes are supported:
i. Non-transparent mode - the recommended mode.
ii. Transparent mode - this mode is limited to TCP ports that are
connected through a BBR re-timer.
Figure: DP-IN connectivity. The discrete graphics (dGfx) display output enters the TGL
SoC at the DP-in (DPIPx) receiver and, together with PCIe, is routed through the USB4
controller and FIA to the Type-C PHYs in the Type-C subsystem.
Note: Supported for DisplayPort over Type-C (DPoC) and DisplayPort* tunneling via
Thunderbolt™ on each of the USB-C* ports.
§§
Port B Lane 0
Port B Lane 1
Port B Clock x4 x4
Port B Lane 2
Port B Lane 3
Port C Clock
Port C Lane 0
Port C Lane 1 x4 x4
Port C Lane 2
Port C Lane 3
Port E Lane 0 x2
Port E Lane 1
Port F Clock x4
Port F Lane 0 x2
Port F Lane 1
Port B Lane 0
Port B Lane 1
Port B Clock x4 x4
Port B Lane 2
Port B Lane 3
Port C Clock
Port C Lane 0
Port C Lane 1 x4 x4
Port C Lane 2
Port C Lane 3
Port E Lane 0 x2
Port E Lane 1
Port F Clock x4
Port F Lane 0 x2
Port F Lane 1
Port G Clock
Port G Lane 0 x2 x2
Port G Lane 1
Port H Lane 0
Not Used x1
Port H Clock
Notes:
1. Ports G and H available on UP4 only.
2. Port E,F selection of configuration 1 or 2 is orthogonal to Port G, H selection of
configuration 1 or 2.
Port A Clock
Port B Lane 0 x2
Port B Lane 1
x4
Port B Clock
§§
This chapter describes the processor signals. They are arranged in functional groups
according to their associated interface or category. The notations in the following table
are used to describe the signal type.
The signal description also includes the type of buffer used for the particular signal
(refer to the following table).
Direction:
O - Output only
I - Input only
N/A - Not applicable (mostly for power rails and RSVD signals)
Buffer Type:
DMI, CMOS, AUDIO, DP/HDMI, OD (Open Drain), Power, Ground
Link Type:
SE - Single Ended
DIFF - Differential pair
Notes:
1. Qualifier for a buffer type.
2. CMOS - Complementary Metal Oxide Semiconductor
3. GTL - Gunning Transceiver Logic
4. DP - DisplayPort
5. PECI - Platform Environment Control Interface
6. Async - Signal is not related to any clock in the system
7. DDR - Double Data Rate Synchronous Dynamic Random Access Memory
8. LPDDR - Low Power DDR
9. In some cases, I/O may be split into: I = GTL, O = OD
Signal Name | Description | Dir | Buffer Type | Link Type | Availability

DDR{0-4,6,7}_DQ[1:0][7:0] | Data Buses: data signals interface to the SDRAM data
buses. Example: DDR0_DQ1[5] refers to DDR channel 0, byte 1, bit 5. | I/O | LPDDR4x
| SE | UP3/UP4 Processor Lines

DDR{0-4,6,7}_DQSP[1:0], DDR{0-4,6,7}_DQSN[1:0] | Data Strobes: differential data
strobe pairs. The data is captured at the crossing point of DQS during read and write
transactions. | O | LPDDR4x | Diff | UP3/UP4 Processor Lines

DDR{0-4,6,7}_CLK_P, DDR{0-4,6,7}_CLK_N | SDRAM Differential Clock: differential
clock signal pairs, one pair per channel and package. The crossing of the positive edge
and the negative edge of its complement is used to sample the command and control
signals on the SDRAM. | O | LPDDR4x | Diff | UP3/UP4 Processor Lines

DDR{0-4,6,7}_CKE[1:0] | Clock Enable (one per rank): used to initialize the SDRAMs
during power-up, to power down SDRAM ranks, and to place all SDRAM ranks into and
out of self-refresh during STR. | O | LPDDR4x | SE | UP3/UP4 Processor Lines

DDR{0-4,6,7}_CS[1:0] | Chip Select (one per rank): used to select particular SDRAM
components during the active state. There is one chip select for each SDRAM rank.
The chip select signal is active high. | O | LPDDR4x | SE | UP3/UP4 Processor Lines

DDR{0-4,6,7}_CA[5:0] | Command Address: provides the multiplexed command and
address to the SDRAM. | O | LPDDR4x | SE | UP3/UP4 Processor Lines

DRAM_RESET# | Memory Reset | O | CMOS | SE | UP3/UP4 Processor Lines
Signal Name | Description | Dir | Buffer Type | Link Type | Availability

CFG_RCOMP | Configuration Resistance Compensation | I | N/A | SE | All Processor Lines
PROC_POPIRCOMP | POPIO Resistance Compensation | I | N/A | SE | UP3/UP4 Processor Lines
Signal Name | Description | Dir | Buffer Type | Link Type | Availability
DDIA_TXP[3:0]/DDIA_TXN[3:0], DDIB_TXP[3:0]/DDIB_TXN[3:0] | Digital Display
Interface Transmit: DisplayPort and HDMI differential pairs | O | DP*/HDMI | Diff |
All Processor Lines

DDIA_AUX_P/N, DDIB_AUX_P/N | Digital Display Interface DisplayPort Auxiliary:
half-duplex, bidirectional channel consisting of one differential pair per channel |
I/O | DP* | Diff | All Processor Lines
Signal Name | Description | Dir | Buffer Type | Link Type | Availability

DPIP[3:0]_RXP/N[3:0] | DisplayPort* Receiver: DisplayPort differential pairs | I |
DP* | Diff | H Processor Line

DPIP[3:0]_AUX_P/N | DP-IN DisplayPort Auxiliary: half-duplex, bidirectional channel
consisting of one differential pair per channel | I/O | DP* | Diff | H Processor Line

DPIP[3:0]_RCOMP | I/O compensation resistor, supporting the DP* channel | N/A |
Analog | SE | H Processor Line

DPIP[3:0]_HPD | DisplayPort* Hot-Plug Detect | O | DP* | SE | H Processor Line
TCP[2:0]_TX_P[1:0], TCP[2:0]_TX_N[1:0] | TX Data Lane | O | TCP | Diff | All Processor Lines
TCP[3]_TX_P[1:0], TCP[3]_TX_N[1:0] | TX Data Lane | O | TCP | Diff | UP3/H Processor Lines
TCP[2:0]_AUX_P, TCP[2:0]_AUX_N | Common Lane AUX-PAD | I/O | TCP | Diff | All Processor Lines
TCP[3]_AUX_P, TCP[3]_AUX_N | Common Lane AUX-PAD | I/O | TCP | Diff | UP3/H Processor Lines
CSI_A_DP/DN[1:0] | CSI-2 Port A data lanes | I | DPHY | Diff | H Processor Line
CSI_B_DP/DN[3:0] | CSI-2 Port B data lanes | I | DPHY | Diff | All Processor Lines
CSI_C_DP/DN[3:0] | CSI-2 Port C data lanes | I | DPHY | Diff | UP3/UP4 Processor Lines
CSI_E_DP/DN[1:0] | CSI-2 Port E data lanes | I | DPHY | Diff | UP3/UP4 Processor Lines
CSI_F_DP/DN[3:0] | CSI-2 Port F data lanes | I | DPHY | Diff | UP3/UP4 Processor Lines
CSI_G_DP/DN[1:0] | CSI-2 Port G data lanes | I | DPHY | Diff | UP4 Processor Line
CSI_H_DP/DN[0] | CSI-2 Port H data lane | I | DPHY | Diff | UP4 Processor Line
CSI_A_CLK_P/N | CSI-2 Port A clock lane | I | DPHY | Diff | H Processor Line
CSI_B_CLK_P/N | CSI-2 Port B clock lane | I | DPHY | Diff | All Processor Lines
CSI_C_CLK_P/N | CSI-2 Port C clock lane | I | DPHY | Diff | UP3/UP4 Processor Lines
CSI_E_CLK_P/N | CSI-2 Port E clock lane | I | DPHY | Diff | UP3/UP4 Processor Lines
CSI_F_CLK_P/N | CSI-2 Port F clock lane | I | DPHY | Diff | UP3/UP4 Processor Lines
CSI_G_CLK_P/N | CSI-2 Port G clock lane | I | DPHY | Diff | UP4 Processor Line
CSI_H_CLK_P/N | CSI-2 Port H clock lane | I | DPHY | Diff | UP4 Processor Line
BCLK_P/N | 100 MHz differential bus clock input to the processor | I | CMOS | Diff | All Processor Lines
CLK_XTAL_P/N | 38.4 MHz differential crystal clock input to the processor | I | CMOS | Diff | All Processor Lines
PCI_BCLKP/N | 100 MHz clock for PCI Express* logic | I | CMOS | Diff | H Processor Line
PROC_TCK | Test Clock: provides the clock input for the processor Test Bus (also
known as the Test Access Port). This signal should be driven low or allowed to float
during power-on reset. | I | GTL | SE | All Processor Lines
PROC_TDI | Test Data In: transfers serial test data into the processor; provides the
serial input needed for JTAG specification support. | I | GTL | SE | All Processor Lines
PROC_TDO | Test Data Out: transfers serial test data out of the processor; provides
the serial output needed for JTAG specification support. | O | OD | SE | All Processor Lines
PROC_TRST# | Test Reset: resets the Test Access Port (TAP) logic. This signal should
be driven low during power-on reset. | I | GTL | SE | All Processor Lines
Note: For buffer type DC specifications, refer to Section 13.2.1.12, “CMOS DC
Specifications” and Section 13.2.1.13, “GTL and OD DC Specification”.
VccIN_AUX | Input to the FIVR, SA, and PCH components | I | Power | All Processor Lines
VccSTG_OUT | VccSTG power rail output | O | Power | UP3 Processor Line

Arbitrary connection of these signals to VCC, VDDQ, VSS, or to any other signal
(including each other) may result in component malfunction or incompatibility with
future processors. Refer to Section 12-8, “GND, RSVD, and NCTF Signals”.
§§
The SVID bus consists of three open-drain signals: clock, data, and alert# to both set
voltage-levels and gather telemetry data from the voltage regulators. Voltages are
controlled per an 8-bit integer value, called a VID, that maps to an analog voltage level.
An offset field also exists that allows altering the VID table. Alert can be used to inform
the processor that a voltage-change request has been completed or to interrupt the
processor with a fault notification.
For VID coding and further information, refer to the IMVP9 PWM Specification and
Serial VID (SVID) Protocol Specification.
13.2 DC Specifications
The processor DC specifications in this section are defined at the processor signal pins,
unless noted otherwise. For pin listing, refer to 11th Generation Intel® Core™ Processor
Line Package Ballout Mechanical Specification.
• The DC specifications for the LPDDR4x/DDR4 signals are listed in the Voltage and
Current Specifications section.
• The Voltage and Current Specifications section lists the DC specifications for the
processor and are valid only while meeting specifications for junction temperature,
clock frequency, and input voltages. Read all notes associated with each parameter.
• AC tolerances for all rails include voltage transients and voltage regulator voltage
ripple up to 1 MHz. Refer to the additional guidance for each rail.
Voltage Range, Operating Voltage for Processor Operating Mode | H Processor Line | 0 - 2.05 V (notes 1,2,3,7,12)

IccTDC, Thermal Design Current (TDC) for the processor VccIN rail | - | - | - | A (note 9)

DC_LL, loadline slope within the VR regulation loop capability:
  UP3/UP3-Refresh/H35 Processor Lines | 0 | - | 2.0 | mΩ (notes 10,13,14)
  UP4 Processor Line | 0 | - | 2.0 | mΩ (notes 10,13,14)
  H Processor Line (45W), 8-Core GT1 | 0 | 1.5 | 1.7 | mΩ (notes 10,13,14,15)
  H Processor Line (45W), 6-Core GT1 | 0 | 1.5 | - | mΩ (notes 10,13,14,15)

AC_LL3, AC loadline (<1 MHz):
  UP3/UP3-Refresh/H35 4-Core Processor Lines | - | - | 4.4 | mΩ (notes 10,13,14)
  H Processor Line (45W), 8-Core GT1 | - | - | 1.7 | mΩ (notes 10,13,14)
  H Processor Line (45W), 6-Core GT1 | - | - | 2.0 | mΩ (notes 10,13,14)

T_OVS_TDP_MAX, maximum overshoot time at TDP/virus mode | - | - | 500 | μs
V_OVS TDP_MAX/virus_MAX, maximum overshoot at TDP/virus mode | - | - | 10 | %
Notes:
1. Unless otherwise noted, all specifications in this table are based on estimates and simulations or empirical data. These
specifications will be updated with characterized data from silicon measurements at a later date.
2. Each processor is programmed with a maximum valid voltage identification value (VID) that is set at manufacturing and cannot be
altered. Individual maximum VID values are calibrated during manufacturing such that two processors at the same frequency may
have different settings within the VID range. Note that this differs from the VID employed by the processor during a power
management event (Adaptive Thermal Monitor, Enhanced Intel SpeedStep Technology, or low-power states).
3. The voltage specification requirements are measured across Vcc_SENSE and Vss_SENSE as near as possible to the processor. The
measurement needs to be performed with a 20 MHz bandwidth limit on the oscilloscope, 1.5 pF maximum probe capacitance, and
1 Ω minimum impedance. The maximum length of the ground wire on the probe should be less than 5 mm. Ensure external noise
from the system is not coupled into the oscilloscope probe.
4. The processor VccIN VR is to be designed to electrically support this current.
5. The processor VccIN VR is to be designed to thermally support this current indefinitely.
6. Long-term reliability cannot be assured if tolerance, ripple, and core noise parameters are violated.
7. Long-term reliability cannot be assured in conditions above or below the maximum/minimum functional limits.
8. PSx refers to the voltage regulator power state as set by the SVID protocol.
9. LL measured at sense points.
10. The Typ column represents IccMAX for commercial applications; it is NOT a specification but a characterization of limited samples
using a limited set of benchmarks, and it can be exceeded.
11. Operating voltage range in steady state.
12. LL specification values should not be exceeded. If exceeded, power, performance, and reliability penalties are expected.
13. Load Line (AC/DC) should be measured with the VRTT tool and programmed accordingly via the BIOS Load Line override setup
options. AC/DC Load Line BIOS programming directly affects operating voltages (AC) and power measurements (DC). A superior
board design with a shallower AC Load Line can improve power, performance, and thermals compared to boards designed for
POR impedance.
14. The H-processor DC LL specification value is 1.5 mΩ; DC LL can be lower than or equal to AC LL = 1.7 mΩ.
Voltage Range, VCCINAUX | H Processor Line | - | 1.8 | - | V (notes 1,2,3,7)

IccMAX, maximum VccIN_AUX Icc:
  UP3/UP3-Refresh Processor Line (28W), 4/2-Core GT2 | 0 | - | 27 | A (notes 1,2)
  H35 Processor Line (35W), 4-Core GT2 | 0 | - | 27 | A

TOBVCC, voltage tolerance budget:
  UP4 Processor Line | DC Min: -4; AC+DC: ±7.5 | % (notes 1,3,6)
  H Processor Line | AC+DC: +5/-10 | % (notes 1,3,6)

AC_LL, AC loadline | H Processor Line | - | - | 4.0 | mΩ (notes 4,5)
DC_LL, DC loadline:
  H Processor Line | 0 | - | 2.1 | mΩ (notes 4,5)
  UP3/UP3-Refresh/H35 Processor Lines | 0 | - | 3.3 | mΩ (notes 4,5)
  UP4 Processor Line | 0 | - | 2.1 | mΩ (notes 4,5)
Notes:
1. Unless otherwise noted, all specifications in this table are based on estimates and simulations or empirical data. These
specifications will be updated with characterized data from silicon measurements at a later date.
2. Long-term reliability cannot be assured in conditions above or below the maximum/minimum functional limits.
3. The voltage specification requirements are measured across Vcc_SENSE and Vss_SENSE as near as possible to the
processor. The measurement needs to be performed with a 20 MHz bandwidth limit on the oscilloscope, 1.5 pF maximum
probe capacitance, and 1 Ω minimum impedance. The maximum length of the ground wire on the probe should be less than
5 mm. Ensure external noise from the system is not coupled into the oscilloscope probe.
4. LL measured at sense points. LL specification values should not be exceeded. If exceeded, power, performance, and
reliability penalties are expected.
5. The LL values are for reference; the voltage tolerance specification must still be met.
6. Voltage tolerance budget values include ripple.
7. VccIN_AUX has a few voltage points defined by the PCH VID.
8. VccIN_AUX IccMAX includes both the CPU and the PCH; the CPU will consume 27 A and the PCH 15 A.
VDD2 (LPDDR4x), processor I/O supply voltage for LPDDR4x | UP3/UP3-Refresh/UP4
Processor Lines | Typ-5% | 1.115 | Typ+5% | V (notes 3,4,5)
IccMAX_VDD2 (LPDDR4x), maximum current for the VDD2 rail (note 2) | UP3/UP3-
Refresh/UP4 Processor Lines | - | - | 1.5 | A
Notes:
1. Unless otherwise noted, all specifications in this table are based on estimates and simulations or empirical data. These
specifications will be updated with characterized data from silicon measurements at a later date.
2. The current supplied to the DIMM modules is not included in this specification.
3. Includes AC and DC error, where the AC noise is bandwidth limited to under 1 MHz, measured on package pins.
4. No requirement on the breakdown of AC versus DC noise.
5. The voltage specification requirements are measured on package pins as near as possible to the processor with an
oscilloscope set to 100 MHz bandwidth, 1.5 pF maximum probe capacitance, and 1 MΩ minimum impedance. The maximum
length of ground wire on the probe should be less than 5 mm. Ensure external noise from the system is not coupled into the
oscilloscope probe.
6. For voltages less than 1 V, the TOB will be 50 mV.
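As a worked check of the Typ ±5% limits in the table above (a sketch; 1.115 V is the typical value from the table), the VDD2 window evaluates to roughly 1.059 V to 1.171 V:

    /* Sketch: VDD2 (LPDDR4x) min/max window from the Typ-5%/Typ+5%
       limits in the table above (Typ = 1.115 V). */
    #include <stdio.h>

    int main(void)
    {
        double typ_v = 1.115;
        printf("min = %.3f V, max = %.3f V\n", typ_v * 0.95, typ_v * 1.05);
        /* prints: min = 1.059 V, max = 1.171 V */
        return 0;
    }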
VccST    Processor Vcc Sustain supply voltage    UP3/UP3-Refresh/H35 Processor Lines    —    1.025    —    V    3
Notes:
1. Unless otherwise noted, all specifications in this table are based on estimates and simulations or empirical data. These
specifications will be updated with characterized data from silicon measurements at a later date.
2. Long term reliability cannot be assured in conditions above or below Maximum/Minimum functional limits.
3. The voltage specification requirements are measured on package pins as near as possible to the processor with an oscilloscope
set to 100 MHz bandwidth, 1.5 pF maximum probe capacitance, and 1 MΩ minimum impedance. The maximum length of ground
wire on the probe should be less than 5 mm. Ensure external noise from the system is not coupled into the oscilloscope probe.
4. The maximum IccMAX_ST specification is preliminary, based on initial pre-silicon estimation, and subject to change.
5. For voltages less than 1 V, the TOB will be 50 mV.
6. VCCST without PG has a typical value of 1.025 V; some collateral may indicate VCCST = 1.025 V, which represents the typical voltage without PG.
7. VCCST can momentarily rise to 1.15 V in certain scenarios; there is no side effect.
Table 13-5. Vcc Sustain Gated (VccSTG) Supply DC Voltage and Current Specifications
Symbol Parameter Segment Minimum Typical Maximum Units Notes 1,2
VccSTG    Processor Vcc Sustain gated supply voltage    UP3/UP3-Refresh/H35 Processor Lines    —    1.025    —    V    3,7
VccSTG    Processor Vcc Sustain gated supply voltage    UP4/H Processor Lines    —    1.065    —    V    3,6,7
Notes:
1. Unless otherwise noted, all specifications in this table are based on estimates and simulations or empirical data. These
specifications will be updated with characterized data from silicon measurements at a later date.
2. Long term reliability cannot be assured in conditions above or below Maximum/Minimum functional limits.
3. The voltage specification requirements are measured on package pins as near as possible to the processor with an oscilloscope
set to 100 MHz bandwidth, 1.5 pF maximum probe capacitance, and 1 MΩ minimum impedance. The maximum length of ground
wire on the probe should be less than 5 mm. Ensure external noise from the system is not coupled into the oscilloscope probe.
4. The maximum IccMAX_ST specification is preliminary, based on initial pre-silicon estimation, and subject to change.
5. For voltages less than 1 V, the TOB will be 50 mV.
6. VCCSTG without PG has a typical value of 1.025 V; some collateral may indicate VCCSTG = 1.025 V, which represents the typical voltage without PG.
7. VCCSTG can momentarily rise to 1.15 V in certain scenarios; there is no side effect.
Notes:
1. Unless otherwise noted, all specifications in this table are based on estimates and simulations or empirical data. These
specifications will be updated with characterized data from silicon measurements at a later date.
2. Long term reliability cannot be assured in conditions above or below Maximum/Minimum functional limits.
3. The voltage specification requirements are measured on package pins as near as possible to the processor with an oscilloscope
set to 100 MHz bandwidth, 1.5 pF maximum probe capacitance, and 1 MΩ minimum impedance. The maximum length of
ground wire on the probe should be less than 5 mm. Ensure external noise from the system is not coupled into the oscilloscope
probe.
4. The maximum IccMAX_1P8A specification is preliminary, based on initial pre-silicon estimation, and subject to change.
5. For voltages less than 1 V, the TOB will be 50 mV.
Notes:
1. Unless otherwise noted, all specifications in this table apply to all processor frequencies. Timing specifications depend only on the operating frequency of the memory channel, not the maximum rated frequency.
2. VIL is defined as the maximum voltage level at a receiving agent that will be interpreted as a logical low value.
3. VIH is defined as the minimum voltage level at a receiving agent that will be interpreted as a logical high value.
4. VIH and VOH may experience excursions above VDD2. However, input signal drivers should comply with the signal quality
specifications.
5. Pull up/down resistance after compensation (assuming ±5% COMP inaccuracy). Note that BIOS power training may change
these values significantly based on margin/power trade-off.
6. ODT values after COMP (assuming ±5% inaccuracy). BIOS MRC can reduce ODT strength towards
7. The minimum and maximum values for these signals are programmable by BIOS to one of the two sets.
8. SM_RCOMP[x] resistance should be provided on the system board with 1% resistors. SM_RCOMP[x] resistors are to VSS.
Values are pre-silicon estimations and are subject to change.
9. SM_DRAMPWROK must have a maximum of 15 ns rise or fall time over VDD2 * 0.30 ±100 mV and the edge must be
monotonic.
10. SM_VREF is defined as VDD2/2 for DDR4
11. RON tolerance is preliminary and might be subject to change.
12. Maximum-minimum range is correct but center point is subject to change during MRC boot training.
13. Processor may be damaged if VIH exceeds the maximum voltage for extended periods.
Notes:
1. Unless otherwise noted, all specifications in this table apply to all processor frequencies. Timing specifications depend only on the operating frequency of the memory channel, not the maximum rated frequency.
2. VIL is defined as the maximum voltage level at a receiving agent that will be interpreted as a logical low value.
3. VIH is defined as the minimum voltage level at a receiving agent that will be interpreted as a logical high value.
4. VIH and VOH may experience excursions above VDD2. However, input signal drivers should comply with the signal quality
specifications.
5. Pull up/down resistance after compensation (assuming ±5% COMP inaccuracy). Note that BIOS power training may change
these values significantly based on margin/power trade-off.
6. ODT values after COMP (assuming ±5% inaccuracy). BIOS MRC can reduce ODT strength towards
7. The minimum and maximum values for these signals are programmable by BIOS to one of the two sets.
8. SM_RCOMP[x] resistance should be provided on the system board with 1% resistors. SM_RCOMP[x] resistors are to VSS.
Values are pre-silicon estimations and are subject to change.
9. SM_DRAMPWROK must have a maximum of 15 ns rise or fall time over VDD2 * 0.30 ±100 mV and the edge must be
monotonic.
10. SM_VREF is defined as VDD2/2 for DDR4/LPDDR4x
11. RON tolerance is preliminary and might be subject to change.
12. Maximum-minimum range is correct but center point is subject to change during MRC boot training.
13. Processor may be damaged if VIH exceeds the maximum voltage for extended periods.
Notes:
1. Refer to the PCI Express Base Specification for more details.
2. Low impedance defined during signaling. Parameter is captured for 5.0 GHz by RLTX-DIFF.
3. PEG_RCOMP resistance should be provided on the system board with 1% resistors. COMP resistors are to VCCIO_OUT.
PEG_RCOMP- Intel allows using 24.9 Ω 1% resistors.
4. DC impedance limits are needed to ensure Receiver detect.
5. The Rx DC Common Mode Impedance should be present when the Receiver terminations are first enabled to ensure that Receiver Detect occurs properly. Compensation of this impedance can start immediately, and the Rx Common Mode Impedance (constrained by RLRX-CM to 50 Ω ±20%) should be within the specified range by the time Detect is entered.
Notes:
1. Value when driving into load impedance anywhere in the ZID range.
2. A transmitter should minimize ΔVOD and ΔVCMTX(1,0) in order to minimize radiation and optimize signal integrity.
0.95 — 1.3 V 2
Notes:
1. Applicable when the supported data rate <= 1.5 Gbps.
2. Applicable when the supported data rate > 1.5 Gbps.
3. Though no maximum value for ZOLP is specified, the LP transmitter output impedance shall ensure the TRLP/TFLP
specification is met.
4. Voltage overshoot and undershoot beyond the VPIN range are only allowed during a single 20 ns window after any LP-0 to LP-1 transition or vice versa. In all other situations the signal must stay within the VPIN range.
5. This value includes ground shift.
Notes:
1. VccIO_OUT depends on segment.
2. VOL and VOH levels depend on the level chosen by the Platform.
Notes:
1. VccIO_OUT depends on segment.
2. x refers to ports 0-3.
Notes:
1. COMP resistance is to VCOMP_OUT.
2. eDP_RCOMP resistor should be provided on the system board.
VIDTH    Differential input high threshold    —    —    70    mV    3
VIDTH    Differential input high threshold    —    —    40    mV    4
VIDTL    Differential input low threshold    -40    —    —    mV    4
Notes:
1. Excluding possible additional RF interference of 100 mV peak sine wave beyond 450 MHz.
2. This table value includes a ground difference of 50 mV between the transmitter and the receiver, the static common-mode level tolerance, and variations below 450 MHz.
3. For devices supporting data rates < 1.5 Gbps.
4. For devices supporting data rates > 1.5 Gbps.
5. Associated Signals: MIPI* CSI2: Refer to MIPI® Alliance D-PHY Specification 1.2.
Notes:
1. Unless otherwise noted, all specifications in this table apply to all processor frequencies.
2. The Vcc referred to in these specifications refers to instantaneous VccST/IO.
3. For VIN between “0” V and VccST. Measured when the driver is tri-stated.
4. VIH may experience excursions above VccST. However, input signal drivers should comply with the signal quality
specifications.
5. VOH and VOL need to comply with the VIL and VIH specifications.
Notes:
1. Unless otherwise noted, all specifications in this table apply to all processor frequencies.
2. The Vcc referred to in these specifications refers to instantaneous VccST/IO.
3. For VIN between 0 V and Vcc. Measured when the driver is tri-stated.
4. VIH and VOH may experience excursions above Vcc. However, input signal drivers should comply with the signal quality
specifications.
VccST nominal levels will vary between processor families. All PECI devices will operate
at the VccST level determined by the processor installed in the system.
Notes:
1. VccST supplies the PECI interface. PECI behavior does not affect VccST minimum/maximum
specifications.
2. The leakage specification applies to powered devices on the PECI bus.
3. The PECI buffer internal pull-up resistance is measured at 0.75 × VccST.
4. Ileak100 represents the worst-case leakage at 100 °C.
The input buffers in both client and host models should use a Schmitt-triggered input
design for improved noise immunity. Use the following figure as a guide for input buffer
design.
Figure 13-1. Input Device Hysteresis (diagram spanning VTTD to PECI Ground, showing the minimum VP, the maximum VN, the minimum hysteresis, and the valid input signal range)
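The hysteresis in Figure 13-1 can be modeled in software: the received logic level changes only when the input rises above VP or falls below VN, and otherwise holds its previous state. The following C sketch illustrates this behavior; the threshold values are placeholder assumptions, not PECI specifications:

    /* Minimal sketch of Schmitt-triggered input behavior: the sampled
       logic level only changes once the input crosses VP (rising) or
       VN (falling). Thresholds are placeholders, not PECI specs. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool peci_input_state(double vin, bool prev_state,
                                 double vp, double vn)
    {
        if (!prev_state && vin >= vp)  /* rising input must exceed VP */
            return true;
        if (prev_state && vin <= vn)   /* falling input must drop below VN */
            return false;
        return prev_state;             /* inside hysteresis band: hold state */
    }

    int main(void)
    {
        const double vp = 0.60, vn = 0.45;  /* placeholder thresholds, volts */
        const double samples[] = { 0.2, 0.5, 0.7, 0.55, 0.5, 0.4, 0.2 };
        bool state = false;
        for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++) {
            state = peci_input_state(samples[i], state, vp, vn);
            printf("vin=%.2f -> %d\n", samples[i], state);
        }
        return 0;
    }

Note how the 0.5 V samples produce different outputs depending on the prior state: that memory inside the VP-VN band is exactly the noise immunity the Schmitt-triggered design provides.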
§§
Halogenated Flame Retardant Free    Yes    Yes    Yes
UP3-Processor Line (1):
    15    Yes (1)    No    0.8/32
    10    Yes    No    0.7/28
    10    No    No    0.7/28
UP4-Processor Line (2):
    5    No    Yes    0.6/24
Notes:
1. If using a backing plate, the recommended maximum back plate thickness is 0.5 mm.
2. At a minimum, corner glue is required.
Static compressive pressure refers to the long-term, steady-state pressure applied to the die by the thermal solution after system assembly is complete.
Transient compressive pressure refers to the pressure on the dice at any moment during the thermal solution assembly/disassembly procedures. Other system procedures, such as repair/rework, can also cause high pressure loading on the die and should be evaluated to ensure these limits are not exceeded.
Note: This is the load and pressure that has been tested by Intel for a single assembly cycle. This metric is a pressure over a 2 mm x 2 mm area.
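As a unit sanity check for compressive loading (a sketch with hypothetical numbers, not limits from this section), pressure is simply force divided by contact area, and N/mm² is numerically equal to MPa:

    /* Sketch: compressive pressure = force / area.
       Hypothetical numbers, not limits from this datasheet. */
    #include <stdio.h>

    int main(void)
    {
        double force_n   = 32.0;       /* hypothetical load, in newtons */
        double area_mm2  = 2.0 * 2.0;  /* 2 mm x 2 mm contact area */
        double press_mpa = force_n / area_mm2;  /* N/mm^2 == MPa */
        printf("pressure = %.1f MPa\n", press_mpa);  /* prints 8.0 MPa */
        return 0;
    }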
TIMESUSTAINED STORAGE    Maximum time: associated with customer shelf life in Intel original sealed moisture barrier bag and/or box    Moisture sensitive devices: 60 months from bag seal date; non-moisture sensitive devices: 60 months from lot date    NA    1, 2, 3
Storage Conditions    Processors in a non-operational state may be installed in a platform, in a tray, boxed, or loose, and may be sealed in an airtight package or exposed to free air. Under these conditions, processor landings should not be connected to any supply voltages, have any I/Os biased, or receive any clocks. Upon exposure to "free air" (that is, unsealed packaging or a device removed from packaging material), the processor should be handled in accordance with the moisture sensitivity labeling (MSL) indicated on the packaging material. Boxed Land Grid Array (LGA) packaged processors are MSL 1 ("unlimited", or unaffected) as they are not heated in order to be inserted in the socket.
Notes:
1. TABSOLUTE STORAGE applies to the un-assembled component only and does not apply to the shipping media, moisture barrier bags, or desiccant. It refers to a component device that is not assembled in a board or socket and is not electrically connected to a voltage reference or I/O signals.
2. Specified temperatures are based on data collected. Exceptions for surface mount re-flow are specified by applicable JEDEC
J-STD-020 and MAS documents. The JEDEC, J-STD-020 moisture level rating and associated handling practices apply to all
moisture sensitive devices removed from the moisture barrier bag.
3. Post-board-attach storage temperature limits are not specified for non-Intel-branded boards. Contact your board manufacturer for storage specifications.
§§
15.1 CPUID
The processor ID and stepping can be identified by the following register contents:
Table 15-1. CPUID Format
SKU    CPUID    Reserved [31:28]    Extended Family [27:20]    Extended Model [19:16]    Reserved [15:14]    Processor Type [13:12]    Family Code [11:8]    Model Number [7:4]    Stepping ID [3:0]
UP4/UP3/H35    806C1h    Reserved    00000000b    1000b    Reserved    00b    0110b    1100b    0001b
UP3-Refresh    806C2h    Reserved    00000000b    1000b    Reserved    00b    0110b    1100b    0010b
H35-Refresh    806C2h    Reserved    00000000b    1000b    Reserved    00b    0110b    1100b    0010b
• The Extended Family, Bits [27:20] are used in conjunction with the Family Code,
specified in Bits[11:8], to indicate whether the processor belongs to the Pentium®,
Celeron®, or Intel® Core™ processor family.
• The Extended Model, Bits [19:16] in conjunction with the Model Number, specified
in Bits [7:4], are used to identify the model of the processor within the processor's
family.
• The Family Code corresponds to Bits [11:8] of the EDX register after RESET, Bits
[11:8] of the EAX register after the CPUID instruction is executed with a 1 in the
EAX register, and the generation field of the Device ID register accessible through
Boundary Scan.
• The Model Number corresponds to Bits [7:4] of the EDX register after RESET, Bits
[7:4] of the EAX register after the CPUID instruction is executed with a 1 in the EAX
register, and the model field of the Device ID register accessible through Boundary
Scan.
• The Stepping ID in Bits [3:0] indicates the revision number of that model.
• When EAX is initialized to a value of '1', the CPUID instruction returns the Extended
Family, Extended Model, Processor Type, Family Code, Model Number and Stepping
ID value in the EAX register. Note that the EDX processor signature value after
reset is equivalent to the processor signature output value in the EAX register.
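The field extraction described above can be sketched in C. This is a minimal sketch using the GCC/Clang <cpuid.h> helper (a compiler facility, not part of this datasheet); the bit fields follow Table 15-1:

    /* Minimal sketch: decode the CPUID leaf 1 signature fields described
       above. Field layout follows Table 15-1. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;

        unsigned int stepping   = (eax >> 0)  & 0xF;   /* bits [3:0]   */
        unsigned int model      = (eax >> 4)  & 0xF;   /* bits [7:4]   */
        unsigned int family     = (eax >> 8)  & 0xF;   /* bits [11:8]  */
        unsigned int type       = (eax >> 12) & 0x3;   /* bits [13:12] */
        unsigned int ext_model  = (eax >> 16) & 0xF;   /* bits [19:16] */
        unsigned int ext_family = (eax >> 20) & 0xFF;  /* bits [27:20] */

        /* e.g., 806C1h on the SKUs listed in Table 15-1 */
        printf("signature: %05Xh\n", eax & 0x0FFFFFFF);
        printf("family %Xh, model %Xh, stepping %u (type %u, ext family %02Xh, ext model %Xh)\n",
               family, model, stepping, type, ext_family, ext_model);
        return 0;
    }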
Cache and TLB descriptor parameters are provided in the EAX, EBX, ECX and EDX
registers after the CPUID instruction is executed with a 2 in the EAX register.
Capabilities Pointer    Reserved    34h
Reserved    38h
H 8 Cores    9A36h
H 6 Cores    9A26h
VMD 0/14/0    9A0Bh
Segment / Core+GT Configuration    Execution Units    Stepping    GT    Device ID    Revision ID
UP4 4+2/2+2    96/80    B0    GT2    9A40h    1h
UP4 4+2/2+2    48    B0    GT2    9A78h    1h
UP3 4+2/2+2    96/80    B0    GT2    9A49h    1h
UP3 4+2/2+2    48    B0    GT2    9A78h    1h
H35 4+2    96/80    B0    GT2    9A49h    1h
UP3-Refresh 4+2    96/80    C0    GT2    9A49h    3h
H35-Refresh 4+2    96/80    C0    GT2    9A49h    3h
H 8+1/6+1    32    R0    GT1    9A60h    1h
H 8+1/6+1    16    R0    GT1    9A68h    1h
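Software that needs to identify the graphics configuration can transcribe the table above into a lookup keyed on Device ID and Revision ID. The following is a hypothetical sketch; the table data is from this section, but the helper itself is illustrative, not an Intel API:

    /* Hypothetical lookup sketch: map the processor graphics PCI Device
       IDs from the table above to segment/configuration strings. */
    #include <stdio.h>

    struct gfx_entry {
        unsigned short device_id;
        unsigned char  revision_id;
        const char    *description;
    };

    static const struct gfx_entry gfx_table[] = {
        { 0x9A40, 0x1, "UP4 4+2/2+2, 96/80 EU, B0, GT2" },
        { 0x9A78, 0x1, "UP4/UP3 4+2/2+2, 48 EU, B0, GT2" },
        { 0x9A49, 0x1, "UP3 4+2/2+2 or H35 4+2, 96/80 EU, B0, GT2" },
        { 0x9A49, 0x3, "UP3-Refresh/H35-Refresh 4+2, 96/80 EU, C0, GT2" },
        { 0x9A60, 0x1, "H 8+1/6+1, 32 EU, R0, GT1" },
        { 0x9A68, 0x1, "H 8+1/6+1, 16 EU, R0, GT1" },
    };

    static const char *lookup_gfx(unsigned short devid, unsigned char rev)
    {
        for (unsigned i = 0; i < sizeof gfx_table / sizeof gfx_table[0]; i++)
            if (gfx_table[i].device_id == devid &&
                gfx_table[i].revision_id == rev)
                return gfx_table[i].description;
        return "unknown";
    }

    int main(void)
    {
        printf("%s\n", lookup_gfx(0x9A49, 0x3));
        return 0;
    }

Note that 9A49h appears for several segments and is disambiguated only by the Revision ID (1h versus 3h), which is why the lookup keys on both fields.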
§§