
Intel® Xeon® Processor E3-1200 Family
Datasheet, Volume 1

This is Volume 1 of 2

June 2011

Document Number: 324970-002


Legal Lines and Disclaimers

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked reserved or undefined. Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

All products, platforms, dates, and figures specified are preliminary based on current expectations, and are subject to change without notice. All dates specified are target dates, are provided for planning purposes only and are subject to change.

This document contains information on products in the design phase of development. Do not finalize a design with this information. Revised information will be published when the product is available. Verify with your local sales office that you have the latest datasheet before finalizing a design.

No computer system can provide absolute security under all conditions. Intel® Trusted Execution Technology (Intel® TXT) requires a computer system with Intel® Virtualization Technology, an Intel TXT-enabled processor, chipset, BIOS, Authenticated Code Modules and an Intel TXT-compatible measured launched environment (MLE). The MLE could consist of a virtual machine monitor, an OS or an application. In addition, Intel TXT requires the system to contain a TPM v1.2, as defined by the Trusted Computing Group and specific software for some uses. For more information, see http://www.intel.com/technology/security/

Intel® Virtualization Technology requires a computer system with an enabled Intel® processor, BIOS, virtual machine monitor (VMM) and, for some uses, certain computer system software enabled for it. Functionality, performance or other benefits will vary depending on hardware and software configurations and may require a BIOS update. Software applications may not be compatible with all operating systems. Please check with your application vendor.

Intel® Active Management Technology requires the computer system to have an Intel® AMT-enabled chipset, network hardware and software, as well as connection with a power source and a corporate network connection. Setup requires configuration by the purchaser and may require scripting with the management console or further integration into existing security frameworks to enable certain functionality. It may also require modification or implementation of new business processes. With regard to notebooks, Intel AMT may not be available or certain capabilities may be limited over a host OS-based VPN or when connecting wirelessly, on battery power, sleeping, hibernating or powered off. For more information, see http://www.intel.com/technology/platform-technology/intel-amt/

Hyper-Threading Technology requires a computer system with a processor supporting HT Technology and an HT Technology-enabled chipset, BIOS and operating system. Performance will vary depending on the specific hardware and software you use. For more information including details on which processors support HT Technology, see http://www.intel.com/info/hyperthreading.

Intel® Turbo Boost Technology requires a PC with a processor with Intel Turbo Boost Technology capability. Intel Turbo Boost Technology performance varies depending on hardware, software and overall system configuration. Check with your PC manufacturer on whether your system delivers Intel Turbo Boost Technology. For more information, see http://www.intel.com/technology/turboboost.

Enhanced Intel® SpeedStep® Technology: See the Processor Spec Finder or contact your Intel representative for more information.

Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. See www.intel.com/products/processor_number for details.

64-bit computing on Intel architecture requires a computer system with a processor, chipset, BIOS, operating system, device drivers and applications enabled for Intel® 64 architecture. Performance will vary depending on your hardware and software configurations. Consult with your system vendor for more information.

Code names featured are used internally within Intel to identify products that are in development and not yet publicly announced for release. Customers, licensees and other third parties are not authorized by Intel to use code names in advertising, promotion or marketing of any product or services and any such use of Intel's internal code names is at the sole risk of the user.

Intel, Intel Xeon, and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries.

*Other names and brands may be claimed as the property of others.

Copyright © 2011, Intel Corporation. All rights reserved.

2 Datasheet, Volume 1


Contents

1 Introduction ..............................................................................................................9
1.1 Processor Feature Details ................................................................................... 11
1.1.1 Supported Technologies .......................................................................... 11
1.2 Interfaces ........................................................................................................ 11
1.2.1 System Memory Support ......................................................................... 11
1.2.2 PCI Express* ......................................................................................... 12
1.2.3 Direct Media Interface (DMI).................................................................... 14
1.2.4 Platform Environment Control Interface (PECI)........................................... 14
1.2.5 Processor Graphics ................................................................................. 15
1.2.6 Intel® Flexible Display Interface (Intel® FDI) ............................................. 15
1.3 Power Management Support ............................................................................... 16
1.3.1 Processor Core....................................................................................... 16
1.3.2 System ................................................................................................. 16
1.3.3 Memory Controller.................................................................................. 16
1.3.4 PCI Express* ......................................................................................... 16
1.3.5 DMI...................................................................................................... 16
1.3.6 Processor Graphics Controller................................................................... 16
1.4 Thermal Management Support ............................................................................ 16
1.5 Package ........................................................................................................... 17
1.6 Terminology ..................................................................................................... 17
1.7 Related Documents ........................................................................................... 19
2 Interfaces................................................................................................................ 21
2.1 System Memory Interface .................................................................................. 21
2.1.1 System Memory Technology Supported ..................................................... 21
2.1.2 System Memory Timing Support............................................................... 22
2.1.3 System Memory Organization Modes......................................................... 23
2.1.3.1 Single-Channel Mode................................................................. 23

2.1.3.2 Dual-Channel Mode - Intel® Flex Memory Technology Mode ........... 23

2.1.4 Rules for Populating Memory Slots ............................................................ 24
2.1.5 Technology Enhancements of Intel® Fast Memory Access (Intel® FMA).......... 24
2.1.5.1 Just-in-Time Command Scheduling.............................................. 24
2.1.5.2 Command Overlap .................................................................... 24
2.1.5.3 Out-of-Order Scheduling ............................................................ 25
2.1.6 Memory Type Range Registers (MTRRs) Enhancement................................. 25
2.1.7 Data Scrambling .................................................................................... 25
2.2 PCI Express* Interface....................................................................................... 25
2.2.1 PCI Express* Architecture ....................................................................... 25
2.2.1.1 Transaction Layer ..................................................................... 27
2.2.1.2 Data Link Layer ........................................................................ 27
2.2.1.3 Physical Layer .......................................................................... 27
2.2.2 PCI Express* Configuration Mechanism ..................................................... 28
2.2.3 PCI Express* Port................................................................................... 28
2.2.4 PCI Express* Lanes Connection................................................................ 29
2.3 Direct Media Interface (DMI)............................................................................... 29
2.3.1 DMI Error Flow....................................................................................... 29
2.3.2 Processor/PCH Compatibility Assumptions.................................................. 29
2.3.3 DMI Link Down ...................................................................................... 30
2.4 Processor Graphics Controller (GT) ...................................................................... 30
2.4.1 3D and Video Engines for Graphics Processing............................................ 31
2.4.1.1 3D Engine Execution Units ......................................................... 31



2.4.1.2 3D Pipeline ...............................................................31
2.4.1.3 Video Engine ............................................................32
2.4.1.4 2D Engine ................................................................32
2.4.2 Processor Graphics Display ......................................................33
2.4.2.1 Display Planes ..........................................................33
2.4.2.2 Display Pipes ............................................................34
2.4.2.3 Display Ports ............................................................34
2.4.3 Intel® Flexible Display Interface ...............................................34
2.4.4 Multi-Graphics Controller Multi-Monitor Support ..........................................34
2.5 Platform Environment Control Interface (PECI) ......................................................35
2.6 Interface Clocking..............................................................................................35
2.6.1 Internal Clocking Requirements ................................................................35
3 Technologies............................................................................................................37
3.1 Intel® Virtualization Technology ..........................................................................37
3.1.1 Intel® VT-x Objectives ............................................................................37
3.1.2 Intel® VT-x Features ...............................................................................38
3.1.3 Intel® VT-d Objectives ............................................................................38
3.1.4 Intel® VT-d Features...............................................................................38
3.1.5 Intel® VT-d Features Not Supported..........................................................39
3.2 Intel® Trusted Execution Technology (Intel® TXT) .................................................40
3.3 Intel® Hyper-Threading Technology .....................................................................40
3.4 Intel® Turbo Boost Technology ............................................................................41
3.4.1 Intel® Turbo Boost Technology Frequency..................................................41
3.4.2 Intel® Turbo Boost Technology Graphics Frequency.....................................41
3.5 Intel® Advanced Vector Extensions (AVX) .............................................................42
3.6 Advanced Encryption Standard New Instructions (AES-NI) ......................................42
3.6.1 PCLMULQDQ Instruction ..........................................................................42
3.7 Intel® 64 Architecture x2APIC .............................................................................42
4 Power Management .................................................................................................45
4.1 Advanced Configuration and Power Interface (ACPI) States Supported ......................46
4.1.1 System States........................................................................................46
4.1.2 Processor Core/Package Idle States...........................................................46
4.1.3 Integrated Memory Controller States .........................................................46
4.1.4 PCIe Link States .....................................................................................46
4.1.5 DMI States ............................................................................46
4.1.6 Processor Graphics Controller States .........................................................47
4.1.7 Interface State Combinations ...................................................................47
4.2 Processor Core Power Management ......................................................................48
4.2.1 Enhanced Intel® SpeedStep® Technology ..................................................48
4.2.2 Low-Power Idle States.............................................................................48
4.2.3 Requesting Low-Power Idle States ............................................................50
4.2.4 Core C-states .........................................................................51
4.2.4.1 Core C0 State ...........................................................51
4.2.4.2 Core C1/C1E State ....................................................51
4.2.4.3 Core C3 State ...........................................................51
4.2.4.4 Core C6 State ...........................................................51
4.2.4.5 C-State Auto-Demotion ..............................................51
4.2.5 Package C-States ...................................................................52
4.2.5.1 Package C0 ..............................................................53
4.2.5.2 Package C1/C1E........................................................53
4.2.5.3 Package C3 State ......................................................54
4.2.5.4 Package C6 State ......................................................54
4.3 IMC Power Management .....................................................................54
4.3.1 Disabling Unused System Memory Outputs.................................................54
4.3.2 DRAM Power Management and Initialization ...............................................55



4.3.2.1 Initialization Role of CKE ............................................................ 56<br />

4.3.2.2 Conditional Self-Refresh ............................................................ 56<br />

4.3.2.3 Dynamic Power-down Operation ................................................. 57<br />

4.3.2.4 DRAM I/O Power Management .................................................... 57<br />

4.4 PCIe* Power Management .................................................................................. 57<br />

4.5 DMI Power Management..................................................................................... 57<br />

4.6 Graphics Power Management .............................................................................. 58<br />

4.6.1 <strong>Intel</strong> ® Rapid Memory Power Management (<strong>Intel</strong> ® RMPM)<br />

(also known as CxSR) ............................................................................. 58<br />

4.6.2 <strong>Intel</strong> ® Graphics Performance Modulation Technology (<strong>Intel</strong> ® GPMT) .............. 58<br />

4.6.3 Graphics Render C-State ......................................................................... 58<br />

4.6.4 <strong>Intel</strong> ® Smart 2D Display Technology (<strong>Intel</strong> ® S2DDT) .................................. 58<br />

4.6.5 <strong>Intel</strong> ® Graphics Dynamic Frequency.......................................................... 59<br />

4.7 Thermal Power Management ............................................................................... 59<br />

5 Thermal Management .............................................................................................. 61<br />

6 Signal Description ................................................................................................... 63<br />

6.1 System Memory Interface .................................................................................. 64<br />

6.2 Memory Reference and Compensation.................................................................. 65<br />

6.3 Reset and Miscellaneous Signals .......................................................................... 66<br />

6.4 PCI Express* Based Interface Signals................................................................... 67<br />

6.5 <strong>Intel</strong> ® Flexible Display Interface Signals ............................................................... 67<br />

6.6 DMI................................................................................................................. 67<br />

6.7 PLL Signals....................................................................................................... 68<br />

6.8 TAP Signals ...................................................................................................... 68<br />

6.9 Error and Thermal Protection .............................................................................. 69<br />

6.10 Power Sequencing ............................................................................................. 69<br />

6.11 <strong>Processor</strong> Power Signals ..................................................................................... 70<br />

6.12 Sense Pins ....................................................................................................... 70<br />

6.13 Ground and NCTF .............................................................................................. 70<br />

6.14 <strong>Processor</strong> Internal Pull Up/Pull Down.................................................................... 71<br />

7 Electrical Specifications ........................................................................................... 73<br />

7.1 Power and Ground Lands.................................................................................... 73<br />

7.2 Decoupling Guidelines ........................................................................................ 73<br />

7.2.1 Voltage Rail Decoupling........................................................................... 73<br />

7.3 <strong>Processor</strong> Clocking (BCLK[0], BCLK#[0]) .............................................................. 74<br />

7.3.1 PLL Power Supply ................................................................................... 74<br />

7.4 V CC Voltage Identification (VID) .......................................................................... 74<br />

7.5 System Agent (SA) VCC VID ............................................................................... 78<br />

7.6 Reserved or Unused Signals................................................................................ 78<br />

7.7 Signal Groups ................................................................................................... 79<br />

7.8 Test Access Port (TAP) Connection....................................................................... 80<br />

7.9 Storage Conditions Specifications ........................................................................ 81<br />

7.10 DC Specifications .............................................................................................. 82<br />

7.10.1 Voltage and Current Specifications............................................................ 82<br />

7.11 Platform Environmental Control Interface (PECI) DC Specifications........................... 87<br />

7.11.1 PECI Bus Architecture ............................................................................. 87<br />

7.11.2 DC Characteristics .................................................................................. 88<br />

7.11.3 Input Device Hysteresis .......................................................................... 88<br />

8 <strong>Processor</strong> Pin and Signal Information...................................................................... 89<br />

8.1 <strong>Processor</strong> Pin Assignments ................................................................................. 89<br />

9 DDR Data Swizzling ............................................................................................... 109<br />



Figures

1-1 Intel® Xeon® Processor E3-1200 Family Platform ........................................................10
2-1 Intel® Flex Memory Technology Operation ..................................................................23
2-2 PCI Express* Layering Diagram.................................................................................26
2-3 Packet Flow through the Layers.................................................................................26
2-4 PCI Express* Related Register Structures in the Processor ............................................28
2-5 PCIe Typical Operation 16 Lanes Mapping....................................................................29
2-6 Processor Graphics Controller Unit Block Diagram ........................................................30
2-7 Processor Display Block Diagram ...............................................................................33
4-1 Power States ..........................................................................................................45
4-2 Idle Power Management Breakdown of the Processor Cores ..........................................49
4-3 Thread and Core C-State Entry and Exit .....................................................................49
4-4 Package C-State Entry and Exit.................................................................................53
7-1 Example for PECI Host-Clients Connection...................................................................87
7-2 Input Device Hysteresis ...........................................................................................88
8-1 Socket Pinmap (Top View, Upper-Left Quadrant) .........................................................90
8-2 Socket Pinmap (Top View, Upper-Right Quadrant) .......................................................91
8-3 Socket Pinmap (Top View, Lower-Left Quadrant) .........................................................92
8-4 Socket Pinmap (Top View, Lower-Right Quadrant) .......................................................93

Tables

1-1 PCIe Supported Configurations in Server/Workstation Products .....................................12
1-2 Related Documents .................................................................................................19
2-1 Supported UDIMM Module Configurations ...................................................................21
2-2 DDR3 System Memory Timing Support.......................................................................22
2-3 Reference Clock ......................................................................................................35
4-1 System States ........................................................................................................46
4-2 Processor Core/Package State Support.......................................................................46
4-3 Integrated Memory Controller States .........................................................................46
4-4 PCIe Link States......................................................................................................46
4-5 DMI States .............................................................................................................47
4-6 Processor Graphics Controller States..........................................................................47
4-7 G, S, and C State Combinations ................................................................................47
4-8 Coordination of Thread Power States at the Core Level .................................................50
4-9 P_LVLx to MWAIT Conversion....................................................................................50
4-10 Coordination of Core Power States at the Package Level ...............................................52
6-1 Signal Description Buffer Types .................................................................................63
6-2 Memory Channel A ..................................................................................................64
6-3 Memory Channel B ..................................................................................................65
6-4 Memory Reference and Compensation........................................................................65
6-5 Reset and Miscellaneous Signals................................................................................66
6-6 PCI Express* Graphics Interface Signals.....................................................................67
6-7 Intel® Flexible Display Interface ...............................................................................67
6-8 DMI - Processor to PCH Serial Interface......................................................................67
6-9 PLL Signals.............................................................................................................68
6-10 TAP Signals ............................................................................................................68
6-11 Error and Thermal Protection ....................................................................................69
6-12 Power Sequencing ...................................................................................................69
6-13 Processor Power Signals...........................................................................................70
6-14 Sense Pins .............................................................................................................70
6-15 Ground and NCTF....................................................................................................70
6-16 Processor Internal Pull Up/Pull Down..........................................................................71
7-1 VR 12.0 Voltage Identification Definition.....................................................................75
7-2 VCCSA_VID Configuration.........................................................................................78

6 Datasheet, Volume 1


7-3 Signal Groups 1...................................................................................................... 79<br />

7-4 Storage Condition Ratings........................................................................................ 81<br />

7-5 <strong>Processor</strong> Core Active and Idle Mode DC Voltage and Current Specifications.................... 82<br />

7-6 <strong>Processor</strong> System Agent I/O Buffer Supply DC Voltage and Current Specifications ........... 83<br />

7-7 <strong>Processor</strong> Graphics VID based (V AXG ) Supply DC Voltage and Current Specifications ........ 84<br />

7-8 DDR3 Signal Group DC Specifications ........................................................................ 84<br />

7-9 Control Sideband and TAP Signal Group DC Specifications ............................................ 85<br />

7-10 PCIe DC Specifications............................................................................................. 86<br />

7-11 PECI DC Electrical Limits.......................................................................................... 88<br />

8-1 <strong>Processor</strong> Pin List by Pin Name ................................................................................. 94<br />

9-1 DDR Data Swizzling Table Channel A .................................................................... 110<br />

9-2 DDR Data Swizzling Table Channel B .................................................................... 111<br />



Revision History<br />

Revision Number | Description | Revision Date<br />

001 | Initial release | February 2011<br />

002 | Added <strong>Intel</strong> ® <strong>Xeon</strong> ® processor <strong>E3</strong>-1290 | June 2011<br />



1 Introduction<br />

The <strong>Intel</strong> ® <strong>Xeon</strong> ® processor <strong>E3</strong>-<strong>1200</strong> family is the next generation of 64-bit, multi-core<br />
desktop processors built on 32-nanometer process technology. Based on a new microarchitecture,<br />
the processor is designed for a two-chip platform consisting of a processor<br />
and a Platform Controller Hub (PCH). The platform enables higher performance, lower<br />
cost, easier validation, and an improved x-y footprint. The processor includes an Integrated<br />
Display Engine, <strong>Processor</strong> Graphics, PCI Express* ports, and an Integrated Memory<br />
Controller. The processor is designed for server/workstation platforms. It supports up<br />
to 12 <strong>Processor</strong> Graphics execution units (EUs) and a range of PCIe configurations. The<br />
processor is offered in an 1155-land LGA package. Figure 1-1 shows an example<br />
server/workstation platform block diagram.<br />

This document provides DC electrical specifications, signal integrity, differential<br />

signaling specifications, pinout and signal definitions, interface functional descriptions,<br />

thermal specifications, and additional feature information pertinent to the<br />

implementation and operation of the processor on its respective platform.<br />

Note: Throughout this document, the <strong>Intel</strong> ® C200 Series Chipset Platform Controller Hub<br />
may also be referred to as the PCH.<br />

Note: Throughout this document, <strong>Intel</strong> ® <strong>Xeon</strong> ® processor <strong>E3</strong>-<strong>1200</strong> family may be referred to<br />

as simply the processor.<br />

Note: Throughout this document, <strong>Intel</strong> ® <strong>Xeon</strong> ® processor <strong>E3</strong>-<strong>1200</strong> family refers to the <strong>Intel</strong> ®<br />

<strong>Xeon</strong> ® <strong>E3</strong>-1290, <strong>E3</strong>-1280, <strong>E3</strong>-1275, <strong>E3</strong>-1270, <strong>E3</strong>-1260L, <strong>E3</strong>-1245, <strong>E3</strong>-1240, <strong>E3</strong>-1235,<br />

<strong>E3</strong>-1230, <strong>E3</strong>-1225, <strong>E3</strong>-1220, and <strong>E3</strong>-1220L processors.<br />

Note: Some processor features are not available on all platforms. Refer to the processor<br />

specification update for details.<br />



Figure 1-1. <strong>Intel</strong> ® <strong>Xeon</strong> ® <strong>Processor</strong> <strong>E3</strong>-<strong>1200</strong> <strong>Family</strong> Platform<br />

[Block diagram: the processor (with PECI) connects to DDR3 system memory; to discrete graphics (PEG) over PCI Express* 2.0 (1x16 or 2x8, and an additional 1x4); and to the Platform Controller Hub (PCH) over DMI2 x4. The PCH integrates the <strong>Intel</strong>® Management Engine and provides Digital Display x3, LVDS Flat Panel, Analog CRT, eight PCI Express* 2.0 x1 ports (5 GT/s), PCI, Serial ATA, USB 2.0, <strong>Intel</strong>® HD Audio, SMBUS 2.0, a Gigabit Network Connection, Controller Link 1 (WiFi/WiMax), SPI Flash x2 over SPI, and FWH, Super I/O, and GPIO over LPC.]<br />



1.1 <strong>Processor</strong> Feature Details<br />

Four or two execution cores<br />

A 32-KB instruction and 32-KB data first-level cache (L1) for each core<br />

A 256-KB shared instruction/data second-level cache (L2) for each core<br />

Up to 8 MB shared instruction/data third-level cache (L3), shared among all cores<br />

1.1.1 Supported Technologies<br />

<strong>Intel</strong> ® Virtualization Technology for Directed I/O (<strong>Intel</strong> ® VT-d)<br />

<strong>Intel</strong> ® Virtualization Technology (<strong>Intel</strong> ® VT-x)<br />

<strong>Intel</strong> ® Active Management Technology 7.0 (<strong>Intel</strong> ® AMT 7.0)<br />

<strong>Intel</strong> ® Trusted Execution Technology (<strong>Intel</strong> ® TXT)<br />

<strong>Intel</strong> ® Streaming SIMD Extensions 4.1 (<strong>Intel</strong> ® SSE4.1)<br />

<strong>Intel</strong> ® Streaming SIMD Extensions 4.2 (<strong>Intel</strong> ® SSE4.2)<br />

<strong>Intel</strong> ® Hyper-Threading Technology<br />

<strong>Intel</strong> ® 64 Architecture<br />

Execute Disable Bit<br />

<strong>Intel</strong> ® Turbo Boost Technology<br />

<strong>Intel</strong> ® Advanced Vector Extensions (<strong>Intel</strong> ® AVX)<br />

Advanced Encryption Standard New Instructions (AES-NI)<br />

PCLMULQDQ Instruction<br />

1.2 Interfaces<br />

1.2.1 System Memory Support<br />

Two channels of unbuffered DDR3 memory with a maximum of two UDIMMs per<br />

channel<br />

Single-channel and dual-channel memory organization modes<br />

Data burst length of eight for all memory organization modes<br />

Memory DDR3 data transfer rates of 1066 MT/s and 1333 MT/s<br />

64-bit wide channels<br />

DDR3 I/O Voltage of 1.5 V<br />

The type of memory supported by the processor is dependent on the PCH SKU in<br />

the target platform<br />

Advanced Server/Workstation PCH platforms support ECC and non-ECC unbuffered<br />

DIMMs<br />

Essential and Standard Server PCH platforms support ECC un-buffered DIMMs<br />

Maximum memory bandwidth of 10.6 GB/s in single-channel mode or 21 GB/s in<br />

dual-channel mode assuming DDR3 1333 MT/s<br />



1Gb, 2Gb, and 4Gb DDR3 DRAM technologies are supported<br />


Using 4Gb device technologies, the largest memory capacity possible is 32 GB,<br />

assuming Dual Channel Mode with four x8 dual ranked unbuffered DIMM<br />

memory configuration.<br />

Up to 64 simultaneous open pages, 32 per channel (assuming 8 ranks of 8 bank<br />

devices)<br />

Command launch modes of 1n/2n<br />

On-Die Termination (ODT)<br />

Asynchronous ODT<br />

<strong>Intel</strong> ® Fast Memory Access (<strong>Intel</strong> ® FMA)<br />

Just-in-Time Command Scheduling<br />

Command Overlap<br />

Out-of-Order Scheduling<br />

1.2.2 PCI Express*<br />

The PCI Express* port(s) are fully compliant with the PCI Express Base<br />
Specification, Revision 2.0.<br />

<strong>Processor</strong> with Advanced Server/Workstation PCH supported configurations<br />
The port may negotiate down to narrower widths<br />
Support for x16/x8/x4/x1 widths for a single PCI Express mode<br />
2.5 GT/s and 5.0 GT/s PCI Express* frequencies are supported<br />

Table 1-1. PCIe Supported Configurations in Server/Workstation Products<br />

Configuration | Organization | Essential Server | Standard Server | Workstation/Advanced Server<br />
1 | 1x8 | I/O | I/O | Graphics, I/O<br />
1 | 2x4 | I/O | I/O | I/O<br />
2 | 1x8 | I/O | Not Supported | Graphics, I/O<br />
2 | 3x4 | I/O | Not Supported | I/O<br />
3 | 2x8 | I/O | I/O | Graphics, I/O<br />
4 | 2x8 | I/O | Not Supported | Graphics, I/O<br />
4 | 1x4 | I/O | Not Supported | I/O<br />
5 | 1x16 | Graphics, I/O | Graphics, I/O | Graphics, I/O<br />
6 | 1x16 | Graphics, I/O | Not Supported | Graphics, I/O<br />
6 | 1x4 | I/O | Not Supported | I/O<br />

Gen1 Raw bit-rate on the data pins of 2.5 GT/s, resulting in a real bandwidth per<br />

pair of 250 MB/s given the 8b/10b encoding used to transmit data across this<br />

interface. This also does not account for packet overhead and link maintenance.<br />

Maximum theoretical bandwidth on the interface of 4 GB/s in each direction<br />

simultaneously, for an aggregate of 8 GB/s when x16 Gen 1<br />

Gen 2 Raw bit-rate on the data pins of 5.0 GT/s, resulting in a real bandwidth per<br />

pair of 500 MB/s given the 8b/10b encoding used to transmit data across this<br />

interface. This also does not account for packet overhead and link maintenance.<br />


Maximum theoretical bandwidth on the interface of 8 GB/s in each direction<br />

simultaneously, for an aggregate of 16 GB/s when x16 Gen 2<br />
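The Gen 1/Gen 2 figures above follow directly from the 8b/10b line code (8 payload bits per 10 transmitted bits). A small sketch of that arithmetic; as in the text, packet overhead and link maintenance are deliberately ignored:

```python
def pcie_bandwidth_gbps(raw_gt_per_s: float, lanes: int) -> float:
    """Per-direction bandwidth in GB/s for an 8b/10b-encoded link."""
    gbps_per_lane = raw_gt_per_s * (8 / 10) / 8  # GT/s -> Gb/s payload -> GB/s
    return gbps_per_lane * lanes

gen1_x16 = pcie_bandwidth_gbps(2.5, 16)   # ~4.0 GB/s each direction
gen2_x16 = pcie_bandwidth_gbps(5.0, 16)   # ~8.0 GB/s each direction
print(gen1_x16, gen1_x16 * 2)             # aggregate ~8 GB/s for x16 Gen 1
print(gen2_x16, gen2_x16 * 2)             # aggregate ~16 GB/s for x16 Gen 2
```

The same computation gives the 250 MB/s (Gen 1) and 500 MB/s (Gen 2) per-lane figures quoted above when `lanes=1`.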

Hierarchical PCI-compliant configuration mechanism for downstream devices<br />

Traditional PCI style traffic (asynchronous snooped, PCI ordering)<br />

PCI Express* extended configuration space. The first 256 bytes of configuration<br />

space aliases directly to the PCI Compatibility configuration space. The remaining<br />

portion of the fixed 4-KB block of memory-mapped space above that (starting at<br />

100h) is known as extended configuration space.<br />

PCI Express* Enhanced Access Mechanism; accessing the device configuration<br />

space in a flat memory mapped fashion<br />

Automatic discovery, negotiation, and training of link out of reset<br />

Traditional AGP style traffic (asynchronous non-snooped, PCI-X Relaxed ordering)<br />

Peer segment destination posted write traffic (no peer-to-peer read traffic) in<br />

Virtual Channel 0<br />

DMI -> PCI Express* Port 0<br />

64-bit downstream address format, but the processor never generates an address<br />

above 64 GB (Bits 63:36 will always be zeros)<br />

64-bit upstream address format, but the processor responds to upstream read<br />

transactions to addresses above 64 GB (addresses where any of Bits 63:36 are<br />

nonzero) with an Unsupported Request response. Upstream write transactions to<br />

addresses above 64 GB will be dropped.<br />

Re-issues Configuration cycles that have been previously completed with the<br />

Configuration Retry status<br />

PCI Express* reference clock is 100-MHz differential clock<br />

Power Management Event (PME) functions<br />

Dynamic width capability<br />

Message Signaled Interrupt (MSI and MSI-X) messages<br />

Polarity inversion<br />

Note: The processor does not support PCI Express* Hot-Plug.<br />
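The extended configuration space description above matches the standard PCI Express enhanced (memory-mapped) configuration access model, in which each bus/device/function receives a 4-KB block. A sketch of the usual offset computation; the base address of the region itself is platform-specific and is not given here:

```python
def ecam_offset(bus: int, device: int, function: int, register: int) -> int:
    """Byte offset of a config register within the memory-mapped
    enhanced-configuration region (4 KB per function)."""
    assert 0 <= bus < 256 and 0 <= device < 32 and 0 <= function < 8
    assert 0 <= register < 0x1000
    return (bus << 20) | (device << 15) | (function << 12) | register

# Registers 0x000-0x0FF of each 4-KB block alias the classic PCI
# configuration space; 0x100-0xFFF is extended configuration space.
print(hex(ecam_offset(bus=0, device=1, function=0, register=0x100)))  # 0x8100
```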



1.2.3 Direct Media Interface (DMI)<br />

DMI 2.0 support<br />

Four lanes in each direction<br />

5 GT/s point-to-point DMI interface to PCH is supported<br />


Raw bit-rate on the data pins of 5.0 GT/s, resulting in a real bandwidth per pair of<br />

500 MB/s given the 8b/10b encoding used to transmit data across this interface.<br />

Does not account for packet overhead and link maintenance.<br />

Maximum theoretical bandwidth on interface of 2 GB/s in each direction<br />

simultaneously, for an aggregate of 4 GB/s when DMI x4<br />

Shares 100-MHz PCI Express* reference clock<br />

64-bit downstream address format, but the processor never generates an address<br />

above 64 GB (Bits 63:36 will always be zeros)<br />

64-bit upstream address format, but the processor responds to upstream read<br />

transactions to addresses above 64 GB (addresses where any of Bits 63:36 are<br />

nonzero) with an Unsupported Request response. Upstream write transactions to<br />

addresses above 64 GB will be dropped.<br />

Supports the following traffic types to or from the PCH<br />

DMI -> DRAM<br />

DMI -> processor core (Virtual Legacy Wires (VLWs), Resetwarn, or MSIs only)<br />

<strong>Processor</strong> core -> DMI<br />

APIC and MSI interrupt messaging support<br />

Message Signaled Interrupt (MSI and MSI-X) messages<br />

Downstream SMI, SCI and SERR error indication<br />

Legacy support for ISA regime protocol (PHOLD/PHOLDA) required for parallel port<br />

DMA, floppy drive, and LPC bus masters<br />

DC coupling (no capacitors between the processor and the PCH)<br />

Polarity inversion<br />

PCH end-to-end lane reversal across the link<br />

Supports Half Swing low-power/low-voltage<br />

1.2.4 Platform Environment Control Interface (PECI)<br />

The PECI is a one-wire interface that provides a communication channel between a<br />

PECI client (the processor) and a PECI master. The processors support the PECI 3.0<br />

Specification.<br />


1.2.5 <strong>Processor</strong> Graphics<br />

The <strong>Processor</strong> Graphics contains a refresh of the sixth generation graphics core<br />

enabling substantial gains in performance and lower power consumption.<br />

Next Generation <strong>Intel</strong> Clear Video Technology HD support is a collection of video<br />

playback and enhancement features that improve the end user's viewing<br />

experience.<br />

Encode/transcode HD content<br />

Playback of high definition content including Blu-ray Disc*<br />

Superior image quality with sharper, more colorful images<br />

Playback of Blu-ray disc S3D content using HDMI (V.1.4 with 3D)<br />

DirectX* Video Acceleration (DXVA) support for accelerating video processing<br />

Full AVC/VC1/MPEG2 HW Decode<br />

Advanced Scheduler 2.0, 1.0, XPDM support<br />

Windows* 7, XP, Windows Vista*, OSX, Linux OS Support<br />

DX10.1, DX10, DX9 support<br />

OGL 3.0 support<br />

1.2.6 <strong>Intel</strong> ® Flexible Display Interface (<strong>Intel</strong> ® FDI)<br />

For SKUs with graphics, FDI carries display traffic from the <strong>Processor</strong> Graphics in<br />

the processor to the legacy display connectors in the PCH<br />

Based on DisplayPort standard<br />

Two independent links, one for each display pipe<br />

Four unidirectional downstream differential transmitter pairs<br />

Scalable down to 3X, 2X, or 1X based on actual display bandwidth<br />

requirements<br />

Fixed frequency 2.7 GT/s data rate<br />

Two sideband signals for Display synchronization<br />

FDI_FSYNC and FDI_LSYNC (Frame and Line Synchronization)<br />

One Interrupt signal used for various interrupts from the PCH<br />

FDI_INT signal shared by both <strong>Intel</strong> FDI Links<br />

PCH supports end-to-end lane reversal across both links<br />

Common 100-MHz reference clock<br />



1.3 Power Management Support<br />

1.3.1 <strong>Processor</strong> Core<br />

Full support of Advanced Configuration and Power Interface (ACPI) C-states as<br />
implemented by the following processor C-states: C0, C1, C1E, C3, C6<br />
Enhanced <strong>Intel</strong> SpeedStep ® Technology<br />

1.3.2 System<br />

S0, S3, S4, S5<br />

1.3.3 Memory Controller<br />

Conditional self-refresh (<strong>Intel</strong> ® Rapid Memory Power Management (<strong>Intel</strong> ® RMPM))<br />

Dynamic power-down<br />

1.3.4 PCI Express*<br />

L0s and L1 ASPM power management capability<br />

1.3.5 DMI<br />

L0s and L1 ASPM power management capability<br />

1.3.6 <strong>Processor</strong> Graphics Controller<br />

<strong>Intel</strong> ® Rapid Memory Power Management (<strong>Intel</strong> ® RMPM), conditional self-refresh (CxSR)<br />

Graphics Performance Modulation Technology (GPMT)<br />

<strong>Intel</strong> Smart 2D Display Technology (<strong>Intel</strong> S2DDT)<br />

Graphics Render C-State (RC6)<br />

1.4 Thermal Management Support<br />

Digital Thermal Sensor<br />

<strong>Intel</strong> ® Adaptive Thermal Monitor<br />

THERMTRIP# and PROCHOT# support<br />

On-Demand Mode<br />

Memory Thermal Throttling<br />

External Thermal Sensor (TS-on-DIMM and TS-on-Board)<br />

Render Thermal Throttling<br />

Fan speed control with DTS<br />


1.5 Package<br />

The processor socket type is noted as LGA 1155. The package is a 37.5 x 37.5 mm<br />

Flip Chip Land Grid Array (FCLGA 1155).<br />

1.6 Terminology<br />

Term | Description<br />
ACPI | Advanced Configuration and Power Interface<br />
BLT | Block Level Transfer<br />
CRT | Cathode Ray Tube<br />
DDR3 | Third-generation Double Data Rate SDRAM memory technology<br />
DMA | Direct Memory Access<br />
DMI | Direct Media Interface<br />
DP | DisplayPort*<br />
DTS | Digital Thermal Sensor<br />
ECC | Error Correction Code<br />
Enhanced <strong>Intel</strong> SpeedStep ® Technology | Technology that provides power management capabilities to laptops.<br />
Execute Disable Bit | The Execute Disable bit allows memory to be marked as executable or non-executable, when combined with a supporting operating system. If code attempts to run in non-executable memory, the processor raises an error to the operating system. This feature can prevent some classes of viruses or worms that exploit buffer overrun vulnerabilities and can thus help improve the overall security of the system. See the <strong>Intel</strong> ® 64 and IA-32 Architectures Software Developer's Manuals for more detailed information.<br />
IMC | Integrated Memory Controller<br />
<strong>Intel</strong> ® 64 Technology | 64-bit memory extensions to the IA-32 architecture<br />
<strong>Intel</strong> ® FDI | <strong>Intel</strong> ® Flexible Display Interface<br />
<strong>Intel</strong> ® TXT | <strong>Intel</strong> ® Trusted Execution Technology<br />
<strong>Intel</strong> ® Virtualization Technology | Processor virtualization which, when used in conjunction with Virtual Machine Monitor software, enables multiple, robust independent software environments inside a single platform.<br />
<strong>Intel</strong> ® VT-d | <strong>Intel</strong> ® Virtualization Technology (<strong>Intel</strong> ® VT) for Directed I/O. <strong>Intel</strong> VT-d is a hardware assist, under system software (Virtual Machine Manager or OS) control, for enabling I/O device virtualization. <strong>Intel</strong> VT-d also brings robust security by providing protection from errant DMAs by using DMA remapping, a key feature of <strong>Intel</strong> VT-d.<br />
IOV | I/O Virtualization<br />
ITPM | Integrated Trusted Platform Module<br />
LCD | Liquid Crystal Display<br />
LVDS | Low Voltage Differential Signaling. A high speed, low power data transmission standard used for display connections to LCD panels.<br />
NCTF | Non-Critical to Function. NCTF locations are typically redundant ground or non-critical reserved, so the loss of the solder joint continuity at end of life conditions will not affect the overall product functionality.<br />
PCH | Platform Controller Hub. The new, 2009 chipset with centralized platform capabilities including the main I/O interfaces along with display connectivity, audio features, power management, manageability, security, and storage features.<br />
PECI | Platform Environment Control Interface<br />
PEG | PCI Express* Graphics. External graphics using PCI Express* architecture. A high-speed serial interface whose configuration is software compatible with the existing PCI specifications.<br />
<strong>Processor</strong> | The 64-bit, single-core or multi-core component (package).<br />
<strong>Processor</strong> Core | The term processor core refers to the Si die itself, which can contain multiple execution cores. Each execution core has an instruction cache, data cache, and 256-KB L2 cache. All execution cores share the L3 cache.<br />
<strong>Processor</strong> Graphics | <strong>Intel</strong> ® <strong>Processor</strong> Graphics<br />
Rank | A unit of DRAM corresponding to four to eight devices in parallel, ignoring ECC. These devices are usually, but not always, mounted on a single side of a DIMM.<br />
SCI | System Control Interrupt. Used in ACPI protocol.<br />
Storage Conditions | A non-operational state. The processor may be installed in a platform, in a tray, or loose. Processors may be sealed in packaging or exposed to free air. Under these conditions, processor landings should not be connected to any supply voltages, have any I/Os biased, or receive any clocks. Upon exposure to free air (that is, unsealed packaging or a device removed from packaging material) the processor must be handled in accordance with moisture sensitivity labeling (MSL) as indicated on the packaging material.<br />
TAC | Thermal Averaging Constant<br />
TDP | Thermal Design Power<br />
VAXG | Graphics core power supply<br />
VCC | Processor core power supply<br />
VCCIO | High frequency I/O logic power supply<br />
VCCPLL | PLL power supply<br />
VCCSA | System Agent (memory controller, DMI, PCIe controllers, and display engine) power supply<br />
VDDQ | DDR3 power supply<br />
VLD | Variable Length Decoding<br />
VSS | Processor ground<br />
x1 | Refers to a Link or Port with one Physical Lane<br />
x4 | Refers to a Link or Port with four Physical Lanes<br />
x8 | Refers to a Link or Port with eight Physical Lanes<br />
x16 | Refers to a Link or Port with sixteen Physical Lanes<br />


1.7 Related Documents<br />

Refer to Table 1-2 for additional information.<br />

Table 1-2. Related Documents<br />

Document | Document Number/Location<br />
<strong>Intel</strong> ® <strong>Xeon</strong> ® <strong>Processor</strong> <strong>E3</strong> <strong>Family</strong> Datasheet, Volume 2 | http://www.intel.com/Assets/en_US/PDF/datasheet/324971.pdf<br />
<strong>Intel</strong> ® <strong>Xeon</strong> ® <strong>Processor</strong> <strong>E3</strong> <strong>Family</strong> Specification Update | http://www.intel.com/Assets/en_US/PDF/specupdate/324972.pdf<br />
<strong>Intel</strong> ® <strong>Xeon</strong> ® <strong>Processor</strong> <strong>E3</strong>-<strong>1200</strong> <strong>Family</strong> and LGA1155 Socket Thermal Mechanical Specifications and Design Guidelines | http://www.intel.com/Assets/en_US/PDF/designguide/324973.pdf<br />
<strong>Intel</strong> ® 6 Series Chipset and <strong>Intel</strong> ® C200 Series Chipset Datasheet | www.intel.com/Assets/PDF/datasheet/324645.pdf<br />
<strong>Intel</strong> ® 6 Series Chipset and <strong>Intel</strong> ® C200 Series Chipset Thermal Mechanical Specifications and Design Guidelines | www.intel.com/Assets/PDF/designguide/324647.pdf<br />
Advanced Configuration and Power Interface Specification 3.0 | http://www.acpi.info/<br />
PCI Local Bus Specification 3.0 | http://www.pcisig.com/specifications<br />
PCI Express* Base Specification 2.0 | http://www.pcisig.com<br />
DDR3 SDRAM Specification | http://www.jedec.org<br />
DisplayPort* Specification | http://www.vesa.org<br />
<strong>Intel</strong> ® 64 and IA-32 Architectures Software Developer's Manuals | http://www.intel.com/products/processor/manuals/index.htm<br />
Volume 1: Basic Architecture | 253665<br />
Volume 2A: Instruction Set Reference, A-M | 253666<br />
Volume 2B: Instruction Set Reference, N-Z | 253667<br />
Volume 3A: System Programming Guide | 253668<br />
Volume 3B: System Programming Guide | 253669<br />





2 Interfaces<br />

This chapter describes the interfaces supported by the processor.<br />

2.1 System Memory Interface<br />

2.1.1 System Memory Technology Supported<br />

The Integrated Memory Controller (IMC) supports DDR3 protocols with two<br />
independent, 64-bit wide channels, each accessing one or two DIMMs. The type of<br />
memory supported by the processor is dependent on the PCH SKU in the target<br />
platform. Refer to Chapter 1 for supported memory configuration details.<br />

The IMC supports a maximum of two DDR3 DIMMs per channel, allowing up to four<br />
device ranks per channel.<br />

DDR3 Data Transfer Rates<br />

1066 MT/s (PC3-8500), 1333 MT/s (PC3-10600)<br />

Advanced Server/Workstation PCH platforms DDR3 DIMM Modules:<br />

Raw Card A - Single Ranked x8 unbuffered non-ECC<br />

Raw Card B - Dual Ranked x8 unbuffered non-ECC<br />

Raw Card C - Single Ranked x16 unbuffered non-ECC<br />

Raw Card D - Single Ranked x8 unbuffered ECC<br />

Raw Card E - Dual Ranked x8 unbuffered ECC<br />

Essential/Standard Server PCH platforms DDR3 DIMM Modules:<br />

Raw Card D - Single Ranked x8 unbuffered ECC<br />

Raw Card E - Dual Ranked x8 unbuffered ECC<br />

DDR3 DRAM Device Technology: 1-Gb, 2-Gb, and 4-Gb DDR3 DRAM device<br />
technologies and addressing are supported. Table 2-1 shows the supported DIMM<br />
configurations.<br />

Table 2-1. Supported UDIMM Module Configurations<br />

Raw Card Version | DIMM Capacity | DRAM Device Technology | DRAM Organization | # of DRAM Devices | # of Physical Device Ranks | # of Row/Col Address Bits | # of Banks Inside DRAM | Page Size<br />

Server/Workstation Platforms: Unbuffered/Non-ECC Supported DIMM Module Configurations<br />
A | 1 GB | 1 Gb | 128 M x 8 | 8 | 1 | 14/10 | 8 | 8 K<br />
A | 2 GB | 2 Gb | 256 M x 8 | 8 | 1 | 15/10 | 8 | 8 K<br />
B | 2 GB | 1 Gb | 128 M x 8 | 16 | 2 | 14/10 | 8 | 8 K<br />
B | 4 GB | 2 Gb | 256 M x 8 | 16 | 2 | 15/10 | 8 | 8 K<br />
B | 8 GB | 4 Gb | 512 M x 8 | 16 | 2 | 16/10 | 8 | 8 K<br />
C | 512 MB | 1 Gb | 64 M x 16 | 4 | 1 | 13/10 | 8 | 16 K<br />
C | 1 GB | 2 Gb | 128 M x 16 | 4 | 1 | 14/10 | 8 | 16 K<br />

Server and Workstation Platforms: Unbuffered/ECC Supported DIMM Module Configurations<br />
D | 1 GB | 1 Gb | 128 M x 8 | 9 | 1 | 14/10 | 8 | 8 K<br />
D | 2 GB | 2 Gb | 256 M x 8 | 9 | 1 | 15/10 | 8 | 8 K<br />
E | 2 GB | 1 Gb | 128 M x 8 | 18 | 2 | 14/10 | 8 | 8 K<br />
E | 4 GB | 2 Gb | 256 M x 8 | 18 | 2 | 15/10 | 8 | 8 K<br />
E | 8 GB | 4 Gb | 512 M x 8 | 18 | 2 | 16/10 | 8 | 8 K<br />

Note: DIMM module support is based on availability and is subject to change.<br />

2.1.2 System Memory Timing Support<br />

The IMC supports the following DDR3 Speed Bin, CAS Write Latency (CWL), and<br />
command signal mode timings on the main memory interface:<br />
tCL = CAS Latency<br />
tRCD = Activate Command to READ or WRITE Command delay<br />
tRP = PRECHARGE Command Period<br />
CWL = CAS Write Latency<br />
Command Signal modes: 1n indicates that a new command may be issued every clock<br />
and 2n indicates that a new command may be issued every 2 clocks. Command launch<br />
mode programming depends on the transfer rate and memory configuration.<br />

Table 2-2. DDR3 System Memory Timing Support<br />

Segment | Transfer Rate (MT/s) | tCL (tCK) | tRCD (tCK) | tRP (tCK) | CWL (tCK) | DPC | CMD Mode | Notes<br />
All Desktop segments | 1066 | 7 | 7 | 7 | 6 | 1 | 1n/2n | 1<br />
All Desktop segments | 1066 | 7 | 7 | 7 | 6 | 2 | 2n | 1<br />
All Desktop segments | 1066 | 8 | 8 | 8 | 6 | 1 | 1n/2n | 1<br />
All Desktop segments | 1066 | 8 | 8 | 8 | 6 | 2 | 2n | 1<br />
All Desktop segments | 1333 | 9 | 9 | 9 | 7 | 1 | 1n/2n | 1<br />
All Desktop segments | 1333 | 9 | 9 | 9 | 7 | 2 | 2n | 1<br />
Notes: 1. System memory timing support is based on availability and is subject to change.<br />
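The capacity figures in Table 2-1, and the 32-GB maximum and peak-bandwidth numbers quoted in Section 1.2.1, can be cross-checked with a short calculation. This is an illustrative sketch only; it assumes the 64-bit data width per channel and 8 bytes per transfer stated in this document, and treats ECC devices as not contributing to capacity:

```python
def dimm_capacity_gb(density_gbit: int, ranks: int, device_width: int) -> float:
    """Usable DIMM capacity in GB; ECC check-bit devices are excluded."""
    data_devices_per_rank = 64 // device_width   # 64 data bits per rank
    total_gbit = density_gbit * data_devices_per_rank * ranks
    return total_gbit / 8                        # Gbit -> GB

def peak_bandwidth_gbs(mt_per_s: int, channels: int) -> float:
    """Peak theoretical bandwidth: 8 bytes per transfer per channel."""
    return mt_per_s * 8 * channels / 1000

print(dimm_capacity_gb(4, 2, 8))       # 8.0 (raw card E 8-GB row: 4-Gb x8, dual rank)
print(4 * dimm_capacity_gb(4, 2, 8))   # 32.0 (four such DIMMs, two per channel)
print(peak_bandwidth_gbs(1333, 1))     # ~10.7 GB/s single-channel at DDR3-1333
print(peak_bandwidth_gbs(1333, 2))     # ~21.3 GB/s dual-channel at DDR3-1333
```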




2.1.3 System Memory Organization Modes<br />

The IMC supports two memory organization modes: single-channel and dual-channel.<br />

Depending upon how the DIMM Modules are populated in each memory channel, a<br />

number of different configurations can exist.<br />

2.1.3.1 Single-Channel Mode<br />

In this mode, all memory cycles are directed to a single-channel. Single-channel mode<br />

is used when either Channel A or Channel B DIMM connectors are populated in any<br />

order, but not both.<br />

2.1.3.2 Dual-Channel Mode – <strong>Intel</strong> ® Flex Memory Technology Mode<br />

The IMC supports <strong>Intel</strong> Flex Memory Technology Mode. Memory is divided into a<br />

symmetric and an asymmetric zone. The symmetric zone starts at the lowest address<br />

in each channel and is contiguous until the asymmetric zone begins or until the top<br />

address of the channel with the smaller capacity is reached. In this mode, the system<br />

runs with one zone of dual-channel mode and one zone of single-channel mode,<br />

simultaneously, across the whole memory array.<br />

Note: Channels A and B can be mapped for physical channels 0 and 1 respectively or vice<br />

versa; however, channel A size must be greater or equal to channel B size.<br />

Figure 2-1. <strong>Intel</strong> ® Flex Memory Technology Operation<br />

[Figure: Channels A and B both hold region B, which receives dual-channel interleaved access; channel A additionally holds region C, extending up to the top of memory (TOM), which receives non-interleaved access. B = the largest physical memory amount of the smaller size memory module; C = the remaining physical memory amount of the larger size memory module.]<br />


2.1.3.2.1 Dual-Channel Symmetric Mode<br />


Dual-Channel Symmetric mode, also known as interleaved mode, provides maximum<br />

performance on real world applications. Addresses are ping-ponged between the<br />

channels after each cache line (64-byte boundary). If there are two requests, and the<br />

second request is to an address on the opposite channel from the first, that request can<br />

be sent before data from the first request has returned. If two consecutive cache lines<br />

are requested, both may be retrieved simultaneously since they are ensured to be on<br />

opposite channels. Use Dual-Channel Symmetric mode when both Channel A and<br />

Channel B DIMM connectors are populated in any order, with the total amount of<br />

memory in each channel being the same.<br />

When both channels are populated with the same memory capacity and the boundary<br />

between the dual channel zone and the single channel zone is the top of memory, IMC<br />

operates completely in Dual-Channel Symmetric mode.<br />

Note: The DRAM device technology and width may vary from one channel to the other.<br />
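The cache-line ping-pong described above amounts to a trivial address-to-channel mapping. The sketch below is illustrative only; the IMC's actual mapping is not specified here:<br />

```python
CACHE_LINE = 64  # bytes; channels alternate at each 64-byte boundary

def channel_for(addr):
    # Illustrative symmetric-mode mapping: consecutive cache lines land on
    # opposite channels, so they can be fetched simultaneously.
    return (addr // CACHE_LINE) % 2
```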

2.1.4 Rules for Populating Memory Slots<br />

In all modes, the frequency of system memory is the lowest frequency of all memory<br />

modules placed in the system, as determined through the SPD registers on the<br />

memory modules. The system memory controller supports one or two DIMM<br />

connectors per channel. The usage of DIMM modules with different latencies is allowed,<br />

but in that case, the worst latency (per channel) will be used. For dual-channel modes,<br />

both channels must have a DIMM connector populated and for single-channel mode,<br />

only a single-channel may have one or both DIMM connectors populated.<br />

Note: In a 2 DIMM Per Channel (2DPC) daisy chain layout memory configuration, the furthest<br />

DIMM from the processor of any given channel must always be populated first.<br />
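The SPD-based frequency rule above reduces to taking a minimum (hypothetical helper name):<br />

```python
def system_memory_frequency(spd_freqs_mhz):
    # The system runs at the lowest frequency reported by any module's SPD.
    return min(spd_freqs_mhz)
```

For example, mixing DDR3-1333 and DDR3-1066 modules runs the whole memory array at 1066 MT/s.<br />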

2.1.5 Technology Enhancements of <strong>Intel</strong> ® Fast Memory Access<br />

(<strong>Intel</strong> ® FMA)<br />

The following sections describe the Just-in-Time Scheduling, Command Overlap, and<br />

Out-of-Order Scheduling <strong>Intel</strong> FMA technology enhancements.<br />

2.1.5.1 Just-in-Time Command Scheduling<br />

The memory controller has an advanced command scheduler where all pending<br />

requests are examined simultaneously to determine the most efficient request to be<br />

issued next. The most efficient request is picked from all pending requests and issued<br />

to system memory Just-in-Time to make optimal use of Command Overlapping. Thus,<br />

instead of having all memory access requests go individually through an arbitration<br />

mechanism forcing requests to be executed one at a time, they can be started without<br />

interfering with the current request allowing for concurrent issuing of requests. This<br />

allows for optimized bandwidth and reduced latency while maintaining appropriate<br />

command spacing to meet system memory protocol.<br />

2.1.5.2 Command Overlap<br />

Command Overlap allows the insertion of the DRAM commands between the Activate,<br />

Precharge, and Read/Write commands normally used, as long as the inserted<br />

commands do not affect the currently executing command. Multiple commands can be<br />

issued in an overlapping manner, increasing the efficiency of system memory protocol.<br />


2.1.5.3 Out-of-Order Scheduling<br />

While leveraging the Just-in-Time Scheduling and Command Overlap enhancements,<br />

the IMC continuously monitors pending requests to system memory for the best use of<br />

bandwidth and reduction of latency. If there are multiple requests to the same open<br />

page, these requests would be launched in a back to back manner to make optimum<br />

use of the open memory page. This ability to reorder requests on the fly allows the IMC<br />

to further reduce latency and increase bandwidth efficiency.<br />

2.1.6 Memory Type Range Registers (MTRRs) Enhancement<br />

The processor has 2 additional MTRRs (10 MTRRs total). These additional MTRRs are<br />

especially important in supporting system memory larger than 4 GB.<br />
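For reference, a variable-range MTRR matches an address by a base/mask comparison. The sketch below is simplified; consult the Intel SDM for the exact PHYSBASE/PHYSMASK register layout:<br />

```python
def mtrr_covers(addr, phys_base, phys_mask):
    # A variable-range MTRR covers addr when the masked address bits match
    # the masked base (simplified; valid/type bits are omitted here).
    return (addr & phys_mask) == (phys_base & phys_mask)

# Example: a 256 MB range starting at 4 GB, assuming a 36-bit physical
# address space for the mask (an assumption for illustration).
BASE = 0x1_0000_0000
MASK = 0xF_F000_0000
```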

2.1.7 Data Scrambling<br />

The memory controller incorporates a DDR3 Data Scrambling feature to minimize the<br />

impact of excessive di/dt on the platform DDR3 VRs due to successive 1s and 0s on the<br />

data bus. Past experience has demonstrated that traffic on the data bus is not random<br />

and can have energy concentrated at specific spectral harmonics creating high di/dt<br />

that is generally limited by data patterns that excite resonance between the package<br />

inductance and on-die capacitances. As a result, the memory controller uses a data<br />

scrambling feature to create pseudo-random patterns on the DDR3 data bus to reduce<br />

the impact of any excessive di/dt.<br />
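A scrambler of this general kind XORs the bus data with a deterministic pseudo-random stream. The sketch below uses a generic 16-bit Fibonacci LFSR, since the controller's actual polynomial and seeding are not published here:<br />

```python
def scramble(data, seed=0xACE1):
    # XOR each data byte with a pseudo-random stream from a 16-bit Fibonacci
    # LFSR. Illustrative only: the real DDR3 scrambler's polynomial and seed
    # are not specified in this datasheet.
    state = seed
    out = bytearray()
    for byte in data:
        stream = 0
        for _ in range(8):
            bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            stream = (stream << 1) | (state & 1)
        out.append(byte ^ stream)
    return bytes(out)
```

Because XOR with the same stream is its own inverse, descrambling reuses the same function with the same seed.<br />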

2.2 PCI Express* Interface<br />

This section describes the PCI Express interface capabilities of the processor. See the<br />

PCI Express Base Specification for details of PCI Express.<br />

The number of PCI Express controllers is dependent on the platform. Refer to Chapter 1<br />

for details.<br />

2.2.1 PCI Express* Architecture<br />

Compatibility with the PCI addressing model is maintained to ensure that all existing<br />

applications and drivers operate unchanged.<br />

The PCI Express configuration uses standard mechanisms as defined in the PCI<br />

Plug-and-Play specification. The initial recovered clock speed of 1.25 GHz results in<br />

2.5 Gb/s/direction that provides a 250 MB/s communications channel in each direction<br />

(500 MB/s total). That is close to twice the data rate of classic PCI. The use of 8b/10b<br />

encoding accounts for the 250 MB/s figure, where a quick calculation ignoring the<br />

encoding overhead would imply 312.5 MB/s. The external graphics ports support Gen 2 speed as well. At 5.0 GT/s,<br />

Gen 2 operation results in twice as much bandwidth per lane as compared to Gen 1<br />

operation. When operating with two PCIe controllers, each controller can be operating<br />

at either 2.5 GT/s or 5.0 GT/s.<br />
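The bandwidth arithmetic above can be checked directly; with 8b/10b encoding, 10 bits cross the wire for every 8 data bits:<br />

```python
def lane_bandwidth_mb_s(transfer_rate_gt_s):
    # Usable bytes/s per lane per direction under 8b/10b encoding:
    # 10 wire bits per 8-bit data byte, so divide the raw rate by 10.
    return transfer_rate_gt_s * 1e9 / 10 / 1e6
```

Gen 1 at 2.5 GT/s gives 250 MB/s per lane per direction; Gen 2 at 5.0 GT/s gives 500 MB/s, twice the per-lane bandwidth.<br />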

The PCI Express architecture is specified in three layers: Transaction Layer, Data Link<br />

Layer, and Physical Layer. The partitioning in the component is not necessarily along<br />

these same boundaries. Refer to Figure 2-2 for the PCI Express Layering Diagram.<br />



Figure 2-2. PCI Express* Layering Diagram<br />


PCI Express uses packets to communicate information between components. Packets<br />

are formed in the Transaction and Data Link Layers to carry the information from the<br />

transmitting component to the receiving component. As the transmitted packets flow<br />

through the other layers, they are extended with additional information necessary to<br />

handle packets at those layers. At the receiving side, the reverse process occurs and<br />

packets get transformed from their Physical Layer representation to the Data Link<br />

Layer representation and finally (for Transaction Layer Packets) to the form that can be<br />

processed by the Transaction Layer of the receiving device.<br />

Figure 2-3. Packet Flow through the Layers<br />


2.2.1.1 Transaction Layer<br />

The upper layer of the PCI Express architecture is the Transaction Layer. The<br />

Transaction Layer's primary responsibility is the assembly and disassembly of<br />

Transaction Layer Packets (TLPs). TLPs are used to communicate transactions, such as<br />

read and write, as well as certain types of events. The Transaction Layer also manages<br />

flow control of TLPs.<br />

2.2.1.2 Data Link Layer<br />

The middle layer in the PCI Express stack, the Data Link Layer, serves as an<br />

intermediate stage between the Transaction Layer and the Physical Layer.<br />

Responsibilities of Data Link Layer include link management, error detection, and error<br />

correction.<br />

The transmission side of the Data Link Layer accepts TLPs assembled by the<br />

Transaction Layer, calculates and applies data protection code and TLP sequence<br />

number, and submits them to Physical Layer for transmission across the Link. The<br />

receiving Data Link Layer is responsible for checking the integrity of received TLPs and<br />

for submitting them to the Transaction Layer for further processing. On detection of TLP<br />

error(s), this layer is responsible for requesting retransmission of TLPs until information<br />

is correctly received, or the Link is determined to have failed. The Data Link Layer also<br />

generates and consumes packets that are used for Link management functions.<br />
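The transmit-side behavior described above amounts to wrapping each TLP with a sequence number and a CRC. A sketch, in which zlib's CRC-32 stands in for the actual PCIe LCRC polynomial:<br />

```python
import binascii

def dll_frame(tlp, seq):
    # Prepend the 12-bit TLP sequence number and append a 32-bit data
    # protection code. Sketch only: real PCIe uses a specific LCRC;
    # binascii.crc32 is a stand-in for illustration.
    body = (seq & 0xFFF).to_bytes(2, "big") + tlp
    return body + binascii.crc32(body).to_bytes(4, "big")
```

The receiving side recomputes the CRC over the sequence number and TLP, and requests retransmission on mismatch.<br />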

2.2.1.3 Physical Layer<br />

The Physical Layer includes all circuitry for interface operation, including driver and<br />

input buffers, parallel-to-serial and serial-to-parallel conversion, PLL(s), and impedance<br />

matching circuitry. It also includes logical functions related to interface initialization and<br />

maintenance. The Physical Layer exchanges data with the Data Link Layer in an<br />

implementation-specific format, and is responsible for converting this to an appropriate<br />

serialized format and transmitting it across the PCI Express Link at a frequency and<br />

width compatible with the remote device.<br />



2.2.2 PCI Express* Configuration Mechanism<br />

The PCI Express (external graphics) link is mapped through a PCI-to-PCI bridge<br />

structure.<br />

Figure 2-4. PCI Express* Related Register Structures in the <strong>Processor</strong><br />

(The figure shows the PEG0 PCI Express device behind PCI-PCI bridges representing the<br />

root PCI Express ports (Device 1 and Device 6), alongside the PCI-compatible Host<br />

Bridge device (Device 0) and the DMI link.)<br />

PCI Express extends the configuration space to 4096 bytes per-device/function, as<br />

compared to 256 bytes allowed by the Conventional PCI Specification. PCI Express<br />

configuration space is divided into a PCI-compatible region (that consists of the first<br />

256 bytes of a logical device's configuration space) and an extended PCI Express region<br />

(that consists of the remaining configuration space). The PCI-compatible region can be<br />

accessed using either the mechanisms defined in the PCI specification or using the<br />

enhanced PCI Express configuration access mechanism described in the PCI Express<br />

Enhanced Configuration Mechanism section.<br />

The PCI Express Host Bridge is required to translate the memory-mapped PCI Express<br />

configuration space accesses from the host processor to PCI Express configuration<br />

cycles. To maintain compatibility with PCI configuration addressing mechanisms, it is<br />

recommended that system software access the enhanced configuration space using<br />

32-bit operations (32-bit aligned) only. See the PCI Express Base Specification for<br />

details of both the PCI-compatible and PCI Express Enhanced configuration<br />

mechanisms and transaction rules.<br />
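Since the enhanced configuration mechanism assigns each function a 4096-byte window, a memory-mapped configuration address decomposes as follows (illustrative helper; the enhanced configuration base address is platform-specific):<br />

```python
def ecam_address(ecam_base, bus, device, function, offset):
    # Each function gets 4096 bytes of configuration space, so the
    # memory-mapped address packs bus/device/function above a 12-bit offset.
    return ecam_base + (bus << 20) + (device << 15) + (function << 12) + offset
```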

2.2.3 PCI Express* Port<br />


The PCI Express interface on the processor is a single, 16-lane (x16) port that can also<br />

be configured at narrower widths. The PCI Express port is designed to be<br />

compliant with the PCI Express Base Specification, Revision 2.0. Advanced<br />

Server/Workstation and Essential/Standard Server SKUs support an additional x4 port.<br />


2.2.4 PCI Express Lanes Connection<br />

Figure 2-5 shows the PCIe lane mapping.<br />

Figure 2-5. PCIe Typical Operation 16 lanes Mapping<br />

2.3 Direct Media Interface (DMI)<br />

Direct Media Interface (DMI) connects the processor and the PCH. Next generation<br />

DMI2 is supported.<br />

Note: Only DMI x4 configuration is supported.<br />

2.3.1 DMI Error Flow<br />

DMI can only generate SERR in response to errors, never SCI, SMI, MSI, PCI INT, or<br />

GPE. Any DMI related SERR activity is associated with Device 0.<br />

2.3.2 <strong>Processor</strong>/PCH Compatibility Assumptions<br />


The processor is compatible with the <strong>Intel</strong> ® C200 Series Chipset PCH. The processor is<br />

not compatible with any previous PCH products.<br />



2.3.3 DMI Link Down<br />


The DMI link going down is a fatal, unrecoverable error. If the DMI data link goes<br />

down after the link was up, the DMI link hangs the system by not<br />

allowing the link to retrain, preventing data corruption. This link behavior is controlled<br />

by the PCH.<br />

Downstream transactions that had been successfully transmitted across the link prior<br />

to the link going down may be processed as normal. No completions from downstream,<br />

non-posted transactions are returned upstream over the DMI link after a link down<br />

event.<br />

2.4 <strong>Processor</strong> Graphics Controller (GT)<br />

The new Graphics Engine Architecture includes 3D compute elements, a multi-format<br />

hardware-assisted decode/encode pipeline, and a Mid-Level Cache (MLC) for superior<br />

high-definition playback, improved video quality, and improved 3D and media performance.<br />

Display Engine in the Uncore handles delivering the pixels to the screen. GSA (Graphics<br />

in System Agent) is the primary Channel interface for display memory accesses and<br />

PCI-like traffic in and out.<br />

Figure 2-6. <strong>Processor</strong> Graphics Controller Unit Block Diagram<br />


2.4.1 3D and Video Engines for Graphics Processing<br />

The 3D graphics pipeline architecture simultaneously operates on different primitives or<br />

on different portions of the same primitive. All the cores are fully programmable,<br />

increasing the versatility of the 3D Engine. The Gen 6.0 3D engine provides the<br />

following performance and power-management enhancements:<br />

Hierarchal-Z<br />

Video quality enhancements<br />

2.4.1.1 3D Engine Execution Units<br />

Supports up to 12 EUs. The EUs perform 128-bit wide execution per clock and support<br />

SIMD8 instructions for vertex processing and SIMD16 instructions for pixel<br />

processing.<br />

2.4.1.2 3D Pipeline<br />

2.4.1.2.1 Vertex Fetch (VF) Stage<br />

The VF stage executes 3DPRIMITIVE commands. Some enhancements have been<br />

included to better support legacy D3D APIs as well as SGI OpenGL*.<br />

2.4.1.2.2 Vertex Shader (VS) Stage<br />

The VS stage performs shading of vertices output by the VF function. The VS unit<br />

produces an output vertex reference for every input vertex reference received from the<br />

VF unit, in the order received.<br />

2.4.1.2.3 Geometry Shader (GS) Stage<br />

The GS stage receives inputs from the VS stage and executes compiled,<br />

application-provided GS programs, which specify an algorithm to convert the vertices of<br />

an input object into some output primitives. For example, a GS shader may convert lines<br />

of a line strip into polygons representing a corresponding segment of a blade of grass<br />

centered on the line. Or it could use adjacency information to detect silhouette edges of<br />

triangles and output polygons extruding out from the edges.<br />

2.4.1.2.4 Clip Stage<br />

The Clip stage performs general processing on incoming 3D objects. However, it also<br />

includes specialized logic to perform a Clip Test function on incoming objects. The Clip<br />

Test optimizes generalized 3D Clipping. The Clip unit examines the position of incoming<br />

vertices, and accepts/rejects 3D objects based on its Clip algorithm.<br />

2.4.1.2.5 Strips and Fans (SF) Stage<br />

The SF stage performs setup operations required to rasterize 3D objects. The outputs<br />

from the SF stage to the Windower stage contain implementation-specific information<br />

required for the rasterization of objects and also supports clipping of primitives to some<br />

extent.<br />



2.4.1.2.6 Windower/IZ (WIZ) Stage<br />


The WIZ unit performs an early depth test, which removes failing pixels and eliminates<br />

unnecessary processing overhead.<br />

The Windower uses the parameters provided by the SF unit in the object-specific<br />

rasterization algorithms. The WIZ unit rasterizes objects into the corresponding set of<br />

pixels. The Windower is also capable of performing dithering, whereby the illusion of a<br />

higher resolution when using low-bpp channels in color buffers is possible. Color<br />

dithering diffuses the sharp color bands seen on smooth-shaded objects.<br />
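Ordered dithering is one common way to achieve this effect; the datasheet does not specify the hardware's algorithm, so the following sketch is purely illustrative:<br />

```python
BAYER_2X2 = [[0, 2],
             [3, 1]]  # classic ordered-dither threshold matrix

def dither_pixel(value, x, y, levels=4):
    # Quantize an 8-bit value down to 'levels' output levels, biasing the
    # rounding by pixel position so neighboring pixels quantize differently,
    # which diffuses banding. Illustrative algorithm only.
    step = 255 / (levels - 1)
    threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4  # position-dependent bias
    return min(levels - 1, int(value / step + threshold))
```

A mid-gray value thus rounds down at some pixel positions and up at adjacent ones, trading spatial resolution for apparent color depth.<br />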

2.4.1.3 Video Engine<br />

The Video Engine handles the non-3D (media/video) applications. It includes support<br />

for VLD and MPEG2 decode in hardware.<br />

2.4.1.4 2D Engine<br />

The 2D Engine contains BLT (Block Level Transfer) functionality and an extensive set of<br />

2D instructions. To take advantage of the 3D engine's functionality, some BLT<br />

functions make use of the 3D renderer.<br />

2.4.1.4.1 <strong>Processor</strong> Graphics VGA Registers<br />

The 2D registers consist of the original VGA registers and others added to support graphics<br />

modes that have color depths, resolutions, and hardware acceleration features that go<br />

beyond the original VGA standard.<br />

2.4.1.4.2 Logical 128-Bit Fixed BLT and 256 Fill Engine<br />

This BLT engine accelerates the GUI of Microsoft Windows* operating systems. The<br />

128-bit BLT engine provides hardware acceleration of block transfers of pixel data for<br />

many common Windows operations. The BLT engine can be used for the following:<br />

Move rectangular blocks of data between memory locations<br />

Align data<br />

Perform logical operations (raster ops)<br />

The rectangular block of data does not change, as it is transferred between memory<br />

locations. The allowable memory transfers are between: cacheable system memory<br />

and frame buffer memory, frame buffer memory and frame buffer memory, and within<br />

system memory. Data to be transferred can consist of regions of memory, patterns, or<br />

solid color fills. A pattern is always 8 x 8 pixels wide and may be 8, 16, or 32 bits per<br />

pixel.<br />

The BLT engine expands monochrome data into a color depth of 8, 16, or 32 bits. BLTs<br />

can be either opaque or transparent. Opaque transfers move the data specified to the<br />

destination. Transparent transfers compare destination color to source color and write<br />

according to the mode of transparency selected.<br />

Data is horizontally and vertically aligned at the destination. If the destination for the<br />

BLT overlaps with the source memory location, the BLT engine specifies which area in<br />

memory to begin the BLT transfer. Hardware is included for all 256 raster operations<br />

(source, pattern, and destination) defined by Microsoft, including transparent BLT.<br />

The BLT engine has instructions to invoke BLT and stretch BLT operations, permitting<br />

software to set up instruction buffers and use batch processing. The BLT engine can<br />

perform hardware clipping during BLTs.<br />
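Each of the 256 raster operations is, in effect, an 8-bit truth table indexed by the (pattern, source, destination) bits. A sketch of how such a ROP code can be evaluated bitwise:<br />

```python
def rop3(rop, pattern, source, dest, width=32):
    # Evaluate a Microsoft ROP3 code bit-by-bit: the 8-bit 'rop' value is a
    # truth table indexed by the (pattern, source, destination) bit triple.
    result = 0
    for i in range(width):
        p = (pattern >> i) & 1
        s = (source >> i) & 1
        d = (dest >> i) & 1
        result |= ((rop >> ((p << 2) | (s << 1) | d)) & 1) << i
    return result

SRCCOPY = 0xCC  # the truth table that reduces to "result = source"
```

Hardware evaluates all bits of a pixel word in parallel; the loop here just makes the truth-table lookup explicit.<br />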


2.4.2 <strong>Processor</strong> Graphics Display<br />

The <strong>Processor</strong> Graphics controller display pipe can be broken down into three<br />

components:<br />

Display Planes<br />

Display Pipes<br />

DisplayPort and <strong>Intel</strong> ® FDI<br />

Figure 2-7. <strong>Processor</strong> Display Block Diagram<br />

2.4.2.1 Display Planes<br />

A display plane is a single displayed surface in memory and contains one image<br />

(desktop, cursor, overlay). It is the portion of the display hardware logic that defines<br />

the format and location of a rectangular region of memory that can be displayed on<br />

display output device and delivers that data to a display pipe. This is clocked by the<br />

Core Display Clock.<br />

2.4.2.1.1 Planes A and B<br />

Planes A and B are the main display planes and are associated with Pipes A and B<br />

respectively. The two display pipes are independent, allowing for support of two<br />

independent display streams. They are both double-buffered, which minimizes latency<br />

and improves visual quality.<br />

2.4.2.1.2 Sprite A and B<br />

Sprite A and Sprite B are planes optimized for video decode, and are associated with<br />

Planes A and B respectively. Sprite A and B are also double-buffered.<br />

2.4.2.1.3 Cursors A and B<br />


Cursors A and B are small, fixed-sized planes dedicated for mouse cursor acceleration,<br />

and are associated with Planes A and B respectively. These planes support resolutions<br />

up to 256 x 256 each.<br />



2.4.2.1.4 VGA<br />

VGA is used for boot, safe mode, legacy games, etc. It can be changed by an<br />

application without OS/driver notification, due to legacy requirements.<br />

2.4.2.2 Display Pipes<br />


The display pipe blends and synchronizes pixel data received from one or more display<br />

planes and adds the timing of the display output device upon which the image is<br />

displayed. This is clocked by the Display Reference clock inputs.<br />

The display pipes A and B operate independently of each other at the rate of 1 pixel per<br />

clock. They can attach to any of the display ports. Each pipe sends display data to the<br />

PCH over the <strong>Intel</strong> Flexible Display Interface (<strong>Intel</strong> ® FDI).<br />

2.4.2.3 Display Ports<br />

The display ports consist of output logic and pins that transmit the display data to the<br />

associated encoding logic and send the data to the display device (that is, LVDS,<br />

HDMI*, DVI, SDVO, etc.). All display interfaces connecting external displays are now<br />

repartitioned and driven from the PCH.<br />

2.4.3 <strong>Intel</strong> ® Flexible Display Interface<br />

The <strong>Intel</strong> Flexible Display Interface (<strong>Intel</strong> ® FDI) is a proprietary link for carrying display<br />

traffic from the <strong>Processor</strong> Graphics controller to the PCH display I/Os. <strong>Intel</strong> ® FDI<br />

supports two independent channels one for pipe A and one for pipe B.<br />

Each channel has four transmit (Tx) differential pairs used for transporting pixel<br />

and framing data from the display engine.<br />

Each channel has one single-ended LineSync and one FrameSync input (1-V CMOS<br />

signaling).<br />

One display interrupt line input (1-V CMOS signaling).<br />

<strong>Intel</strong> ® FDI may dynamically scale down to 2X or 1X based on actual display<br />

bandwidth requirements.<br />

Common 100-MHz reference clock.<br />

Each channel transports at a rate of 2.7 Gbps.<br />

PCH supports end-to-end lane reversal across both channels (no reversal support<br />

required in the processor).<br />

2.4.4 Multi-Graphics Controller Multi-Monitor Support<br />

The processor supports simultaneous use of the <strong>Processor</strong> Graphics Controller (GT) and<br />

a x16 PCI Express Graphics (PEG) device.<br />

The processor supports a maximum of 2 displays connected to the PEG card in parallel<br />

with up to 2 displays connected to the PCH.<br />

Note: When supporting Multi Graphics controllers Multi-Monitors, drag and drop between<br />

monitors and the 2x8 PEG is not supported.<br />


2.5 Platform Environment Control Interface (PECI)<br />

The PECI is a one-wire interface that provides a communication channel between a<br />

PECI client (processor) and a PECI master. The processor implements a PECI interface<br />

to:<br />

Allow communication of processor thermal and other information to the PECI<br />

master.<br />

Read averaged Digital Thermal Sensor (DTS) values for fan speed control.<br />
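DTS readings returned over PECI are conventionally negative offsets below Tj_max. A sketch of the conversion (the Tj_max value here is an assumed example, not this processor's specification):<br />

```python
def dts_to_celsius(dts_reading, tj_max=100):
    # PECI reports the DTS value as a negative offset below Tj_max, so
    # a reading of 0 means the die is at its maximum junction temperature.
    # Tj_max = 100 is an assumed example value for illustration.
    return tj_max + dts_reading
```

A fan-speed controller therefore only needs the offset: the closer the reading gets to 0, the harder the fans must work.<br />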

2.6 Interface Clocking<br />

2.6.1 Internal Clocking Requirements<br />

Table 2-3. Reference Clock<br />

Reference Input Clock Input Frequency Associated PLL<br />

BCLK[0]/BCLK#[0] 100 MHz <strong>Processor</strong>/Memory/Graphics/PCIe/DMI/FDI<br />





Technologies<br />

3 Technologies<br />

This chapter provides a high-level description of <strong>Intel</strong> technologies implemented in the<br />

processor.<br />

The implementation of the features may vary between the processor SKUs.<br />

Details on the different technologies of <strong>Intel</strong> processors and other relevant external<br />

notes are located at the <strong>Intel</strong> technology web site: http://www.intel.com/technology/<br />

3.1 <strong>Intel</strong> ® Virtualization Technology<br />

<strong>Intel</strong> Virtualization Technology (<strong>Intel</strong> ® VT) makes a single system appear as multiple<br />

independent systems to software. This allows multiple, independent operating systems<br />

to run simultaneously on a single system. <strong>Intel</strong> VT comprises technology components<br />

to support virtualization of platforms based on <strong>Intel</strong> architecture microprocessors and<br />

chipsets. <strong>Intel</strong> Virtualization Technology (<strong>Intel</strong> VT-x) added hardware support in the<br />

processor to improve the virtualization performance and robustness. <strong>Intel</strong> Virtualization<br />

Technology for Directed I/O (<strong>Intel</strong> VT-d) adds chipset hardware implementation to<br />

support and improve I/O virtualization performance and robustness.<br />

<strong>Intel</strong> VT-x specifications and functional descriptions are included in the <strong>Intel</strong> ® 64 and<br />

IA-32 Architectures Software Developer’s Manual, Volume 3B, and are available at:<br />

http://www.intel.com/products/processor/manuals/index.htm<br />

The <strong>Intel</strong> VT-d specification and other VT documents can be referenced at:<br />

http://www.intel.com/technology/virtualization/index.htm<br />

3.1.1 <strong>Intel</strong> ® VT-x Objectives<br />

<strong>Intel</strong> VT-x provides hardware acceleration for virtualization of IA platforms. A Virtual<br />

Machine Monitor (VMM) can use <strong>Intel</strong> VT-x features to provide an improved, reliable<br />

virtualized platform. By using <strong>Intel</strong> VT-x, a VMM is:<br />

Robust: VMMs no longer need to use paravirtualization or binary translation. This<br />

means that they will be able to run off-the-shelf OSs and applications without any<br />

special steps.<br />

Enhanced: <strong>Intel</strong> VT enables VMMs to run 64-bit guest operating systems on IA x86<br />

processors.<br />

More reliable: Due to the hardware support, VMMs can now be smaller, less<br />

complex, and more efficient. This improves reliability and availability and reduces<br />

the potential for software conflicts.<br />

More secure: The use of hardware transitions in the VMM strengthens the isolation<br />

of VMs and further prevents corruption of one VM from affecting others on the<br />

same system.<br />



3.1.2 <strong>Intel</strong> ® VT-x Features<br />

The processor core supports the following <strong>Intel</strong> VT-x features:<br />

Extended Page Tables (EPT)<br />

EPT is hardware assisted page table virtualization<br />

It eliminates VM exits from guest OS to the VMM for shadow page-table<br />

maintenance<br />

Virtual <strong>Processor</strong> IDs (VPID)<br />


Ability to assign a VM ID to tag processor core hardware structures (such as<br />

TLBs)<br />

This avoids flushes on VM transitions to give a lower-cost VM transition time<br />

and an overall reduction in virtualization overhead.<br />

Guest Preemption Timer<br />

Mechanism for a VMM to preempt the execution of a guest OS after an amount<br />

of time specified by the VMM. The VMM sets a timer value before entering a<br />

guest<br />

The feature aids VMM developers in flexibility and Quality of Service (QoS)<br />

assurances<br />

Descriptor-Table Exiting<br />

Descriptor-table exiting allows a VMM to protect a guest OS from internal<br />

(malicious software based) attack by preventing relocation of key system data<br />

structures like IDT (interrupt descriptor table), GDT (global descriptor table),<br />

LDT (local descriptor table), and TSS (task segment selector).<br />

A VMM using this feature can intercept (by a VM exit) attempts to relocate<br />

these data structures and prevent them from being tampered with by malicious<br />

software.<br />

3.1.3 <strong>Intel</strong> ® VT-d Objectives<br />

The key <strong>Intel</strong> VT-d objectives are domain-based isolation and hardware-based<br />

virtualization. A domain can be abstractly defined as an isolated environment in a<br />

platform to which a subset of host physical memory is allocated. Virtualization allows<br />

for the creation of one or more partitions on a single system. This could be multiple<br />

partitions in the same operating system, or there can be multiple operating system<br />

instances running on the same system offering benefits such as system<br />

consolidation, legacy migration, activity partitioning, or security.<br />

3.1.4 <strong>Intel</strong> ® VT-d Features<br />

The processor supports the following <strong>Intel</strong> VT-d features:<br />

Memory controller and <strong>Processor</strong> Graphics comply with <strong>Intel</strong> ® VT-d 1.2<br />

specification.<br />

Two VT-d DMA remap engines.<br />

iGraphics DMA remap engine<br />

DMI/PEG<br />

Support for root entry, context entry, and default context<br />

39-bit guest physical address and host physical address widths<br />


Support for 4K page sizes only<br />

Support for register-based fault recording only (for single entry only) and support<br />

for MSI interrupts for faults<br />

Support for both leaf and non-leaf caching<br />

Support for boot protection of default page table<br />

Support for non-caching of invalid page table entries<br />

Support for hardware based flushing of translated but pending writes and pending<br />

reads, on IOTLB invalidation<br />

Support for page-selective IOTLB invalidation<br />

MSI cycles (MemWr to address FEEx_xxxxh) not translated<br />

Translation faults result in cycle forwarding to VBIOS region (byte enables<br />

masked for writes). Returned data may be bogus for internal agents; PEG/DMI<br />

interfaces return unsupported request status<br />

Interrupt Remapping is supported<br />

Queued invalidation is supported.<br />

VT-d translation bypass address range is supported (Pass Through)<br />

Note: <strong>Intel</strong> VT-d Technology may not be available on all SKUs.<br />
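Since the remap engines support only 4 KB pages and a 39-bit address width, a second-level translation amounts to a three-level table walk (9 + 9 + 9 index bits plus a 12-bit page offset). The following is a hedged sketch of that walk, using plain dictionaries as stand-ins for the hardware I/O page tables, not driver code:<br />

```python
# Simplified model of a 3-level I/O page-table walk: 39-bit addresses,
# 4 KB pages only, so each level indexes 9 bits and the low 12 bits
# are the page offset.

PAGE_SHIFT, LEVEL_BITS, LEVELS = 12, 9, 3

def walk(root: dict, gpa: int) -> int:
    """Translate a 39-bit guest-physical address via nested dict 'tables'."""
    assert gpa < (1 << 39), "address exceeds the 39-bit width"
    node = root
    for level in range(LEVELS - 1, -1, -1):
        index = (gpa >> (PAGE_SHIFT + level * LEVEL_BITS)) & ((1 << LEVEL_BITS) - 1)
        node = node[index]  # a missing entry would raise, modeling a fault
    return node + (gpa & ((1 << PAGE_SHIFT) - 1))  # host page base + offset
```

The nested-dictionary tables here are purely illustrative; real tables are physical-memory structures with per-entry permission bits.<br />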

3.1.5 <strong>Intel</strong> ® VT-d Features Not Supported<br />

The following features are not supported by the processor with <strong>Intel</strong> VT-d:<br />
• No support for PCI-SIG endpoint caching (ATS)<br />
• No support for <strong>Intel</strong> VT-d read prefetching/snarfing (that is, translations within a cacheline are not stored in an internal buffer for reuse in subsequent translations)<br />
• No support for advanced fault reporting<br />
• No support for super pages<br />
• No support for the <strong>Intel</strong> VT-d translation bypass address range (such usage models need to be resolved with VMM help in setting up the page tables correctly)<br />


3.2 <strong>Intel</strong> ® Trusted Execution Technology (<strong>Intel</strong> ® TXT)<br />

<strong>Intel</strong> Trusted Execution Technology (<strong>Intel</strong> TXT) defines platform-level enhancements<br />

that provide the building blocks for creating trusted platforms.<br />

The <strong>Intel</strong> TXT platform helps to provide the authenticity of the controlling environment<br />

such that those wishing to rely on the platform can make an appropriate trust decision.<br />

The <strong>Intel</strong> TXT platform determines the identity of the controlling environment by<br />

accurately measuring and verifying the controlling software.<br />

Another aspect of the trust decision is the ability of the platform to resist attempts to<br />

change the controlling environment. The <strong>Intel</strong> TXT platform will resist attempts by<br />

software processes to change the controlling environment or bypass the bounds set by<br />

the controlling environment.<br />

<strong>Intel</strong> TXT is a set of extensions designed to provide a measured and controlled launch<br />

of system software that will then establish a protected environment for itself and any<br />

additional software that it may execute.<br />

These extensions enhance two areas:<br />
• The launching of the Measured Launched Environment (MLE)<br />
• The protection of the MLE from potential corruption<br />

The enhanced platform provides these launch and control interfaces using Safer Mode<br />

Extensions (SMX).<br />

The SMX interface includes the following functions:<br />
• Measured/verified launch of the MLE<br />
• Mechanisms to ensure the above measurement is protected and stored in a secure location<br />
• Protection mechanisms that allow the MLE to control attempts to modify itself<br />

For more information, refer to the <strong>Intel</strong> ® TXT Measured Launched Environment<br />

Developer’s Guide at http://www.intel.com/technology/security.<br />

3.3 <strong>Intel</strong> ® Hyper-Threading Technology<br />

The processor supports <strong>Intel</strong> ® Hyper-Threading Technology (<strong>Intel</strong> ® HT Technology),<br />

which allows an execution core to function as two logical processors. While some<br />

execution resources (such as caches, execution units, and buses) are shared, each<br />

logical processor has its own architectural state with its own set of general-purpose<br />

registers and control registers. This feature must be enabled using the BIOS and<br />

requires operating system support.<br />

<strong>Intel</strong> recommends enabling Hyper-Threading Technology with Microsoft Windows 7*,<br />

Microsoft Windows Vista*, Microsoft Windows* XP Professional/Windows* XP Home,<br />

and disabling Hyper-Threading Technology using the BIOS for all previous versions of<br />

Windows operating systems. For more information on Hyper-Threading Technology, see<br />

http://www.intel.com/technology/platform-technology/hyper-threading/.<br />


3.4 <strong>Intel</strong> ® Turbo Boost Technology<br />

<strong>Intel</strong> ® Turbo Boost Technology is a feature that allows the processor core to<br />

opportunistically and automatically run faster than its rated operating frequency/render<br />

clock if it is operating below power, temperature, and current limits. The <strong>Intel</strong> Turbo<br />

Boost Technology feature is designed to increase performance of both multi-threaded<br />

and single-threaded workloads. Maximum frequency is dependent on the SKU and<br />

number of active cores. No special hardware support is necessary for <strong>Intel</strong> Turbo Boost<br />

Technology. BIOS and the OS can enable or disable <strong>Intel</strong> Turbo Boost Technology.<br />

Compared with previous generation products, <strong>Intel</strong> Turbo Boost Technology will<br />

increase the ratio of application power to TDP. Thus, thermal solutions and platform<br />

cooling designed to less than the thermal design guidance might experience<br />

thermal and performance issues since more applications will tend to run at the<br />

maximum power limit for significant periods of time.<br />

Note: <strong>Intel</strong> Turbo Boost Technology may not be available on all SKUs.<br />

3.4.1 <strong>Intel</strong> ® Turbo Boost Technology Frequency<br />

The processor's rated frequency assumes that all execution cores are running an<br />

application at the thermal design power (TDP). However, under typical operation, not<br />

all cores are active. Therefore, most applications are consuming less than the TDP at<br />

the rated frequency. To take advantage of the available thermal headroom, the active<br />

cores can increase their operating frequency.<br />

To determine the highest performance frequency amongst active cores, the processor<br />
takes the following into consideration:<br />
• The number of cores operating in the C0 state<br />
• The estimated current consumption<br />
• The estimated power consumption<br />
• The temperature<br />

Any of these factors can affect the maximum frequency for a given workload. If the<br />

power, current, or thermal limit is reached, the processor will automatically reduce the<br />

frequency to stay within its TDP limit.<br />

Note: <strong>Intel</strong> Turbo Boost Technology processor frequencies are only active if the operating<br />

system is requesting the P0 state. For more information on P-states and C-states, refer<br />

to Chapter 4, Power Management.<br />
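The selection logic described above can be sketched as follows. This is a hedged illustration, not Intel's actual algorithm: the per-core-count ratio table and the base ratio are hypothetical placeholders, not values for any real SKU.<br />

```python
# Illustrative sketch: pick an operating ratio from the active-core count,
# then clamp back toward the rated (TDP) point when any power, current,
# or thermal limit is exceeded. All ratio values are hypothetical.

TURBO_RATIO_BY_ACTIVE_CORES = {1: 38, 2: 37, 3: 36, 4: 35}  # hypothetical
BASE_RATIO = 33  # hypothetical rated (TDP) ratio

def resolve_ratio(active_cores, power_ok, current_ok, thermal_ok):
    """Return the highest ratio permitted by core count and limit status."""
    ratio = TURBO_RATIO_BY_ACTIVE_CORES.get(active_cores, BASE_RATIO)
    if not (power_ok and current_ok and thermal_ok):
        # Any exceeded limit forces frequency back toward the rated point.
        ratio = BASE_RATIO
    return ratio
```

Fewer active cores leave more headroom, so the granted ratio rises as cores idle; any limit violation collapses the grant back to the rated ratio.<br />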

3.4.2 <strong>Intel</strong> ® Turbo Boost Technology Graphics Frequency<br />

Graphics render frequency is selected by the processor dynamically based on graphics<br />

workload demand. The processor can optimize both processor and <strong>Processor</strong> Graphics<br />

performance by managing power for the overall package. For the <strong>Processor</strong> Graphics,<br />

this allows an increase in the render core frequency and increased graphics<br />

performance for graphics intensive workloads. In addition, during processor intensive<br />

workloads when the graphics power is low, the processor core can increase its<br />

frequency higher within the package power limit. Enabling <strong>Intel</strong> Turbo Boost Technology<br />

will maximize the performance of the processor core and the graphics render frequency<br />

within the specified package power levels.<br />

3.5 <strong>Intel</strong> ® Advanced Vector Extensions (AVX)<br />

<strong>Intel</strong> ® Advanced Vector Extensions (AVX) is the latest expansion of the <strong>Intel</strong> instruction<br />

set. It extends the <strong>Intel</strong> ® Streaming SIMD Extensions (SSE) from 128-bit vectors into<br />

256-bit vectors. <strong>Intel</strong> AVX addresses the continued need for vector floating-point<br />

performance in mainstream scientific and engineering numerical applications, visual<br />

processing, recognition, data-mining/synthesis, gaming, physics, cryptography and<br />

other application areas. The enhancements in <strong>Intel</strong> AVX allow for improved<br />

performance due to wider vectors, new extensible syntax, and rich functionality<br />

including the ability to better manage, rearrange, and sort data. For more information<br />

on <strong>Intel</strong> AVX, see http://www.intel.com/software/avx<br />

3.6 Advanced Encryption Standard New Instructions<br />

(AES-NI)<br />

The processor supports Advanced Encryption Standard New Instructions (AES-NI) that<br />

are a set of Single Instruction Multiple Data (SIMD) instructions that enable fast and<br />

secure data encryption and decryption based on the Advanced Encryption Standard<br />

(AES). AES-NI is valuable for a wide range of cryptographic applications, such as<br />

applications that perform bulk encryption/decryption, authentication, random number<br />

generation, and authenticated encryption. AES is broadly accepted as the standard for<br />

both government and industry applications, and is widely deployed in various protocols.<br />

AES-NI consists of six <strong>Intel</strong> ® SSE instructions. Four instructions (AESENC,<br />
AESENCLAST, AESDEC, and AESDECLAST) facilitate high-performance AES encryption and<br />
decryption. The other two, AESIMC and AESKEYGENASSIST, support the AES key<br />
expansion procedure. Together, these instructions provide full hardware support for<br />
AES, offering security, high performance, and a great deal of flexibility.<br />
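The division of labor among these instructions can be shown structurally. The sketch below composes AES-128's ten rounds from caller-supplied round primitives; the primitives are stand-ins for AESENC/AESENCLAST, not the real AES transforms:<br />

```python
def aes128_encrypt_block(block, round_keys, aesenc, aesenclast):
    """Round structure of AES-128: AddRoundKey, 9 x AESENC, 1 x AESENCLAST."""
    assert len(round_keys) == 11  # AES-128 expands the key into 11 round keys
    state = block ^ round_keys[0]             # initial AddRoundKey
    for rk in round_keys[1:10]:
        state = aesenc(state, rk)             # rounds 1-9 (full rounds)
    return aesenclast(state, round_keys[10])  # round 10 omits MixColumns
```

In real use the loop body would be one AESENC instruction per round and the round keys would come from AESKEYGENASSIST-driven key expansion; only the control structure is shown here.<br />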

3.6.1 PCLMULQDQ Instruction<br />

The processor supports the carry-less multiplication instruction, PCLMULQDQ.<br />

PCLMULQDQ is a Single Instruction Multiple Data (SIMD) instruction that computes the<br />

128-bit carry-less multiplication of two, 64-bit operands without generating and<br />

propagating carries. Carry-less multiplication is an essential processing component of<br />

several cryptographic systems and standards. Hence, accelerating carry-less<br />

multiplication can significantly contribute to achieving high speed secure computing<br />

and communication.<br />
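What the instruction computes can be modeled directly. The shift-and-XOR loop below is an illustrative software model of a 64x64-bit carry-less multiply, not a performant or constant-time implementation:<br />

```python
# Carry-less multiplication over GF(2): XOR replaces addition, so no
# carries propagate between bit positions.

def clmul64(a: int, b: int) -> int:
    """128-bit carry-less product of two 64-bit operands."""
    assert a < (1 << 64) and b < (1 << 64)
    result = 0
    for i in range(64):
        if (b >> i) & 1:
            result ^= a << i  # accumulate shifted copies with XOR
    return result
```

For example, 0b11 carry-less-multiplied by 0b11 gives 0b101 rather than the arithmetic product 9, because the middle partial products cancel under XOR.<br />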

3.7 <strong>Intel</strong> ® 64 Architecture x2APIC<br />

The x2APIC architecture extends the xAPIC architecture, which provides a key mechanism<br />
for interrupt delivery. This extension is intended primarily to increase processor<br />

addressability.<br />

Specifically, x2APIC:<br />
• Retains all key elements of compatibility with the xAPIC architecture:<br />
  - delivery modes<br />
  - interrupt and processor priorities<br />
  - interrupt sources<br />
  - interrupt destination types<br />


• Provides extensions to scale processor addressability for both the logical and physical destination modes<br />
• Adds new features to enhance performance of interrupt delivery<br />
• Reduces complexity of logical destination mode interrupt delivery on link-based architectures<br />

The key enhancements provided by the x2APIC architecture over xAPIC are the<br />

following:<br />

• Support for two modes of operation to provide backward compatibility and extensibility for future platform innovations:<br />
  - In xAPIC compatibility mode, APIC registers are accessed through a memory-mapped interface to a 4 KB page, identical to the xAPIC architecture.<br />
  - In x2APIC mode, APIC registers are accessed through Model Specific Register (MSR) interfaces. In this mode, the x2APIC architecture provides significantly increased processor addressability and some enhancements to interrupt delivery.<br />

• Increased range of processor addressability in x2APIC mode:<br />
  - The physical xAPIC ID field increases from 8 bits to 32 bits, allowing interrupt processor addressability of up to 4G-1 processors in physical destination mode. A processor implementation of the x2APIC architecture can support fewer than 32 bits in a software-transparent fashion.<br />
  - The logical xAPIC ID field increases from 8 bits to 32 bits. The 32-bit logical x2APIC ID is partitioned into two sub-fields: a 16-bit cluster ID and a 16-bit logical ID within the cluster. Consequently, ((2^20) - 16) processors can be addressed in logical destination mode. Processor implementations can support fewer than 16 bits in the cluster ID sub-field and logical ID sub-field in a software-agnostic fashion.<br />

• More efficient MSR interface to access APIC registers:<br />
  - To enhance inter-processor and self-directed interrupt delivery, as well as the ability to virtualize the local APIC, the APIC register set can be accessed only through MSR-based interfaces in x2APIC mode. The Memory Mapped IO (MMIO) interface used by xAPIC is not supported in x2APIC mode.<br />
• The semantics for accessing APIC registers have been revised to simplify the programming of frequently used APIC registers by system software. Specifically, the software semantics for using the Interrupt Command Register (ICR) and End Of Interrupt (EOI) registers have been modified to allow for more efficient delivery and dispatching of interrupts.<br />

The x2APIC extensions are made available to system software by enabling the local<br />
x2APIC unit in x2APIC mode. To benefit from x2APIC capabilities, a new operating<br />
system and a new BIOS are both needed, with special support for the x2APIC mode.<br />

The x2APIC architecture provides backward compatibility to the xAPIC architecture and<br />

forward extendibility for future <strong>Intel</strong> platform innovations.<br />
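The cluster/logical partitioning described above can be sketched as follows, using the x2APIC convention that the logical ID is derived from the physical x2APIC ID with 16 processors per cluster. Actual register access (RDMSR/WRMSR) is privileged and is not shown:<br />

```python
# Sketch of how a 32-bit logical x2APIC ID decomposes into the 16-bit
# cluster ID and the 16-bit one-hot logical ID within the cluster.

def logical_x2apic_id(x2apic_id: int) -> int:
    cluster_id = x2apic_id >> 4           # upper bits select the cluster
    logical_bit = 1 << (x2apic_id & 0xF)  # one-hot ID within the cluster
    return (cluster_id << 16) | logical_bit

def split(logical_id: int):
    return logical_id >> 16, logical_id & 0xFFFF  # (cluster, in-cluster mask)
```

The one-hot in-cluster field is why a 32-bit logical ID addresses fewer processors than 2^32: each cluster holds at most 16 members.<br />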

Note: <strong>Intel</strong> x2APIC technology may not be available on all processor SKUs.<br />

For more information, refer to the <strong>Intel</strong> ® 64 Architecture x2APIC Specification at<br />

http://www.intel.com/products/processor/manuals/<br />




4 Power Management<br />

This chapter provides information on the following power management topics:<br />
• Advanced Configuration and Power Interface (ACPI) States<br />
• Processor Core<br />
• Integrated Memory Controller (IMC)<br />
• PCI Express*<br />
• Direct Media Interface (DMI)<br />
• Processor Graphics Controller<br />
Figure 4-1. Power States<br />
G0 – Working<br />
  S0 – CPU fully powered on<br />
    C0 – Active mode (P0 … Pn)<br />
    C1 – Auto halt<br />
    C1E – Auto halt, low frequency, low voltage<br />
    C3 – L1/L2 caches flushed, clocks off<br />
    C6 – Core states saved before shutdown<br />
    C7 – Similar to C6, plus L3 flush<br />
G1 – Sleeping<br />
  S3 cold – Sleep – Suspend To RAM (STR)<br />
  S4 – Hibernate – Suspend To Disk (STD), wakeup on PCH<br />
G2/S5 – Soft Off – no power, wakeup on PCH<br />
G3 – Mechanical Off<br />
Note: Power state availability may vary between the different SKUs.<br />


4.1 Advanced Configuration and Power Interface<br />

(ACPI) States Supported<br />

The ACPI states supported by the processor are described in this section.<br />

4.1.1 System States<br />

Table 4-1. System States<br />
State | Description<br />
G0/S0 | Full On<br />
G1/S3-Cold | Suspend-to-RAM (STR). Context saved to memory (S3-Hot is not supported by the processor).<br />
G1/S4 | Suspend-to-Disk (STD). All power lost (except wakeup on PCH).<br />
G2/S5 | Soft off. All power lost (except wakeup on PCH). Total reboot.<br />
G3 | Mechanical off. All power removed from the system.<br />

4.1.2 <strong>Processor</strong> Core/Package Idle States<br />

Table 4-2. <strong>Processor</strong> Core/Package State Support<br />
State | Description<br />
C0 | Active mode, processor executing code<br />
C1 | AutoHALT state<br />
C1E | AutoHALT state with lowest frequency and voltage operating point<br />
C3 | Execution cores in C3 flush their L1 instruction cache, L1 data cache, and L2 cache to the L3 shared cache. Clocks are shut off to each core.<br />
C6 | Execution cores in this state save their architectural state before removing core voltage.<br />
4.1.3 Integrated Memory Controller States<br />
Table 4-3. Integrated Memory Controller States<br />
State | Description<br />
Power up | CKE asserted. Active mode<br />
Pre-charge Power-down | CKE de-asserted (not self-refresh) with all banks closed<br />
Active Power-Down | CKE de-asserted (not self-refresh) with a minimum of one bank active<br />
Self-Refresh | CKE de-asserted using device self-refresh<br />
4.1.4 PCIe Link States<br />
Table 4-4. PCIe Link States<br />
State | Description<br />
L0 | Full on. Active transfer state<br />
L0s | First Active Power Management low power state. Low exit latency<br />
L1 | Lowest Active Power Management state. Longer exit latency<br />
L3 | Lowest power state (power-off). Longest exit latency<br />


4.1.5 DMI States<br />

Table 4-5. DMI States<br />

State | Description<br />
L0 | Full on. Active transfer state<br />
L0s | First Active Power Management low power state. Low exit latency<br />
L1 | Lowest Active Power Management state. Longer exit latency<br />
L3 | Lowest power state (power-off). Longest exit latency<br />

4.1.6 <strong>Processor</strong> Graphics Controller States<br />

Table 4-6. <strong>Processor</strong> Graphics Controller States<br />

State | Description<br />
D0 | Full on, display active<br />
D3 Cold | Power-off<br />

4.1.7 Interface State Combinations<br />

Table 4-7. G, S, and C State Combinations<br />
Global (G) State | Sleep (S) State | Package (C) State | Processor State | System Clocks | Description<br />
G0 | S0 | C0 | Full On | On | Full On<br />
G0 | S0 | C1/C1E | Auto-Halt | On | Auto-Halt<br />
G0 | S0 | C3 | Deep Sleep | On | Deep Sleep<br />
G0 | S0 | C6 | Deep Power-down | On | Deep Power-down<br />
G1 | S3 | N/A | Power off | Off, except RTC | Suspend to RAM<br />
G1 | S4 | N/A | Power off | Off, except RTC | Suspend to Disk<br />
G2 | S5 | N/A | Power off | Off, except RTC | Soft Off<br />
G3 | N/A | N/A | Power off | Power off | Hard off<br />

4.2 <strong>Processor</strong> Core Power Management<br />

While executing code, Enhanced <strong>Intel</strong> SpeedStep Technology optimizes the processor's<br />

frequency and core voltage based on workload. Each frequency and voltage operating<br />

point is defined by ACPI as a P-state. When the processor is not executing code, it is<br />

idle. A low-power idle state is defined by ACPI as a C-state. In general, lower power<br />

C-states have longer entry and exit latencies.<br />

4.2.1 Enhanced <strong>Intel</strong> ® SpeedStep ® Technology<br />

The following are the key features of Enhanced <strong>Intel</strong> SpeedStep Technology:<br />
• Multiple frequency and voltage points for optimal performance and power efficiency. These operating points are known as P-states.<br />
• Frequency selection is software controlled by writing to processor MSRs. The voltage is optimized based on the selected frequency and the number of active processor cores.<br />
  - If the target frequency is higher than the current frequency, VCC is ramped up in steps to an optimized voltage. This voltage is signaled by the SVID bus to the voltage regulator. Once the voltage is established, the PLL locks on to the target frequency.<br />
  - If the target frequency is lower than the current frequency, the PLL locks to the target frequency, then transitions to a lower voltage by signaling the target voltage on the SVID bus.<br />
• All active processor cores share the same frequency and voltage. In a multi-core processor, the highest frequency P-state requested amongst all active cores is selected.<br />
• Software-requested transitions are accepted at any time. If a previous transition is in progress, the new transition is deferred until the previous transition is completed.<br />
• The processor controls voltage ramp rates internally to ensure glitch-free transitions.<br />
• Because there is low transition latency between P-states, a significant number of transitions per second are possible.<br />
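The shared-frequency rule above reduces to granting the maximum request among active cores. A minimal sketch, with illustrative ratio values rather than real MSR encodings:<br />

```python
# All active cores share one frequency/voltage point, so the
# highest-performance (largest-ratio) request wins.

def select_package_ratio(requested_ratios):
    """requested_ratios: the P-state ratio requested by each active core."""
    if not requested_ratios:
        raise ValueError("at least one active core required")
    return max(requested_ratios)  # highest performance request is granted
```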

4.2.2 Low-Power Idle States<br />

When the processor is idle, low-power idle states (C-states) are used to save power.<br />

More power-saving actions are taken for numerically higher C-states. However, higher<br />
C-states have longer exit and entry latencies. Resolution of C-states occurs at the<br />

thread, processor core, and processor package level. Thread-level C-states are<br />

available if <strong>Intel</strong> Hyper-Threading Technology is enabled.<br />

Caution: Long-term reliability cannot be assured unless all the low-power idle states are<br />
enabled.<br />


Figure 4-2. Idle Power Management Breakdown of the <strong>Processor</strong> Cores<br />
[Figure: each core's C-state is resolved from its two thread C-states (Thread 0 and Thread 1); the processor package C-state is then resolved from the core C-states.]<br />

Entry and exit of the C-States at the thread and core level are shown in Figure 4-3.<br />

Figure 4-3. Thread and Core C-State Entry and Exit<br />
[Figure: from C0, MWAIT(C1) or HLT enters C1 (C1E when C1E is enabled); MWAIT(C3) or a P_LVL2 I/O read enters C3; MWAIT(C6) or a P_LVL3 I/O read enters C6.]<br />
While individual threads can request low power C-states, power saving actions only<br />
take place once the core C-state is resolved. Core C-states are automatically resolved<br />
by the processor. For thread and core C-states, a transition to and from C0 is required<br />
before entering any other C-state.<br />
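The thread-to-core C-state resolution tabulated in Table 4-8 amounts to taking the shallowest (lowest-numbered) of the two thread requests. A hedged sketch that ignores the C1-to-C1E promotion footnote:<br />

```python
# The core C-state is the shallowest of the two thread requests.

ORDER = {"C0": 0, "C1": 1, "C3": 2, "C6": 3}

def resolve_core_cstate(thread0: str, thread1: str) -> str:
    return min(thread0, thread1, key=ORDER.__getitem__)
```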


Table 4-8. Coordination of Thread Power States at the Core Level<br />
Thread 0 \ Thread 1 | C0 | C1 | C3 | C6<br />
C0 | C0 | C0 | C0 | C0<br />
C1 | C0 | C1 (1) | C1 (1) | C1 (1)<br />
C3 | C0 | C1 (1) | C3 | C3<br />
C6 | C0 | C1 (1) | C3 | C6<br />
Note:<br />
1. If enabled, the core C-state will be C1E if all enabled cores have also resolved a core C1 state or higher.<br />
4.2.3 Requesting Low-Power Idle States<br />
The primary software interfaces for requesting low power idle states are the MWAIT instruction with sub-state hints and the HLT instruction (for C1 and C1E). However, software may also make C-state requests using the legacy method of I/O reads from the ACPI-defined processor clock control registers, referred to as P_LVLx. This method of requesting C-states provides legacy support for operating systems that initiate C-state transitions using I/O reads.<br />
For legacy operating systems, P_LVLx I/O reads are converted within the processor to the equivalent MWAIT C-state request. Therefore, P_LVLx reads do not directly result in I/O reads to the system. This feature, known as I/O MWAIT redirection, must be enabled in the BIOS.<br />
Note: The P_LVLx I/O Monitor address needs to be set up before using the P_LVLx I/O read interface. Each P_LVLx is mapped to the supported MWAIT(Cx) instruction as shown in Table 4-9.<br />
Table 4-9. P_LVLx to MWAIT Conversion<br />
P_LVLx | MWAIT(Cx) | Notes<br />
P_LVL2 | MWAIT(C3) |<br />
P_LVL3 | MWAIT(C6) | C6. No sub-states allowed.<br />
The BIOS can write to the C-state range field of the PMG_IO_CAPTURE MSR to restrict the range of I/O addresses that are trapped and emulate MWAIT-like functionality. Any P_LVLx reads outside of this range do not cause an I/O redirection to an MWAIT(Cx)-like request; they fall through like a normal I/O instruction.<br />
Note: When P_LVLx I/O instructions are used, MWAIT sub-states cannot be defined. The MWAIT sub-state is always zero if I/O MWAIT redirection is used. By default, P_LVLx I/O redirection enables the MWAIT "break on EFLAGS.IF" feature that triggers a wakeup on an interrupt, even if interrupts are masked by EFLAGS.IF.<br />
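A hedged model of the redirection flow: the base address and window size below are hypothetical placeholders for the BIOS-configured P_LVLx monitor address and PMG_IO_CAPTURE range, and the offset-to-MWAIT mapping follows Table 4-9.<br />

```python
# Reads inside the configured capture window become MWAIT C-state
# requests with the sub-state forced to zero; reads outside fall
# through as normal I/O. All addresses are hypothetical.

P_LVL_BASE = 0x1814   # hypothetical P_LVL2 address
CSTATE_RANGE = 2      # hypothetical trapped window: P_LVL2..P_LVL3
MWAIT_FOR_OFFSET = {0: "MWAIT(C3)", 1: "MWAIT(C6)"}  # per Table 4-9

def handle_io_read(addr: int):
    offset = addr - P_LVL_BASE
    if 0 <= offset < CSTATE_RANGE:
        return ("mwait", MWAIT_FOR_OFFSET[offset], 0)  # sub-state always 0
    return ("io_read", addr)
```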



4.2.4 Core C-states<br />

The following are general rules for all core C-states, unless specified otherwise:<br />
• A core C-state is determined by the lowest numerical thread state (for example, Thread 0 requests C1E while Thread 1 requests C3, resulting in a core C1E state). See Table 4-8.<br />
• A core transitions to the C0 state when:<br />
  - An interrupt occurs<br />
  - There is an access to the monitored address, if the state was entered using an MWAIT instruction<br />
• For core C1/C1E, core C3, and core C6, an interrupt directed toward a single thread wakes only that thread. However, since both threads are no longer at the same core C-state, the core resolves to C0.<br />
• A system reset re-initializes all processor cores.<br />
4.2.4.1 Core C0 State<br />
The normal operating state of a core where code is being executed.<br />

4.2.4.2 Core C1/C1E State<br />

C1/C1E is a low power state entered when all threads within a core execute an HLT or<br />

MWAIT(C1/C1E) instruction.<br />

A System Management Interrupt (SMI) handler returns execution to either Normal<br />

state or the C1/C1E state. See the <strong>Intel</strong> ® 64 and IA-32 Architecture Software<br />

Developer’s Manual, Volume 3A/3B: System Programmer’s Guide for more information.<br />

While a core is in C1/C1E state, it processes bus snoops and snoops from other<br />

threads. For more information on C1E, see Section 4.2.5.2.<br />

4.2.4.3 Core C3 State<br />

Individual threads of a core can enter the C3 state by initiating a P_LVL2 I/O read to<br />

the P_BLK or an MWAIT(C3) instruction. A core in C3 state flushes the contents of its<br />

L1 instruction cache, L1 data cache, and L2 cache to the shared L3 cache, while<br />

maintaining its architectural state. All core clocks are stopped at this point. Because the<br />

core's caches are flushed, the processor does not wake any core that is in the C3 state<br />
when a snoop is detected or when another core accesses cacheable memory.<br />

4.2.4.4 Core C6 State<br />

Individual threads of a core can enter the C6 state by initiating a P_LVL3 I/O read or an<br />

MWAIT(C6) instruction. Before entering core C6, the core will save its architectural<br />

state to a dedicated SRAM. Once complete, a core will have its voltage reduced to zero<br />

volts. During exit, the core is powered on and its architectural state is restored.<br />

4.2.4.5 C-State Auto-Demotion<br />

In general, deeper C-states such as C6 have long latencies and have higher energy<br />

entry/exit costs. The resulting performance and energy penalties become significant<br />

when the entry/exit frequency of a deeper C-state is high. Therefore, incorrect or<br />
inefficient usage of deeper C-states has a negative impact on power. To increase<br />
residency and improve power in deeper C-states, the processor supports C-state auto-demotion.<br />

There are two C-state auto-demotion options:<br />
• C6 to C3<br />
• C6/C3 to C1<br />

The decision to demote a core from C6 to C3 or C3/C6 to C1 is based on each core's<br />

immediate residency history. Upon each core C6 request, the core C-state is demoted<br />

to C3 or C1 until a sufficient amount of residency has been established. At that point, a<br />

core is allowed to go into C3/C6. Each option can be run concurrently or individually.<br />

This feature is disabled by default. BIOS must enable it in the<br />

PMG_CST_CONFIG_CONTROL register. The auto-demotion policy is also configured by<br />

this register.<br />
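The residency-based decision described above can be sketched as a policy. This is an illustrative heuristic, not the processor's actual algorithm, and the threshold values are hypothetical:<br />

```python
# Demote a C6 request until recent residencies are long enough to make
# the deeper state's entry/exit energy worthwhile. Threshold is made up.

MIN_AVG_RESIDENCY_US = 200  # hypothetical break-even residency

def demote_c6_request(recent_residencies_us):
    """Return the C-state actually granted for a core C6 request."""
    if not recent_residencies_us:
        return "C1"  # no history yet: stay shallow
    avg = sum(recent_residencies_us) / len(recent_residencies_us)
    if avg >= MIN_AVG_RESIDENCY_US:
        return "C6"
    return "C3" if avg >= MIN_AVG_RESIDENCY_US / 2 else "C1"
```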

4.2.5 Package C-States<br />

The processor supports C0, C1/C1E, C3, and C6 package power states. The following is a summary of the general rules for package C-state entry. These apply to all package C-states unless specified otherwise:<br />
• A package C-state request is determined by the lowest numerical core C-state amongst all cores.<br />
• A package C-state is automatically resolved by the processor depending on the core idle power states and the status of the platform components.<br />
  - Each core can be at a lower idle power state than the package if the platform does not grant the processor permission to enter a requested package C-state.<br />
  - The platform may allow additional power savings to be realized in the processor.<br />
• For package C-states, the processor is not required to enter C0 before entering any other C-state.<br />

The processor exits a package C-state when a break event is detected. Depending on the type of break event, the processor does the following:<br />
• If a core break event is received, the target core is activated and the break event message is forwarded to the target core. If the break event is not masked, the target core enters the core C0 state and the processor enters package C0.<br />
• If the break event was due to a memory access or snoop request:<br />
  - If the platform did not request to keep the processor in a higher package C-state, the package returns to its previous C-state.<br />
  - If the platform requests a higher power C-state, the memory access or snoop request is serviced and the package remains in the higher power C-state.<br />

Table 4-10. Coordination of Core Power States at the Package Level<br />
Core 0 \ Core 1 | C0 | C1 | C3 | C6<br />
C0 | C0 | C0 | C0 | C0<br />
C1 | C0 | C1 (1) | C1 (1) | C1 (1)<br />
C3 | C0 | C1 (1) | C3 | C3<br />
C6 | C0 | C1 (1) | C3 | C6<br />
Note:<br />
1. If enabled, the package C-state will be C1E if all cores have resolved a core C1 state or higher.<br />


Figure 4-4. Package C-State Entry and Exit<br />
[Figure: package C-state diagram showing transitions between C0, C1, C3, and C6.]<br />

4.2.5.1 Package C0<br />

This is the normal operating state for the processor. The processor remains in the<br />

normal state when at least one of its cores is in the C0 or C1 state or when the platform<br />

has not granted permission to the processor to go into a low power state. Individual<br />

cores may be in lower power idle states while the package is in C0.<br />

4.2.5.2 Package C1/C1E<br />

No additional power reduction actions are taken in the package C1 state. However, if<br />

the C1E sub-state is enabled, the processor automatically transitions to the lowest<br />

supported core clock frequency, followed by a reduction in voltage.<br />

The package enters the C1 low power state when:<br />
• At least one core is in the C1 state.<br />
• The other cores are in a C1 or lower power state.<br />
The package enters the C1E state when:<br />
• All cores have directly requested C1E using MWAIT(C1) with a C1E sub-state hint.<br />
• All cores are in a power state lower than C1/C1E but the package low power state is limited to C1/C1E using the PMG_CST_CONFIG_CONTROL MSR.<br />
• All cores have requested C1 using HLT or MWAIT(C1) and C1E auto-promotion is enabled in IA32_MISC_ENABLES.<br />
No notification to the system occurs upon entry to C1/C1E.<br />
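The entry conditions above can be sketched as a resolution function. This is a hedged simplification: it models only the C1-versus-C1E choice, and treats the MSR-limit case as out of scope:<br />

```python
# Core states are the resolved core C-states; "C1E" marks a core whose
# request carried the C1E sub-state hint.

DEPTH = {"C0": 0, "C1": 1, "C1E": 1, "C3": 2, "C6": 3}

def package_c1_or_c1e(core_states, c1e_auto_promotion=False):
    if any(DEPTH[s] < 1 for s in core_states):
        return None  # some core still active: package stays in C0
    if all(s == "C1E" for s in core_states):
        return "C1E"  # every core gave the C1E sub-state hint
    if c1e_auto_promotion and all(s in ("C1", "C1E") for s in core_states):
        return "C1E"  # HLT/MWAIT(C1) promoted when enabled in IA32_MISC_ENABLES
    return "C1"
```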



4.2.5.3 Package C3 State<br />

A processor enters the package C3 low power state when:<br />
• At least one core is in the C3 state.<br />
• The other cores are in a C3 or lower power state, and the processor has been granted permission by the platform.<br />
• The platform has not granted a request to a package C6 state but has allowed a package C3 state.<br />
In the package C3 state, the L3 shared cache is valid.<br />

4.2.5.4 Package C6 State<br />

A processor enters the package C6 low power state when:<br />
• At least one core is in the C6 state.<br />
• The other cores are in a C6 or lower power state, and the processor has been granted permission by the platform.<br />

In package C6 state, all cores have saved their architectural state and have had their<br />

core voltages reduced to zero volts. The L3 shared cache is still powered and snoopable<br />

in this state. The processor remains in package C6 state as long as any part of the L3<br />

cache is active.<br />
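The entry rules above amount to: the package C-state is the shallowest state requested by any core, clipped by the platform grant and the configured package limit. A minimal behavioral sketch (the state encoding and function names are illustrative, not from this datasheet):<br />

```python
# Illustrative model of package C-state resolution (names are hypothetical).
# Deeper C-states use larger numbers; the package can go no deeper than the
# shallowest core request, the platform grant, or the configured MSR limit.
C0, C1, C3, C6 = 0, 1, 3, 6

def package_c_state(core_requests, platform_grant=C6, msr_package_limit=C6):
    """Return the package C-state given per-core C-state requests."""
    if not core_requests:
        return C0
    shallowest_core = min(core_requests)  # any active core keeps the package up
    return min(shallowest_core, platform_grant, msr_package_limit)

# One core still in C0 keeps the package in C0.
print(package_c_state([C0, C6, C6]))                        # 0
# All cores in C6, but the package limit is C1 (cf. PMG_CST_CONFIG_CONTROL).
print(package_c_state([C6, C6, C6], msr_package_limit=C1))  # 1
# All cores in C6 and the platform grants C6.
print(package_c_state([C6, C6, C6]))                        # 6
```

This mirrors the text's condition that the package enters C3 or C6 only when every core is in that state or deeper and the platform has granted permission.<br />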

4.3 IMC Power Management<br />

The main memory is power managed during normal operation and in low-power ACPI<br />

Cx states.<br />

4.3.1 Disabling Unused System Memory Outputs<br />

Any system memory (SM) interface signal that goes to a memory module connector where it is not connected to any actual memory devices (for example, when the DIMM connector is unpopulated, or the DIMM is single-sided) is tri-stated. The benefits of disabling unused SM signals are:<br />
• Reduced power consumption.<br />
• Reduced overshoot/undershoot signal quality issues seen by the processor I/O buffer receivers, caused by reflections from potentially un-terminated transmission lines.<br />

When a given rank is not populated, the corresponding chip select and CKE signals are<br />

not driven.<br />

At reset, all rows must be assumed to be populated until it can be proven that they are not. This is because when CKE is tri-stated with a DIMM present, the DIMM is not ensured to maintain data integrity.<br />

SCKE tri-state should be enabled by BIOS where appropriate, since at reset all rows<br />

must be assumed to be populated.<br />


4.3.2 DRAM Power Management and Initialization<br />

The processor implements extensive support for power management on the SDRAM<br />

interface. There are four SDRAM operations associated with the Clock Enable (CKE)<br />

signals that the SDRAM controller supports. The processor drives four CKE pins to<br />

perform these operations.<br />

CKE is one of the power-saving mechanisms. When CKE is off, the internal DDR clock is disabled and DDR power is reduced. The power saving differs according to the selected mode and the DDR type used. For more information, refer to the IDD table in the DDR specification.<br />

The DDR specification defines three levels of power-down that differ in power saving and in wakeup time:<br />
1. Active power-down (APD): This mode is entered if there are open pages when CKE is de-asserted. In this mode the open pages are retained. Power saving in this mode is the lowest. DDR power consumption is defined by IDD3P. Exit latency from this mode is defined by tXP (a small number of cycles).<br />
2. Precharged power-down (PPD): This mode is entered if all banks in the DDR are precharged when CKE is de-asserted. Power saving in this mode is intermediate: better than APD, but less than DLL-off. Power consumption is defined by IDD2P1. Exit latency is defined by tXP. The difference from APD mode is that on wakeup all page buffers are empty.<br />
3. DLL-off: In this mode the data-in DLLs on the DDR are off. Power saving in this mode is the best among all power modes. Power consumption is defined by IDD2P1. Exit latency is defined by tXP, but an additional tXPDLL (10 to 20 cycles, according to DDR type) must elapse before the first data transfer is allowed.<br />
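The wakeup-time tradeoff among the three levels can be sketched numerically. The cycle counts below are placeholders, not values from any DDR3 datasheet; the real tXP and tXPDLL come from the AC timing tables of the actual DRAM device:<br />

```python
# Hedged sketch: relative wakeup cost of the three DDR power-down levels.
# T_XP and T_XPDLL are placeholder cycle counts, not real DRAM parameters.
T_XP = 5      # cycles to exit APD/PPD (placeholder)
T_XPDLL = 15  # cycles before first data transfer after DLL-off exit (placeholder)

def exit_latency_cycles(mode):
    """Approximate cycles from CKE re-assertion to first allowed data transfer."""
    if mode in ("APD", "PPD"):
        return T_XP
    if mode == "DLL-off":
        return max(T_XP, T_XPDLL)  # tXPDLL dominates the DLL-off exit
    raise ValueError("unknown power-down mode: " + mode)

for mode in ("APD", "PPD", "DLL-off"):
    print(mode, exit_latency_cycles(mode))
```

The point of the sketch is simply that DLL-off trades the best power saving for the longest exit latency, as the list above states.<br />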

The processor supports five different types of power-down. The different modes are the power-down modes supported by DDR3 and combinations of these. The type of CKE power-down is defined by the configuration. The options are:<br />
1. No power-down.<br />
2. APD: The rank enters power-down as soon as the idle timer expires, regardless of the bank status.<br />
3. PPD: When the idle timer expires, the MC sends a PRE-all to the rank and then enters power-down.<br />
4. DLL-off: Same as option (3), but the DDR is configured to DLL-off.<br />
5. APD, change to PPD (APD-PPD): Begins as option (2); when all page-close timers of the rank have expired, it wakes the rank, issues a PRE-all, and returns to PPD.<br />
6. APD, change to DLL-off (APD_DLLoff): Begins as option (2); when all page-close timers of the rank have expired, it wakes the rank, issues a PRE-all, and returns to DLL-off power-down.<br />

CKE is determined per rank when the rank is inactive. Each rank has an idle counter. The idle counter starts counting as soon as the rank has no accesses; if it expires, the rank may enter power-down while no new transactions to the rank arrive in the queues. Note that the idle counter begins counting at the arrival of the last incoming transaction.<br />
It is important to understand that since the power-down decision is per rank, the MC can find many opportunities to power down ranks even while running memory-intensive applications, and the savings are significant (possibly a few watts, according to the DDR specification). This is especially significant when each channel is populated with more ranks.<br />
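The per-rank idle counter described above behaves like a countdown that restarts on every access. A behavioral sketch (class and method names are ours, not the MC implementation):<br />

```python
# Behavioral sketch of a per-rank CKE idle counter (not the real MC logic).
class RankIdleCounter:
    def __init__(self, idle_limit):
        self.idle_limit = idle_limit
        self.idle_cycles = 0
        self.powered_down = False

    def access(self):
        # Any transaction to the rank restarts the count and wakes the rank.
        self.idle_cycles = 0
        self.powered_down = False

    def tick(self):
        # Called once per controller cycle in which this rank sees no access.
        self.idle_cycles += 1
        if self.idle_cycles >= self.idle_limit:
            self.powered_down = True  # rank may enter CKE power-down

rank = RankIdleCounter(idle_limit=15)  # 15 = minimum recommended count
for _ in range(15):
    rank.tick()
print(rank.powered_down)  # True
rank.access()
print(rank.powered_down)  # False
```

Because each rank carries its own counter, some ranks can be powered down while others in the same channel stay active, which is why the savings persist even under memory-intensive workloads.<br />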


Selection of power modes should be made according to the power-performance or thermal tradeoffs of a given system:<br />
• When trying to achieve maximum performance and power or thermal considerations are not an issue: use no power-down.<br />
• In a system that tries to minimize power consumption, use the deepest power-down mode possible: DLL-off or APD_DLLoff.<br />
• In high-performance systems with dense packaging (that is, complex thermal design), a power-down mode should be considered in order to reduce heating and avoid DDR throttling caused by the heating.<br />

Control of the power mode through CRB-BIOS: The BIOS selects no power-down by default. There are knobs to change the selected power-down mode.<br />

Another control is the idle timer expiration count. This is set through PM_PDWN_config bits 7:0 (MCHBAR + 4CB0h). The shorter this timer is set, the more opportunities the MC will have to put the DDR in power-down. The minimum recommended value for this register is 15. There is no BIOS hook to set this register; customers who choose to change its value can do so by changing the BIOS. For experiments, this register can be modified in real time if the BIOS did not lock the MC registers.<br />
Note that in APD, APD-PPD, and APD-DLLoff there is no point in setting the idle counter in the same range as the page-close idle timer.<br />
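Updating bits 7:0 of a register like PM_PDWN_config is a plain read-modify-write of the low byte. The sketch below shows only the bit manipulation; the register offset (MCHBAR + 4CB0h) is taken from the text, while the mechanism for actually mapping and accessing MCHBAR is platform-specific and omitted:<br />

```python
# Illustrative read-modify-write of an 8-bit idle-timer field in bits 7:0.
# Accessing the physical register at MCHBAR + 0x4CB0 is platform-specific;
# here an integer simply stands in for the register contents.
def set_idle_timer(reg_value, count):
    """Return reg_value with bits 7:0 replaced by count (0..255)."""
    if not 0 <= count <= 0xFF:
        raise ValueError("idle-timer count must fit in 8 bits")
    return (reg_value & ~0xFF) | count

reg = 0xDEADBE00           # pretend current contents of PM_PDWN_config
reg = set_idle_timer(reg, 15)  # 15 = minimum recommended value from the text
print(hex(reg))            # 0xdeadbe0f
```

Preserving the upper bits matters because the same register carries other configuration fields.<br />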

Another option associated with CKE power-down is S_DLL-off. When this option is enabled, the SBR I/O slave DLLs are turned off when all channel ranks are in power-down. (Do not confuse this with the DLL-off mode, in which the DDR DLLs are off.) This mode requires the I/O slave DLL wakeup time to be defined.<br />

4.3.2.1 Initialization Role of CKE<br />

During power-up, CKE is the only input to the SDRAM that has its level recognized (other than the DDR3 reset pin) once power is applied. It must be driven LOW by the DDR controller to make sure the SDRAM components float DQ and DQS during power-up. CKE signals remain LOW (while any reset is active) until the BIOS writes to a configuration register. Using this method, CKE is ensured to remain inactive for much longer than the specified 200 microseconds after power and clocks to the SDRAM devices are stable.<br />

4.3.2.2 Conditional Self-Refresh<br />

<strong>Intel</strong> Rapid Memory Power Management (<strong>Intel</strong> RMPM) conditionally places memory into<br />

self-refresh in the package C3 and C6 low-power states. RMPM functionality depends<br />

on the graphics/display state (relevant only when processor graphics is being used), as<br />

well as memory traffic patterns generated by other connected I/O devices. The target<br />

behavior is to enter self-refresh as long as there are no memory requests to service.<br />

When entering the S3 Suspend-to-RAM (STR) state or S0 conditional self-refresh, the processor core flushes pending cycles and then places all SDRAM ranks into self-refresh. The CKE signals remain LOW so the SDRAM devices perform self-refresh.<br />


4.3.2.3 Dynamic Power-down Operation<br />

Dynamic power-down of memory is employed during normal operation. Based on idle<br />

conditions, a given memory rank may be powered down. The IMC implements<br />

aggressive CKE control to dynamically put the DRAM devices in a power-down state.<br />

The processor core controller can be configured to put the devices in active power-down (CKE de-assertion with open pages) or precharge power-down (CKE de-assertion with all pages closed). Precharge power-down provides greater power savings but has a bigger performance impact, since all pages are first closed before putting the devices in power-down mode.<br />

If dynamic power-down is enabled, all ranks are powered up before doing a refresh<br />

cycle and all ranks are powered down at the end of refresh.<br />

4.3.2.4 DRAM I/O Power Management<br />

Unused signals should be disabled to save power and reduce electromagnetic<br />

interference. This includes all signals associated with an unused memory channel.<br />

Clocks can be controlled on a per DIMM basis. Exceptions are made for per DIMM<br />

control signals such as CS#, CKE, and ODT for unpopulated DIMM slots.<br />

The I/O buffer for an unused signal should be tri-stated (output driver disabled), the<br />

input receiver (differential sense-amp) should be disabled, and any DLL circuitry<br />

related ONLY to unused signals should be disabled. The input path must be gated to<br />

prevent spurious results due to noise on the unused signals (typically handled<br />

automatically when input receiver is disabled).<br />

4.4 PCIe* Power Management<br />

• Active power management support using the L0s and L1 states.<br />
• All inputs and outputs disabled in the L2/L3 Ready state.<br />

Note: PEG interface does not support Hot Plug.<br />

Note: A power impact may be observed when the PEG link disable power management state is used.<br />

4.5 DMI Power Management<br />

Active power management support using L0s/L1 state.<br />



4.6 Graphics Power Management<br />


4.6.1 <strong>Intel</strong> ® Rapid Memory Power Management (<strong>Intel</strong> ® RMPM)<br />

(also known as CxSR)<br />

The <strong>Intel</strong> ® Rapid Memory Power Management puts rows of memory into self-refresh mode during C3/C6 to allow the system to remain in the lower power states longer. Server processors routinely save power during runtime by entering the C3 and C6 states. <strong>Intel</strong> ® RMPM is an indirect method of power saving that can have a significant effect on the system as a whole.<br />

4.6.2 <strong>Intel</strong> ® Graphics Performance Modulation Technology<br />

(<strong>Intel</strong> ® GPMT)<br />

<strong>Intel</strong> ® Graphics Performance Modulation Technology (<strong>Intel</strong> ® GPMT) is a method for saving power in the graphics adapter while continuing to display and process data in the<br />

adapter. This method will switch the render frequency and/or render voltage<br />

dynamically between higher and lower power states supported on the platform based<br />

on render engine workload.<br />

In products where <strong>Intel</strong> ® Graphics Dynamic Frequency (also known as Turbo Boost<br />

Technology) is supported and enabled, the functionality of <strong>Intel</strong> ® GPMT will be<br />

maintained by <strong>Intel</strong> ® Graphics Dynamic Frequency (also known as Turbo Boost<br />

Technology).<br />

4.6.3 Graphics Render C-State<br />

Render C-State (RC6) is a technique designed to optimize the average power of the graphics render engine during periods when the render engine is idle. Render C-state is entered when the graphics render engine, the blitter engine, and the video engine have no workload currently being worked on and no outstanding graphics memory transactions. When the idleness condition is met, the integrated graphics will program the VR into a low-voltage state (~0.4 V) through the SVID bus.<br />

4.6.4 <strong>Intel</strong> ® Smart 2D Display Technology (<strong>Intel</strong> ® S2DDT)<br />

<strong>Intel</strong> S2DDT reduces display refresh memory traffic by reducing the memory reads required for display refresh. Power consumption is reduced by fewer accesses to the IMC. S2DDT is only enabled in single-pipe mode.<br />
<strong>Intel</strong> S2DDT is most effective with:<br />
• Display images well suited to compression, such as text windows, slide shows, and so on. Poor examples are 3D games.<br />
• Static screens, such as screens with significant portions of the background showing 2D applications, processor benchmarks, and so on, or conditions when the processor is idle. Poor examples are full-screen 3D games and benchmarks that flip the display image at or near display refresh rates.<br />


4.6.5 <strong>Intel</strong> ® Graphics Dynamic Frequency<br />

<strong>Intel</strong> ® Graphics Dynamic Frequency Technology is the ability of the processor and<br />

graphics cores to opportunistically increase frequency and/or voltage above the<br />

ensured processor and graphics frequency for the given part. <strong>Intel</strong> ® Graphics Dynamic<br />

Frequency Technology is a performance feature that makes use of unused package<br />

power and thermals to increase application performance. The increase in frequency is<br />

determined by how much power and thermal budget is available in the package, and<br />

the application demand for additional processor or graphics performance. The<br />

processor core control is maintained by an embedded controller. The graphics driver dynamically adjusts between P-states to maintain optimal performance, power, and thermals. The graphics driver will always place the graphics engine in its lowest possible P-state, thereby acting in the same capacity as <strong>Intel</strong> ® GPMT.<br />

4.7 Thermal Power Management<br />

See Section 4.6 for all graphics thermal power management-related features.<br />

§ §<br />





Thermal Management<br />

5 Thermal Management<br />

For thermal specifications and design guidelines, refer to the <strong>Intel</strong> ® <strong>Xeon</strong> ® <strong>Processor</strong><br />

<strong>E3</strong>-<strong>1200</strong> <strong>Family</strong> and LGA1155 Socket Thermal and Mechanical Specifications and<br />

Design Guidelines.<br />

§ §<br />





Signal Description<br />

6 Signal Description<br />

This chapter describes the processor signals. They are arranged in functional groups<br />

according to their associated interface or category. The following notations are used to<br />

describe the signal type.<br />

I: Input Pin<br />
O: Output Pin<br />
I/O: Bi-directional Input/Output Pin<br />
The signal description also includes the type of buffer used for the particular signal (see Table 6-1).<br />
Table 6-1. Signal Description Buffer Types<br />
PCI Express*: PCI Express interface signals. These signals are compatible with the PCI Express* 2.0 Signaling Environment AC Specifications and are AC coupled. The buffers are not 3.3-V tolerant. Refer to the PCIe specification.<br />
DMI: Direct Media Interface signals. These signals are based on the PCI Express* 2.0 Signaling Environment AC Specifications (5 GT/s), but are DC coupled. The buffers are not 3.3-V tolerant.<br />
CMOS: CMOS buffers. 1.1-V tolerant.<br />
DDR3: DDR3 buffers. 1.5-V tolerant.<br />
A: Analog reference or output. May be used as a threshold voltage or for buffer compensation.<br />
Ref: Voltage reference signal.<br />
Asynchronous 1: Signal has no timing relationship with any reference clock.<br />
Notes: 1. Qualifier for a buffer type.<br />



6.1 System Memory Interface<br />

Table 6-2. Memory Channel A<br />
SA_BS[2:0] (O, DDR3): Bank Select. These signals define which banks are selected within each SDRAM rank.<br />
SA_WE# (O, DDR3): Write Enable Control Signal. This signal is used with SA_RAS# and SA_CAS# (along with SA_CS#) to define the SDRAM commands.<br />
SA_RAS# (O, DDR3): RAS Control Signal. This signal is used with SA_CAS# and SA_WE# (along with SA_CS#) to define the SDRAM commands.<br />
SA_CAS# (O, DDR3): CAS Control Signal. This signal is used with SA_RAS# and SA_WE# (along with SA_CS#) to define the SDRAM commands.<br />
SA_DQS[8:0], SA_DQS#[8:0] (I/O, DDR3): Data Strobes. SA_DQS[8:0] and its complement signal group make up a differential strobe pair. The data is captured at the crossing point of SA_DQS[8:0] and SA_DQS#[8:0] during read and write transactions.<br />
SA_DQ[63:0] (I/O, DDR3): Data Bus. Channel A data signal interface to the SDRAM data bus.<br />
SA_ECC_CB[7:0] (I/O, DDR3): ECC Data Lines. Data lines for the ECC check byte.<br />
SA_MA[15:0] (O, DDR3): Memory Address. These signals are used to provide the multiplexed row and column address to the SDRAM.<br />
SA_CK[3:0] (O, DDR3): SDRAM Differential Clock. Channel A SDRAM differential clock signal pair. The crossing of the positive edge of SA_CK and the negative edge of its complement SA_CK# is used to sample the command and control signals on the SDRAM.<br />
SA_CK#[3:0] (O, DDR3): SDRAM Inverted Differential Clock. Channel A SDRAM differential clock signal-pair complement.<br />
SA_CKE[3:0] (O, DDR3): Clock Enable (one per rank). Used to: initialize the SDRAMs during power-up; power down SDRAM ranks; place all SDRAM ranks into and out of self-refresh during STR.<br />
SA_CS#[3:0] (O, DDR3): Chip Select (one per rank). Used to select particular SDRAM components during the active state. There is one Chip Select for each SDRAM rank.<br />
SA_ODT[3:0] (O, DDR3): On Die Termination. Active termination control.<br />


Table 6-3. Memory Channel B<br />
SB_BS[2:0] (O, DDR3): Bank Select. These signals define which banks are selected within each SDRAM rank.<br />
SB_WE# (O, DDR3): Write Enable Control Signal. This signal is used with SB_RAS# and SB_CAS# (along with SB_CS#) to define the SDRAM commands.<br />
SB_RAS# (O, DDR3): RAS Control Signal. This signal is used with SB_CAS# and SB_WE# (along with SB_CS#) to define the SDRAM commands.<br />
SB_CAS# (O, DDR3): CAS Control Signal. This signal is used with SB_RAS# and SB_WE# (along with SB_CS#) to define the SDRAM commands.<br />
SB_DQS[8:0], SB_DQS#[8:0] (I/O, DDR3): Data Strobes. SB_DQS[8:0] and its complement signal group make up a differential strobe pair. The data is captured at the crossing point of SB_DQS[8:0] and SB_DQS#[8:0] during read and write transactions.<br />
SB_DQ[63:0] (I/O, DDR3): Data Bus. Channel B data signal interface to the SDRAM data bus.<br />
SB_ECC_CB[7:0] (I/O, DDR3): ECC Data Lines. Data lines for the ECC check byte.<br />
SB_MA[15:0] (O, DDR3): Memory Address. These signals are used to provide the multiplexed row and column address to the SDRAM.<br />
SB_CK[3:0] (O, DDR3): SDRAM Differential Clock. Channel B SDRAM differential clock signal pair. The crossing of the positive edge of SB_CK and the negative edge of its complement SB_CK# is used to sample the command and control signals on the SDRAM.<br />
SB_CK#[3:0] (O, DDR3): SDRAM Inverted Differential Clock. Channel B SDRAM differential clock signal-pair complement.<br />
SB_CKE[3:0] (O, DDR3): Clock Enable (one per rank). Used to: initialize the SDRAMs during power-up; power down SDRAM ranks; place all SDRAM ranks into and out of self-refresh during STR.<br />
SB_CS#[3:0] (O, DDR3): Chip Select (one per rank). Used to select particular SDRAM components during the active state. There is one Chip Select for each SDRAM rank.<br />
SB_ODT[3:0] (O, DDR3): On Die Termination. Active termination control.<br />
6.2 Memory Reference and Compensation<br />
Table 6-4. Memory Reference and Compensation<br />
SM_VREF (I, A): DDR3 Reference Voltage. This provides the reference voltage to the DDR3 interface and is defined as V DDQ/2.<br />


6.3 Reset and Miscellaneous Signals<br />

Table 6-5. Reset and Miscellaneous Signals<br />
CFG[17:0] (I, CMOS): Configuration Signals. The CFG signals have a default value of '1' if not terminated on the board.<br />
• CFG[1:0]: Reserved configuration lanes. A test point may be placed on the board for these lanes.<br />
• CFG[2]: PCI Express* Static x16 Lane Numbering Reversal. 1 = Normal operation; 0 = Lane numbers reversed.<br />
• CFG[3]: PCI Express* Static x4 Lane Numbering Reversal. 1 = Normal operation; 0 = Lane numbers reversed.<br />
• CFG[4]: Reserved configuration lane. A test point may be placed on the board for this lane.<br />
• CFG[6:5]: PCI Express* Bifurcation (see Note 1). 00 = 1 x8, 2 x4 PCI Express; 01 = Reserved; 10 = 2 x8 PCI Express; 11 = 1 x16 PCI Express.<br />
• CFG[17:7]: Reserved configuration lanes. A test point may be placed on the board for these lands.<br />
FC_x: FC signals are available for compatibility with other processors. A test point may be placed on the board for these lands.<br />
PM_SYNC (I, CMOS): Power Management Sync. A sideband signal to communicate power management status from the platform to the processor.<br />
RESET# (I, CMOS): Platform Reset pin driven by the PCH.<br />
RSVD (No Connect), RSVD_NCTF (Non-Critical to Function): RESERVED. All signals that are RSVD and RSVD_NCTF must be left unconnected on the board.<br />
SM_DRAMRST# (O, CMOS): DDR3 DRAM Reset. Reset signal from processor to DRAM devices; one common to all channels.<br />
Notes: 1. PCIe bifurcation support varies with the processor and PCH SKUs used.<br />
66 Datasheet, Volume 1


Signal Description<br />

6.4 PCI Express* Based Interface Signals<br />
Table 6-6. PCI Express* Graphics Interface Signals<br />
PEG_ICOMPI (I, A): PCI Express Input Current Compensation.<br />
PEG_ICOMPO (I, A): PCI Express Current Compensation.<br />
PEG_RCOMPO (I, A): PCI Express Resistance Compensation.<br />
PEG_RX[15:0], PEG_RX#[15:0], PE_RX[3:0] 1, PE_RX#[3:0] 1 (I, PCI Express): PCI Express Receive Differential Pairs.<br />
PEG_TX[15:0], PEG_TX#[15:0], PE_TX[3:0] 1, PE_TX#[3:0] 1 (O, PCI Express): PCI Express Transmit Differential Pairs.<br />
Notes: 1. PE_TX[3:0] and PE_RX[3:0] are only used for platforms that support 20 PCIe lanes.<br />
6.5 <strong>Intel</strong> ® Flexible Display Interface Signals<br />
Table 6-7. <strong>Intel</strong> ® Flexible Display Interface<br />
FDI0_FSYNC[0] (I, CMOS): <strong>Intel</strong> ® Flexible Display Interface Frame Sync Pipe A.<br />
FDI0_LSYNC[0] (I, CMOS): <strong>Intel</strong> ® Flexible Display Interface Line Sync Pipe A.<br />
FDI_TX[7:0], FDI_TX#[7:0] (O, FDI): <strong>Intel</strong> ® Flexible Display Interface Transmit Differential Pairs.<br />
FDI1_FSYNC[1] (I, CMOS): <strong>Intel</strong> ® Flexible Display Interface Frame Sync Pipe B.<br />
FDI1_LSYNC[1] (I, CMOS): <strong>Intel</strong> ® Flexible Display Interface Line Sync Pipe B.<br />
FDI_INT (I, Asynchronous CMOS): <strong>Intel</strong> ® Flexible Display Interface Hot Plug Interrupt.<br />
6.6 DMI<br />
Table 6-8. DMI - <strong>Processor</strong> to PCH Serial Interface<br />
DMI_RX[3:0], DMI_RX#[3:0] (I, DMI): DMI Input from PCH. Direct Media Interface receive differential pair.<br />
DMI_TX[3:0], DMI_TX#[3:0] (O, DMI): DMI Output to PCH. Direct Media Interface transmit differential pair.<br />



6.7 PLL Signals<br />
Table 6-9. PLL Signals<br />
BCLK, BCLK# (I, Diff Clk): Differential bus clock input to the processor.<br />
6.8 TAP Signals<br />
Table 6-10. TAP Signals<br />
BPM#[7:0] (I/O, CMOS): Breakpoint and Performance Monitor Signals. These signals are outputs from the processor that indicate the status of breakpoints and programmable counters used for monitoring processor performance.<br />
BCLK_ITP, BCLK_ITP# (I): These pins are connected in parallel to the top-side debug probe to enable debug capabilities.<br />
DBR# (O): DBR# is used only in systems where no debug port is implemented on the system board. DBR# is used by a debug port interposer so that an in-target probe can drive system reset.<br />
PRDY# (O, Asynchronous CMOS): PRDY# is a processor output used by debug tools to determine processor debug readiness.<br />
PREQ# (I, Asynchronous CMOS): PREQ# is used by debug tools to request debug operation of the processor.<br />
TCK (I, CMOS): TCK (Test Clock) provides the clock input for the processor Test Bus (also known as the Test Access Port). TCK must be driven low or allowed to float during power-on Reset.<br />
TDI (I, CMOS): TDI (Test Data In) transfers serial test data into the processor. TDI provides the serial input needed for JTAG specification support.<br />
TDO (O, Open Drain): TDO (Test Data Out) transfers serial test data out of the processor. TDO provides the serial output needed for JTAG specification support.<br />
TMS (I, CMOS): TMS (Test Mode Select) is a JTAG specification support signal used by debug tools.<br />
TRST# (I, CMOS): TRST# (Test Reset) resets the Test Access Port (TAP) logic. TRST# must be driven low during power-on Reset.<br />


6.9 Error and Thermal Protection<br />
Table 6-11. Error and Thermal Protection<br />
CATERR# (O, CMOS): Catastrophic Error. This signal indicates that the system has experienced a catastrophic error and cannot continue to operate. The processor will set this for non-recoverable machine check errors or other unrecoverable internal errors. On the processor, CATERR# is used for signaling the following types of errors:<br />
• Legacy MCERRs: CATERR# is asserted for 16 BCLKs.<br />
• Legacy IERRs: CATERR# remains asserted until warm or cold reset.<br />
PECI (I/O): PECI (Platform Environment Control Interface). A serial sideband interface to the processor; it is used primarily for thermal, power, and error management.<br />
PROCHOT# (Asynchronous CMOS Input/Open-Drain Output): <strong>Processor</strong> Hot. PROCHOT# goes active when the processor temperature monitoring sensor(s) detect that the processor has reached its maximum safe operating temperature. This indicates that the processor Thermal Control Circuit (TCC) has been activated, if enabled. This signal can also be driven to the processor to activate the TCC.<br />
THERMTRIP# (O, Asynchronous CMOS): Thermal Trip. The processor protects itself from catastrophic overheating by use of an internal thermal sensor. This sensor is set well above the normal operating temperature to ensure that there are no false trips. The processor will stop all execution when the junction temperature exceeds approximately 130 °C. This is signaled to the system by the THERMTRIP# pin.<br />
6.10 Power Sequencing<br />
Table 6-12. Power Sequencing<br />
SM_DRAMPWROK (I, Asynchronous CMOS): SM_DRAMPWROK <strong>Processor</strong> Input. Connects to PCH DRAMPWROK.<br />
UNCOREPWRGOOD (I, Asynchronous CMOS): The processor requires this input signal to be a clean indication that the V CCSA, V CCIO, V AXG, and V DDQ power supplies are stable and within specifications. This requirement applies regardless of the S-state of the processor. 'Clean' implies that the signal will remain low (capable of sinking leakage current), without glitches, from the time that the power supplies are turned on until they come within specification. The signal must then transition monotonically to a high state. This is connected to the PCH PROCPWRGD signal.<br />
SKTOCC# (Socket Occupied): Pulled down directly (0 Ohms) on the processor package to ground. There is no connection to the processor silicon for this signal. System board designers may use this signal to determine if the processor is present.<br />



6.11 Processor Power Signals<br />

Table 6-13. Processor Power Signals<br />

Signal Name: Description (Direction/Buffer Type)<br />
VCC: Processor core power rail (Ref)<br />
VCCIO: Processor power for I/O (Ref)<br />
VDDQ: Processor I/O supply voltage for DDR3 (Ref)<br />
VAXG: Graphics core power supply (Ref)<br />
VCCPLL: Provides isolated power for internal processor PLLs (Ref)<br />
VCCSA: System Agent power supply (Ref)<br />
VIDSOUT, VIDSCLK, VIDALERT#: VIDALERT#, VIDSCLK, and VIDSOUT comprise a three-signal serial synchronous interface used to transfer power management information between the processor and the voltage regulator controllers. This serial VID interface replaces the parallel VID interface used on previous processors. (VIDSOUT: I/O; VIDSCLK: O; VIDALERT#: I; CMOS)<br />
VCCSA_VID: Voltage selection for VCCSA (O)<br />

6.12 Sense Pins<br />

Table 6-14. Sense Pins<br />

Signal Name: Description (Direction/Buffer Type)<br />
VCC_SENSE, VSS_SENSE: Provide an isolated, low impedance connection to the processor core voltage and ground. They can be used to sense or measure voltage near the silicon. (O, Analog)<br />
VAXG_SENSE, VSSAXG_SENSE: Provide an isolated, low impedance connection to the VAXG voltage and ground. They can be used to sense or measure voltage near the silicon. (O, Analog)<br />
VCCIO_SENSE, VSS_SENSE_VCCIO: Provide an isolated, low impedance connection to the processor VCCIO voltage and ground. They can be used to sense or measure voltage near the silicon. (O, Analog)<br />
VDDQ_SENSE, VSSD_SENSE: Provide an isolated, low impedance connection to the VDDQ voltage and ground. They can be used to sense or measure voltage near the silicon. (O, Analog)<br />
VCCSA_SENSE: Provides an isolated, low impedance connection to the processor system agent voltage. It can be used to sense or measure voltage near the silicon. (O, Analog)<br />

6.13 Ground and NCTF<br />

Table 6-15. Ground and NCTF<br />

Signal Name: Description (Direction/Buffer Type)<br />
VSS: Processor ground node (GND)<br />
VSS_NCTF: Non-Critical to Function. These pins are for package mechanical reliability.<br />



Signal Description<br />

6.14 <strong>Processor</strong> Internal Pull Up/Pull Down<br />

Table 6-16. Processor Internal Pull Up/Pull Down<br />

Signal Name | Pull Up/Pull Down | Rail | Value<br />
BPM[7:0] | Pull Up | VCCIO | 65–165 Ω<br />
PRDY# | Pull Up | VCCIO | 65–165 Ω<br />
PREQ# | Pull Up | VCCIO | 65–165 Ω<br />
TCK | Pull Down | VSS | 5–15 kΩ<br />
TDI | Pull Up | VCCIO | 5–15 kΩ<br />
TMS | Pull Up | VCCIO | 5–15 kΩ<br />
TRST# | Pull Up | VCCIO | 5–15 kΩ<br />
CFG[17:0] | Pull Up | VCCIO | 5–15 kΩ<br />
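As a sanity check on these termination values, a pulled-up signal driven low must sink roughly VCCIO divided by the pull-up resistance. A minimal sketch, assuming the 1.05 V nominal VCCIO from Table 7-6 and the 65–165 Ω bounds above (the helper function is ours, not part of the datasheet):<br />

```python
def pullup_sink_current(vccio_v, r_ohms):
    """Worst-case current a driver must sink when pulling a terminated line low."""
    return vccio_v / r_ohms

# BPM/PRDY#/PREQ# pull-ups: 65-165 ohm range at a nominal VCCIO of 1.05 V
i_max = pullup_sink_current(1.05, 65.0)   # strongest pull-up -> most current
i_min = pullup_sink_current(1.05, 165.0)  # weakest pull-up -> least current
print(f"{i_min*1e3:.2f} mA to {i_max*1e3:.2f} mA")
```

This is why the debug signals use the stiff 65–165 Ω range (tens of mA, fast edges) while the static straps such as CFG[17:0] can use the weaker 5–15 kΩ range.<br />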


§ §




Electrical Specifications<br />

7 Electrical Specifications<br />

7.1 Power and Ground Lands<br />

The processor has VCC, VDDQ, VCCPLL, VCCSA, VCCAXG, VCCIO, and VSS (ground) inputs for<br />

on-chip power distribution. All power lands must be connected to their respective<br />

processor power planes, while all VSS lands must be connected to the system ground<br />

plane. Use of multiple power and ground planes is recommended to reduce I*R drop.<br />

The VCC and VCCAXG lands must be supplied with the voltage determined by the<br />

processor Serial Voltage IDentification (SVID) interface. Note that a new serial VID<br />

interface is implemented on the processor. Table 7-1 specifies the voltage level for the<br />

various VIDs.<br />

7.2 Decoupling Guidelines<br />

Due to its large number of transistors and high internal clock speeds, the processor is<br />

capable of generating large current swings between low- and full-power states. This<br />

may cause voltages on power planes to sag below their minimum values if bulk<br />

decoupling is not adequate. Larger bulk storage (C BULK ), such as electrolytic capacitors,<br />

supply current during longer lasting changes in current demand (for example, coming<br />

out of an idle condition). Similarly, capacitors act as a storage well for current when<br />

entering an idle condition from a running condition. To keep voltages within<br />

specification, output decoupling must be properly designed.<br />

Caution: Design the board to ensure that the voltage provided to the processor remains within<br />

the specifications listed in Table 7-5. Failure to do so can result in timing violations or<br />

reduced lifetime of the processor.<br />

7.2.1 Voltage Rail Decoupling<br />

The voltage regulator solution needs to provide:<br />
• Bulk capacitance with low effective series resistance (ESR)<br />
• A low interconnect resistance from the regulator to the socket<br />
• Bulk decoupling to compensate for large current swings generated during power-on or low-power idle state entry/exit<br />

The power delivery solution must ensure that the voltage and current specifications are<br />

met, as defined in Table 7-5.<br />
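The decoupling requirement above can be bounded with a first-order estimate: a load step ΔI across the regulator's effective series resistance plus the interconnect resistance produces a droop of roughly ΔI × (ESR + R_interconnect). A minimal sketch with illustrative numbers (the resistances and load step below are placeholders for a board-level analysis, not values from this datasheet):<br />

```python
def droop_estimate(delta_i_a, esr_ohms, r_interconnect_ohms):
    """First-order voltage droop for a fast load step, ignoring inductive effects."""
    return delta_i_a * (esr_ohms + r_interconnect_ohms)

# Hypothetical 50 A step into 0.5 mOhm ESR plus 0.3 mOhm interconnect resistance
droop_v = droop_estimate(50.0, 0.0005, 0.0003)
print(f"estimated droop: {droop_v*1e3:.0f} mV")
```

A budget like this is then compared against the tolerance band in Table 7-5 to size the bulk capacitance.<br />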



7.3 <strong>Processor</strong> Clocking (BCLK[0], BCLK#[0])<br />


The processor uses a differential clock to generate the processor core operating<br />

frequency, memory controller frequency, system agent frequencies, and other internal<br />

clocks. The processor core frequency is determined by multiplying the processor core<br />

ratio by the BCLK frequency. Clock multiplying within the processor is provided by an<br />

internal phase locked loop (PLL) that requires a constant frequency input, with<br />

exceptions for Spread Spectrum Clocking (SSC).<br />

The processor's maximum non-turbo core frequency is configured during power-on<br />

reset by using its manufacturing default value. This value is the highest non-turbo core<br />

multiplier at which the processor can operate. If lower maximum speeds are desired,<br />

the appropriate ratio can be configured using the FLEX_RATIO MSR.<br />
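The multiplier relationship described above can be illustrated numerically. A minimal sketch, assuming the nominal 100 MHz BCLK used by this processor generation (the ratio value is an example, not a specification from this document):<br />

```python
BCLK_HZ = 100_000_000  # assumed nominal BCLK for this platform

def core_frequency_hz(core_ratio, bclk_hz=BCLK_HZ):
    """Core frequency = core ratio multiplied by the BCLK frequency."""
    return core_ratio * bclk_hz

# Example: a maximum non-turbo ratio of 33 gives a 3.3 GHz core clock
print(core_frequency_hz(33) / 1e9)  # 3.3
```

Lowering the FLEX_RATIO MSR value corresponds to reducing `core_ratio` in this relationship.<br />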

7.3.1 PLL Power Supply<br />

An on-die PLL filter solution is implemented on the processor. Refer to Table 7-6 for DC<br />

specifications.<br />

7.4 V CC Voltage Identification (VID)<br />

The processor uses three signals for the serial voltage identification interface to support<br />

automatic selection of voltages. Table 7-1 specifies the voltage level corresponding to<br />

the eight-bit VID value transmitted over serial VID. A 1 in this table refers to a high<br />

voltage level and a 0 refers to a low voltage level. If the voltage regulation circuit<br />

cannot supply the voltage that is requested, the voltage regulator must disable itself.<br />

VID signals are CMOS push/pull drivers. Refer to Table 7-9 for the DC specifications for<br />

these signals. The VID codes will change due to temperature and/or current load<br />

changes in order to minimize the power of the part. A voltage range is provided in<br />

Table 7-5. The specifications are set so that one voltage regulator can operate with all<br />

supported frequencies.<br />

Individual processor VID values may be set during manufacturing so that two devices<br />

at the same core frequency may have different default VID settings. This is shown in<br />

the VID range values in Table 7-5. The processor provides the ability to operate while<br />

transitioning to an adjacent VID and its associated voltage. This will represent a DC<br />

shift in the loadline.<br />

See the VR12/IMVP7 SVID Protocol for further details.<br />
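Table 7-1 that follows is a linear mapping: code 00h commands 0 V (regulator output off), code 01h commands 0.25 V, and each step above that adds 5 mV, up to 1.52 V at FFh. A minimal sketch of that decode, inferred from the table rows (the helper is ours, not part of the SVID protocol definition):<br />

```python
def vid_to_vcc(code):
    """Decode an 8-bit VR 12.0 VID code to the commanded VCC_MAX in volts."""
    if not 0 <= code <= 0xFF:
        raise ValueError("VID code must fit in 8 bits")
    if code == 0:
        return 0.0                        # 00h: regulator output off
    return 0.25 + 0.005 * (code - 1)      # 01h = 0.25 V, then 5 mV per step

assert vid_to_vcc(0x00) == 0.0
assert abs(vid_to_vcc(0x80) - 0.885) < 1e-9   # matches the 80h row below
assert abs(vid_to_vcc(0xFF) - 1.520) < 1e-9   # matches the FFh row below
```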



Electrical Specifications<br />

Table 7-1. VR 12.0 Voltage Identification Definition (Sheet 1 of 3)<br />

VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX | VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX<br />

0 0 0 0 0 0 0 0 0 0 0.00000 1 0 0 0 0 0 0 0 8 0 0.88500<br />

0 0 0 0 0 0 0 1 0 1 0.25000 1 0 0 0 0 0 0 1 8 1 0.89000<br />

0 0 0 0 0 0 1 0 0 2 0.25500 1 0 0 0 0 0 1 0 8 2 0.89500<br />

0 0 0 0 0 0 1 1 0 3 0.26000 1 0 0 0 0 0 1 1 8 3 0.90000<br />

0 0 0 0 0 1 0 0 0 4 0.26500 1 0 0 0 0 1 0 0 8 4 0.90500<br />

0 0 0 0 0 1 0 1 0 5 0.27000 1 0 0 0 0 1 0 1 8 5 0.91000<br />

0 0 0 0 0 1 1 0 0 6 0.27500 1 0 0 0 0 1 1 0 8 6 0.91500<br />

0 0 0 0 0 1 1 1 0 7 0.28000 1 0 0 0 0 1 1 1 8 7 0.92000<br />

0 0 0 0 1 0 0 0 0 8 0.28500 1 0 0 0 1 0 0 0 8 8 0.92500<br />

0 0 0 0 1 0 0 1 0 9 0.29000 1 0 0 0 1 0 0 1 8 9 0.93000<br />

0 0 0 0 1 0 1 0 0 A 0.29500 1 0 0 0 1 0 1 0 8 A 0.93500<br />

0 0 0 0 1 0 1 1 0 B 0.30000 1 0 0 0 1 0 1 1 8 B 0.94000<br />

0 0 0 0 1 1 0 0 0 C 0.30500 1 0 0 0 1 1 0 0 8 C 0.94500<br />

0 0 0 0 1 1 0 1 0 D 0.31000 1 0 0 0 1 1 0 1 8 D 0.95000<br />

0 0 0 0 1 1 1 0 0 E 0.31500 1 0 0 0 1 1 1 0 8 E 0.95500<br />

0 0 0 0 1 1 1 1 0 F 0.32000 1 0 0 0 1 1 1 1 8 F 0.96000<br />

0 0 0 1 0 0 0 0 1 0 0.32500 1 0 0 1 0 0 0 0 9 0 0.96500<br />

0 0 0 1 0 0 0 1 1 1 0.33000 1 0 0 1 0 0 0 1 9 1 0.97000<br />

0 0 0 1 0 0 1 0 1 2 0.33500 1 0 0 1 0 0 1 0 9 2 0.97500<br />

0 0 0 1 0 0 1 1 1 3 0.34000 1 0 0 1 0 0 1 1 9 3 0.98000<br />

0 0 0 1 0 1 0 0 1 4 0.34500 1 0 0 1 0 1 0 0 9 4 0.98500<br />

0 0 0 1 0 1 0 1 1 5 0.35000 1 0 0 1 0 1 0 1 9 5 0.99000<br />

0 0 0 1 0 1 1 0 1 6 0.35500 1 0 0 1 0 1 1 0 9 6 0.99500<br />

0 0 0 1 0 1 1 1 1 7 0.36000 1 0 0 1 0 1 1 1 9 7 1.00000<br />

0 0 0 1 1 0 0 0 1 8 0.36500 1 0 0 1 1 0 0 0 9 8 1.00500<br />

0 0 0 1 1 0 0 1 1 9 0.37000 1 0 0 1 1 0 0 1 9 9 1.01000<br />

0 0 0 1 1 0 1 0 1 A 0.37500 1 0 0 1 1 0 1 0 9 A 1.01500<br />

0 0 0 1 1 0 1 1 1 B 0.38000 1 0 0 1 1 0 1 1 9 B 1.02000<br />

0 0 0 1 1 1 0 0 1 C 0.38500 1 0 0 1 1 1 0 0 9 C 1.02500<br />

0 0 0 1 1 1 0 1 1 D 0.39000 1 0 0 1 1 1 0 1 9 D 1.03000<br />

0 0 0 1 1 1 1 0 1 E 0.39500 1 0 0 1 1 1 1 0 9 E 1.03500<br />

0 0 0 1 1 1 1 1 1 F 0.40000 1 0 0 1 1 1 1 1 9 F 1.04000<br />

0 0 1 0 0 0 0 0 2 0 0.40500 1 0 1 0 0 0 0 0 A 0 1.04500<br />

0 0 1 0 0 0 0 1 2 1 0.41000 1 0 1 0 0 0 0 1 A 1 1.05000<br />

0 0 1 0 0 0 1 0 2 2 0.41500 1 0 1 0 0 0 1 0 A 2 1.05500<br />

0 0 1 0 0 0 1 1 2 3 0.42000 1 0 1 0 0 0 1 1 A 3 1.06000<br />

0 0 1 0 0 1 0 0 2 4 0.42500 1 0 1 0 0 1 0 0 A 4 1.06500<br />

0 0 1 0 0 1 0 1 2 5 0.43000 1 0 1 0 0 1 0 1 A 5 1.07000<br />

0 0 1 0 0 1 1 0 2 6 0.43500 1 0 1 0 0 1 1 0 A 6 1.07500<br />

0 0 1 0 0 1 1 1 2 7 0.44000 1 0 1 0 0 1 1 1 A 7 1.08000<br />

0 0 1 0 1 0 0 0 2 8 0.44500 1 0 1 0 1 0 0 0 A 8 1.08500<br />

0 0 1 0 1 0 0 1 2 9 0.45000 1 0 1 0 1 0 0 1 A 9 1.09000<br />

0 0 1 0 1 0 1 0 2 A 0.45500 1 0 1 0 1 0 1 0 A A 1.09500


Table 7-1. VR 12.0 Voltage Identification Definition (Sheet 2 of 3)<br />

VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX | VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX<br />

0 0 1 0 1 0 1 1 2 B 0.46000 1 0 1 0 1 0 1 1 A B 1.10000<br />

0 0 1 0 1 1 0 0 2 C 0.46500 1 0 1 0 1 1 0 0 A C 1.10500<br />

0 0 1 0 1 1 0 1 2 D 0.47000 1 0 1 0 1 1 0 1 A D 1.11000<br />

0 0 1 0 1 1 1 0 2 E 0.47500 1 0 1 0 1 1 1 0 A E 1.11500<br />

0 0 1 0 1 1 1 1 2 F 0.48000 1 0 1 0 1 1 1 1 A F 1.12000<br />

0 0 1 1 0 0 0 0 3 0 0.48500 1 0 1 1 0 0 0 0 B 0 1.12500<br />

0 0 1 1 0 0 0 1 3 1 0.49000 1 0 1 1 0 0 0 1 B 1 1.13000<br />

0 0 1 1 0 0 1 0 3 2 0.49500 1 0 1 1 0 0 1 0 B 2 1.13500<br />

0 0 1 1 0 0 1 1 3 3 0.50000 1 0 1 1 0 0 1 1 B 3 1.14000<br />

0 0 1 1 0 1 0 0 3 4 0.50500 1 0 1 1 0 1 0 0 B 4 1.14500<br />

0 0 1 1 0 1 0 1 3 5 0.51000 1 0 1 1 0 1 0 1 B 5 1.15000<br />

0 0 1 1 0 1 1 0 3 6 0.51500 1 0 1 1 0 1 1 0 B 6 1.15500<br />

0 0 1 1 0 1 1 1 3 7 0.52000 1 0 1 1 0 1 1 1 B 7 1.16000<br />

0 0 1 1 1 0 0 0 3 8 0.52500 1 0 1 1 1 0 0 0 B 8 1.16500<br />

0 0 1 1 1 0 0 1 3 9 0.53000 1 0 1 1 1 0 0 1 B 9 1.17000<br />

0 0 1 1 1 0 1 0 3 A 0.53500 1 0 1 1 1 0 1 0 B A 1.17500<br />

0 0 1 1 1 0 1 1 3 B 0.54000 1 0 1 1 1 0 1 1 B B 1.18000<br />

0 0 1 1 1 1 0 0 3 C 0.54500 1 0 1 1 1 1 0 0 B C 1.18500<br />

0 0 1 1 1 1 0 1 3 D 0.55000 1 0 1 1 1 1 0 1 B D 1.19000<br />

0 0 1 1 1 1 1 0 3 E 0.55500 1 0 1 1 1 1 1 0 B E 1.19500<br />

0 0 1 1 1 1 1 1 3 F 0.56000 1 0 1 1 1 1 1 1 B F 1.20000<br />

0 1 0 0 0 0 0 0 4 0 0.56500 1 1 0 0 0 0 0 0 C 0 1.20500<br />

0 1 0 0 0 0 0 1 4 1 0.57000 1 1 0 0 0 0 0 1 C 1 1.21000<br />

0 1 0 0 0 0 1 0 4 2 0.57500 1 1 0 0 0 0 1 0 C 2 1.21500<br />

0 1 0 0 0 0 1 1 4 3 0.58000 1 1 0 0 0 0 1 1 C 3 1.22000<br />

0 1 0 0 0 1 0 0 4 4 0.58500 1 1 0 0 0 1 0 0 C 4 1.22500<br />

0 1 0 0 0 1 0 1 4 5 0.59000 1 1 0 0 0 1 0 1 C 5 1.23000<br />

0 1 0 0 0 1 1 0 4 6 0.59500 1 1 0 0 0 1 1 0 C 6 1.23500<br />

0 1 0 0 0 1 1 1 4 7 0.60000 1 1 0 0 0 1 1 1 C 7 1.24000<br />

0 1 0 0 1 0 0 0 4 8 0.60500 1 1 0 0 1 0 0 0 C 8 1.24500<br />

0 1 0 0 1 0 0 1 4 9 0.61000 1 1 0 0 1 0 0 1 C 9 1.25000<br />

0 1 0 0 1 0 1 0 4 A 0.61500 1 1 0 0 1 0 1 0 C A 1.25500<br />

0 1 0 0 1 0 1 1 4 B 0.62000 1 1 0 0 1 0 1 1 C B 1.26000<br />

0 1 0 0 1 1 0 0 4 C 0.62500 1 1 0 0 1 1 0 0 C C 1.26500<br />

0 1 0 0 1 1 0 1 4 D 0.63000 1 1 0 0 1 1 0 1 C D 1.27000<br />

0 1 0 0 1 1 1 0 4 E 0.63500 1 1 0 0 1 1 1 0 C E 1.27500<br />

0 1 0 0 1 1 1 1 4 F 0.64000 1 1 0 0 1 1 1 1 C F 1.28000<br />

0 1 0 1 0 0 0 0 5 0 0.64500 1 1 0 1 0 0 0 0 D 0 1.28500<br />

0 1 0 1 0 0 0 1 5 1 0.65000 1 1 0 1 0 0 0 1 D 1 1.29000<br />

0 1 0 1 0 0 1 0 5 2 0.65500 1 1 0 1 0 0 1 0 D 2 1.29500<br />

0 1 0 1 0 0 1 1 5 3 0.66000 1 1 0 1 0 0 1 1 D 3 1.30000<br />

0 1 0 1 0 1 0 0 5 4 0.66500 1 1 0 1 0 1 0 0 D 4 1.30500<br />

0 1 0 1 0 1 0 1 5 5 0.67000 1 1 0 1 0 1 0 1 D 5 1.31000<br />




Table 7-1. VR 12.0 Voltage Identification Definition (Sheet 3 of 3)<br />

VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX | VID7 VID6 VID5 VID4 VID3 VID2 VID1 VID0 HEX VCC_MAX<br />

0 1 0 1 0 1 1 0 5 6 0.67500 1 1 0 1 0 1 1 0 D 6 1.31500<br />

0 1 0 1 0 1 1 1 5 7 0.68000 1 1 0 1 0 1 1 1 D 7 1.32000<br />

0 1 0 1 1 0 0 0 5 8 0.68500 1 1 0 1 1 0 0 0 D 8 1.32500<br />

0 1 0 1 1 0 0 1 5 9 0.69000 1 1 0 1 1 0 0 1 D 9 1.33000<br />

0 1 0 1 1 0 1 0 5 A 0.69500 1 1 0 1 1 0 1 0 D A 1.33500<br />

0 1 0 1 1 0 1 1 5 B 0.70000 1 1 0 1 1 0 1 1 D B 1.34000<br />

0 1 0 1 1 1 0 0 5 C 0.70500 1 1 0 1 1 1 0 0 D C 1.34500<br />

0 1 0 1 1 1 0 1 5 D 0.71000 1 1 0 1 1 1 0 1 D D 1.35000<br />

0 1 0 1 1 1 1 0 5 E 0.71500 1 1 0 1 1 1 1 0 D E 1.35500<br />

0 1 0 1 1 1 1 1 5 F 0.72000 1 1 0 1 1 1 1 1 D F 1.36000<br />

0 1 1 0 0 0 0 0 6 0 0.72500 1 1 1 0 0 0 0 0 E 0 1.36500<br />

0 1 1 0 0 0 0 1 6 1 0.73000 1 1 1 0 0 0 0 1 E 1 1.37000<br />

0 1 1 0 0 0 1 0 6 2 0.73500 1 1 1 0 0 0 1 0 E 2 1.37500<br />

0 1 1 0 0 0 1 1 6 3 0.74000 1 1 1 0 0 0 1 1 E 3 1.38000<br />

0 1 1 0 0 1 0 0 6 4 0.74500 1 1 1 0 0 1 0 0 E 4 1.38500<br />

0 1 1 0 0 1 0 1 6 5 0.75000 1 1 1 0 0 1 0 1 E 5 1.39000<br />

0 1 1 0 0 1 1 0 6 6 0.75500 1 1 1 0 0 1 1 0 E 6 1.39500<br />

0 1 1 0 0 1 1 1 6 7 0.76000 1 1 1 0 0 1 1 1 E 7 1.40000<br />

0 1 1 0 1 0 0 0 6 8 0.76500 1 1 1 0 1 0 0 0 E 8 1.40500<br />

0 1 1 0 1 0 0 1 6 9 0.77000 1 1 1 0 1 0 0 1 E 9 1.41000<br />

0 1 1 0 1 0 1 0 6 A 0.77500 1 1 1 0 1 0 1 0 E A 1.41500<br />

0 1 1 0 1 0 1 1 6 B 0.78000 1 1 1 0 1 0 1 1 E B 1.42000<br />

0 1 1 0 1 1 0 0 6 C 0.78500 1 1 1 0 1 1 0 0 E C 1.42500<br />

0 1 1 0 1 1 0 1 6 D 0.79000 1 1 1 0 1 1 0 1 E D 1.43000<br />

0 1 1 0 1 1 1 0 6 E 0.79500 1 1 1 0 1 1 1 0 E E 1.43500<br />

0 1 1 0 1 1 1 1 6 F 0.80000 1 1 1 0 1 1 1 1 E F 1.44000<br />

0 1 1 1 0 0 0 0 7 0 0.80500 1 1 1 1 0 0 0 0 F 0 1.44500<br />

0 1 1 1 0 0 0 1 7 1 0.81000 1 1 1 1 0 0 0 1 F 1 1.45000<br />

0 1 1 1 0 0 1 0 7 2 0.81500 1 1 1 1 0 0 1 0 F 2 1.45500<br />

0 1 1 1 0 0 1 1 7 3 0.82000 1 1 1 1 0 0 1 1 F 3 1.46000<br />

0 1 1 1 0 1 0 0 7 4 0.82500 1 1 1 1 0 1 0 0 F 4 1.46500<br />

0 1 1 1 0 1 0 1 7 5 0.83000 1 1 1 1 0 1 0 1 F 5 1.47000<br />

0 1 1 1 0 1 1 0 7 6 0.83500 1 1 1 1 0 1 1 0 F 6 1.47500<br />

0 1 1 1 0 1 1 1 7 7 0.84000 1 1 1 1 0 1 1 1 F 7 1.48000<br />

0 1 1 1 1 0 0 0 7 8 0.84500 1 1 1 1 1 0 0 0 F 8 1.48500<br />

0 1 1 1 1 0 0 1 7 9 0.85000 1 1 1 1 1 0 0 1 F 9 1.49000<br />

0 1 1 1 1 0 1 0 7 A 0.85500 1 1 1 1 1 0 1 0 F A 1.49500<br />

0 1 1 1 1 0 1 1 7 B 0.86000 1 1 1 1 1 0 1 1 F B 1.50000<br />

0 1 1 1 1 1 0 0 7 C 0.86500 1 1 1 1 1 1 0 0 F C 1.50500<br />

0 1 1 1 1 1 0 1 7 D 0.87000 1 1 1 1 1 1 0 1 F D 1.51000<br />

0 1 1 1 1 1 1 0 7 E 0.87500 1 1 1 1 1 1 1 0 F E 1.51500<br />

0 1 1 1 1 1 1 1 7 F 0.88000 1 1 1 1 1 1 1 1 F F 1.52000<br />



7.5 System Agent (SA) VCC VID<br />

The VCCSA rail is configured by the processor output pin VCCSA_VID.<br />

The VCCSA_VID output's default logic state is low for these processors; logic high is reserved for future compatibility.<br />

Table 7-2 specifies the different VCCSA_VID configurations.<br />

Table 7-2. VCCSA_VID Configuration<br />

Processor Family | VCCSA_VID | Selected VCCSA<br />
Intel® Xeon® processor E3-1200 family | 0 | 0.925 V<br />
Future Intel processors | 1 | Note 1<br />

Notes:<br />
1. Some VCCSA configurations are reserved for future Intel processor families.<br />

7.6 Reserved or Unused Signals<br />

The following are the general types of reserved (RSVD) signals and connection guidelines:<br />
• RSVD: These signals should not be connected.<br />
• RSVD_NCTF: These signals are non-critical to function and may be left unconnected.<br />

Arbitrary connection of these signals to VCC, VCCIO, VDDQ, VCCPLL, VCCSA, VCCAXG, VSS, or to any other signal (including each other) may result in component malfunction or incompatibility with future processors. See Chapter 8 for a land listing of the processor and the location of all reserved signals.<br />

For reliable operation, always connect unused inputs or bi-directional signals to an appropriate signal level. Unused active high inputs should be connected through a resistor to ground (VSS). Unused outputs may be left unconnected; however, this may interfere with some Test Access Port (TAP) functions, complicate debug probing, and prevent boundary scan testing. A resistor must be used when tying bi-directional signals to power or ground. When tying any signal to power or ground, a resistor will also allow for system testability. For details see Table 7-9.<br />




7.7 Signal Groups<br />

Signals are grouped by buffer type and similar characteristics as listed in Table 7-3. The<br />

buffer type indicates which signaling technology and specifications apply to the signals.<br />

All the differential signals, and selected DDR3 and Control Sideband signals have On-<br />

Die Termination (ODT) resistors. There are some signals that do not have ODT and<br />

need to be terminated on the board.<br />

Table 7-3. Signal Groups (Sheet 1 of 2) 1<br />

Signal Group / Type: Signals<br />

System Reference Clock<br />
Differential CMOS Input: BCLK[0], BCLK#[0]<br />

DDR3 Reference Clocks 2<br />
Differential DDR3 Output: SA_CK[3:0], SA_CK#[3:0], SB_CK[3:0], SB_CK#[3:0]<br />

DDR3 Command Signals 2<br />
Single Ended DDR3 Output: SA_RAS#, SB_RAS#, SA_CAS#, SB_CAS#, SA_WE#, SB_WE#, SA_MA[15:0], SB_MA[15:0], SA_BS[2:0], SB_BS[2:0], SM_DRAMRST#, SA_CS#[3:0], SB_CS#[3:0], SA_ODT[3:0], SB_ODT[3:0], SA_CKE[3:0], SB_CKE[3:0]<br />

DDR3 Data Signals 2<br />
Single Ended DDR3 Bi-directional: SA_DQ[63:0], SB_DQ[63:0]<br />
Differential DDR3 Bi-directional: SA_DQS[8:0], SA_DQS#[8:0], SA_ECC_CB[7:0] 4, SB_DQS[8:0], SB_DQS#[8:0], SB_ECC_CB[7:0] 4<br />

TAP (ITP/XDP)<br />
Single Ended CMOS Input: TCK, TDI, TMS, TRST#<br />
Single Ended CMOS Output: TDO<br />
Single Ended Asynchronous CMOS Output: TAPPWRGOOD<br />

Control Sideband<br />
Single Ended CMOS Input: CFG[17:0]<br />
Single Ended Asynchronous CMOS/Open Drain Bi-directional: PROCHOT#<br />
Single Ended Asynchronous CMOS Output: THERMTRIP#, CATERR#<br />
Single Ended Asynchronous CMOS Input: SM_DRAMPWROK, UNCOREPWRGOOD 3, PM_SYNC, RESET#<br />
Single Ended CMOS Input: VIDALERT#<br />
Single Ended Open Drain Output: VIDSCLK<br />
Single Ended Bi-directional CMOS Input/Open Drain Output: VIDSOUT<br />

Power/Ground/Other<br />
Power: VCC, VCC_NCTF, VCCIO, VCCPLL, VDDQ, VCCAXG<br />
Ground: VSS<br />



Table 7-3. Signal Groups (Sheet 2 of 2) 1<br />

Signal Group / Type: Signals<br />

PCI Express*<br />
Differential PCI Express Input: PEG_RX[15:0], PEG_RX#[15:0], PE_RX[3:0], PE_RX#[3:0]<br />
Differential PCI Express Output: PEG_TX[15:0], PEG_TX#[15:0], PE_TX[3:0], PE_TX#[3:0]<br />
Single Ended Analog Input: PEG_ICOMP0, PEG_COMPI, PEG_RCOMP0<br />

DMI<br />
Differential DMI Input: DMI_RX[3:0], DMI_RX#[3:0]<br />
Differential DMI Output: DMI_TX[3:0], DMI_TX#[3:0]<br />

Intel® FDI<br />
Single Ended FDI Input: FDI_FSYNC[1:0], FDI_LSYNC[1:0], FDI_INT<br />
Differential FDI Output: FDI_TX[7:0], FDI_TX#[7:0]<br />
Single Ended Analog Input: FDI_COMPIO, FDI_ICOMPO<br />

No Connect and test point: RSVD, RSVD_NCTF, RSVD_TP, FC_x<br />
Sense Points: VCC_SENSE, VSS_SENSE, VCCIO_SENSE, VSS_SENSE_VCCIO, VAXG_SENSE, VSSAXG_SENSE<br />
Other: SKTOCC#, DBR#<br />

Notes:<br />
1. Refer to Chapter 6 and Chapter 8 for signal description details.<br />
2. SA and SB refer to DDR3 Channel A and DDR3 Channel B.<br />
3. The maximum rise/fall time for UNCOREPWRGOOD is 20 ns.<br />
4. These signals are only used on processors and platforms that support ECC DIMMs.<br />

All Control Sideband Asynchronous signals are required to be asserted/de-asserted for at least 10 BCLKs, with a maximum Trise/Tfall of 6 ns, for the processor to recognize the proper signal state. See Section 7.10 for the DC specifications.<br />

7.8 Test Access Port (TAP) Connection<br />

Due to the voltage levels supported by other components in the Test Access Port (TAP)<br />

logic, <strong>Intel</strong> recommends the processor be first in the TAP chain, followed by any other<br />

components within the system. A translation buffer should be used to connect to the<br />

rest of the chain unless one of the other components is capable of accepting an input of<br />

the appropriate voltage. Two copies of each signal may be required with each driving a<br />

different voltage level.<br />

The processor supports Boundary Scan (JTAG) IEEE 1149.1-2001 and IEEE 1149.6-<br />

2003 standards. Note that some small portion of the I/O pins may support only one of<br />

these standards.<br />




7.9 Storage Conditions Specifications<br />

Environmental storage condition limits define the temperature and relative humidity<br />

that the device is exposed to while being stored in a moisture barrier bag. The specified<br />

storage conditions are for component level prior to board attach.<br />

Table 7-4 specifies absolute maximum and minimum storage temperature limits that<br />

represent the maximum or minimum device condition beyond which damage, latent or<br />

otherwise, may occur. The table also specifies sustained storage temperature, relative<br />

humidity, and time-duration limits. These limits specify the maximum or minimum<br />

device storage conditions for a sustained period of time. Failure to adhere to the<br />

following specifications can affect long term reliability of the processor.<br />

Table 7-4. Storage Condition Ratings<br />

Symbol | Parameter | Min | Max | Notes<br />
T absolute storage | The non-operating device storage temperature. Damage (latent or otherwise) may occur when exceeded for any length of time. | -25 °C | 125 °C | 1, 2, 3, 4<br />
T sustained storage | The ambient storage temperature (in shipping media) for a sustained period of time. | -5 °C | 40 °C | 5, 6<br />
T short term storage | The ambient storage temperature (in shipping media) for a short period of time. | -20 °C | 85 °C |<br />
RH sustained storage | The maximum device storage relative humidity for a sustained period of time. | | 60% at 24 °C | 6, 7<br />
Time sustained storage | A prolonged or extended period of time; typically associated with customer shelf life. | 0 Months | 30 Months | 7<br />
Time short term storage | A short period of time. | 0 hours | 72 hours |<br />

Notes:<br />

1. Refers to a component device that is not assembled in a board or socket and is not electrically connected to<br />

a voltage reference or I/O signal.<br />

2. Specified temperatures are not to exceed values based on data collected. Exceptions for surface mount<br />

reflow are specified by the applicable JEDEC standard. Non-adherence may affect processor reliability.<br />

3. T absolute storage applies to the unassembled component only and does not apply to the shipping media,<br />

moisture barrier bags, or desiccant.<br />

4. Component product device storage temperature qualification methods may follow JESD22-A119 (low temp)<br />

and JESD22-A103 (high temp) standards when applicable for volatile memory.<br />

5. <strong>Intel</strong> branded products are specified and certified to meet the following temperature and humidity limits<br />

that are given as an example only (Non-Operating Temperature Limit: -40 °C to 70 °C and Humidity: 50%<br />

to 90%, non-condensing with a maximum wet bulb of 28 °C.) Post board attach storage temperature limits<br />

are not specified for non-<strong>Intel</strong> branded boards.<br />

6. The JEDEC J-STD-020 moisture level rating and associated handling practices apply to all moisture<br />

sensitive devices removed from the moisture barrier bag.<br />

7. Nominal temperature and humidity conditions and durations are given and tested within the constraints<br />

imposed by T sustained storage and customer shelf life in applicable <strong>Intel</strong> boxes and bags.<br />



7.10 DC Specifications<br />


The processor DC specifications in this section are defined at the processor<br />

pads, unless noted otherwise. See Chapter 8 for the processor land listings and<br />

Chapter 6 for signal definitions. Voltage and current specifications are detailed in<br />

Table 7-5, Table 7-6, and Table 7-7.<br />

The DC specifications for the DDR3 signals are listed in Table 7-8; those for the Control Sideband and<br />

Test Access Port (TAP) signals are listed in Table 7-9.<br />

Table 7-5 through Table 7-7 list the DC specifications for the processor and are valid<br />

only while meeting the thermal specifications (as specified in the Thermal / Mechanical<br />

Specifications and Guidelines), clock frequency, and input voltages. Care should be<br />

taken to read all notes associated with each parameter.<br />

7.10.1 Voltage and Current Specifications<br />

Table 7-5. Processor Core Active and Idle Mode DC Voltage and Current Specifications<br />

Symbol | Parameter | Value | Unit | Note 1<br />
VID | VID Range | Min 0.2500 / Max 1.5200 | V | 2<br />
LL VCC | VCC Loadline Slope: 2011D, 2011C, 2011B (processors with 95 W, 65 W, and 45 W TDPs) | 1.7 | mΩ | 3, 5, 6<br />
VCCTOB | VCC Tolerance Band: 2011D, 2011C, 2011B (processors with 95 W, 65 W, and 45 W TDPs): PS0 ±16, PS1 ±13, PS2 ±11.5 | mV | 3, 5, 6, 7<br />
VCCRipple | Ripple: 2011D, 2011C, 2011B (processors with 95 W, 65 W, and 45 W TDPs): PS0 ±7, PS1 ±10, PS2 -10/+25 | mV | 3, 5, 6, 7<br />
VCC,BOOT | Default VCC voltage for initial power up | 0 | V |<br />
ICC | 2011D (processors with 95 W TDPs) ICC | Max 112 | A | 4<br />
ICC | 2011C (processors with 65 W TDP) ICC | Max 75 | A | 4<br />
ICC | 2011B (processors with 45 W TDP) ICC | Max 60 | A | 4<br />
ICC_TDC | 2011D (processors with 95 W TDPs) Sustained ICC | Max 85 | A | 4<br />
ICC_TDC | 2011C (processors with 65 W TDP) Sustained ICC | Max 55 | A | 4<br />
ICC_TDC | 2011B (processors with 45 W TDP) Sustained ICC | Max 40 | A | 4<br />

Notes:<br />
1. Unless otherwise noted, all specifications in this table are based on estimates and simulations or empirical data. These specifications will be updated with characterized data from silicon measurements at a later date.<br />
2. Each processor is programmed with a maximum valid voltage identification value (VID) that is set at manufacturing and cannot be altered. Individual maximum VID values are calibrated during manufacturing such that two processors at the same frequency may have different settings within the VID range. Note that this differs from the VID employed by the processor during a power management event (Adaptive Thermal Monitor, Enhanced Intel SpeedStep Technology, or Low Power States).<br />
3. The voltage specification requirements are measured across the VCC_SENSE and VSS_SENSE lands at the socket with a 20-MHz bandwidth oscilloscope, 1.5 pF maximum probe capacitance, and 1-MΩ minimum impedance. The maximum length of ground wire on the probe should be less than 5 mm. Ensure external noise from the system is not coupled into the oscilloscope probe.<br />
4. The ICC_MAX specification is based on the VCC loadline at worst case (highest) tolerance and ripple.<br />
5. The VCC specifications represent static and transient limits.<br />
6. The loadlines specify voltage limits at the die measured at the VCC_SENSE and VSS_SENSE lands. Voltage regulation feedback for voltage regulator circuits must also be taken from the processor VCC_SENSE and VSS_SENSE lands.<br />
7. PSx refers to the voltage regulator power state as set by the SVID protocol.<br />
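The loadline slope in Table 7-5 relates current draw to the allowed voltage at the die: the sensed voltage tracks VID minus ICC × LL_VCC. A minimal sketch using the 1.7 mΩ slope and the 112 A ICC figure from the table (the 1.2 V VID here is an arbitrary example operating point, not a specified value):<br />

```python
def loadline_voltage(vid_v, icc_a, ll_ohms=0.0017):
    """Die voltage implied by the VCC loadline: VID minus the I*R drop along the slope."""
    return vid_v - icc_a * ll_ohms

# Hypothetical operating point: VID = 1.2 V at the 112 A ICC max of a 95 W part
v_die = loadline_voltage(1.2, 112.0)
print(f"{v_die:.4f} V")  # 1.0096 V
```

This is the DC-shift behavior described in Section 7.4: moving to an adjacent VID shifts the whole loadline up or down by one 5 mV step.<br />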

Table 7-6. <strong>Processor</strong> System Agent I/O Buffer Supply DC Voltage and Current Specifications<br />
Symbol | Parameter | Min | Typ | Max | Unit | Note 1<br />
VCCSA | Voltage for the system agent | 0.879 | 0.925 | 0.971 | V | 2<br />
VDDQ | <strong>Processor</strong> I/O supply voltage for DDR3 | 1.425 | 1.5 | 1.575 | V |<br />
VCCPLL | PLL supply voltage (DC + AC specification) | 1.71 | 1.8 | 1.89 | V |<br />
VCCIO | <strong>Processor</strong> I/O supply voltage for other than DDR3 | -2/-3% | 1.05 | +2/+3% | V | 3<br />
ISA | Current for the system agent | | | 8.8 | A |<br />
ISA_TDC | Sustained current for the system agent | | | 8.2 | A |<br />
IDDQ | <strong>Processor</strong> I/O supply current for DDR3 | | | 4.75 | A |<br />
IDDQ_TDC | <strong>Processor</strong> I/O supply sustained current for DDR3 | | | 4.75 | A |<br />
IDDQ_STANDBY | <strong>Processor</strong> I/O supply standby current for DDR3 | | | 1 | A |<br />
ICC_VCCPLL | PLL supply current | | | 1.5 | A |<br />
ICC_VCCPLL_TDC | PLL sustained supply current | | | 0.93 | A |<br />
ICC_VCCIO | <strong>Processor</strong> I/O supply current | | | 8.5 | A |<br />
ICC_VCCIO_TDC | <strong>Processor</strong> I/O supply sustained current | | | 8.5 | A |<br />

Notes:<br />
1. Unless otherwise noted, all specifications in this table are based on estimates and simulations or empirical data. These specifications will be updated with characterized data from silicon measurements at a later date.<br />
2. VCCSA must be provided using a separate voltage source and must not be connected to VCC. This specification is measured at the VCCSA_SENSE land.<br />
3. ±5% total: a minimum of ±2% DC and 3% AC at the sense point. di/dt = 50 A/µs with a 150 ns step.<br />
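Note 3's tolerance budget can be checked with a line of arithmetic on the 1.05 V nominal from the table:

```python
# Numeric check of note 3: the VCCIO window is the 1.05 V nominal from
# Table 7-6 with a +/-2% DC plus 3% AC budget (+/-5% total) at the sense point.
nominal = 1.05
dc_tol, ac_tol = 0.02, 0.03
vmin = nominal * (1 - (dc_tol + ac_tol))
vmax = nominal * (1 + (dc_tol + ac_tol))
print(f"VCCIO window: {vmin:.4f} V to {vmax:.4f} V")  # 0.9975 V to 1.1025 V
```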

Datasheet, Volume 1 83


Electrical Specifications<br />

Table 7-7. <strong>Processor</strong> Graphics VID-based (VAXG) Supply DC Voltage and Current Specifications<br />
Symbol | Parameter | Min | Typ | Max | Unit | Note 2<br />
VAXG GFX_VID Range | GFX_VID range for VCCAXG | 0.2500 | | 1.5200 | V | 1<br />
LLAXG | VCCAXG loadline slope | | | 4.1 | mΩ | 3, 4<br />
VAXGTOB | VCC tolerance band: PS0, PS1 | | | 19 | mV | 3, 4, 5<br />
VAXGTOB | VCC tolerance band: PS2 | | | 11.5 | mV | 3, 4, 5<br />
VAXGRipple | Ripple: PS0 | | | ±10 | mV | 3, 4, 5<br />
VAXGRipple | Ripple: PS1 | | | ±10 | mV | 3, 4, 5<br />
VAXGRipple | Ripple: PS2 | | | -10/+15 | mV | 3, 4, 5<br />
IAXG | Current for <strong>Processor</strong> Graphics core | | | 35 | A |<br />
IAXG_TDC | Sustained current for <strong>Processor</strong> Graphics core | | | 25 | A |<br />
Notes:<br />
1. VCCAXG is a VID-based rail.<br />
2. Unless otherwise noted, all specifications in this table are based on estimates and simulations or empirical data. These specifications will be updated with characterized data from silicon measurements at a later date.<br />
3. The VAXG_MIN and VAXG_MAX loadlines represent static and transient limits.<br />
4. The loadlines specify voltage limits at the die measured at the VAXG_SENSE and VSSAXG_SENSE lands. Voltage regulation feedback for voltage regulator circuits must also be taken from the processor VAXG_SENSE and VSSAXG_SENSE lands.<br />
5. PSx refers to the voltage regulator power state as set by the SVID protocol.<br />
6. Each processor is programmed with a maximum valid voltage identification value (VID) that is set at manufacturing and cannot be altered. Individual maximum VID values are calibrated during manufacturing such that two processors at the same frequency may have different settings within the VID range. Note that this differs from the VID employed by the processor during a power management event (Adaptive Thermal Monitor, Enhanced <strong>Intel</strong> SpeedStep Technology, or Low Power States).<br />

Table 7-8. DDR3 Signal Group DC Specifications (Sheet 1 of 2)<br />

Symbol | Parameter | Min | Typ | Max | Units | Notes 1, 9<br />
VIL | Input low voltage | | | SM_VREF - 0.1 | V | 2, 4<br />
VIH | Input high voltage | SM_VREF + 0.1 | | | V | 3<br />
VOL | Output low voltage | | | (VDDQ/2)*(RON/(RON+RTERM)) | V | 6<br />
VOH | Output high voltage | VDDQ - (VDDQ/2)*(RON/(RON+RTERM)) | | | V | 4, 6<br />
RON_UP(DQ) | DDR3 data buffer pull-up resistance | 24.31 | 28.6 | 32.9 | Ω | 5<br />
RON_DN(DQ) | DDR3 data buffer pull-down resistance | 22.88 | 28.6 | 34.32 | Ω | 5<br />
RODT(DQ) | DDR3 on-die termination equivalent resistance for data signals | 83 / 41.5 | 100 / 50 | 117 / 65 | Ω | 7<br />
VODT(DC) | DDR3 on-die termination DC working point (driver set to receive mode) | 0.43*VDDQ | 0.5*VDDQ | 0.56*VCC | V | 7<br />
RON_UP(CK) | DDR3 clock buffer pull-up resistance | 20.8 | 26 | 28.6 | Ω | 5<br />
RON_DN(CK) | DDR3 clock buffer pull-down resistance | 20.8 | 26 | 31.2 | Ω | 5<br />
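The VOL/VOH expressions in Table 7-8 describe a resistor divider between the driver and the DIMM termination around VDDQ/2. A minimal numeric sketch, using the typical RON from the table and an assumed RTERM (the DIMM-side value is not controlled by the processor, per note 6):

```python
# Resistor-divider arithmetic behind the Table 7-8 VOL/VOH expressions.
# The 60-ohm RTERM is an ASSUMED example value for a DIMM termination.
vddq = 1.5      # nominal VDDQ from Table 7-6
r_on = 28.6     # typical DQ driver resistance from Table 7-8
r_term = 60.0   # assumed DIMM termination

vol = (vddq / 2) * (r_on / (r_on + r_term))
voh = vddq - vol
print(f"VOL = {vol:.3f} V, VOH = {voh:.3f} V")
```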



Table 7-8. DDR3 Signal Group DC Specifications (Sheet 2 of 2)<br />
Symbol | Parameter | Min | Typ | Max | Units | Notes 1, 9<br />
RON_UP(CMD) | DDR3 command buffer pull-up resistance | 16 | 20 | 23 | Ω | 5<br />
RON_DN(CMD) | DDR3 command buffer pull-down resistance | 16 | 20 | 24 | Ω | 5<br />
RON_UP(CTL) | DDR3 control buffer pull-up resistance | 16 | 20 | 23 | Ω | 5<br />
RON_DN(CTL) | DDR3 control buffer pull-down resistance | 16 | 20 | 24 | Ω | 5<br />
VIL_SM_DRAMPWROK | Input low voltage for SM_DRAMPWROK | | | VDDQ*0.55 - 0.1 | V | 9<br />
VIH_SM_DRAMPWROK | Input high voltage for SM_DRAMPWROK | VDDQ*0.55 + 0.1 | | | V | 9<br />
ILI | Input leakage current (DQ, CK) at 0 V / 0.2*VDDQ / 0.8*VDDQ / VDDQ | | | ±0.75 / ±0.55 / ±0.9 / ±1.4 | mA |<br />
ILI | Input leakage current (CMD, CTL) at 0 V / 0.2*VDDQ / 0.8*VDDQ / VDDQ | | | ±0.85 / ±0.65 / ±1.1 / ±1.65 | mA |<br />
Notes:<br />
1. Unless otherwise noted, all specifications in this table apply to all processor frequencies.<br />
2. VIL is defined as the maximum voltage level at a receiving agent that will be interpreted as a logical low value.<br />
3. VIH is defined as the minimum voltage level at a receiving agent that will be interpreted as a logical high value.<br />
4. VIH and VOH may experience excursions above VDDQ. However, input signal drivers must comply with the signal quality specifications.<br />
5. This is the pull-up/pull-down driver resistance.<br />
6. RTERM is the termination on the DIMM and is not controlled by the processor.<br />
7. The minimum and maximum values for these signals are programmable by BIOS to one of the two sets.<br />
8. DDR3 values are pre-silicon estimations and subject to change.<br />
9. SM_DRAMPWROK must have a maximum of 15 ns rise or fall time over VDDQ * 0.55 ±200 mV, and the edge must be monotonic.<br />

Table 7-9. Control Sideband and TAP Signal Group DC Specifications<br />
Symbol | Parameter | Min | Max | Units | Notes 1<br />
VIL | Input low voltage | | VCCIO * 0.3 | V | 2<br />
VIH | Input high voltage | VCCIO * 0.7 | | V | 2, 4<br />
VOL | Output low voltage | | VCCIO * 0.1 | V | 2<br />
VOH | Output high voltage | VCCIO * 0.9 | | V | 2, 4<br />
RON | Buffer on resistance | 23 | 73 | Ω |<br />
ILI | Input leakage current | | ±200 | µA | 3<br />
Notes:<br />
1. Unless otherwise noted, all specifications in this table apply to all processor frequencies.<br />
2. The VCCIO referred to in these specifications refers to instantaneous VCCIO.<br />
3. For VIN between 0 V and VCCIO. Measured when the driver is tristated.<br />
4. VIH and VOH may experience excursions above VCCIO. However, input signal drivers must comply with the signal quality specifications.<br />


Table 7-10. PCIe DC Specifications<br />
Symbol | Parameter | Min | Typ | Max | Units | Notes 1, 11<br />
VTX-DIFF-p-p Low | Low differential peak-to-peak Tx voltage swing | 0.4 | 0.5 | 0.6 | V | 3<br />
VTX-DIFF-p-p | Differential peak-to-peak Tx voltage swing | 0.8 | 1 | 1.2 | V | 3<br />
VTX_CM-AC-p | Tx AC peak common mode output voltage (Gen1 only) | | | 20 | mV | 1, 2, 6<br />
VTX_CM-AC-p-p | Tx AC peak common mode output voltage (Gen2 only) | | | 100 | mV | 1, 2<br />
ZTX-DIFF-DC | DC differential Tx impedance (Gen1 only) | 80 | 90 | 120 | Ω | 1, 10<br />
ZRX-DC | DC common mode Rx impedance | 40 | 45 | 60 | Ω | 1, 8, 9<br />
ZRX-DIFF-DC | DC differential Rx impedance (Gen1 only) | 80 | 90 | 120 | Ω | 1<br />
VRX-DIFFp-p | Differential Rx input peak-to-peak voltage (Gen1 only) | 0.175 | | 1.2 | V | 1<br />
VRX-DIFFp-p | Differential Rx input peak-to-peak voltage (Gen2 only) | 0.12 | | 1.2 | V | 1<br />
VRX_CM-AC-p | Rx AC peak common mode input voltage | | | 150 | mV | 1, 7<br />
PEG_ICOMPO | Comp resistance | 24.75 | 25 | 25.25 | Ω | 4, 5<br />
PEG_COMPI | Comp resistance | 24.75 | 25 | 25.25 | Ω | 4, 5<br />
PEG_RCOMPO | Comp resistance | 24.75 | 25 | 25.25 | Ω | 4, 5<br />
Notes:<br />
1. Refer to the PCI Express Base Specification for more details.<br />
2. VTX-AC-CM-PP and VTX-AC-CM-P are defined in the PCI Express Base Specification. Measurement is made over at least 10^6 UI.<br />
3. As measured with the compliance test load. Defined as 2*|VTXD+ - VTXD-|.<br />
4. COMP resistance must be provided on the system board with 1% resistors.<br />
5. PEG_ICOMPO, PEG_COMPI, and PEG_RCOMPO are the same resistor.<br />
6. RMS value.<br />
7. Measured at Rx pins into a pair of 50-Ω terminations into ground. Common mode peak voltage is defined by the expression: max{|(Vd+ - Vd-) - V_CMDC|}.<br />
8. DC impedance limits are needed to ensure Receiver Detect.<br />
9. The Rx DC common mode impedance must be present when the Receiver terminations are first enabled to ensure that Receiver Detect occurs properly. Compensation of this impedance can start immediately, and the Rx common mode impedance (constrained by RLRX-CM to 50 Ω ±20%) must be within the specified range by the time Detect is entered.<br />
10. Low impedance defined during signaling. Parameter is captured for 5.0 GHz by RLTX-DIFF.<br />
11. These are pre-silicon estimates and are subject to change.<br />
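Note 3's definition of the transmit swing can be checked with assumed single-ended levels (the 0.75 V / 0.25 V values below are illustrative, not from the table):

```python
# Note 3 defines the Tx swing as 2*|VTXD+ - VTXD-|. With ASSUMED
# single-ended levels of 0.75 V and 0.25 V on the two legs, the swing is
# 2 * |0.75 - 0.25| = 1.0 V, the typical VTX-DIFF-p-p in Table 7-10.
def tx_diff_pp(v_dp, v_dn):
    return 2 * abs(v_dp - v_dn)

print(tx_diff_pp(0.75, 0.25))  # 1.0
```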



Electrical Specifications<br />

7.11 Platform Environment Control Interface (PECI) DC Specifications<br />

PECI is an <strong>Intel</strong> proprietary interface that provides a communication channel between <strong>Intel</strong> processors or chipset components and external thermal monitoring devices. The processor contains a Digital Thermal Sensor (DTS) that reports a relative die temperature as an offset from the Thermal Control Circuit (TCC) activation temperature. Temperature sensors located throughout the die are implemented as analog-to-digital converters calibrated at the factory. PECI provides an interface for external devices to read the DTS temperature for thermal management and fan speed control. More detailed information is provided in the Platform Environment Control Interface (PECI) Specification.<br />

7.11.1 PECI Bus Architecture<br />

The PECI architecture is based on a wired-OR bus that clients (such as the processor PECI interface) can pull high with a strong drive. The idle state on the bus is near zero.<br />

Figure 7-1 illustrates PECI design and connectivity: the host/originator can be a third-party PECI host, and one of the PECI clients is the processor PECI device.<br />

Figure 7-1. Example for PECI Host-clients Connection<br />



7.11.2 DC Characteristics<br />

The PECI interface operates at a nominal voltage set by VCCIO. The set of DC electrical specifications shown in Table 7-11 is used with devices normally operating from a VCCIO interface supply. VCCIO nominal levels will vary between processor families. All PECI devices will operate at the VCCIO level determined by the processor installed in the system. For specific nominal VCCIO levels, refer to Table 7-6.<br />

Table 7-11. PECI DC Electrical Limits<br />
Symbol | Definition and Conditions | Min | Max | Units | Notes 1<br />
Rup | Internal pull-up resistance | 15 | 45 | Ω | 3<br />
Vin | Input voltage range | -0.15 | VCCIO | V |<br />
Vhysteresis | Hysteresis | 0.1 * VCCIO | N/A | V |<br />
Vn | Negative-edge threshold voltage | 0.275 * VCCIO | 0.500 * VCCIO | V |<br />
Vp | Positive-edge threshold voltage | 0.550 * VCCIO | 0.725 * VCCIO | V |<br />
Cbus | Bus capacitance per node | N/A | 10 | pF |<br />
Cpad | Pad capacitance | 0.7 | 1.8 | pF |<br />
Ileak000 | Leakage current at 0 V | | 0.6 | mA | 2<br />
Ileak025 | Leakage current at 0.25*VCCIO | | 0.4 | mA | 2<br />
Ileak050 | Leakage current at 0.50*VCCIO | | 0.2 | mA | 2<br />
Ileak075 | Leakage current at 0.75*VCCIO | | 0.13 | mA | 2<br />
Ileak100 | Leakage current at VCCIO | | 0.10 | mA | 2<br />
Notes:<br />
1. VCCIO supplies the PECI interface. PECI behavior does not affect VCCIO min/max specifications.<br />
2. The leakage specification applies to powered devices on the PECI bus.<br />
3. The PECI buffer internal pull-up resistance is measured at 0.75*VCCIO.<br />
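Since every threshold in Table 7-11 scales with VCCIO, the absolute limits follow directly from the nominal VCCIO in Table 7-6. A small sketch:

```python
# Absolute PECI thresholds derived from the 1.05 V nominal VCCIO in Table 7-6.
vccio = 1.05
vn = (0.275 * vccio, 0.500 * vccio)   # negative-edge threshold window
vp = (0.550 * vccio, 0.725 * vccio)   # positive-edge threshold window
hyst_min = 0.1 * vccio                # minimum hysteresis

print(f"Vn: {vn[0]:.3f}-{vn[1]:.3f} V, Vp: {vp[0]:.3f}-{vp[1]:.3f} V, "
      f"hysteresis >= {hyst_min:.3f} V")
```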

7.11.3 Input Device Hysteresis<br />

The input buffers in both client and host models must use a Schmitt-triggered input design for improved noise immunity. Use Figure 7-2 as a guide for input buffer design.<br />

Figure 7-2. Input Device Hysteresis<br />

(Figure 7-2 shows the valid input signal range from PECI ground to VTTD, the PECI low and high ranges, the maximum and minimum VP and VN thresholds, and the minimum hysteresis window between them.)<br />


<strong>Processor</strong> Pin and Signal Information<br />

8 <strong>Processor</strong> Pin and Signal Information<br />

8.1 <strong>Processor</strong> Pin Assignments<br />

The processor pinmap quadrants are shown in Figure 8-1 through Figure 8-4. Table 8-1 provides a listing of all processor pins ordered alphabetically by pin name.<br />
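For board or validation scripts it can be convenient to hold Table 8-1 as structured records. A minimal sketch with a handful of rows copied from the table (the helper function is illustrative, not part of the datasheet):

```python
# A few Table 8-1 rows as (name, pin, buffer type, direction) records,
# showing one convenient way to query the pin list programmatically.
PINS = [
    ("BCLK[0]",   "W2",  "Diff Clk",    "I"),
    ("BCLK#[0]",  "W1",  "Diff Clk",    "I"),
    ("PECI",      "J35", "Async",       "I/O"),
    ("DMI_TX[0]", "V7",  "DMI",         "O"),
    ("PE_RX[0]",  "P3",  "PCI Express", "I"),
]

def pins_by_buffer(buffer_type):
    """Return the pin names whose buffer type matches exactly."""
    return [name for name, _pin, buf, _dir in PINS if buf == buffer_type]

print(pins_by_buffer("Diff Clk"))  # ['BCLK[0]', 'BCLK#[0]']
```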



Figure 8-1. Socket Pinmap (Top View, Upper-Left Quadrant)<br />

(Pinmap graphic, rows AY through AA, columns 40 through 21: DDR3 channel A and B data, strobe, address, and control lands (SA_*/SB_*), together with VDDQ, VCCIO, VCCAXG, VSS, SKTOCC#, and SM_VREF. See Table 8-1 for individual pin locations.)<br />


Figure 8-2. Socket Pinmap (Top View, Upper-Right Quadrant)<br />

(Pinmap graphic, rows AY through AA, columns 20 through 1: DDR3 channel A and B data, ECC, and CKE lands, SM_DRAMRST# and SM_DRAMPWROK, FDI and DMI lanes, FC_AH1/FC_AH4, VCCPLL, and the VCCIO_SENSE/VSSIO_SENSE lands. See Table 8-1 for individual pin locations.)<br />


Figure 8-3. Socket Pinmap (Top View, Lower-Left Quadrant)<br />

(Pinmap graphic, rows Y through A, columns 40 through 21: VCCAXG and VCC core supply lands, TAP signals (TCK, TDI, TDO, TMS, TRST#), BPM#[7:0], CFG straps, PECI, PREQ#/PRDY#, PROCHOT#, THERMTRIP#, RESET#, CATERR#, DBR#, PM_SYNC, the VID signals (VIDSCLK, VIDSOUT, VIDALERT#), and the VCC_SENSE/VSS_SENSE lands. See Table 8-1 for individual pin locations.)<br />


Figure 8-4. Socket Pinmap (Top View, Lower-Right Quadrant)<br />

(Pinmap graphic, rows Y through A, columns 20 through 1: DMI and PCI Express (PE_* and PEG_*) lanes, BCLK, VCC, VCCSA, VCCIO, and VSS lands, the VCCSA_SENSE land, the PEG compensation pins (PEG_ICOMPO, PEG_COMPI, PEG_RCOMPO), and NCTF lands. See Table 8-1 for individual pin locations.)<br />


Table 8-1. <strong>Processor</strong> Pin List by Pin Name<br />

Pin Name Pin # Buffer Type Dir.<br />

BCLK_ITP C40 Diff Clk I<br />

BCLK_ITP# D40 Diff Clk I<br />

BCLK[0] W2 Diff Clk I<br />

BCLK#[0] W1 Diff Clk I<br />

BPM#[0] H40 GTL I/O<br />

BPM#[1] H38 GTL I/O<br />

BPM#[2] G38 GTL I/O<br />

BPM#[3] G40 GTL I/O<br />

BPM#[4] G39 GTL I/O<br />

BPM#[5] F38 GTL I/O<br />

BPM#[6] E40 GTL I/O<br />

BPM#[7] F40 GTL I/O<br />

CATERR# E37 GTL O<br />

CFG[0] H36 CMOS I<br />

CFG[1] J36 CMOS I<br />

CFG[10] M38 CMOS I<br />

CFG[11] N36 CMOS I<br />

CFG[12] N38 CMOS I<br />

CFG[13] N39 CMOS I<br />

CFG[14] N37 CMOS I<br />

CFG[15] N40 CMOS I<br />

CFG[16] G37 CMOS I<br />

CFG[17] G36 CMOS I<br />

CFG[2] J37 CMOS I<br />

CFG[3] K36 CMOS I<br />

CFG[4] L36 CMOS I<br />

CFG[5] N35 CMOS I<br />

CFG[6] L37 CMOS I<br />

CFG[7] M36 CMOS I<br />

CFG[8] J38 CMOS I<br />

CFG[9] L35 CMOS I<br />

DBR# E39 Async CMOS O<br />

DMI_RX[0] W5 DMI I<br />

DMI_RX[1] V3 DMI I<br />

DMI_RX[2] Y3 DMI I<br />

DMI_RX[3] AA4 DMI I<br />

DMI_RX#[0] W4 DMI I<br />

DMI_RX#[1] V4 DMI I<br />

DMI_RX#[2] Y4 DMI I<br />

DMI_RX#[3] AA5 DMI I<br />

DMI_TX[0] V7 DMI O<br />

DMI_TX[1] W7 DMI O<br />

DMI_TX[2] Y6 DMI O<br />

DMI_TX[3] AA7 DMI O<br />


DMI_TX#[0] V6 DMI O<br />

DMI_TX#[1] W8 DMI O<br />

DMI_TX#[2] Y7 DMI O<br />

DMI_TX#[3] AA8 DMI O<br />

FC_AH1 AH1 N/A O<br />

FC_AH4 AH4 N/A O<br />

FDI_COMPIO AE2 Analog I<br />

FDI_FSYNC[0] AC5 CMOS I<br />

FDI_FSYNC[1] AE5 CMOS I<br />

FDI_ICOMPO AE1 Analog I<br />

FDI_INT AG3 CMOS I<br />

FDI_LSYNC[0] AC4 CMOS I<br />

FDI_LSYNC[1] AE4 CMOS I<br />

FDI_TX[0] AC8 FDI O<br />

FDI_TX[1] AC2 FDI O<br />

FDI_TX[2] AD2 FDI O<br />

FDI_TX[3] AD4 FDI O<br />

FDI_TX[4] AD7 FDI O<br />

FDI_TX[5] AE7 FDI O<br />

FDI_TX[6] AF3 FDI O<br />

FDI_TX[7] AG2 FDI O<br />

FDI_TX#[0] AC7 FDI O<br />

FDI_TX#[1] AC3 FDI O<br />

FDI_TX#[2] AD1 FDI O<br />

FDI_TX#[3] AD3 FDI O<br />

FDI_TX#[4] AD6 FDI O<br />

FDI_TX#[5] AE8 FDI O<br />

FDI_TX#[6] AF2 FDI O<br />

FDI_TX#[7] AG1 FDI O<br />

NCTF A38<br />

NCTF AU40<br />

NCTF AW38<br />

NCTF C2<br />

NCTF D1<br />

PE_RX[0] P3 PCI Express I<br />

PE_RX[1] R2 PCI Express I<br />

PE_RX[2] T4 PCI Express I<br />

PE_RX[3] U2 PCI Express I<br />

PE_RX#[0] P4 PCI Express I<br />

PE_RX#[1] R1 PCI Express I<br />

PE_RX#[2] T3 PCI Express I<br />

PE_RX#[3] U1 PCI Express I<br />

PE_TX[0] P8 PCI Express O<br />

PE_TX[1] T7 PCI Express O<br />




PE_TX[2] R6 PCI Express O<br />

PE_TX[3] U5 PCI Express O<br />

PE_TX#[0] P7 PCI Express O<br />

PE_TX#[1] T8 PCI Express O<br />

PE_TX#[2] R5 PCI Express O<br />

PE_TX#[3] U6 PCI Express O<br />

PECI J35 Async I/O<br />

PEG_COMPI B4 Analog I<br />

PEG_ICOMPO B5 Analog I<br />

PEG_RCOMPO C4 Analog I<br />

PEG_RX[0] B11 PCI Express I<br />

PEG_RX[1] D12 PCI Express I<br />

PEG_RX[10] H3 PCI Express I<br />

PEG_RX[11] J1 PCI Express I<br />

PEG_RX[12] K3 PCI Express I<br />

PEG_RX[13] L1 PCI Express I<br />

PEG_RX[14] M3 PCI Express I<br />

PEG_RX[15] N1 PCI Express I<br />

PEG_RX[2] C10 PCI Express I<br />

PEG_RX[3] E10 PCI Express I<br />

PEG_RX[4] B8 PCI Express I<br />

PEG_RX[5] C6 PCI Express I<br />

PEG_RX[6] A5 PCI Express I<br />

PEG_RX[7] E2 PCI Express I<br />

PEG_RX[8] F4 PCI Express I<br />

PEG_RX[9] G2 PCI Express I<br />

PEG_RX#[0] B12 PCI Express I<br />

PEG_RX#[1] D11 PCI Express I<br />

PEG_RX#[10] H4 PCI Express I<br />

PEG_RX#[11] J2 PCI Express I<br />

PEG_RX#[12] K4 PCI Express I<br />

PEG_RX#[13] L2 PCI Express I<br />

PEG_RX#[14] M4 PCI Express I<br />

PEG_RX#[15] N2 PCI Express I<br />

PEG_RX#[2] C9 PCI Express I<br />

PEG_RX#[3] E9 PCI Express I<br />

PEG_RX#[4] B7 PCI Express I<br />

PEG_RX#[5] C5 PCI Express I<br />

PEG_RX#[6] A6 PCI Express I<br />

PEG_RX#[7] E1 PCI Express I<br />

PEG_RX#[8] F3 PCI Express I<br />

PEG_RX#[9] G1 PCI Express I<br />

PEG_TX[0] C13 PCI Express O<br />

PEG_TX[1] E14 PCI Express O<br />


PEG_TX[2] G14 PCI Express O<br />

PEG_TX[3] F12 PCI Express O<br />

PEG_TX[4] J14 PCI Express O<br />

PEG_TX[5] D8 PCI Express O<br />

PEG_TX[6] D3 PCI Express O<br />

PEG_TX[7] E6 PCI Express O<br />

PEG_TX[8] F8 PCI Express O<br />

PEG_TX[9] G10 PCI Express O<br />

PEG_TX[10] G5 PCI Express O<br />

PEG_TX[11] K7 PCI Express O<br />

PEG_TX[12] J5 PCI Express O<br />

PEG_TX[13] M8 PCI Express O<br />

PEG_TX[14] L6 PCI Express O<br />

PEG_TX[15] N5 PCI Express O<br />

PEG_TX#[0] C14 PCI Express O<br />

PEG_TX#[1] E13 PCI Express O<br />

PEG_TX#[2] G13 PCI Express O<br />

PEG_TX#[3] F11 PCI Express O<br />

PEG_TX#[4] J13 PCI Express O<br />

PEG_TX#[5] D7 PCI Express O<br />

PEG_TX#[6] C3 PCI Express O<br />

PEG_TX#[7] E5 PCI Express O<br />

PEG_TX#[8] F7 PCI Express O<br />

PEG_TX#[9] G9 PCI Express O<br />

PEG_TX#[10] G6 PCI Express O<br />

PEG_TX#[11] K8 PCI Express O<br />

PEG_TX#[12] J6 PCI Express O<br />

PEG_TX#[13] M7 PCI Express O<br />

PEG_TX#[14] L5 PCI Express O<br />

PEG_TX#[15] N6 PCI Express O<br />

PM_SYNC E38 CMOS I<br />

PRDY# K38 Async GTL O<br />

PREQ# K40 Async GTL I<br />

PROC_SEL K32 N/A O<br />

PROCHOT# H34 Async GTL I/O<br />

RESET# F36 CMOS I<br />

RSVD AB6<br />

RSVD AB7<br />

RSVD AD37<br />

RSVD AE6<br />

RSVD AF4<br />

RSVD AG4<br />

RSVD AJ11<br />

RSVD AJ29<br />




RSVD AJ30<br />

RSVD AJ31<br />

RSVD AN20<br />

RSVD AP20<br />

RSVD AT11<br />

RSVD AT14<br />

RSVD AU10<br />

RSVD AV34<br />

RSVD AW34<br />

RSVD AY10<br />

RSVD C38<br />

RSVD C39<br />

RSVD D38<br />

RSVD H7<br />

RSVD H8<br />

RSVD J33<br />

RSVD J34<br />

RSVD J9<br />

RSVD K34<br />

RSVD K9<br />

RSVD L31<br />

RSVD L33<br />

RSVD L34<br />

RSVD L9<br />

RSVD M34<br />

RSVD N33<br />

RSVD N34<br />

RSVD P35<br />

RSVD P37<br />

RSVD P39<br />

RSVD R34<br />

RSVD R36<br />

RSVD R38<br />

RSVD R40<br />

RSVD J31<br />

RSVD AD34<br />

RSVD AD35<br />

RSVD K31<br />

RSVD_NCTF AV1<br />

RSVD_NCTF AW2<br />

RSVD_NCTF AY3<br />

RSVD_NCTF B39<br />

SA_BS[0] AY29 DDR3 O<br />

SA_BS[1] AW28 DDR3 O<br />


SA_BS[2] AV20 DDR3 O<br />

SA_CAS# AV30 DDR3 O<br />

SA_CK[0] AY25 DDR3 O<br />

SA_CK[1] AU24 DDR3 O<br />

SA_CK[2] AW27 DDR3 O<br />

SA_CK[3] AV26 DDR3 O<br />

SA_CK#[0] AW25 DDR3 O<br />

SA_CK#[1] AU25 DDR3 O<br />

SA_CK#[2] AY27 DDR3 O<br />

SA_CK#[3] AW26 DDR3 O<br />

SA_CKE[0] AV19 DDR3 O<br />

SA_CKE[1] AT19 DDR3 O<br />

SA_CKE[2] AU18 DDR3 O<br />

SA_CKE[3] AV18 DDR3 O<br />

SA_CS#[0] AU29 DDR3 O<br />

SA_CS#[1] AV32 DDR3 O<br />

SA_CS#[2] AW30 DDR3 O<br />

SA_CS#[3] AU33 DDR3 O<br />

SA_DQ[0] AJ3 DDR3 I/O<br />

SA_DQ[1] AJ4 DDR3 I/O<br />

SA_DQ[2] AL3 DDR3 I/O<br />

SA_DQ[3] AL4 DDR3 I/O<br />

SA_DQ[4] AJ2 DDR3 I/O<br />

SA_DQ[5] AJ1 DDR3 I/O<br />

SA_DQ[6] AL2 DDR3 I/O<br />

SA_DQ[7] AL1 DDR3 I/O<br />

SA_DQ[8] AN1 DDR3 I/O<br />

SA_DQ[9] AN4 DDR3 I/O<br />

SA_DQ[10] AR3 DDR3 I/O<br />

SA_DQ[11] AR4 DDR3 I/O<br />

SA_DQ[12] AN2 DDR3 I/O<br />

SA_DQ[13] AN3 DDR3 I/O<br />

SA_DQ[14] AR2 DDR3 I/O<br />

SA_DQ[15] AR1 DDR3 I/O<br />

SA_DQ[16] AV2 DDR3 I/O<br />

SA_DQ[17] AW3 DDR3 I/O<br />

SA_DQ[18] AV5 DDR3 I/O<br />

SA_DQ[19] AW5 DDR3 I/O<br />

SA_DQ[20] AU2 DDR3 I/O<br />

SA_DQ[21] AU3 DDR3 I/O<br />

SA_DQ[22] AU5 DDR3 I/O<br />

SA_DQ[23] AY5 DDR3 I/O<br />

SA_DQ[24] AY7 DDR3 I/O<br />

SA_DQ[25] AU7 DDR3 I/O<br />




SA_DQ[26] AV9 DDR3 I/O<br />

SA_DQ[27] AU9 DDR3 I/O<br />

SA_DQ[28] AV7 DDR3 I/O<br />

SA_DQ[29] AW7 DDR3 I/O<br />

SA_DQ[30] AW9 DDR3 I/O<br />

SA_DQ[31] AY9 DDR3 I/O<br />

SA_DQ[32] AU35 DDR3 I/O<br />

SA_DQ[33] AW37 DDR3 I/O<br />

SA_DQ[34] AU39 DDR3 I/O<br />

SA_DQ[35] AU36 DDR3 I/O<br />

SA_DQ[36] AW35 DDR3 I/O<br />

SA_DQ[37] AY36 DDR3 I/O<br />

SA_DQ[38] AU38 DDR3 I/O<br />

SA_DQ[39] AU37 DDR3 I/O<br />

SA_DQ[40] AR40 DDR3 I/O<br />

SA_DQ[41] AR37 DDR3 I/O<br />

SA_DQ[42] AN38 DDR3 I/O<br />

SA_DQ[43] AN37 DDR3 I/O<br />

SA_DQ[44] AR39 DDR3 I/O<br />

SA_DQ[45] AR38 DDR3 I/O<br />

SA_DQ[46] AN39 DDR3 I/O<br />

SA_DQ[47] AN40 DDR3 I/O<br />

SA_DQ[48] AL40 DDR3 I/O<br />

SA_DQ[49] AL37 DDR3 I/O<br />

SA_DQ[50] AJ38 DDR3 I/O<br />

SA_DQ[51] AJ37 DDR3 I/O<br />

SA_DQ[52] AL39 DDR3 I/O<br />

SA_DQ[53] AL38 DDR3 I/O<br />

SA_DQ[54] AJ39 DDR3 I/O<br />

SA_DQ[55] AJ40 DDR3 I/O<br />

SA_DQ[56] AG40 DDR3 I/O<br />

SA_DQ[57] AG37 DDR3 I/O<br />

SA_DQ[58] AE38 DDR3 I/O<br />

SA_DQ[59] AE37 DDR3 I/O<br />

SA_DQ[60] AG39 DDR3 I/O<br />

SA_DQ[61] AG38 DDR3 I/O<br />

SA_DQ[62] AE39 DDR3 I/O<br />

SA_DQ[63] AE40 DDR3 I/O<br />

SA_DQS[0] AK3 DDR3 I/O<br />

SA_DQS[1] AP3 DDR3 I/O<br />

SA_DQS[2] AW4 DDR3 I/O<br />

SA_DQS[3] AV8 DDR3 I/O<br />

SA_DQS[4] AV37 DDR3 I/O<br />

SA_DQS[5] AP38 DDR3 I/O<br />


SA_DQS[6] AK38 DDR3 I/O<br />

SA_DQS[7] AF38 DDR3 I/O<br />

SA_DQS[8] AV13 DDR3 I/O<br />

SA_DQS#[0] AK2 DDR3 I/O<br />

SA_DQS#[1] AP2 DDR3 I/O<br />

SA_DQS#[2] AV4 DDR3 I/O<br />

SA_DQS#[3] AW8 DDR3 I/O<br />

SA_DQS#[4] AV36 DDR3 I/O<br />

SA_DQS#[5] AP39 DDR3 I/O<br />

SA_DQS#[6] AK39 DDR3 I/O<br />

SA_DQS#[7] AF39 DDR3 I/O<br />

SA_DQS#[8] AV12 DDR3 I/O<br />

SA_ECC_CB[0] AU12 DDR3 I/O<br />

SA_ECC_CB[1] AU14 DDR3 I/O<br />

SA_ECC_CB[2] AW13 DDR3 I/O<br />

SA_ECC_CB[3] AY13 DDR3 I/O<br />

SA_ECC_CB[4] AU13 DDR3 I/O<br />

SA_ECC_CB[5] AU11 DDR3 I/O<br />

SA_ECC_CB[6] AY12 DDR3 I/O<br />

SA_ECC_CB[7] AW12 DDR3 I/O<br />

SA_MA[0] AV27 DDR3 O<br />

SA_MA[1] AY24 DDR3 O<br />

SA_MA[2] AW24 DDR3 O<br />

SA_MA[3] AW23 DDR3 O<br />

SA_MA[4] AV23 DDR3 O<br />

SA_MA[5] AT24 DDR3 O<br />

SA_MA[6] AT23 DDR3 O<br />

SA_MA[7] AU22 DDR3 O<br />

SA_MA[8] AV22 DDR3 O<br />

SA_MA[9] AT22 DDR3 O<br />

SA_MA[10] AV28 DDR3 O<br />

SA_MA[11] AU21 DDR3 O<br />

SA_MA[12] AT21 DDR3 O<br />

SA_MA[13] AW32 DDR3 O<br />

SA_MA[14] AU20 DDR3 O<br />

SA_MA[15] AT20 DDR3 O<br />

SA_ODT[0] AV31 DDR3 O<br />

SA_ODT[1] AU32 DDR3 O<br />

SA_ODT[2] AU30 DDR3 O<br />

SA_ODT[3] AW33 DDR3 O<br />

SA_RAS# AU28 DDR3 O<br />

SA_WE# AW29 DDR3 O<br />

SB_BS[0] AP23 DDR3 O<br />

SB_BS[1] AM24 DDR3 O<br />


SB_BS[2] AW17 DDR3 O<br />

SB_CAS# AK25 DDR3 O<br />

SB_CK[0] AL21 DDR3 O<br />

SB_CK[1] AL20 DDR3 O<br />

SB_CK[2] AL23 DDR3 O<br />

SB_CK[3] AP21 DDR3 O<br />

SB_CK#[0] AL22 DDR3 O<br />

SB_CK#[1] AK20 DDR3 O<br />

SB_CK#[2] AM22 DDR3 O<br />

SB_CK#[3] AN21 DDR3 O<br />

SB_CKE[0] AU16 DDR3 O<br />

SB_CKE[1] AY15 DDR3 O<br />

SB_CKE[2] AW15 DDR3 O<br />

SB_CKE[3] AV15 DDR3 O<br />

SB_CS#[0] AN25 DDR3 O<br />

SB_CS#[1] AN26 DDR3 O<br />

SB_CS#[2] AL25 DDR3 O<br />

SB_CS#[3] AT26 DDR3 O<br />

SB_DQ[0] AG7 DDR3 I/O<br />

SB_DQ[1] AG8 DDR3 I/O<br />

SB_DQ[2] AJ9 DDR3 I/O<br />

SB_DQ[3] AJ8 DDR3 I/O<br />

SB_DQ[4] AG5 DDR3 I/O<br />

SB_DQ[5] AG6 DDR3 I/O<br />

SB_DQ[6] AJ6 DDR3 I/O<br />

SB_DQ[7] AJ7 DDR3 I/O<br />

SB_DQ[8] AL7 DDR3 I/O<br />

SB_DQ[9] AM7 DDR3 I/O<br />

SB_DQ[10] AM10 DDR3 I/O<br />

SB_DQ[11] AL10 DDR3 I/O<br />

SB_DQ[12] AL6 DDR3 I/O<br />

SB_DQ[13] AM6 DDR3 I/O<br />

SB_DQ[14] AL9 DDR3 I/O<br />

SB_DQ[15] AM9 DDR3 I/O<br />

SB_DQ[16] AP7 DDR3 I/O<br />

SB_DQ[17] AR7 DDR3 I/O<br />

SB_DQ[18] AP10 DDR3 I/O<br />

SB_DQ[19] AR10 DDR3 I/O<br />

SB_DQ[20] AP6 DDR3 I/O<br />

SB_DQ[21] AR6 DDR3 I/O<br />

SB_DQ[22] AP9 DDR3 I/O<br />

SB_DQ[23] AR9 DDR3 I/O<br />

SB_DQ[24] AM12 DDR3 I/O<br />

SB_DQ[25] AM13 DDR3 I/O<br />


SB_DQ[26] AR13 DDR3 I/O<br />

SB_DQ[27] AP13 DDR3 I/O<br />

SB_DQ[28] AL12 DDR3 I/O<br />

SB_DQ[29] AL13 DDR3 I/O<br />

SB_DQ[30] AR12 DDR3 I/O<br />

SB_DQ[31] AP12 DDR3 I/O<br />

SB_DQ[32] AR28 DDR3 I/O<br />

SB_DQ[33] AR29 DDR3 I/O<br />

SB_DQ[34] AL28 DDR3 I/O<br />

SB_DQ[35] AL29 DDR3 I/O<br />

SB_DQ[36] AP28 DDR3 I/O<br />

SB_DQ[37] AP29 DDR3 I/O<br />

SB_DQ[38] AM28 DDR3 I/O<br />

SB_DQ[39] AM29 DDR3 I/O<br />

SB_DQ[40] AP32 DDR3 I/O<br />

SB_DQ[41] AP31 DDR3 I/O<br />

SB_DQ[42] AP35 DDR3 I/O<br />

SB_DQ[43] AP34 DDR3 I/O<br />

SB_DQ[44] AR32 DDR3 I/O<br />

SB_DQ[45] AR31 DDR3 I/O<br />

SB_DQ[46] AR35 DDR3 I/O<br />

SB_DQ[47] AR34 DDR3 I/O<br />

SB_DQ[48] AM32 DDR3 I/O<br />

SB_DQ[49] AM31 DDR3 I/O<br />

SB_DQ[50] AL35 DDR3 I/O<br />

SB_DQ[51] AL32 DDR3 I/O<br />

SB_DQ[52] AM34 DDR3 I/O<br />

SB_DQ[53] AL31 DDR3 I/O<br />

SB_DQ[54] AM35 DDR3 I/O<br />

SB_DQ[55] AL34 DDR3 I/O<br />

SB_DQ[56] AH35 DDR3 I/O<br />

SB_DQ[57] AH34 DDR3 I/O<br />

SB_DQ[58] AE34 DDR3 I/O<br />

SB_DQ[59] AE35 DDR3 I/O<br />

SB_DQ[60] AJ35 DDR3 I/O<br />

SB_DQ[61] AJ34 DDR3 I/O<br />

SB_DQ[62] AF33 DDR3 I/O<br />

SB_DQ[63] AF35 DDR3 I/O<br />

SB_DQS[0] AH7 DDR3 I/O<br />

SB_DQS[1] AM8 DDR3 I/O<br />

SB_DQS[2] AR8 DDR3 I/O<br />

SB_DQS[3] AN13 DDR3 I/O<br />

SB_DQS[4] AN29 DDR3 I/O<br />

SB_DQS[5] AP33 DDR3 I/O<br />


SB_DQS[6] AL33 DDR3 I/O<br />

SB_DQS[7] AG35 DDR3 I/O<br />

SB_DQS[8] AN16 DDR3 I/O<br />

SB_DQS#[0] AH6 DDR3 I/O<br />

SB_DQS#[1] AL8 DDR3 I/O<br />

SB_DQS#[2] AP8 DDR3 I/O<br />

SB_DQS#[3] AN12 DDR3 I/O<br />

SB_DQS#[4] AN28 DDR3 I/O<br />

SB_DQS#[5] AR33 DDR3 I/O<br />

SB_DQS#[6] AM33 DDR3 I/O<br />

SB_DQS#[7] AG34 DDR3 I/O<br />

SB_DQS#[8] AN15 DDR3 I/O<br />

SB_ECC_CB[0] AL16 DDR3 I/O<br />

SB_ECC_CB[1] AM16 DDR3 I/O<br />

SB_ECC_CB[2] AP16 DDR3 I/O<br />

SB_ECC_CB[3] AR16 DDR3 I/O<br />

SB_ECC_CB[4] AL15 DDR3 I/O<br />

SB_ECC_CB[5] AM15 DDR3 I/O<br />

SB_ECC_CB[6] AR15 DDR3 I/O<br />

SB_ECC_CB[7] AP15 DDR3 I/O<br />

SB_MA[0] AK24 DDR3 O<br />

SB_MA[1] AM20 DDR3 O<br />

SB_MA[2] AM19 DDR3 O<br />

SB_MA[3] AK18 DDR3 O<br />

SB_MA[4] AP19 DDR3 O<br />

SB_MA[5] AP18 DDR3 O<br />

SB_MA[6] AM18 DDR3 O<br />

SB_MA[7] AL18 DDR3 O<br />

SB_MA[8] AN18 DDR3 O<br />

SB_MA[9] AY17 DDR3 O<br />

SB_MA[10] AN23 DDR3 O<br />

SB_MA[11] AU17 DDR3 O<br />

SB_MA[12] AT18 DDR3 O<br />

SB_MA[13] AR26 DDR3 O<br />

SB_MA[14] AY16 DDR3 O<br />

SB_MA[15] AV16 DDR3 O<br />

SB_ODT[0] AL26 DDR3 O<br />

SB_ODT[1] AP26 DDR3 O<br />

SB_ODT[2] AM26 DDR3 O<br />

SB_ODT[3] AK26 DDR3 O<br />

SB_RAS# AP24 DDR3 O<br />

SB_WE# AR25 DDR3 O<br />

SKTOCC# AJ33 Analog O<br />

SM_DRAMPWROK AJ19 Async CMOS I<br />


SM_DRAMRST# AW18 DDR3 O<br />

SM_VREF AJ22 Analog I<br />

TCK M40 TAP I<br />

TDI L40 TAP I<br />

TDO L39 TAP O<br />

THERMTRIP# G35 Async CMOS O<br />

TMS L38 TAP I<br />

TRST# J39 TAP I<br />

UNCOREPWRGOOD J40 Async CMOS I<br />

VCC A12 PWR<br />

VCC A13 PWR<br />

VCC A14 PWR<br />

VCC A15 PWR<br />

VCC A16 PWR<br />

VCC A18 PWR<br />

VCC A24 PWR<br />

VCC A25 PWR<br />

VCC A27 PWR<br />

VCC A28 PWR<br />

VCC B15 PWR<br />

VCC B16 PWR<br />

VCC B18 PWR<br />

VCC B24 PWR<br />

VCC B25 PWR<br />

VCC B27 PWR<br />

VCC B28 PWR<br />

VCC B30 PWR<br />

VCC B31 PWR<br />

VCC B33 PWR<br />

VCC B34 PWR<br />

VCC C15 PWR<br />

VCC C16 PWR<br />

VCC C18 PWR<br />

VCC C19 PWR<br />

VCC C21 PWR<br />

VCC C22 PWR<br />

VCC C24 PWR<br />

VCC C25 PWR<br />

VCC C27 PWR<br />

VCC C28 PWR<br />

VCC C30 PWR<br />

VCC C31 PWR<br />

VCC C33 PWR<br />

VCC C34 PWR<br />


VCC C36 PWR<br />

VCC D13 PWR<br />

VCC D14 PWR<br />

VCC D15 PWR<br />

VCC D16 PWR<br />

VCC D18 PWR<br />

VCC D19 PWR<br />

VCC D21 PWR<br />

VCC D22 PWR<br />

VCC D24 PWR<br />

VCC D25 PWR<br />

VCC D27 PWR<br />

VCC D28 PWR<br />

VCC D30 PWR<br />

VCC D31 PWR<br />

VCC D33 PWR<br />

VCC D34 PWR<br />

VCC D35 PWR<br />

VCC D36 PWR<br />

VCC E15 PWR<br />

VCC E16 PWR<br />

VCC E18 PWR<br />

VCC E19 PWR<br />

VCC E21 PWR<br />

VCC E22 PWR<br />

VCC E24 PWR<br />

VCC E25 PWR<br />

VCC E27 PWR<br />

VCC E28 PWR<br />

VCC E30 PWR<br />

VCC E31 PWR<br />

VCC E33 PWR<br />

VCC E34 PWR<br />

VCC E35 PWR<br />

VCC F15 PWR<br />

VCC F16 PWR<br />

VCC F18 PWR<br />

VCC F19 PWR<br />

VCC F21 PWR<br />

VCC F22 PWR<br />

VCC F24 PWR<br />

VCC F25 PWR<br />

VCC F27 PWR<br />

VCC F28 PWR<br />


VCC F30 PWR<br />

VCC F31 PWR<br />

VCC F32 PWR<br />

VCC F33 PWR<br />

VCC F34 PWR<br />

VCC G15 PWR<br />

VCC G16 PWR<br />

VCC G18 PWR<br />

VCC G19 PWR<br />

VCC G21 PWR<br />

VCC G22 PWR<br />

VCC G24 PWR<br />

VCC G25 PWR<br />

VCC G27 PWR<br />

VCC G28 PWR<br />

VCC G30 PWR<br />

VCC G31 PWR<br />

VCC G32 PWR<br />

VCC G33 PWR<br />

VCC H13 PWR<br />

VCC H14 PWR<br />

VCC H15 PWR<br />

VCC H16 PWR<br />

VCC H18 PWR<br />

VCC H19 PWR<br />

VCC H21 PWR<br />

VCC H22 PWR<br />

VCC H24 PWR<br />

VCC H25 PWR<br />

VCC H27 PWR<br />

VCC H28 PWR<br />

VCC H30 PWR<br />

VCC H31 PWR<br />

VCC H32 PWR<br />

VCC J12 PWR<br />

VCC J15 PWR<br />

VCC J16 PWR<br />

VCC J18 PWR<br />

VCC J19 PWR<br />

VCC J21 PWR<br />

VCC J22 PWR<br />

VCC J24 PWR<br />

VCC J25 PWR<br />

VCC J27 PWR<br />


VCC J28 PWR<br />

VCC J30 PWR<br />

VCC K15 PWR<br />

VCC K16 PWR<br />

VCC K18 PWR<br />

VCC K19 PWR<br />

VCC K21 PWR<br />

VCC K22 PWR<br />

VCC K24 PWR<br />

VCC K25 PWR<br />

VCC K27 PWR<br />

VCC K28 PWR<br />

VCC K30 PWR<br />

VCC L13 PWR<br />

VCC L14 PWR<br />

VCC L15 PWR<br />

VCC L16 PWR<br />

VCC L18 PWR<br />

VCC L19 PWR<br />

VCC L21 PWR<br />

VCC L22 PWR<br />

VCC L24 PWR<br />

VCC L25 PWR<br />

VCC L27 PWR<br />

VCC L28 PWR<br />

VCC L30 PWR<br />

VCC M14 PWR<br />

VCC M15 PWR<br />

VCC M16 PWR<br />

VCC M18 PWR<br />

VCC M19 PWR<br />

VCC M21 PWR<br />

VCC M22 PWR<br />

VCC M24 PWR<br />

VCC M25 PWR<br />

VCC M27 PWR<br />

VCC M28 PWR<br />

VCC M30 PWR<br />

VCC_SENSE A36 Analog O<br />

VCCAXG AB33 PWR<br />

VCCAXG AB34 PWR<br />

VCCAXG AB35 PWR<br />

VCCAXG AB36 PWR<br />

VCCAXG AB37 PWR<br />


VCCAXG AB38 PWR<br />

VCCAXG AB39 PWR<br />

VCCAXG AB40 PWR<br />

VCCAXG AC33 PWR<br />

VCCAXG AC34 PWR<br />

VCCAXG AC35 PWR<br />

VCCAXG AC36 PWR<br />

VCCAXG AC37 PWR<br />

VCCAXG AC38 PWR<br />

VCCAXG AC39 PWR<br />

VCCAXG AC40 PWR<br />

VCCAXG T33 PWR<br />

VCCAXG T34 PWR<br />

VCCAXG T35 PWR<br />

VCCAXG T36 PWR<br />

VCCAXG T37 PWR<br />

VCCAXG T38 PWR<br />

VCCAXG T39 PWR<br />

VCCAXG T40 PWR<br />

VCCAXG U33 PWR<br />

VCCAXG U34 PWR<br />

VCCAXG U35 PWR<br />

VCCAXG U36 PWR<br />

VCCAXG U37 PWR<br />

VCCAXG U38 PWR<br />

VCCAXG U39 PWR<br />

VCCAXG U40 PWR<br />

VCCAXG W33 PWR<br />

VCCAXG W34 PWR<br />

VCCAXG W35 PWR<br />

VCCAXG W36 PWR<br />

VCCAXG W37 PWR<br />

VCCAXG W38 PWR<br />

VCCAXG Y33 PWR<br />

VCCAXG Y34 PWR<br />

VCCAXG Y35 PWR<br />

VCCAXG Y36 PWR<br />

VCCAXG Y37 PWR<br />

VCCAXG Y38 PWR<br />

VCCAXG_SENSE L32 Analog O<br />

VCCIO A11 PWR<br />

VCCIO A7 PWR<br />

VCCIO AA3 PWR<br />

VCCIO AB8 PWR<br />


VCCIO AF8 PWR<br />

VCCIO AG33 PWR<br />

VCCIO AJ16 PWR<br />

VCCIO AJ17 PWR<br />

VCCIO AJ26 PWR<br />

VCCIO AJ28 PWR<br />

VCCIO AJ32 PWR<br />

VCCIO AK15 PWR<br />

VCCIO AK17 PWR<br />

VCCIO AK19 PWR<br />

VCCIO AK21 PWR<br />

VCCIO AK23 PWR<br />

VCCIO AK27 PWR<br />

VCCIO AK29 PWR<br />

VCCIO AK30 PWR<br />

VCCIO B9 PWR<br />

VCCIO D10 PWR<br />

VCCIO D6 PWR<br />

VCCIO E3 PWR<br />

VCCIO E4 PWR<br />

VCCIO G3 PWR<br />

VCCIO G4 PWR<br />

VCCIO J3 PWR<br />

VCCIO J4 PWR<br />

VCCIO J7 PWR<br />

VCCIO J8 PWR<br />

VCCIO L3 PWR<br />

VCCIO L4 PWR<br />

VCCIO L7 PWR<br />

VCCIO M13 PWR<br />

VCCIO N3 PWR<br />

VCCIO N4 PWR<br />

VCCIO N7 PWR<br />

VCCIO R3 PWR<br />

VCCIO R4 PWR<br />

VCCIO R7 PWR<br />

VCCIO U3 PWR<br />

VCCIO U4 PWR<br />

VCCIO U7 PWR<br />

VCCIO V8 PWR<br />

VCCIO W3 PWR<br />

VCCIO_SEL P33 N/A O<br />

VCCIO_SENSE AB4 Analog O<br />

VCCPLL AK11 PWR<br />


VCCPLL AK12 PWR<br />

VCCSA H10 PWR<br />

VCCSA H11 PWR<br />

VCCSA H12 PWR<br />

VCCSA J10 PWR<br />

VCCSA K10 PWR<br />

VCCSA K11 PWR<br />

VCCSA L11 PWR<br />

VCCSA L12 PWR<br />

VCCSA M10 PWR<br />

VCCSA M11 PWR<br />

VCCSA M12 PWR<br />

VCCSA_SENSE T2 Analog O<br />

VCCSA_VID P34 CMOS O<br />

VDDQ AJ13 PWR<br />

VDDQ AJ14 PWR<br />

VDDQ AJ20 PWR<br />

VDDQ AJ23 PWR<br />

VDDQ AJ24 PWR<br />

VDDQ AR20 PWR<br />

VDDQ AR21 PWR<br />

VDDQ AR22 PWR<br />

VDDQ AR23 PWR<br />

VDDQ AR24 PWR<br />

VDDQ AU19 PWR<br />

VDDQ AU23 PWR<br />

VDDQ AU27 PWR<br />

VDDQ AU31 PWR<br />

VDDQ AV21 PWR<br />

VDDQ AV24 PWR<br />

VDDQ AV25 PWR<br />

VDDQ AV29 PWR<br />

VDDQ AV33 PWR<br />

VDDQ AW31 PWR<br />

VDDQ AY23 PWR<br />

VDDQ AY26 PWR<br />

VDDQ AY28 PWR<br />

VIDALERT# A37 CMOS I<br />

VIDSCLK C37 CMOS O<br />

VIDSOUT B37 CMOS I/O<br />

VSS A17 GND<br />

VSS A23 GND<br />

VSS A26 GND<br />

VSS A29 GND<br />


VSS A35 GND<br />

VSS AA33 GND<br />

VSS AA34 GND<br />

VSS AA35 GND<br />

VSS AA36 GND<br />

VSS AA37 GND<br />

VSS AA38 GND<br />

VSS AA6 GND<br />

VSS AB5 GND<br />

VSS AC1 GND<br />

VSS AC6 GND<br />

VSS AD33 GND<br />

VSS AD36 GND<br />

VSS AD38 GND<br />

VSS AD39 GND<br />

VSS AD40 GND<br />

VSS AD5 GND<br />

VSS AD8 GND<br />

VSS AE3 GND<br />

VSS AE33 GND<br />

VSS AE36 GND<br />

VSS AF1 GND<br />

VSS AF34 GND<br />

VSS AF36 GND<br />

VSS AF37 GND<br />

VSS AF40 GND<br />

VSS AF5 GND<br />

VSS AF6 GND<br />

VSS AF7 GND<br />

VSS AG36 GND<br />

VSS AH2 GND<br />

VSS AH3 GND<br />

VSS AH33 GND<br />

VSS AH36 GND<br />

VSS AH37 GND<br />

VSS AH38 GND<br />

VSS AH39 GND<br />

VSS AH40 GND<br />

VSS AH5 GND<br />

VSS AH8 GND<br />

VSS AJ12 GND<br />

VSS AJ15 GND<br />

VSS AJ18 GND<br />

VSS AJ21 GND<br />


VSS AJ25 GND<br />

VSS AJ27 GND<br />

VSS AJ36 GND<br />

VSS AJ5 GND<br />

VSS AK1 GND<br />

VSS AK10 GND<br />

VSS AK13 GND<br />

VSS AK14 GND<br />

VSS AK16 GND<br />

VSS AK22 GND<br />

VSS AK28 GND<br />

VSS AK31 GND<br />

VSS AK32 GND<br />

VSS AK33 GND<br />

VSS AK34 GND<br />

VSS AK35 GND<br />

VSS AK36 GND<br />

VSS AK37 GND<br />

VSS AK4 GND<br />

VSS AK40 GND<br />

VSS AK5 GND<br />

VSS AK6 GND<br />

VSS AK7 GND<br />

VSS AK8 GND<br />

VSS AK9 GND<br />

VSS AL11 GND<br />

VSS AL14 GND<br />

VSS AL17 GND<br />

VSS AL19 GND<br />

VSS AL24 GND<br />

VSS AL27 GND<br />

VSS AL30 GND<br />

VSS AL36 GND<br />

VSS AL5 GND<br />

VSS AM1 GND<br />

VSS AM11 GND<br />

VSS AM14 GND<br />

VSS AM17 GND<br />

VSS AM2 GND<br />

VSS AM21 GND<br />

VSS AM23 GND<br />

VSS AM25 GND<br />

VSS AM27 GND<br />

VSS AM3 GND<br />


VSS AM30 GND<br />

VSS AM36 GND<br />

VSS AM37 GND<br />

VSS AM38 GND<br />

VSS AM39 GND<br />

VSS AM4 GND<br />

VSS AM40 GND<br />

VSS AM5 GND<br />

VSS AN10 GND<br />

VSS AN11 GND<br />

VSS AN14 GND<br />

VSS AN17 GND<br />

VSS AN19 GND<br />

VSS AN22 GND<br />

VSS AN24 GND<br />

VSS AN27 GND<br />

VSS AN30 GND<br />

VSS AN31 GND<br />

VSS AN32 GND<br />

VSS AN33 GND<br />

VSS AN34 GND<br />

VSS AN35 GND<br />

VSS AN36 GND<br />

VSS AN5 GND<br />

VSS AN6 GND<br />

VSS AN7 GND<br />

VSS AN8 GND<br />

VSS AN9 GND<br />

VSS AP1 GND<br />

VSS AP11 GND<br />

VSS AP14 GND<br />

VSS AP17 GND<br />

VSS AP22 GND<br />

VSS AP25 GND<br />

VSS AP27 GND<br />

VSS AP30 GND<br />

VSS AP36 GND<br />

VSS AP37 GND<br />

VSS AP4 GND<br />

VSS AP40 GND<br />

VSS AP5 GND<br />

VSS AR11 GND<br />

VSS AR14 GND<br />

VSS AR17 GND<br />


VSS AR18 GND<br />

VSS AR19 GND<br />

VSS AR27 GND<br />

VSS AR30 GND<br />

VSS AR36 GND<br />

VSS AR5 GND<br />

VSS AT1 GND<br />

VSS AT10 GND<br />

VSS AT12 GND<br />

VSS AT13 GND<br />

VSS AT15 GND<br />

VSS AT16 GND<br />

VSS AT17 GND<br />

VSS AT2 GND<br />

VSS AT25 GND<br />

VSS AT27 GND<br />

VSS AT28 GND<br />

VSS AT29 GND<br />

VSS AT3 GND<br />

VSS AT30 GND<br />

VSS AT31 GND<br />

VSS AT32 GND<br />

VSS AT33 GND<br />

VSS AT34 GND<br />

VSS AT35 GND<br />

VSS AT36 GND<br />

VSS AT37 GND<br />

VSS AT38 GND<br />

VSS AT39 GND<br />

VSS AT4 GND<br />

VSS AT40 GND<br />

VSS AT5 GND<br />

VSS AT6 GND<br />

VSS AT7 GND<br />

VSS AT8 GND<br />

VSS AT9 GND<br />

VSS AU1 GND<br />

VSS AU15 GND<br />

VSS AU26 GND<br />

VSS AU34 GND<br />

VSS AU4 GND<br />

VSS AU6 GND<br />

VSS AU8 GND<br />

VSS AV10 GND<br />


VSS AV11 GND<br />

VSS AV14 GND<br />

VSS AV17 GND<br />

VSS AV3 GND<br />

VSS AV35 GND<br />

VSS AV38 GND<br />

VSS AV6 GND<br />

VSS AW10 GND<br />

VSS AW11 GND<br />

VSS AW14 GND<br />

VSS AW16 GND<br />

VSS AW36 GND<br />

VSS AW6 GND<br />

VSS AY11 GND<br />

VSS AY14 GND<br />

VSS AY18 GND<br />

VSS AY35 GND<br />

VSS AY4 GND<br />

VSS AY6 GND<br />

VSS AY8 GND<br />

VSS B10 GND<br />

VSS B13 GND<br />

VSS B14 GND<br />

VSS B17 GND<br />

VSS B23 GND<br />

VSS B26 GND<br />

VSS B29 GND<br />

VSS B32 GND<br />

VSS B35 GND<br />

VSS B38 GND<br />

VSS B6 GND<br />

VSS C11 GND<br />

VSS C12 GND<br />

VSS C17 GND<br />

VSS C20 GND<br />

VSS C23 GND<br />

VSS C26 GND<br />

VSS C29 GND<br />

VSS C32 GND<br />

VSS C35 GND<br />

VSS C7 GND<br />

VSS C8 GND<br />

VSS D17 GND<br />

VSS D2 GND<br />


VSS D20 GND<br />

VSS D23 GND<br />

VSS D26 GND<br />

VSS D29 GND<br />

VSS D32 GND<br />

VSS D37 GND<br />

VSS D39 GND<br />

VSS D4 GND<br />

VSS D5 GND<br />

VSS D9 GND<br />

VSS E11 GND<br />

VSS E12 GND<br />

VSS E17 GND<br />

VSS E20 GND<br />

VSS E23 GND<br />

VSS E26 GND<br />

VSS E29 GND<br />

VSS E32 GND<br />

VSS E36 GND<br />

VSS E7 GND<br />

VSS E8 GND<br />

VSS F1 GND<br />

VSS F10 GND<br />

VSS F13 GND<br />

VSS F14 GND<br />

VSS F17 GND<br />

VSS F2 GND<br />

VSS F20 GND<br />

VSS F23 GND<br />

VSS F26 GND<br />

VSS F29 GND<br />

VSS F35 GND<br />

VSS F37 GND<br />

VSS F39 GND<br />

VSS F5 GND<br />

VSS F6 GND<br />

VSS F9 GND<br />

VSS G11 GND<br />

VSS G12 GND<br />

VSS G17 GND<br />

VSS G20 GND<br />

VSS G23 GND<br />

VSS G26 GND<br />

VSS G29 GND<br />


VSS G34 GND<br />

VSS G7 GND<br />

VSS G8 GND<br />

VSS H1 GND<br />

VSS H17 GND<br />

VSS H2 GND<br />

VSS H20 GND<br />

VSS H23 GND<br />

VSS H26 GND<br />

VSS H29 GND<br />

VSS H33 GND<br />

VSS H35 GND<br />

VSS H37 GND<br />

VSS H39 GND<br />

VSS H5 GND<br />

VSS H6 GND<br />

VSS H9 GND<br />

VSS J11 GND<br />

VSS J17 GND<br />

VSS J20 GND<br />

VSS J23 GND<br />

VSS J26 GND<br />

VSS J29 GND<br />

VSS J32 GND<br />

VSS K1 GND<br />

VSS K12 GND<br />

VSS K13 GND<br />

VSS K14 GND<br />

VSS K17 GND<br />

VSS K2 GND<br />

VSS K20 GND<br />

VSS K23 GND<br />

VSS K26 GND<br />

VSS K29 GND<br />

VSS K33 GND<br />

VSS K35 GND<br />

VSS K37 GND<br />

VSS K39 GND<br />

VSS K5 GND<br />

VSS K6 GND<br />

VSS L10 GND<br />

VSS L17 GND<br />

VSS L20 GND<br />

VSS L23 GND<br />


VSS L26 GND<br />

VSS L29 GND<br />

VSS L8 GND<br />

VSS M1 GND<br />

VSS M17 GND<br />

VSS M2 GND<br />

VSS M20 GND<br />

VSS M23 GND<br />

VSS M26 GND<br />

VSS M29 GND<br />

VSS M33 GND<br />

VSS M35 GND<br />

VSS M37 GND<br />

VSS M39 GND<br />

VSS M5 GND<br />

VSS M6 GND<br />

VSS M9 GND<br />

VSS N8 GND<br />

VSS P1 GND<br />

VSS P2 GND<br />

VSS P36 GND<br />

VSS P38 GND<br />

VSS P40 GND<br />

VSS P5 GND<br />

VSS P6 GND<br />

VSS R33 GND<br />

VSS R35 GND<br />

VSS R37 GND<br />

VSS R39 GND<br />

VSS R8 GND<br />

VSS T1 GND<br />

VSS T5 GND<br />

VSS T6 GND<br />

VSS U8 GND<br />

VSS V1 GND<br />

VSS V2 GND<br />

VSS V33 GND<br />

VSS V34 GND<br />

VSS V35 GND<br />

VSS V36 GND<br />

VSS V37 GND<br />

VSS V38 GND<br />

VSS V39 GND<br />

VSS V40 GND<br />


VSS V5 GND<br />

VSS W6 GND<br />

VSS Y5 GND<br />

VSS Y8 GND<br />

VSS_NCTF A4 GND<br />

VSS_NCTF AV39 GND<br />

VSS_NCTF AY37 GND<br />

VSS_NCTF B3 GND<br />

VSS_SENSE B36 Analog O<br />

VSSAXG_SENSE M32 Analog O<br />

VSSIO_SENSE AB3 Analog O<br />

§ §<br />




9 DDR Data Swizzling<br />

To achieve better memory performance and timing, Intel swizzles the DDR data pins,<br />

allowing the design to be reused more effectively across different platforms. Swizzling<br />

has no effect on functional operation and is invisible to the operating system and<br />

software.<br />

During debug, however, swizzling must be taken into account. This chapter presents the<br />

swizzling data: when attaching a DIMM logic analyzer, the design engineer must consult<br />

the swizzling tables to map each processor data pin to its memory controller pin and<br />

perform an efficient memory debug.<br />
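In practice this means a data byte captured at the DIMM pins must be bit-reordered through these tables before it can be compared with what the memory controller actually reads or writes. A minimal sketch of that translation (Python chosen for illustration; the lane-0 mapping is taken from Table 9-1, Channel A, and the helper name is ours, not Intel's):

```python
# Channel A, byte lane 0 swizzle from Table 9-1:
#   board pin SA_DQ[i]  ->  memory-controller pin DQ[j]
SWIZZLE_A_LANE0 = {0: 1, 1: 2, 2: 7, 3: 6, 4: 3, 5: 0, 6: 5, 7: 4}

def unswizzle_byte(board_byte, swizzle):
    """Reorder a byte captured at the DIMM pins into memory-controller bit order."""
    mc_byte = 0
    for board_bit, mc_bit in swizzle.items():
        if board_byte & (1 << board_bit):    # bit seen on board pin SA_DQ[board_bit]
            mc_byte |= 1 << mc_bit           # belongs at MC pin DQ[mc_bit]
    return mc_byte
```

For example, a captured byte whose only set bit is on SA_DQ[5] translates to memory-controller bit DQ00, while a byte with all bits set is, of course, unchanged. The remaining lanes and Channel B follow the same pattern with their own dictionaries built from Tables 9-1 and 9-2.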



Table 9-1. DDR Data Swizzling Table – Channel A<br />

Pin Name Pin # MC Pin Name<br />

SA_DQ[0] AJ3 DQ01<br />

SA_DQ[1] AJ4 DQ02<br />

SA_DQ[2] AL3 DQ07<br />

SA_DQ[3] AL4 DQ06<br />

SA_DQ[4] AJ2 DQ03<br />

SA_DQ[5] AJ1 DQ00<br />

SA_DQ[6] AL2 DQ05<br />

SA_DQ[7] AL1 DQ04<br />

SA_DQ[8] AN1 DQ08<br />

SA_DQ[9] AN4 DQ11<br />

SA_DQ[10] AR3 DQ14<br />

SA_DQ[11] AR4 DQ15<br />

SA_DQ[12] AN2 DQ09<br />

SA_DQ[13] AN3 DQ10<br />

SA_DQ[14] AR2 DQ13<br />

SA_DQ[15] AR1 DQ12<br />

SA_DQ[16] AV2 DQ18<br />

SA_DQ[17] AW3 DQ19<br />

SA_DQ[18] AV5 DQ22<br />

SA_DQ[19] AW5 DQ20<br />

SA_DQ[20] AU2 DQ16<br />

SA_DQ[21] AU3 DQ17<br />

SA_DQ[22] AU5 DQ21<br />

SA_DQ[23] AY5 DQ23<br />

SA_DQ[24] AY7 DQ27<br />

SA_DQ[25] AU7 DQ25<br />

SA_DQ[26] AV9 DQ28<br />

SA_DQ[27] AU9 DQ29<br />

SA_DQ[28] AV7 DQ24<br />

SA_DQ[29] AW7 DQ26<br />

SA_DQ[30] AW9 DQ30<br />

SA_DQ[31] AY9 DQ31<br />

SA_DQ[32] AU35 DQ35<br />

SA_DQ[33] AW37 DQ34<br />

SA_DQ[34] AU39 DQ38<br />

SA_DQ[35] AU36 DQ39<br />

SA_DQ[36] AW35 DQ33<br />

SA_DQ[37] AY36 DQ32<br />

SA_DQ[38] AU38 DQ36<br />

SA_DQ[39] AU37 DQ37<br />

SA_DQ[40] AR40 DQ43<br />


SA_DQ[41] AR37 DQ42<br />

SA_DQ[42] AN38 DQ44<br />

SA_DQ[43] AN37 DQ45<br />

SA_DQ[44] AR39 DQ41<br />

SA_DQ[45] AR38 DQ40<br />

SA_DQ[46] AN39 DQ46<br />

SA_DQ[47] AN40 DQ47<br />

SA_DQ[48] AL40 DQ51<br />

SA_DQ[49] AL37 DQ48<br />

SA_DQ[50] AJ38 DQ52<br />

SA_DQ[51] AJ37 DQ53<br />

SA_DQ[52] AL39 DQ49<br />

SA_DQ[53] AL38 DQ50<br />

SA_DQ[54] AJ39 DQ54<br />

SA_DQ[55] AJ40 DQ55<br />

SA_DQ[56] AG40 DQ58<br />

SA_DQ[57] AG37 DQ56<br />

SA_DQ[58] AE38 DQ60<br />

SA_DQ[59] AE37 DQ61<br />

SA_DQ[60] AG39 DQ57<br />

SA_DQ[61] AG38 DQ59<br />

SA_DQ[62] AE39 DQ63<br />

SA_DQ[63] AE40 DQ62<br />


Table 9-2. DDR Data Swizzling Table – Channel B<br />

Pin Name Pin # MC Pin Name<br />

SB_DQ[0] AG7 DQ03<br />

SB_DQ[1] AG8 DQ02<br />

SB_DQ[2] AJ9 DQ05<br />

SB_DQ[3] AJ8 DQ04<br />

SB_DQ[4] AG5 DQ00<br />

SB_DQ[5] AG6 DQ01<br />

SB_DQ[6] AJ6 DQ06<br />

SB_DQ[7] AJ7 DQ07<br />

SB_DQ[8] AL7 DQ11<br />

SB_DQ[9] AM7 DQ10<br />

SB_DQ[10] AM10 DQ14<br />

SB_DQ[11] AL10 DQ13<br />

SB_DQ[12] AL6 DQ08<br />

SB_DQ[13] AM6 DQ09<br />

SB_DQ[14] AL9 DQ12<br />

SB_DQ[15] AM9 DQ15<br />

SB_DQ[16] AP7 DQ19<br />

SB_DQ[17] AR7 DQ18<br />

SB_DQ[18] AP10 DQ21<br />

SB_DQ[19] AR10 DQ22<br />

SB_DQ[20] AP6 DQ17<br />

SB_DQ[21] AR6 DQ16<br />

SB_DQ[22] AP9 DQ20<br />

SB_DQ[23] AR9 DQ23<br />

SB_DQ[24] AM12 DQ25<br />

SB_DQ[25] AM13 DQ30<br />

SB_DQ[26] AR13 DQ29<br />

SB_DQ[27] AP13 DQ28<br />

SB_DQ[28] AL12 DQ24<br />

SB_DQ[29] AL13 DQ31<br />

SB_DQ[30] AR12 DQ27<br />

SB_DQ[31] AP12 DQ26<br />

SB_DQ[32] AR28 DQ32<br />

SB_DQ[33] AR29 DQ34<br />

SB_DQ[34] AL28 DQ39<br />

SB_DQ[35] AL29 DQ37<br />

SB_DQ[36] AP28 DQ33<br />

SB_DQ[37] AP29 DQ35<br />

SB_DQ[38] AM28 DQ36<br />

SB_DQ[39] AM29 DQ38<br />

SB_DQ[40] AP32 DQ44<br />


SB_DQ[41] AP31 DQ43<br />

SB_DQ[42] AP35 DQ45<br />

SB_DQ[43] AP34 DQ46<br />

SB_DQ[44] AR32 DQ40<br />

SB_DQ[45] AR31 DQ42<br />

SB_DQ[46] AR35 DQ47<br />

SB_DQ[47] AR34 DQ41<br />

SB_DQ[48] AM32 DQ51<br />

SB_DQ[49] AM31 DQ48<br />

SB_DQ[50] AL35 DQ53<br />

SB_DQ[51] AL32 DQ50<br />

SB_DQ[52] AM34 DQ52<br />

SB_DQ[53] AL31 DQ49<br />

SB_DQ[54] AM35 DQ54<br />

SB_DQ[55] AL34 DQ55<br />

SB_DQ[56] AH35 DQ59<br />

SB_DQ[57] AH34 DQ58<br />

SB_DQ[58] AE34 DQ61<br />

SB_DQ[59] AE35 DQ62<br />

SB_DQ[60] AJ35 DQ57<br />

SB_DQ[61] AJ34 DQ56<br />

SB_DQ[62] AF33 DQ63<br />

SB_DQ[63] AF35 DQ60<br />


§ §


