
International Symposium on Computer Architecture

Corona: System Implications of Emerging Nanophotonic Technology

Dana Vantrease†, Robert Schreiber‡, Matteo Monchiero‡, Moray McLaren‡, Norman P. Jouppi‡,
Marco Fiorentino‡, Al Davis§, Nathan Binkert‡, Raymond G. Beausoleil‡, Jung Ho Ahn‡

†University of Wisconsin - Madison, ‡Hewlett-Packard Laboratories, §University of Utah

Abstract

We expect that many-core microprocessors will push performance per chip from the 10 gigaflop to the 10 teraflop range in the coming decade. To support this increased performance, memory and inter-core bandwidths will also have to scale by orders of magnitude. Pin limitations, the energy cost of electrical signaling, and the non-scalability of chip-length global wires are significant bandwidth impediments. Recent developments in silicon nanophotonic technology have the potential to meet these off- and on-stack bandwidth requirements at acceptable power levels.

Corona is a 3D many-core architecture that uses nanophotonic communication for both inter-core communication and off-stack communication to memory or I/O devices. Its peak floating-point performance is 10 teraflops. Dense wavelength division multiplexed optically connected memory modules provide 10 terabyte per second memory bandwidth. A photonic crossbar fully interconnects its 256 low-power multithreaded cores at 20 terabyte per second bandwidth. We have simulated a 1024 thread Corona system running synthetic benchmarks and scaled versions of the SPLASH-2 benchmark suite. We believe that in comparison with an electrically-connected many-core alternative that uses the same on-stack interconnect power, Corona can provide 2 to 6 times more performance on many memory-intensive workloads, while simultaneously reducing power.

1 Introduction

Multi- and many-core architectures have arrived, and core counts are expected to double every 18 months [3]. As core count grows into the hundreds, the main memory bandwidth required to support concurrent computation on all cores will increase by orders of magnitude. Unfortunately, the ITRS roadmap [27] only predicts a small increase in pin count (< 2x) over the next decade, and pin data rates are increasing slowly. This creates a significant bandwidth bottleneck. Similarly, the inability of on-chip networks to connect cores to memory or other cores at the required memory bandwidth poses a serious problem. Evidence suggests that many-core systems using electrical interconnects may not be able to meet these high bandwidth demands while maintaining acceptable performance, power, and area [19].

Nanophotonics offers an opportunity to reduce the power and area of off- and on-stack interconnects while meeting future system bandwidth demands. Optics is ideal for global communication because the energy cost is incurred only at the endpoints and is largely independent of length. Dense wavelength division multiplexing (DWDM) enables multiple single-wavelength communication channels to share a waveguide, providing a significant increase in bandwidth density. Recent nanophotonic developments demonstrate that waveguide and modulation/demodulation circuit dimensions are approaching electrical buffer and wire circuit dimensions [20].

Several benefits accrue when nanophotonics is coupled to emerging 3D packaging [1]. The 3D approach allows multiple die, each fabricated using a process well-suited to it, to be stacked and to communicate with through silicon vias (TSVs). Optics, logic, DRAM, non-volatile memory (e.g. FLASH), and analog circuitry may all occupy separate die and co-exist in the same 3D package. Utilizing the third dimension eases layout and helps decrease worst-case wire lengths.

Corona is a nanophotonically connected 3D many-core NUMA system that meets the future bandwidth demands of data-intensive applications at acceptable power levels. Corona is targeted for a 16 nm process in 2017. Corona comprises 256 general purpose cores, organized in 64 four-core clusters, and is interconnected by an all-optical, high-bandwidth DWDM crossbar. The crossbar enables a cache coherent design with near uniform on-stack and memory communication latencies. Photonic connections to off-stack memory enable unprecedented bandwidth to large amounts of memory with only modest power requirements.

This paper presents key aspects of nanophotonic technology and considers the implications for many-core processors. It describes the Corona architecture, and presents a performance comparison to a comparable, all-electrical many-core alternative. The contribution of this work is to show that nanophotonics is compatible with future CMOS technology, is capable of dramatically better communication performance per unit area and energy, and can significantly improve the performance and utility of future many-core architectures.

2 Photonic Technology

Advances in silicon nanophotonics have made complete photonic on-stack communication networks a serious alternative to electrical networks. Photonic interconnects are interesting because they can be much more energy efficient than their electrical counterparts, especially at high speeds and long distances. In addition, the ability of optical fibers and waveguides to carry many information channels simultaneously increases interconnect bandwidth density significantly and eliminates the need for a large number of wires in order to achieve adequate bandwidth. Photonics dominates long-haul interconnects and is increasingly ubiquitous in metropolitan, storage, and local area networks. Photonic interconnects are becoming standard in data centers, and chip-to-chip optical links have been demonstrated [26]. This trend will naturally bring photonic interconnects into the chip stack, particularly since limited pin density and the power dissipation of global wires place significant constraints on performance and power.

A complete nanophotonic network requires waveguides to carry signals, light sources that provide the optical carrier, modulators that encode the data onto the carrier, photodiodes to detect the data, and injection switches that route signals through the network. It is imperative that the optical components be built in a single CMOS-compatible process to reduce the cost of introducing this new technology.

Waveguides confine and guide light and need two optical materials: a high refraction index material to form the core of the waveguide and a low index material that forms the cladding. We use crystalline silicon (index ≈ 3.5) and silicon oxide (index ≈ 1.45). Both are commonly used in CMOS processes. A silicon oxide waveguide has typical cross-sectional dimensions of ≈ 500 nm with a wall thickness of at least 1 µm. These waveguides have been shown to be able to carry light with losses on the order of 2–3 dB/cm and can be curved with bend radii on the order of 10 µm.

In order to communicate rather than simply illuminate, we must introduce a second material to absorb light and convert the light into an electric signal. Germanium is a natural choice: it is already being used in CMOS processes and has a significant photo-absorption between 1.1 and 1.5 µm. While it is possible to induce strains in the crystalline structure of Ge to extend its absorption window into the 1.55 µm window commonly used for long distance fiber communication, it is easier to rely on unstrained Ge and use light around 1.3 µm. This wavelength is still compatible with fiber transmission, which is an important characteristic for off-stack networks that will need to use some kind of waveguide or optical fiber to connect different optically enabled stacks.

The third element is a light source, and a laser is the obvious choice. A laser's narrow linewidth allows one to pack many communication channels in a single waveguide, thus increasing bandwidth. There are two possible ways to encode data in laser light. The first method uses direct modulation of the laser, where the laser is switched on and off to represent digital 1s and 0s. The second method uses a continuous-wave (CW) laser and a separate modulator to achieve the required modulation. To achieve the high modulation speeds that would make on-stack interconnects practical (typically 10 Gb/s data rates) one would need to use vertical cavity semiconductor lasers (VCSELs) for direct modulation. Since VCSELs are built using III-V compound semiconductors, they cannot be easily integrated in a CMOS-compatible process. On-stack mode-locked lasers are an interesting separate modulation alternative. A mode-locked laser generates a comb of phase-coherent wavelengths at equally spaced wavelengths. On-stack mode-locked lasers have been recently demonstrated [16]. It is expected that one such laser could provide 64 wavelengths for a DWDM network.

For separate modulation, external modulators are required. In a DWDM network it is preferable to have wavelength-selective modulators that can modulate a single wavelength in a multi-wavelength channel. This simplifies the topology of the network and increases its flexibility. Wavelength-selective silicon modulators with modulation rates in excess of 10 Gb/s have recently been demonstrated [35]. These modulators are based on ring resonators built using an SOI waveguide in a ring with diameter of 3-5 µm (Figure 1(a)). Depending on their construction, they may serve to modulate, inject, or detect the light.

To modulate light, a ring is placed next to a waveguide. A fraction of the light will be coupled and circulate inside the ring. For some wavelengths, the length of the ring circumference will be equal to an integer number of wavelengths. In this resonance condition, the light in the ring will be enhanced by interference while the light transmitted by the waveguide will be suppressed (Figure 1(b)). In a properly coupled ring, the waveguide's light will be suppressed and the ring's light will eventually be lost due to the bend in the ring and scattering from imperfections. The resonant wavelength depends primarily on the index of refraction of the materials that constitute the ring. We can further fine-tune the wavelength (or index of refraction) by injecting charge into the ring or changing the temperature of the ring. This brings the ring in and out of resonance. The first method is commonly used for fast modulation while the second can be used for slow tuning.

The same ring structure can be used to inject a single wavelength from one waveguide to another. If the ring is placed between two waveguides and the coupling between the ring and the waveguides is properly chosen, the wavelength resonating within the ring will be transferred from one waveguide to the other. By bringing the ring in and out of resonance, a frequency-selective switch injector (Figure 1(c)) can be realized.

Figure 1: Schematic Top View of Nanophotonic Building Blocks (panels show a ring resonator, coupler, waveguide, and SiGe/doped regions). (a) Off-resonance Modulator. The ring resonator is coupled to a waveguide through evanescent coupling. Off-resonance wavelengths are transmitted through. (b) On-resonance Modulator. A resonant wavelength is coupled in the ring and eventually gets attenuated by losses. A negligible amount of light is transmitted through due to destructive interference. By switching between the on- and off-resonance state one can achieve modulation (or diversion) of a continuous-wave laser. (c) Injector. A resonant wavelength in the input (lower) waveguide is coupled into the ring and out through the output (top) waveguide. (d) Detector. A resonant wavelength is coupled in the ring and absorbed by a SiGe detector coupled to the ring. (e) SEM image of a 3 µm diameter resonator ring. Image courtesy of Qianfan Xu, HP Labs Palo Alto.

A ring resonator can also be used as a wavelength-selective detector (Figure 1(d)). If germanium is included in the resonator, the resonant wavelength will be absorbed by the germanium and it will generate a photocurrent while non-resonant wavelengths will be transmitted. An advantage of this scheme is that because the resonant wavelength will circulate many times in the ring, only very small absorption rates (less than 1% per pass) will be needed and therefore a small detector will be sufficient. This brings the capacitance of the detector down to ≈ 1 fF and removes the need for power-hungry trans-impedance amplifiers.

A final component that is not necessary for photonic networks but that we find useful is a broadband splitter. A broadband splitter distributes power and data by splitting the signal between two waveguides. It diverts a fixed fraction of optical power from all wavelengths of one waveguide and injects them onto another waveguide. Other than a drop in strength, the unsplit portion of the signal is unaffected by the splitter.

While most of the individual components of a DWDM on-stack network have been demonstrated [16, 35], a number of important problems remain. Foremost among these is the necessity to integrate a large number of devices in a single chip. It will be necessary to analyze and correct for the inevitable fabrication variations to minimize device failures and maximize yield. A large effort will also be needed to design the analog electronics that drive and control the optical devices in a power-efficient way. While significant research is still necessary, we believe that DWDM photonic networks offer a credible answer to the challenges posed by the increasing bandwidth needs of many-core architectures.

3 The Corona Architecture

Corona is a tightly coupled, highly parallel NUMA system. As NUMA systems and applications scale, it becomes more difficult for the programmer, compiler, and runtime system to manage the placement and migration of programs and data. We try to lessen the burden with homogeneous cores and caches, a crossbar interconnect that has near-uniform latency, a fair interconnect arbitration protocol, and high (one byte per flop) bandwidth between cores and from caches to memory.

The architecture is made up of 256 multithreaded in-order cores and is capable of supporting up to 1024 threads simultaneously, providing up to 10 teraflops of computation, up to 20 terabytes per second (TB/s) of on-stack bandwidth, and up to 10 TB/s of off-stack memory bandwidth. Figure 2 gives a conceptual view of the system while Figure 3 provides a sample layout of the system including the waveguides that comprise the optical interconnect (Section 3.2), the optical connection to memory (Section 3.3), and other optical components.

3.1 Cluster Architecture

Each core has a private L1 instruction and data cache, and all four cores share a unified L2 cache. A hub routes message traffic between the L2, directory, memory controller, network interface, optical bus, and optical crossbar. Figure 2(b) shows the cluster configuration, while the upper left hand insert in Figure 3 shows its approximate floorplan.

Because Corona is an architecture targeting future high throughput systems, our exploration and evaluation of the architecture is not targeted at optimal configuration of the clusters' subcomponents (such as their branch prediction schemes, number of execution units, cache sizes, and cache policies). Rather, the clusters' design parameters (Table 1) represent reasonably modest choices for a high-performance system targeted at a 16 nm process in 2017.
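As a sanity check, the 10 teraflop and byte-per-flop figures quoted above follow directly from the per-core parameters in Table 1. A minimal sketch of that arithmetic in Python (the variable names are ours, used only for illustration):

# Peak-rate arithmetic implied by Table 1 (sanity check only).
clusters          = 64
cores_per_cluster = 4
frequency_ghz     = 5.0     # core clock
simd_width        = 4       # 64 b floating-point SIMD lanes
flops_per_lane    = 2       # a fused multiply-add counts as two flops

cores       = clusters * cores_per_cluster                     # 256 cores
peak_gflops = cores * frequency_ghz * simd_width * flops_per_lane
print(cores, peak_gflops / 1000.0)                             # 256 cores, ~10.24 teraflops

# "One byte per flop" from caches to memory gives the ~10 TB/s
# off-stack memory bandwidth target discussed above.
memory_target_tb_per_s = peak_gflops / 1000.0                  # ~10.24 TB/s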

Table 1: Resource Configuration

Resource                              Value
Number of clusters                    64
Per-Cluster:
  L2 cache size/assoc                 4 MB/16-way
  L2 cache line size                  64 B
  L2 coherence                        MOESI
  Memory controllers                  1
  Cores                               4
Per-Core:
  L1 ICache size/assoc                16 KB/4-way
  L1 DCache size/assoc                32 KB/4-way
  L1 I & D cache line size            64 B
  Frequency                           5 GHz
  Threads                             4
  Issue policy                        In-order
  Issue width                         2
  64 b floating point SIMD width      4
  Fused floating point operations     Multiply-Add

Figure 2: Architecture Overview. (a) System view: 64 clusters and their optically connected memories (OCMs) linked by the optical interconnect. (b) Cluster view: four cores with L1 caches, a shared L2 cache, directory, memory controller, network interface, and hub attached to the optical interconnect.

3.1.1 Cores

The core choice is primarily motivated by power; with hundreds of cores, each one will need to be extremely energy efficient. We use dual-issue, in-order, four-way multithreaded cores.

Power analysis has been based on the Penryn [13] (desktop and laptop market segments) and the Silverthorne [12] (low-power embedded segment) cores. Penryn is a single-threaded out-of-order core supporting 128-bit SSE4 instructions. Power per core has been conservatively reduced by 5x (compared to the 6x predictions in [3]) and then increased by 20% to account for differences in the quad-threaded Corona. Silverthorne is a dual-threaded in-order 64-bit design where power and area have been increased to account for the Corona architectural parameters. Directory and L2 cache power has been calculated using CACTI 5 [30]. Hub and memory controller power estimates are based on synthesized 65 nm designs and Synopsys Nanosim power values scaled to 16 nm. Total processor, cache, memory controller and hub power for the Corona design is expected to be between 82 watts (Silverthorne based) and 155 watts (Penryn based).

Area estimates are based on pessimistically scaled Penryn and Silverthorne designs. We estimate that an in-order Penryn core will have one-third the area of the existing out-of-order Penryn. This estimate is consistent with the current core-size differences in 45 nm for the out-of-order Penryn and the in-order Silverthorne, and is more conservative than the 5x area reduction reported by Asanovic et al. [3]. We then assume a multithreading area overhead of 10% as reported in Chaudhry et al. [7]. Total die area for the processor and L1 die is estimated to be between 423 mm² (Penryn based) and 491 mm² (Silverthorne based). The discrepancy between these estimates is likely affected by the 6-transistor Penryn L1 cache cell design vs. the 8-transistor Silverthorne L1 cache cell.

3.1.2 On-Stack Memory

Corona employs a MOESI directory protocol. The protocol is backed by a single broadcast bus (see Section 3.2.2), which is used to quickly invalidate a large pool of sharers with a single message. The coherence scheme was included for purposes of die size and power estimation, but has not yet been modeled in the system simulation. Nonetheless, we believe that our performance simulations provide a sensible first-order indication of Corona's potential.

Figure 3: Layout with Serpentine Crossbar and Resonator Ring Detail. (The figure shows the cluster floorplan on the core die, with four cores and their L1 caches over the L1↔L2 interface; the cache die with the shared L2, hub, directory, memory controller, and network interface connected through a through silicon via array; and the optical die with its 4-waveguide crossbar bundles, broadcast and arbitration waveguides, splitters, star coupler, laser, injectors, modulators, detectors, and the connection to optically connected memory.)

The Corona architecture has one memory controller per cluster. Associating the memory controller with the cluster ensures that the memory bandwidth grows linearly with increased core count, and it provides local memory accessible with low latency. Photonics connects the memory controller to off-stack memory as detailed in Section 3.3.

Network interfaces, similar to the interface to off-stack main memory, provide inter-stack communication for larger systems using DWDM interconnects.

3.2 On-Stack Photonic Interconnect

Corona's 64 clusters communicate through an optical crossbar and occasionally an optical broadcast ring. Both are managed using optical tokens. Several messages of different sizes may simultaneously share any communication channel, allowing for high utilization.

Table 2 summarizes the interconnects' optical component requirements (power waveguides and I/O components are omitted). Based on existing designs, we estimate the photonic interconnect power (including the power dissipated in the analog circuit layer and the laser power in the photonic die) to be 39 W.

Table 2: Optical Resource Inventory

Photonic Subsystem    Waveguides    Ring Resonators
Memory                128           16 K
Crossbar              256           1024 K
Broadcast             1             8 K
Arbitration           2             8 K
Clock                 1             64
Total                 388           ≈ 1056 K

3.2.1 Optical Crossbar

Each cluster has a designated channel that address, data, and coherence messages share. Any cluster may write to a given channel, but only a single fixed cluster may read from the channel. A fully-connected 64 × 64 crossbar can be realized by replicating this many-writer single-reader channel 64 times, adjusting the assigned "reader" cluster with each replication.

The channels are each 256 wavelengths, or 4 bundled waveguides, wide. When laid out, the waveguide bundle forms a broken ring that originates at the destination cluster (the channel's home cluster), is routed past every other cluster, and eventually terminates back at its origin. Light is sourced at a channel's home by a splitter that provides all wavelengths of light from a power waveguide. Communication is unidirectional, in cyclically increasing order of cluster number.

A cluster sends to another cluster by modulating the light on the destination cluster's channel. Figure 4 illustrates the conceptual operation of a four-wavelength channel.
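A minimal sketch of the many-writer single-reader (MWSR) organization and the per-channel bandwidth it implies, written in Python purely for illustration (the function names are ours; the routing shown is just the fixed channel-per-reader assignment described above):

# MWSR crossbar channels, as described in Section 3.2.1 (illustrative only).
CLUSTERS     = 64
WAVELENGTHS  = 256     # per channel: 4 bundled waveguides x 64 wavelengths
BITRATE_GBPS = 10      # per wavelength: 5 GHz clock, modulated on both edges

def channel_for(destination):
    # Cluster d owns (is the sole reader of) channel d; any cluster may write it.
    return destination

def send(src, dst):
    # src modulates the light on dst's channel as the bundle passes src's modulators.
    return f"cluster {src} modulates channel {channel_for(dst)}, read only by cluster {dst}"

per_channel_tbps = WAVELENGTHS * BITRATE_GBPS / 1000.0     # 2.56 Tb/s per cluster
total_tb_per_s   = per_channel_tbps * CLUSTERS / 8.0       # ~20.5 TB/s of crossbar bandwidth
print(send(5, 12), per_channel_tbps, total_tb_per_s)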

Figure 4: A Four Wavelength Data Channel Example. The home cluster (cluster 1) sources all wavelengths of light (r, g, b, y). The light travels clockwise around the crossbar waveguides. It passes untouched by cluster 2's inactive (off-resonance) modulators. As it passes by cluster 3's active modulators, all wavelengths are modulated to encode data. Eventually, cluster 1's detectors sense the modulation, at which point the waveguide terminates. (The original figure's legend distinguishes active from inactive ring resonators and lit from modulated light.)

Modulation occurs on both clock edges, so that each of the wavelengths signals at 10 Gb/s, yielding a per-cluster bandwidth of 2.56 terabits per second (Tb/s) and a total crossbar bandwidth of 20 TB/s.

A wide phit with low modulation time keeps latency to a minimum, which is critical to ensuring the in-order cores minimize stall time. A 64-byte cache line can be sent (256 bits in parallel twice per clock) in one 5 GHz clock. The propagation time is at most 8 clocks and is determined by a combination of the source's distance from the destination and the speed of light in a silicon waveguide (approximately 2 cm per clock). Because messages, such as cache lines, are localized to a small portion of the bundle's length, a bundle may have multiple back-to-back messages in transit simultaneously.

Corona uses optical global distribution of the clock in order to avoid the need for signal retiming at the destination. A clock distribution waveguide parallels the data waveguide, with the clock signal traveling clockwise with the data signals. This means that each cluster is offset from the previous cluster by approximately 1/8th of a clock cycle. A cluster's electrical clock is phase locked to the arriving optical clock. Thus, input and output data are in phase with the local clock; this avoids costly retiming except when the serpentine wraps around.

3.2.2 Optical Broadcast Bus

In the MOESI coherency protocol, when a shared block is invalidated, an invalidate message must be multicast to all sharers. For unicast interconnects, such as Corona's crossbar, the multicast is translated into several unicast messages. These unicast messages cause network congestion and may harm performance [14].

We avoid redundant unicast invalidates by augmenting our system with a broadcast bus. The broadcast bus is a single waveguide that passes by each cluster twice in a coiled, or spiral-like, fashion. The upper right hand corner of Figure 3 gives a cluster-centric view of the bus' details. The light is sourced from one endpoint of the coil. On the light's first pass around the coil, clusters modulate the light to encode invalidate messages. On the light's second pass, the invalidate messages become active, that is, clusters may read the messages and snoop their caches. To do this, each cluster has a splitter that transfers a fraction of the light from the waveguide to a short dead-end waveguide that is populated with detectors.

In addition to broadcasting invalidates, the bus' functionality could be generalized for other broadcast applications, such as bandwidth adaptive snooping [22] and barrier notification.

3.2.3 Arbitration

The crossbar and broadcast bus both require a conflict resolution scheme to prevent two or more sources from concurrently sending to the same destination. Our solution is a distributed, all optical, token-based arbitration scheme that fairly allocates the available interconnect channels to clusters. Token ring arbitration is naturally distributed and has been used in token ring LAN systems [2]. Its asynchronous acquire-and-release nature tolerates variability in request arrival time, message modulation time, and message propagation time. Figure 5 demonstrates a photonic version of this approach.

In our implementation, a token conveys the right to send data on a given channel. The one-bit token is represented by the presence of a short signal in a specific wavelength on an arbitration waveguide. For the broadcast bus, we use one wavelength. For the crossbar, we use 64 wavelengths, in one to one correspondence with the 64 crossbar channels.
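One way to picture the acquire-and-release behavior is as a simple per-channel state machine: a cluster that diverts a channel's token holds an exclusive grant, and re-injecting the token releases the channel. The Python sketch below is purely illustrative; it ignores optical propagation delay, the round-robin ordering imposed by the ring, and token regeneration, and the class and method names are ours:

# Illustrative token arbiter: one token per crossbar channel (Section 3.2.3).
class ArbitrationRing:
    def __init__(self, clusters=64):
        self.free_tokens = set(range(clusters))   # tokens currently circulating

    def try_acquire(self, requester, dest_channel):
        # Divert the token if it passes by while free; diversion removes the
        # light from the arbitration waveguide and grants exclusive use.
        if dest_channel in self.free_tokens:
            self.free_tokens.remove(dest_channel)
            return True
        return False                               # wait for the token to come around

    def release(self, dest_channel):
        # Re-inject the token; it trails the tail of the message just sent.
        self.free_tokens.add(dest_channel)

ring = ArbitrationRing()
assert ring.try_acquire(0, 2)        # cluster 0 diverts channel 2's token
assert not ring.try_acquire(7, 2)    # cluster 7 must wait for the token's return
ring.release(2)                      # channel 2 is free again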

Figure 5: Optical Token Arbitration Example. 3 wavelengths are used to arbitrate for 3 channels. The channel-to-token (home cluster-to-wavelength) mapping is shown in the embedded table (cluster 0 ↔ red, cluster 1 ↔ green, cluster 2 ↔ blue). In this depiction, all tokens are in transit (i.e. all channels are free). Cluster 0 is requesting cluster 2 (blue), will soon divert blue, and will then begin transmitting on cluster 2's channel. Cluster 1 is requesting cluster 0 (red), has nearly completed diverting red, and will soon begin transmitting on cluster 0's channel. Cluster 2 is re-injecting green (cluster 1) because it has just finished transmitting on cluster 1's channel. (Note: Detectors are positioned to prevent a cluster from re-acquiring a self-injected token until it has completed one revolution around the ring.)

When a cluster finishes sending a message on a channel, it releases the channel by activating its injector and re-introducing the token onto the arbitration waveguide. The token travels in parallel with the tail of the most recently transmitted message. Each cluster is equipped with an array of fixed-wavelength detectors that are capable of diverting (obtaining) any token. If a token is diverted, the light is completely removed from the arbitration waveguide to provide an exclusive grant for the corresponding channel. Each cluster will absorb and regenerate its channel token to ensure that it remains optically sound even after many trips around the ring without any "takers."

This scheme fairly allocates the channels in a round-robin order. When many clusters want the same channel and contention is high, token transfer time is low and channel utilization is high. However when contention is low, a cluster may wait as long as 8 processor clock cycles for an uncontested token.

3.3 Optically Connected Memory

One design goal is to scale main memory bandwidth to match the growth in computational power. Maintaining this balance ensures that the performance of the system is not overly dependent on the cache utilization of the application. Our target external memory bandwidth for a 10 teraflop processor is 10 TB/s. Using an electrical interconnect to achieve this performance would require excessive power; over 160 W assuming 2 mW/Gb/s [25] interconnect power. Instead, we use a nanophotonic interconnect that has high bandwidth and low power. The same channel separations and data rates that are used on the internal interconnect network can also be used for external fiber connections. We estimate the interconnect power to be 0.078 mW/Gb/s, which equates to a total memory system power of approximately 6.4 W.

Each of the 64 memory controllers connects to its external memory by a pair of single-waveguide, 64-wavelength DWDM links. The optical network is modulated on both edges of the clock. Hence each memory controller provides 160 GB/s of off-stack memory bandwidth, and all memory controllers together provide 10 TB/s. This allows all communication to be scheduled by the memory controller with no arbitration. Each external optical communication link consists of a pair of fibers providing half duplex communication between the CPU and a string of optically connected memory (OCM) modules. The link is optically powered from the chip stack; after connecting to the OCMs, each outward fiber is looped back as a return fiber. Although the off-stack memory interconnect uses the same modulators and detectors as the on-stack interconnects, the communication protocols differ. Communication between processor and memory is master/slave, as opposed to peer-to-peer. To transmit, the memory controller modulates the light and the target module diverts a portion of the light to its detectors. To receive, the memory controller detects light that the transmitting OCM has modulated on the return fiber. Because the memory controller is the master, it can supply the necessary unmodulated power to the transmitting OCM.

Figure 6(a) shows the 3D stacked OCM module, built from custom DRAM die and an optical die. The DRAM die is organized so that an entire cache line is read or written from a single mat. 3D stacking is used to minimize the delay and power in the interconnect between the optical fiber loop and the DRAM mats. The high-performance optical interconnect allows a single mat to quickly provide all the data for an entire cache line.

Figure 6: Schematic View of Optically Connected Memory. (a) 3D die stack. The stack has one optical die and multiple DRAM dies. (b) DRAM die floorplan. Each quadrant is independent, and could also be constructed from four independent die. (c) OCM expansion. The light travels from the processor, through one or more OCMs, finally looping back to the processor. (The schematic also shows the host memory controller's four memory buses of modulators and detectors driving the daisy-chained OCM stacks.)

In contrast, current electrical memory systems and DRAMs activate many banks on many die on a DIMM, reading out tens of thousands of bits into an open page. However, with highly interleaved memory systems and a thousand threads, the chances of the next access being to an open page are small. Corona's DRAM architecture avoids accessing an order of magnitude more bits than are needed for the cache line, and hence consumes less power in its memory system.

Corona supports memory expansion by adding additional OCMs to the fiber loop as shown in Figure 6(c). Expansion adds only modulators and detectors and not lasers, so the incremental communication power is small. As the light passes directly through the OCM without buffering or retiming, the incremental delay is also small, so that the memory access latency is similar across all modules. In contrast, a serial electrical scheme, such as FBDIMM, would typically require the data to be resampled and retransmitted at each module, increasing the communication power and access latency.

3.4 Chip Stack

Figure 7 illustrates the Corona 3D die stack. Most of the signal activity, and therefore heat, is in the top die (adjacent to the heat sink), which contains the clustered cores and L1 caches. The processor die is face-to-face bonded with the L2 die, providing direct connection between each cluster and its L2 cache, hub, memory controller, and directory. The bottom die contains all of the optical structures (waveguides, ring resonators, detectors, etc.) and is face-to-face bonded with the analog electronics which contain detector circuits and control ring resonance and modulation.

All of the L2 die components are potential optical communication end points and connect to the analog die by signal through silicon vias (sTSVs). This strategy minimizes the layout impact since most die-to-die signals are carried in the face-to-face bonds. External power, ground, and clock vias (pgcTSVs) are the only TSVs that must go through three die to connect the package to the top two digital layers. The optical die is larger than the other die in order to expose a mezzanine to permit fiber attachments for I/O and OCM channels and external lasers.

Figure 7: Schematic Side View of 3D Package. (From the heat sink down: the processor/L1 die, the memory controller/directory/L2 die, the analog electronics die, and the optical die with its laser, connected by face-to-face bonds, sTSVs, and pgcTSVs; fiber I/Os to the OCMs or network attach at the optical die mezzanine on the package.)

4 Experimental Setup

We subject our architecture to a combination of synthetic and realistic workloads that were selected with an eye to stressing the on-stack and memory interconnects. Synthetic workloads stress particular features and aspects of the interconnects. The SPLASH-2 benchmark suite [34] indicates their realistic performance. The SPLASH-2 applications are not modified in their essentials. We use larger datasets when possible to ensure that each core has a nontrivial workload. Because of a limitation in our simulator, we needed to replace implicit synchronization via semaphore variables with explicit synchronization constructs. In addition, we set the L2 cache size to 256 KB to better match our simulated benchmark size and duration when scaled to expected system workloads. A summary of the workload setup is described in Table 3.

Table 3: Benchmarks and Configurations

Synthetic
Benchmark    Description                                        # Network Requests
Uniform      Uniform random                                     1 M
Hot Spot     All clusters to one cluster                        1 M
Tornado      Cluster (i, j) to cluster                          1 M
             ((i + ⌊k/2⌋ − 1) % k, (j + ⌊k/2⌋ − 1) % k),
             where k = network's radix
Transpose    Cluster (i, j) to cluster (j, i)                   1 M

SPLASH-2
Benchmark    Data Set: Experimental (Default)                   # Network Requests
Barnes       64 K particles (16 K)                              7.2 M
Cholesky     tk29.O (tk15.O)                                    0.6 M
FFT          16 M points (64 K)                                 176 M
FMM          1 M particles (16 K)                               1.8 M
LU           2048×2048 matrix (512×512)                         34 M
Ocean        2050×2050 grid (258×258)                           240 M
Radiosity    roomlarge (room)                                   4.2 M
Radix        64 M integers (1 M)                                189 M
Raytrace     balls4 (car)                                       0.7 M
Volrend      head (head)                                        3.6 M
Water-Sp     32 K molecules (512)                               3.2 M

Table 4: Optical vs Electrical Memory Interconnects

Resource                OCM                   ECM
Memory controllers      64                    64
External connectivity   256 fibers            1536 pins
Channel width           128 b half duplex     12 b full duplex
Channel data rate       10 Gb/s               10 Gb/s
Memory bandwidth        10.24 TB/s            0.96 TB/s
Memory latency          20 ns                 20 ns

The simulation infrastructure is split into two independent parts: a full system simulator for generating L2 miss memory traces and a network simulator for processing these traces. A modified version of the HP Labs' COTSon simulator [11] generates the traces. (COTSon is based on AMD's SimNow simulator infrastructure.) Each application is compiled with gcc 4.1, using -O3 optimization, and run as a single 1024-threaded instance. We are able to collect multithreaded traces by translating the operating system's thread-level parallelism into hardware thread-level parallelism. In order to keep the trace files and network simulations manageable, the simulators do not tackle the intricacies of cache coherency between clusters.

The network simulator reads the traces and processes them in the network subsystem. The traces consist of L2 misses and synchronization events that are annotated with thread id and timing information. In the network simulator, L2 misses go through a request-response, on-stack interconnect transaction and an off-stack memory transaction. The simulator, which is based on the M5 framework [4], takes a trace-driven approach to processing memory requests. The MSHRs, hub, interconnect, arbitration, and memory are all modeled in detail with finite buffers, queues, and ports. This enforces bandwidth, latency, back pressure, and capacity limits throughout.

In the simulation, our chief goal is to understand the performance implications of the on-stack network and the off-stack memory design. Our simulator has three network configuration options:

• XBar – An optical crossbar (as described in Section 3.2), with bisection bandwidth of 20.48 TB/s, and maximum signal propagation time of 8 clocks.

• HMesh – An electrical 2D mesh with bisection bandwidth 1.28 TB/s and per hop signal latency (including forwarding and signal propagation time) of 5 clocks.

• LMesh – An electrical 2D mesh with bisection bandwidth 0.64 TB/s and per hop signal latency (including forwarding and signal propagation time) of 5 clocks.

The two meshes employ dimension-order wormhole routing [9]. We estimated a worst-case power of 26 W for the optical crossbar. Since many components of the optical system power are fixed (e.g., laser, ring trimming, etc.), we conservatively assumed a continuous power of 26 W for the XBar. We assumed an electrical energy of 196 pJ per transaction per hop, including router overhead. This aggressively assumes low swing busses and ignores all leakage power in the electrical meshes.

We also simulate two memory interconnects, the OCM interconnect (as described in Section 3.3) plus an electrical interconnect:

• OCM – Optically connected memory; off-stack memory bandwidth is 10.24 TB/s, memory latency is 20 ns.

• ECM – Electrically connected memory; off-stack memory bandwidth is 0.96 TB/s, memory latency is 20 ns.

The electrical memory interconnect is based on the ITRS roadmap, according to which it will be impossible to implement an ECM with performance equivalent to the proposed OCM. Table 4 contrasts the memory interconnects.

We simulate five combinations: XBar/OCM (i.e. Corona), HMesh/OCM, LMesh/OCM, HMesh/ECM, and LMesh/ECM. These choices highlight, for each benchmark, the performance gain, if any, due to faster memory and due to faster interconnect. We ran each simulation for a predetermined number of network requests (L2 misses). These miss counts are shown in Table 3.
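The synthetic destination functions in Table 3 are simple enough to write out directly. A small Python sketch, assuming the 64 clusters are viewed as an 8 × 8 grid (so k = 8) for the mesh configurations:

# Synthetic traffic patterns from Table 3 (cluster coordinates are 0-based).
def tornado(i, j, k):
    # Cluster (i, j) sends to ((i + floor(k/2) - 1) % k, (j + floor(k/2) - 1) % k).
    return ((i + k // 2 - 1) % k, (j + k // 2 - 1) % k)

def transpose(i, j, k):
    # Cluster (i, j) sends to (j, i).
    return (j, i)

# Example: with k = 8, cluster (0, 0) sends to (3, 3) under tornado
# and to (0, 0) under transpose.
print(tornado(0, 0, 8), transpose(0, 0, 8))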

5 Performance Evaluation

For the five system configurations, Figure 8 shows performance relative to the realistic, electrically connected LMesh/ECM system.

Figure 8: Normalized Speedup. (Speedup of LMesh/ECM, HMesh/ECM, LMesh/OCM, HMesh/OCM, and XBar/OCM, normalized to LMesh/ECM, for each synthetic and SPLASH-2 benchmark.)

We can form a few hypotheses based on the synthetic benchmarks. Evidently, with low memory bandwidth, the high-performance mesh adds little value. With fast OCM, there is very substantial performance gain over ECM systems, when using the fast mesh or the crossbar interconnect, but much less gain if the low performance mesh is used. Most of the performance gain made possible by OCM is realized only if the crossbar interconnect is used. In the exceptional case, Hot Spot, memory bandwidth remains the performance limiter (because all the memory traffic is channeled through a single cluster); hence there is less pressure on the interconnect. Overall, by moving to an OCM from an ECM in systems with an HMesh, we achieve a geometric mean speedup of 3.28. Adding the photonic crossbar can provide a further speedup of 2.36 on the synthetic benchmarks.

For the SPLASH-2 applications, we find that in four cases (Barnes, Radiosity, Volrend, and Water-Sp) the LMesh/ECM system is fully adequate. These applications perform well due to their low cache-miss rates and consequently low main memory bandwidth demands. FMM is quite similar to these. The remaining applications are memory bandwidth limited on ECM-based systems. For Cholesky, FFT, Ocean, and Radix, fast memory provides considerable benefits, which are realized only with the fast crossbar. LU and Raytrace are like Hot Spot: while OCM gives most of the significant speedup, some additional benefit derives from the use of the fast crossbar. We posit below a possible reason for the difference between Cholesky, FFT, Ocean, and Radix on the one hand, and LU and Raytrace on the other, when examining the bandwidth and latency data. These observations are generally consistent with the detailed memory traffic measurements reported by Woo et al. [34]. Overall, replacing an ECM with an OCM in a system using an HMesh can provide a geometric mean speedup of 1.80. Adding the photonic crossbar can provide a further speedup of 1.44 on the SPLASH-2 applications.

Figure 9: Achieved Bandwidth. (Main memory bandwidth, in TB/s, achieved by each configuration on each benchmark.)

Figure 9 shows the actual rate of communication with main memory. The four low bandwidth applications that perform well on the LMesh/ECM configuration are those with bandwidth demands lower than that provided by ECM. FMM needs somewhat more memory bandwidth than ECM provides. Three of the synthetic tests and four of the applications have very high bandwidth and interconnect requirements, in the 2 – 5 TB/s range; these benefit the most from the XBar/OCM configuration. LU and Raytrace do much better on OCM systems than ECM, but do not require much more bandwidth than ECM provides. They appear to benefit mainly from the improved latency offered by XBar/OCM. We believe that this is due to bursty memory traffic in these two applications. Analysis of the LU code shows that many threads attempt to access the same remotely stored matrix block at the same time, following a barrier. In a mesh, this oversubscribes the links into the cluster that stores the requested block.

Figure 10: Average L2 Miss Latency. (Average request latency, in ns, for each configuration on each benchmark.)

Figure 10 reports the average latency of an L2 cache miss to main memory. An L2 miss may be delayed in waiting for crossbar arbitration (the token) and by flow-control (destination cluster buffers may be full) before an interconnect message is generated. Our latency statistics measure both queue waiting times and interconnect transit times. LU and Raytrace see considerable average latency in ECM systems; it is improved dramatically by OCM and improved further by the optical crossbar. Note that the average latency can be high even when overall bandwidth is low when traffic is bursty.

Figure 11: On-chip Network Power. (Dynamic on-chip network power, in watts, for each configuration on each benchmark.)

Figure 11 shows the on-chip network dynamic power. For applications that fit in the L2 cache, the photonic crossbar can dissipate more power than the electronic meshes (albeit ignoring mesh leakage power). However, for applications with significant memory demands, the power of the electronic meshes can quickly become prohibitive, reaching 100 W or more, even while providing lower performance.

6 Related Work

Recent proposals for optical networks in chip multiprocessors include a hierarchical multi-bus with an optical global layer [15] and an optical, circuit-switched, mesh-based network managed by electrical control packets [28]. In contrast, our crossbar interconnect uses optical arbitration and control.

Optical crossbars have been proposed for Asynchronous Transfer Mode (ATM) switches [32] and for communication within processor clusters in large distributed shared memory systems [33]. This work relies on expensive VCSELs and free-space optical gratings to demultiplex the crossbar's wavelengths, unlike our solution which can be integrated into a modern 3D chip stack.

Most recent optical CMP interconnect proposals rely on electrical arbitration techniques [6, 15, 28]. Optical arbitration techniques have been investigated in SMP and ATM designs [8, 21]. While these techniques employ token rings, their tokens circulate more slowly, as they are designed to stop at every node in the ring, whether or not the node is participating in the arbitration.

Chang et al. [5] overlay a 2D mesh CMP interconnect with a radio frequency interconnect to provide low latency shortcuts. They suggest frequency division multiple access similar to our DWDM to provide multiple channels per waveguide. Capacitive [10] and inductive [23] coupling technologies can provide wireless chip-to-chip communication which can be used within a package.

The 8- and 16-core Sun Niagara [17] and Niagara2 [24] chips use electrical crossbars. The 80-core Intel Polaris chip [31] and the 64-core MIT Raw processor [29] connect their cores with 2D mesh networks. A 2D mesh is easily laid out, regardless of its size. Latency for nonlocal traffic is high because multiple hops are required to communicate between cores unless they are physically adjacent. Random traffic is choked by the limited bisection bandwidth of a mesh (O(n) in an n²-node mesh). Express Virtual Channels (EVCs) [18] alleviate the per-hop latency of packet based mesh and torus networks, but the paths cannot be arbitrarily shaped.

7 Conclusions

Over the coming decade, memory and inter-core bandwidths must scale by orders of magnitude to support the expected growth in per-socket core performance resulting from increased transistor counts and device performance. We believe recent developments in nanophotonics can be crucial in providing required bandwidths at acceptable power levels.

To investigate the potential benefits of nanophotonics on computer systems we have developed an architectural design called Corona. Corona uses optically connected memories (OCMs) that have been architected for low power and high bandwidth. A set of 64 OCMs can provide 10 TB/s of memory bandwidth through 128 fibers using dense wavelength division multiplexing. Once this memory bandwidth comes on chip, the next challenge is getting each byte to the right core out of the hundreds on chip. Corona uses a photonic crossbar with optical arbitration to fully interconnect its cores, providing near uniform latency and 20 TB/s of on-stack bandwidth.

We simulated a 1024 thread Corona system running synthetic benchmarks and scaled versions of the SPLASH-2 benchmark suite. We found systems using optically-connected memories and an optical crossbar between cores could perform 2 to 6 times better on memory-intensive workloads than systems using only electrical interconnects, while dissipating much less interconnect power. Thus we believe nanophotonics can be a compelling solution to both the memory and network-on-chip bandwidth walls, while simultaneously ameliorating the power wall.

Acknowledgments

We thank Ayose Falcon, Paolo Faraboschi, and Daniel Ortega for help with our COTSon simulations. We also thank Mikko Lipasti and Gabriel Black for their invaluable support and feedback. Dana Vantrease was supported in part by a National Science Foundation Graduate Research Fellowship.

References

[1] Proceedings of the ISSCC Workshop F2: Design of 3D-Chipstacks. IEEE, Feb 2007. Organizers: W. Weber and W. Bowhill.
[2] ANSI/IEEE. Local Area Networks: Token Ring Access Method and Physical Layer Specifications, Std 802.5. Technical report, 1989.
[3] K. Asanovic, et al. The Landscape of Parallel Computing Research: A View from Berkeley. Technical Report UCB/EECS-2006-183, EECS Department, University of California, Berkeley, Dec 2006.
[4] N. L. Binkert, R. G. Dreslinski, L. R. Hsu, K. T. Lim, A. G. Saidi, and S. K. Reinhardt. The M5 Simulator: Modeling Networked Systems. IEEE Micro, 26(4), Jul/Aug 2006.
[5] M.-C. F. Chang, J. Cong, A. Kaplan, M. Naik, G. Reinman, E. Socher, and S.-W. Tam. CMP Network-on-Chip Overlaid With Multi-Band RF-Interconnect. In HPCA, Feb 2008.
[6] H. J. Chao, K.-L. Deng, and Z. Jing. A Petabit Photonic Packet Switch (P3S). In INFOCOM, 2003.
[7] S. Chaudhry, P. Caprioli, S. Yip, and M. Tremblay. High-Performance Throughput Computing. IEEE Micro, 25(3), May/Jun 2005.
[8] J. Choi, H. Lee, H. Hong, H. Kim, K. Kim, and H. Kim. Distributed optical contention resolution using an optical token-ring. IEEE Photonics Technology Letters, 10(10), Oct 1998.
[9] W. Dally and C. Seitz. Deadlock-Free Message Routing in Multiprocessor Interconnection Networks. IEEE Transactions on Computers, C-36(5), May 1987.
[10] R. Drost, R. Hopkins, R. Ho, and I. Sutherland. Proximity Communication. JSSC, 39(9), Sep 2004.
[11] A. Falcon, P. Faraboschi, and D. Ortega. Combining Simulation and Virtualization through Dynamic Sampling. In ISPASS, Apr 2007.
[12] Intel. Intel Atom Processor. http://www.intel.com/technology/atom.
[13] Intel. Introducing the 45nm Next Generation Intel Core Microarchitecture. http://www.intel.com/technology/magazine/45nm/coremicroarchitecture-0507.htm.
[14] N. E. Jerger, L.-S. Peh, and M. H. Lipasti. Virtual Circuit Tree Multicasting: A Case for On-Chip Hardware Multicast Support. In ISCA, Jun 2008.
[15] N. Kirman, M. Kirman, R. K. Dokania, J. F. Martinez, A. B. Apsel, M. A. Watkins, and D. H. Albonesi. Leveraging Optical Technology in Future Bus-based Chip Multiprocessors. In MICRO, 2006.
[16] B. R. Koch, A. W. Fang, O. Cohen, and J. E. Bowers. Mode-locked silicon evanescent lasers. Optics Express, 15(18), Sep 2007.
[17] P. Kongetira, K. Aingaran, and K. Olukotun. Niagara: A 32-Way Multithreaded Sparc Processor. IEEE Micro, 25(2), 2005.
[18] A. Kumar, L.-S. Peh, P. Kundu, and N. K. Jha. Express Virtual Channels: Towards the Ideal Interconnection Fabric. In ISCA, Jun 2007.
[19] R. Kumar, V. Zyuban, and D. M. Tullsen. Interconnections in Multi-Core Architectures: Understanding Mechanisms, Overheads and Scaling. In ISCA, Jun 2005.
[20] M. Lipson. Guiding, Modulating, and Emitting Light on Silicon–Challenges and Opportunities. Journal of Lightwave Technology, 23(12), Dec 2005.
[21] A. Louri and A. K. Kodi. SYMNET: An Optical Interconnection Network for Scalable High-Performance Symmetric Multiprocessors. Applied Optics, 42(17), Jun 2003.
[22] M. M. K. Martin, D. J. Sorin, M. D. Hill, and D. A. Wood. Bandwidth Adaptive Snooping. In HPCA, Feb 2002.
[23] N. Miura, D. Mizoguchi, T. Sakurai, and T. Kuroda. Analysis and Design of Inductive Coupling and Transceiver Circuit for Inductive Inter-Chip Wireless Superconnect. JSSC, 40(4), Apr 2005.
[24] U. Nawathe, M. Hassan, L. Warriner, K. Yen, B. Upputuri, D. Greenhill, A. Kumar, and H. Park. An 8-Core 64-Thread 64b Power-Efficient SPARC SoC. In ISSCC, Feb 2007.
[25] R. Palmer, J. Poulton, W. J. Dally, J. Eyles, A. M. Fuller, T. Greer, M. Horowitz, M. Kellam, F. Quan, and F. Zarkeshvari. A 14mW 6.25Gb/s Transceiver in 90nm CMOS for Serial Chip-to-Chip Communications. In ISSCC, Feb 2007.
[26] L. Schares, et al. Terabus: Terabit/Second-Class Card-Level Optical Interconnect Technologies. IEEE Journal of Selected Topics in Quantum Electronics, 12(5), Sep/Oct 2006.
[27] Semiconductor Industries Association. International Technology Roadmap for Semiconductors. http://www.itrs.net/, 2006 Update.
[28] A. Shacham, B. G. Lee, A. Biberman, K. Bergman, and L. P. Carloni. Photonic NoC for DMA Communications in Chip Multiprocessors. In IEEE Hot Interconnects, Aug 2007.
[29] M. B. Taylor, et al. Evaluation of the Raw Microprocessor: An Exposed-Wire-Delay Architecture for ILP and Streams. In ISCA, Jun 2004.
[30] S. Thoziyoor, N. Muralimanohar, J. Ahn, and N. P. Jouppi. CACTI 5.1. Technical Report HPL-2008-20, HP Labs.
[31] S. Vangal, et al. An 80-Tile 1.28TFLOPS Network-on-Chip in 65nm CMOS. In ISSCC, Feb 2007.
[32] B. Webb and A. Louri. All-Optical Crossbar Switch Using Wavelength Division Multiplexing and Vertical-Cavity Surface-Emitting Lasers. Applied Optics, 38(29), Oct 1999.
[33] B. Webb and A. Louri. A Class of Highly Scalable Optical Crossbar-Connected Interconnection Networks (SOCNs) for Parallel Computing Systems. TPDS, 11(5), 2000.
[34] S. C. Woo, M. Ohara, E. Torrie, J. P. Singh, and A. Gupta. The SPLASH-2 Programs: Characterization and Methodological Considerations. In ISCA, Jun 1995.
[35] Q. Xu, B. Schmidt, S. Pradhan, and M. Lipson. Micrometre-scale silicon electro-optic modulator. Nature, 435, May 2005.
