
Project Number: FP6-IST-507554
Project Title: BROADBAND in Europe for All: A Multidisciplinary Approach
CEC Deliverable Number: FP6-IST-507554/JCP/R/Pub/D2.2-3.2 – Update
Contractual Date of Deliverable to the IS DG: 30/06/2005
Actual Date of Delivery to the IS DG: 01/08/2005 – Updated 09/01/2006
Title of Deliverable: Second report on the multi-technological analysis of the ‘broadband for all’ concept, focus on the listing of multi-technological key issues and practical roadmaps on how to tackle these issues.
Workpackages contributing to the Deliverable: WP2: Multi-technological analysis; WP3: Techno-economic, socio-economic and policy studies
Nature of the Deliverable: R – Report
Editors: JCP: Point J. C.; JRC: Ulbrich M.; IMEC: Van Daele P.

Contributors:
IMEC: De Turck F., Dhoedt B., Pickavet M., Vlaeminck K., Stevens T., Lannoo B., Colle D., Demeester P.
UEssex: O'Mahony M., Politi T., Bray M.
COM: Falch M., Schneider M., Sigurdsson H., Skouby K. E., Berger M., Christiansen H., Ruepp S., Soler J.
GET: Erasme D., Minot C., Temoori H., Bristiel B.
HHI: Faber J., Grosskopf G., Patzak E., Helmolt C. v., Langer K., Vathke J.
TELSCOM: Rao S., Vogel P.
JCP: Guillemot C., Million P.

Voluntary contributors:
Hadjifotiou T. (University of Essex), Grace D. (University of York), Tuomi I. (Meaning Processing)

Abstract:

This report is the second deliverable of the FP6 project “Broadband in Europe for All: a multi-disciplinary approach (BREAD)”. It is a multi-technological analysis of the ‘broadband for all’ concept, with an update of the listing of multi-technological key issues, a first gap analysis and first roadmaps on how to tackle these issues.

The deliverable also contains information on ongoing regional and national broadband initiatives in Europe (EU25) and around the world. The information includes an analysis of the broadband market in these countries with an overview of available technologies, infrastructures, operators, pricing, etc. It also includes a summary of the broadband policy in these countries.

This document builds further on the overview of the state of the art on broadband issues summarized in the first BREAD deliverable (available via www.ist-bread.org).

Based on the country studies, an analysis of the factors affecting broadband development is presented, first from a classical theoretical framework composed of the supply/demand - infrastructure/content matrix. However, drawing on the elements identified in the country studies, it then uses a framework composed of four categories, i.e. country configuration, legacy situation, competition, and public policy, where the key criterion is the susceptibility of the factors affecting broadband to be themselves influenced by broadband policy. This approach makes it possible to identify those areas where government action can really make a difference. Finally, a quick look is taken at some potential inhibitors of broadband development and at broadband applications and user needs.

This document does not claim to be complete, but is intended to give directions, indications and current trends in the field, and will elicit input on new projects, new technologies and new developments to be included in the next editions of this deliverable.

Keyword list:
Broadband for All, multi-technological, multi-disciplinary analysis, socio-economic

ANNEX 2 – Additional Information


Disclaimer

The information, documentation and figures available in this deliverable are written by the BREAD (“Broadband in Europe for All: a multi-disciplinary approach”) project consortium under EC co-financing contract IST-507554 and do not necessarily reflect the views of the European Commission.


Table of Contents

DISCLAIMER  2
TABLE OF CONTENTS  3
A2. ANNEX 2 – ADDITIONAL INFORMATION  5
A2.1 HOME NETWORK (UPDATED 03/01/2006)  5
A2.1.1 Introduction  5
A2.1.2 Home Networks: Technologies / Standardisation  6
A2.1.3 Issues and technical trends / gap analysis  19
A2.1.4 Home Networks: Roadmap  20
A2.2 CABLE  23
A2.2.1 Introduction  23
A2.2.2 Requirements  23
A2.2.3 HFC cable network deployment situation  26
A2.2.4 Plant capacity  29
A2.2.5 Physical and MAC layers  33
A2.2.6 Open access and related issues  35
A2.2.7 IP architecture  36
A2.2.8 Security  40
A2.2.9 Home network  41
A2.2.10 Potential issues and topics to develop  43
A2.2.11 Appendix 1: Analysis of disturbances in cable upstream  44
A2.3 FTTX (UPDATED 05/01/2006)  47
A2.3.1 Introduction  47
A2.3.2 State of the Art  47
A2.3.3 Issues and trends  52
A2.3.4 Roadmap  63
A2.3.5 References  65
A2.4 HAP (UPDATED 01/06)  65
A2.4.1 Introduction  65
A2.4.2 Platforms  66
A2.4.3 Connections  67
A2.4.4 CAPANINA Project  68
A2.4.5 Roadmap  70
A2.4.6 Other research  71
A2.4.7 References  71
A2.5 MOBILITY  73
A2.5.1 Seamless Mobility: Convergence in networks and services  73
A2.5.2 Broadband mobile convergence network  73
A2.5.3 Extra information  76
A2.6 VIDEO IN ALL-IP BROADBAND NETWORKS  77
A2.6.1 Audio Video Coding  77
A2.6.2 Emerging transport protocols  82
A2.6.3 Application-layer QoS mechanisms  85
A2.6.4 Network QoS for Internet multimedia  92
A2.6.5 MPLS and traffic engineering  101
A2.6.6 Session and application level signalling  103
A2.6.7 Session signalling and QoS networks: Interaction and Integration  105
A2.6.8 Content adaptation  109
A2.6.9 Content Delivery Networks  112
A2.6.10 Roadmap  114
A2.6.11 Appendix 1: Applications that would benefit from scalable AV coding  119
A2.6.12 Appendix 2: List of relevant standardization bodies  123
A2.6.13 Appendix 3: Overview of MPEG-21  124
A2.6.14 Appendix 4: Main streaming media products with their characteristics  126
A2.6.15 Appendix 5: Some content delivery networks (CDN) providers  127
A2.7 OPTICAL METRO / CWDM  128
A2.7.1 Introduction  128
A2.7.2 The Metropolitan Optical Networks  129
A2.7.3 The vision  132
A2.7.4 Gap analysis  134
A2.7.5 Network architecture key issues  136
A2.7.6 Enabling technologies  141
A2.7.7 Summary  147
A2.7.8 Appendix 1: The OPTIMIST roadmapping exercise for Metro network  150
A2.7.9 Appendix 2: FP6-IST-NOBEL scenario – Extract from NOBEL D11, 2004  151
A2.7.10 Appendix 3: Standards  153
A2.7.11 Appendix 4: Related IST projects  153
A2.8 OPTICAL BACKBONE  155
A2.8.1 Introduction  155
A2.8.2 Roadmap for Optical Core Networks  157
A2.8.3 Transmission Technology  163
A2.8.4 Optical Networking Technology  169
A2.8.5 Control Plane  188
A2.8.6 Trends and issues to be developed in the course of BREAD  190
A2.8.7 Related technical initiatives  190
A2.9 GRID NETWORKS  195
A2.9.1 Introduction  195
A2.9.2 Enablers and drivers for grids  195
A2.9.3 Distinction between peer-peer, cluster and grid networks  196
A2.9.4 Grid Network Required Elements  196
A2.9.5 Grid Research Trends  198
A2.10 SECURITY  200
A2.10.1 Introduction  200
A2.10.2 Security protocols and mechanisms  210
A2.10.3 Emerging Technologies  215
A2.10.4 Mobility and network access control  220
A2.10.5 Dependability in Broadband networks  234
A2.10.6 Digital privacy protection  238
A2.11 OVERALL MANAGEMENT AND CONTROL  241
A2.11.1 Introduction  241
A2.11.2 Management  243
A2.11.3 Control  250
A2.11.4 Application level signalling  277
A2.11.5 Conclusions, comparisons and roadmaps  279
A2.11.6 Links to other IST projects  279


A2. Annex 2 – Additional information

A2.1 HOME NETWORK (Updated 03/01/2006)

A2.1.1 Introduction

Home networks are based on wired and wireless technologies (Table 1: Technologies of Home-Networks). Applications are home control, communication, infotainment, and entertainment. The most challenging topic yet to be addressed in Home Network environments is interworking and interoperability, as well as the seamless provision of services, independent of the underlying networks. Fixed Home Networks require cabling between the devices, using existing wires like phone lines, power lines, etc. Premium performance is obtained when using broadband media like twisted pair, coaxial wires or optical fibres. In contrast to wired networks, wireless systems are far easier to deploy; however, the performance of these systems strongly depends on the constraints given by the environmental conditions. Propagation loss, shadowing, absorption, multipath propagation effects due to reflections at obstacles, and Doppler spread may limit the maximum distance and transmission speed. From cordless phones to cellular handsets, consumers are now unwiring their laptops, PDAs and other electronic gadgets with narrowband short-distance solutions such as Bluetooth and IEEE 802.xx / ETSI standards. Thus the driving forces for ongoing R&D activities are cellular growth and the enterprise markets for wireless local area networks and wireless personal area networks (WLAN/WPAN).

Home Network Technologies
  Cabled (copper):     Telephone line: 100 Mb/s (VDSL); Twisted pair: < 1 Gb/s; Power line: < 2...14 Mb/s (shared)
  Cabled (fibre):      MM fibre: SI MMF: ...
  Wireless (radio):    ...
  Wireless (infrared): kb/s...Mb/s

Table 1: Technologies of Home-Networks
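The propagation effects listed in the introduction scale with both distance and carrier frequency. As a hedged illustration only, the sketch below evaluates just the free-space component of the path loss (the Friis formula); indoor links additionally suffer shadowing, absorption and multipath, so real losses are higher. The bands and distances chosen here are examples, not values taken from this report.

```python
import math

def free_space_path_loss_db(distance_m: float, frequency_hz: float) -> float:
    """Friis free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8  # speed of light in m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * frequency_hz / c)

# Example bands: 2.4 GHz (Bluetooth, 802.11b/g) and 5.2 GHz (802.11a).
for f_ghz in (2.4, 5.2):
    for d_m in (1, 10, 50):
        loss = free_space_path_loss_db(d_m, f_ghz * 1e9)
        print(f"{f_ghz:.1f} GHz, {d_m:3d} m: {loss:5.1f} dB free-space loss")
```

At 2.4 GHz the free-space loss alone is already about 40 dB at 1 m and grows by 20 dB per decade of distance, which is one reason the short-distance solutions above stay in the narrowband, short-range regime.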


A2.1.2 Home Networks: Technologies / Standardisation 1

High-speed Internet is spreading. Homes that used to have little communication technology now have multiple computers, peripherals like printers and scanners, televisions, radios, stereos, DVD players, VCRs, cordless telephones, PDAs, and other electronic devices.

Home networks link the many different electronic devices in a household by way of a local area network (LAN). The network can be point-to-point, such as connecting one computer to another, or point-to-multipoint, where computers and other devices such as printers, set-top boxes, and stereos are connected to each other and the Internet. There are many different applications for home networking. They can be broken into five categories: resource sharing, communications, home controls, home scheduling, and entertainment/information.

Resource Sharing

Home networking allows all users in the household to access the Internet and other applications at the same time. In addition, files (not just data, but also audio and video depending on the speed of the network) can be swapped, and peripherals such as printers and scanners can be shared. There is no longer the need to have more than one Internet access point, printer, scanner, or, in many cases, software package.

Communications

Home networking allows easier and more efficient communication between users within the household and better management of outside communications. Phone, fax, and e-mail messages can be routed intelligently. Access to the Internet can be attained at multiple places in the home with the use of terminals and Webpads.

Home Controls

Home networking can allow controls within the house, such as temperature and lighting, to be managed through the network and even remotely through the Internet. The network can also be used for home security monitoring with network cameras.

Home Scheduling

A home network would allow families to keep one master schedule that could be updated from different access points within the house and remotely through the Internet.

Entertainment/Information

Home networks enable a multitude of options for sharing entertainment and information in the home. Networked multi-user games can be played, as well as PC-hosted television games. Digital video networking will allow households to route video from DBS and DVDs to different set-top boxes, PCs, and other visual display devices in the home. Streaming media such as Internet radio can be sent to home stereos as well as PCs.

The speed of home networks is also important to consider. Most home networking solutions have speeds of at least 1 Mbps, which is enough for most everyday data transmission (but may not be enough for bandwidth-intensive applications such as full-motion video). With the development of high-speed Internet access and digital video and audio comes a need for faster networks. Several kinds of home networks can operate at speeds of 10 Mbps and up. Digital video networking, for example, requires fast data rates. DBS MPEG-2 video requires 3 Mbps and DVD requires between 3 and 8 Mbps. HDTV requires more speed than current home networks have, but that should change in the future, as home networks get faster and as technology develops and adapts to new Internet appliances and digital media.
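As a quick sanity check on the figures just quoted (3 Mbps for DBS MPEG-2 video and up to 8 Mbps for DVD), the sketch below counts how many such streams fit into nominal 1, 10 and 100 Mbps home networks. It is illustrative only: protocol overhead and contention are ignored, so real capacity is somewhat lower.

```python
# Back-of-envelope stream counts for the bit rates quoted above; protocol
# overhead and contention are ignored, so real capacity is lower.
stream_rates_mbps = {"DBS MPEG-2 video": 3.0, "DVD video (worst case)": 8.0}
network_rates_mbps = (1, 10, 100)

for net in network_rates_mbps:
    for name, rate in stream_rates_mbps.items():
        print(f"{net:3d} Mbps network: {int(net // rate)} x {name}")
```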

In general, the home networking standards can be divided into two large groups: in-home networking standards, which provide interconnectivity of devices inside the home, and home-access network standards, which provide external access and services to the home via networks like cable TV, broadcast TV, the phone network and satellite. Additionally, there are the mobile-service networks that provide access from mobile terminals when the user is away from home. Currently there is no dominant wired home networking standard, and networks are likely to be heterogeneous. A comprehensive compilation of the standards for multimedia networking is given in the MEDIANET 2 project.

A2.1.2.1 Cabled Home Network

Many in-home networking standards require cabling between the devices. One option is to install new cabling in the form of galvanic twisted-pair or coaxial wires, or optical fibres. The alternative is to use existing cabling, such as power lines and phone lines.

Using existing cabling in the home is very convenient for end-users. For in-home networking via the phone line, HomePNA 3 has become the de-facto standard, providing up to 10 Mbit/s (240 Mbit/s is expected).

1 Future Home, http://dbs.cordis.lu/ IST-2000-28133
2 MediaNet, http://www.ist-ipmedianet.org/home.html
3 HomePNA, http://www.homepna.com/, http://www.homepna.com/HPNA-DLINK-Kit.html-ssi


For power-line networking, low-bandwidth control using CEBus 4 and (high-)bandwidth data transfer using HomePlug 5 are the most prominent options, offering from 10 kbit/s up to 14 Mbit/s.

New cabling requires an additional installation effort, but has the advantage that premium-quality cabling, dedicated to digital data transport at high rates, can be chosen. The IEEE-1394a standard (also called FireWire and i.Link) 6 defines a serial bus that allows data transfers up to 400 Mbit/s over a twisted-pair cable, and an extension up to 3.2 Gbit/s using fibre is underway. Similarly, USB 7 defines a serial bus that allows data transfers up to 480 Mbit/s over a twisted-pair cable, but uses a master-slave protocol instead of the peer-to-peer protocol of IEEE-1394a. Both standards support hot plug-and-play and isochronous streaming, via centralised media access control, which are of significant importance for consumer-electronics applications. The disadvantage is that this sets a limit to the cable lengths between devices. Another major player is Ethernet, which has evolved via 10 Mbit/s Ethernet and 100 Mbit/s Fast Ethernet using twisted-pair cabling, into Gigabit Ethernet, providing 1 Gbit/s over twisted-pair cabling or fibre. Ethernet notably does not support isochronous streaming, since it lacks centralised medium-access control. It also does not support device discovery (plug-and-play). It is, however, widely used, also because of its low cost.
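The nominal link rates just listed translate into very different bulk-transfer times. The sketch below is purely illustrative: it assumes a 4.7 GB DVD image as the payload (an assumed figure, not taken from this report) and ignores protocol overhead, so real transfers take longer.

```python
# Illustrative transfer times for a 4.7 GB DVD image (assumed payload size)
# over the nominal link rates quoted above; protocol overhead is ignored.
payload_bits = 4.7e9 * 8

link_rates_mbps = {
    "HomePNA (phone line)": 10,
    "IEEE-1394a (FireWire)": 400,
    "USB 2.0": 480,
    "Gigabit Ethernet": 1000,
}

for name, rate in link_rates_mbps.items():
    minutes = payload_bits / (rate * 1e6) / 60
    print(f"{name:22s}: {minutes:6.1f} min")
```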

A2.1.2.2 Wireless Home Network

As opposed to wired networks, wireless systems are far easier to deploy. This is due to the smaller installation effort (no new wires) and the lower cost of the physical infrastructure. A wireless home network is configured with an access point that acts as a transmitter and a receiver, connected to the wired network at a fixed location. The access point then transmits to end users who have wireless LAN adapters, either as PC cards in notebooks, ISA or PCI cards in desktops, or fully integrated devices. A wireless home network allows real-time instant access to the network without the computer having to be near a phone jack or power outlet. Installation is easy because there is no cable to pull as with conventional Ethernet. The devices do not have to be in line-of-sight but can be in different rooms or blocked by walls and other barriers. Finally, these connections can be secured using encryption technologies.

However, regulation by law and associated licensing fees may seriously affect the actual cost of the wireless connection. Additionally, governmental regulations vary widely throughout the world. Especially for the licence-free spectrum bands, the issue of signal interference, which limits the usable bandwidth, has to be solved. This is one of the reasons that there are so many wireless standards that it is becoming difficult to keep track of them. From cordless phones to cellular handsets, consumers are now unwiring their laptops, PDAs and other electronic gadgets with narrowband short-distance solutions such as Bluetooth and IEEE 802.xx / ETSI standards. Driving forces for ongoing R&D activities are cellular growth and the enterprise WLAN market. The wireless technologies and wireless standards are summarized in Table 2: Wireless technologies and Table 3: Global wireless standards.

In some cases there are overlapping applications between wireless home networking, wireless access networks, and cellular systems. Within the IST Future Home 8 project, three wireless technologies have been addressed: IEEE 802.11 (WLAN), HiperLAN2 and Bluetooth (WPAN). These have been selected for the following reasons:

• the technologies are complementary from the usage point of view;
• they are in different phases of maturity and can also be compared against each other;
• these different technologies provide the heterogeneous environment that can be used as a model of a future residential network;
• they all can be used as substitutes for wired connections and are therefore suitable for an IP network environment.

In this chapter on Home Networks, cellular and cordless systems have additionally been included, because the largest and most noticeable part of the telecommunications business is telephony. The principal wireless component of telephony is mobile (i.e., cellular) telephony. In recent years the development of third-generation (3G) cellular, i.e. IMT2000, and other wireless technologies has been a key issue, including wireless piconetworking (Bluetooth), personal area network (WPAN) systems, and local area networks (WLAN). However, wireless metropolitan area network (WMAN) systems (IEEE 802.16 standards, called WiMAX** systems) are described in the chapter on Wireless Access.

4 CEBus, http://www.cebus.org/
5 HomePlug, http://www.homeplug.org
6 1394 Trade Association, http://www.1394ta.org
7 USB, http://www.usb.org
8 Future Home, http://dbs.cordis.lu/ IST-2000-28133


Wireless technologies

  Cordless:                                DECT
  Cellular mobile:                         2G GSM, 3G IMT2000
  Radio:                                   DAB, DVB-T
  Metropolitan Area Network (WMAN):        ETSI HiperMAN, ETSI HiperACCESS, IEEE 802.16
  Wireless Local Area Network (WLAN):      ETSI HiperLAN, MMAC, IEEE 802.11
  Wireless Personal Area Network (WPAN):   HomeRF, IEEE 802.15, UWB, Bluetooth, ZigBee

  *Wi-Fi: Wireless Fidelity
  **WiMAX: Worldwide Interoperability for Microwave Access

Table 2: Wireless technologies

  Area   IEEE      ETSI                     Forum/Alliance
  WAN    802.20    3GPP, EDGE
  LAN    802.11    HiperLAN                 Wi-Fi*
  MAN    802.16    HiperMAN, HiperACCESS    WiMAX**
  PAN    802.15    HiperPAN                 WiMedia

Table 3: Global wireless standards

A2.1.2.2.1 Wireless local area networks, WLANs

WLANs use two frequency bands: the IEEE 802.11b/e/g 9 standards use the 2.4 GHz band, and the IEEE 802.11a standard the 5 GHz band. Notably, the 802.11b standard is gaining market share. 802.11 can provide up to 54 Mbit/s over a distance of up to 300 metres (Wireless Ethernet 10).

  Standard                    Transfer Method         Frequencies     Data Rates Supported (Mbit/s)
  802.11 legacy               FHSS, DSSS, infrared    2.4 GHz, IR     1, 2
  802.11b                     DSSS, HR-DSSS           2.4 GHz         1, 2, 5.5, 11
  "802.11b+" (non-standard)   DSSS, HR-DSSS (PBCC)    2.4 GHz         1, 2, 5.5, 11, 22, 33, 44
  802.11a                     OFDM                    5.2, 5.8 GHz    6, 9, 12, 18, 24, 36, 48, 54
  802.11g                     DSSS, HR-DSSS, OFDM     2.4 GHz         1, 2, 5.5, 11; 6, 9, 12, 18, 24, 36, 48, 54

Table 4: Overview of the IEEE 802.11 Standards 11

ETSI HiperLAN/2 operates in the 5 GHz band (licence-exempt bands) using OFDM modulation and TDMA (time division multiple access).

9 http://grouper.ieee.org/groups/802/11/
10 http://www.wirelessethernet.org/OpenSection/index.asp
11 http://en.wikipedia.org/wiki/IEEE_802.11


The standard provides 25 Mbit/s short-range wireless access and WLAN applications for indoor and campus-wide usage. Typical indoor and outdoor coverage is 50 m and 150 m, respectively. User mobility within the local service area is supported. The HiperLAN/2 standard has now been merged with 802.11a, giving some features such as power control and QoS.

HiSWANa (Japanese) is a WLAN standard in the 5 GHz band and has a MAC structure similar to that of HiperLAN/2. But, unlike HiperLAN/2, HiSWANa does not offer a direct-link mode, which allows terminals to transmit to one another without routing through an access point. HiSWANa also uses a listen-before-talk mechanism similar to 802.11a to reduce uncoordinated interference. The HiSWANa MAC combines key features of both 802.11a and HiperLAN/2, at the expense of increased overhead.

Wi-Fi Alliance 12

Wi-Fi is short for wireless fidelity and is meant to be used generically when referring to any type of 802.11 network, whether 802.11b, 802.11a, dual-band, etc. The term is promulgated by the Wi-Fi Alliance, a non-profit international association formed in 1999 to certify interoperability of wireless local area network (WLAN) products based on the IEEE 802.11 specification. The Alliance is targeting three purposes: to promote Wi-Fi worldwide by encouraging manufacturers to use standardized 802.11 technologies in their wireless networking products; to promote and market these technologies to consumers in the home, SOHO and enterprise markets; and to test and certify Wi-Fi product interoperability. A user with a "Wi-Fi Certified" product can use any brand of access point with any other brand of client hardware that also is certified. Typically, however, any Wi-Fi product using the same radio frequency (for example, 2.4 GHz for 802.11b or 11g, 5 GHz for 802.11a) will work with any other, even if not "Wi-Fi Certified." Formerly, the term "Wi-Fi" was used only in place of the 2.4 GHz 802.11b standard, in the same way that "Ethernet" is used in place of IEEE 802.3. The Alliance expanded the generic use of the term in an attempt to stop confusion about wireless LAN interoperability.

Specifically, the home Wi-Fi network enables everyone within a house to access each other's computers, send files to printers and share a single Internet connection. Within a small business, a Wi-Fi network can easily improve workflow, give staff the freedom to move around and allow all the users to share network devices (computers, data files, printers, etc.) and a single Internet connection.

The small-office Wi-Fi network also makes it easy to add new employees and computers. There is no need to install new data cables. Just add a Wi-Fi radio to the new computer, configure it, and the new employee can be up and running in minutes.

To allow access to the Internet, the Internet connection (DSL, ISDN or cable modem) connects to the Wi-Fi gateway. Several Wi-Fi laptops can then wirelessly connect to the gateway. The laptop computers can connect through a built-in, or embedded, Wi-Fi radio or through a standard slide-in PC Card radio.

The desktop computers can use a variety of types of Wi-Fi radios to connect to the wireless network: a plug-in USB (Universal Serial Bus) radio, a built-in PCI Card radio or an Apple AirPort module.

A single printer attached to one of the desktop computers enables all of the computers on the network to print to it. Of course, the connected computer must be turned on to enable the printer to function and communicate with the rest of the network. It is also possible to use a stand-alone Wi-Fi equipped printer, or a printer with a Wi-Fi print server.

If you have a combination multifunction printer, scanner and fax machine, you can access and operate this combo device, and its various capabilities, from any computer on the network.

Public Wi-Fi "HotSpots" are rapidly becoming common in coffee shops, hotels, convention centres, airports,<br />

libraries, and community areas — anyplace where people gather. In these locations, a Wi-Fi network can<br />

provide Internet access to guests and visitors. People can connect using their own Wi-Fi equipped laptop<br />

computers and portable computing devices, or by using Wi-Fi equipped desktop computers provided at the<br />

location. A single networked printer with a built-in print server can also be connected to the access point, to<br />

provide printing services to users. HotSpots operate in various ways. A small public HotSpot may provide free<br />

access to its guests or it may charge a membership, per-time or data-use connection fee.<br />

12 http://www.wi-fi.org/<br />

<strong>Annex</strong> 2 - Page 9 of 282


Even if the venue is providing Internet connectivity as a free value-added service, it asks customers to provide user and registration information before they can connect to the Internet. In many instances a wireless gateway, the central base station, can provide connectivity for all the wired and wireless networking components. Internet access, wireless connectivity and wired connections all flow through the Wi-Fi gateway. This kind of Wi-Fi network differs from a completely wireless network because, in addition to the Wi-Fi connections, the Wi-Fi gateway is simultaneously supplying wired connectivity to various devices via the extra Ethernet jacks located on the wireless gateway. These can include other desktop computers, printers and print servers and wired Ethernet hubs and routers, as well as additional Wi-Fi access points 13.
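As a hedged illustration of the single-gateway layout just described, the sketch below lays out a hypothetical private addressing plan in which the Wi-Fi gateway and both wired and wireless devices share one subnet. The 192.168.1.0/24 range and the device names are invented for the example; they are not taken from this report.

```python
import ipaddress

# Hypothetical addressing plan for the single-gateway layout described above:
# one private subnet shared by wired and wireless devices behind the gateway.
lan = ipaddress.ip_network("192.168.1.0/24")
hosts = lan.hosts()

gateway = next(hosts)  # Wi-Fi gateway: Internet uplink + Ethernet jacks + radio
devices = {
    "desktop PC (wired Ethernet jack)": next(hosts),
    "laptop (built-in Wi-Fi radio)": next(hosts),
    "printer (Wi-Fi print server)": next(hosts),
}

print(f"LAN {lan}: gateway at {gateway}")
for name, addr in devices.items():
    print(f"  {addr}  {name}")
```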

A2.1.2.2.2 Wireless personal area networks, WPANs

Wireless PANs typically have a short range of use and are intended to set up connections between personal devices. The most widely deployed standard in this class is Bluetooth 14. It provides 1 Mbit/s for a few connected devices in a small network, called a piconet. Its range is between 10 and 100 metres, depending on the transmission power and the environmental conditions. The transmission band used for Bluetooth lies in the 2.4 GHz ISM band. The HomeRF standard, like Bluetooth, also works in the 2.4 GHz ISM band. From an initial maximum data rate of 1.6 Mbit/s, it has been extended to 10 Mbit/s 15. HomeRF has a range of 50 metres at this speed. It is not interoperable with its strongest competitor, IEEE 802.11b, however.

The IEEE 802.15 16 standard is intended to go a step further. In this context the WiMedia Alliance 17 has been established. 802.15 integrates the Bluetooth standard and harmonizes it with the IEEE 802 family, such that it is IP and Ethernet compatible. The objectives of 802.15 are a high bit-rate solution providing up to 20 Mbit/s and beyond, and a low bit-rate one (IEEE 802.15.4, also known as ZigBee). Within IEEE 802.15.3, the study group 3c 18 was formed in March 2004. The group is developing a millimetre-wave-based alternative physical layer (PHY) for the existing 802.15.3 Wireless Personal Area Network (WPAN) Standard 802.15.3-2003. This mm-wave WPAN will operate in the new and clear 57-64 GHz unlicensed band defined by FCC 47 CFR 15.255. The 60 GHz WPAN will allow high coexistence (close physical spacing) with all other microwave systems in the 802.15 family of WPANs. In addition, the 60 GHz WPAN will allow very high data rate applications such as high-speed Internet access, streaming content download (video on demand, HDTV, home theatre, etc.), real-time streaming and a wireless data bus for cable replacement. Data rates in excess of 2 Gbps will be provided.

Within IEEE 802.15.3, task group 3a coordinates the activities on ultra-wideband (UWB) technology 19. It is a promising high-speed, low-power wireless technology for home entertainment or personal area networks. While providing wireless distribution of TV programs, movies, games and data-intensive content, UWB also claims that such distribution will not interfere with other wireless transmissions common at home. In February 2002, the FCC allocated 7,500 MHz of unlicensed spectrum for UWB devices for communication applications in the 3.1 GHz to 10.6 GHz frequency band. The UWB system provides a WPAN with data payload communication capabilities of 28, 55 and 110 Mbps; rates of 220, 500, 660, 1000 and 1320 Mbps are also expected.

There exist two proposals for UWB: one is based on multi-band OFDM transmission and the other on direct sequence spreading (DS-UWB). Two different bands are defined: one band nominally occupying the spectrum from 3.1 to 4.85 GHz (the low band), and the second band nominally occupying the spectrum from 6.2 to 9.7 GHz (the high band).

The DS-UWB system employs direct sequence spreading of binary phase shift keying (BPSK) and quaternary bi-orthogonal keying (4BOK) UWB pulses. Forward error correction coding (convolutional coding) is used with coding rates of ½ and ¾.

The OFDM system consists of 13 sub-bands of 528 MHz width each. There are 128 subcarriers in each sub-band, with QPSK modulation and convolutional coding.
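A quick arithmetic check on the multi-band OFDM figures just given (13 sub-bands of 528 MHz, 128 subcarriers each) is sketched below; it only restates the numbers from the text and compares the aggregate with the 7.5 GHz FCC allocation mentioned above.

```python
# Arithmetic check on the multi-band OFDM parameters quoted above.
sub_band_width_mhz = 528
num_sub_bands = 13
subcarriers_per_band = 128

spacing_mhz = sub_band_width_mhz / subcarriers_per_band      # 4.125 MHz
aggregate_ghz = num_sub_bands * sub_band_width_mhz / 1000.0  # 6.864 GHz
allocation_ghz = 10.6 - 3.1                                  # 7.5 GHz FCC allocation

print(f"Subcarrier spacing:      {spacing_mhz:.3f} MHz")
print(f"Aggregate sub-band span: {aggregate_ghz:.3f} GHz "
      f"(within the {allocation_ghz:.1f} GHz allocation)")
```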

13 http://www.wi-fi.org/OpenSection/index.asp
14 BlueTooth, http://www.bluetooth.com
15 HomeRF, http://www.pcworld.com/news/article/0,aid,64024,00.asp
16 IEEE 802.15, http://grouper.ieee.org/groups/802/15/
17 WiMEDIA, http://www.wimedia.org/
18 IEEE 802.15.3 SG3c, http://www.ieee802.org/15/pub/SG3c.html
19 Ultra Wide Band, http://www.uwbforum.org/standards/specifications.asp


In cooperation with the 1394 Trade Association (TA), a protocol adaptation layer (PAL) has been developed between the wired IEEE 1394 and the IEEE 802.15.3 MAC. The PAL also adapts the IEEE P1394.1 bridging specification to wireless use. The result is a 'wireless FireWire' capability, which can be implemented with any standard or non-standard physical layer, including Ultra Wideband PHYs. The PAL permits IEEE 1394 devices and protocols to be used in a wireless environment at speeds up to 480 Mbit/s, while allowing compatibility with existing wired 1394 devices. The standard will move consumers one significant step closer to controlling home networks, HDTVs, and other advanced electronics systems wirelessly, just as they now use remote controls to change TV channels or audio output 20.

WiMedia Alliance

The WiMedia Alliance is a not-for-profit open industry association formed to promote wireless personal area network (WPAN) connectivity and interoperability for multiple industry-based protocols. The WiMedia Alliance develops and adopts standards-based specifications for connecting wireless multimedia devices, including application, transport, and control profiles; test suites; and a certification program to accelerate widespread consumer adoption of "wire-free" imaging and multimedia solutions (IEEE 802.15, 1394, the WiMedia Alliance's Convergence Architecture (WiMCA), and the MBOA (Multiband OFDM Alliance)). The WiMedia Alliance charter is to develop a specification based on the IEEE 802.15.3 standard with a strong focus on an ultra-wideband physical layer (802.15.3a). The Alliance will establish a certification and logo program and promote the WiMedia brand 21. Alliance activities include coordinating with other standards bodies and promoting the allocation of UWB spectrum at international regulatory bodies. The Alliance is committed to intelligently leveraging as many existing technologies as possible, with the end goal of developing an easy-to-understand consumer system for interoperable wireless multimedia devices. The WiMedia Alliance serves the consumer electronics, PC and mobile communications markets. Products specific to these markets, as well as emerging convergence products, will benefit from simple wireless connectivity.

WiMedia-enabled products will meet the demanding requirements of portable consumer imaging and multimedia applications and support peer-to-peer connectivity and isochronous as well as synchronous data. WiMedia technology will be optimised for low cost, small form factor, and quality of service (QoS) awareness, and will enable multimedia applications that are not well supported by existing wireless standards.

A2.1.2.2.3 Third generation mobile communication systems (cellular)

For the development of 3G the ITU established the IMT2000 (International Mobile Telecommunications at 2000 MHz) standard. 3G networks will provide mobile multimedia, personal services, and the convergence of digitalisation, mobility, the Internet, and new technologies based on global standards. The international standardisation activities for 3G are mainly concentrated, in the different regions, in the European Telecommunications Standards Institute (ETSI) Special Mobile Group (SMG) in Europe, the Research Institute of Telecommunications Transmission (RITT) in China, the Association of Radio Industries and Businesses (ARIB) and the Telecommunication Technology Committee (TTC) in Japan, the Telecommunications Technology Association (TTA) in Korea, and the Telecommunications Industry Association (TIA) and T1P1 in the United States. In order to harmonise and standardise in detail the similar ETSI, ARIB, TTC, TTA and T1 WCDMA and related TDD proposals, the 3rd Generation Partnership Project 22 (3GPP) was established.

In general, IMT-2000 consists of four systems and two main technologies, summarised in Table 5: IMT2000 radio interfaces and access techniques (Source: Chr. Menzel, SIEMENS AG). The systems are UMTS, CDMA2000, DECT and UWC-136 (EDGE), while the technologies apply TDMA and CDMA, i.e. time- and code-division multiplexing. UMTS (Universal Mobile Telecommunications System) UTRA-FDD and UTRA-TDD are the European versions of IMT-2000.

A typical chip rate of 3.84 Mcps is used for the 5 MHz band allocation. Data rates of 384 kbit/s (FDD mode) and more are available, and for stationary hot-spot services a 2 Mbit/s mode (TDD) is provided. UMTS is a packet-switched technology.
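A simple, hedged way to relate the 3.84 Mcps chip rate to the user data rates quoted above is the raw spreading ratio (chip rate divided by data rate), sketched below. It deliberately ignores channel coding and control overhead, so it only gives a rough indication of the processing gain.

```python
import math

# Raw spreading ratio (chip rate / user data rate) for the rates quoted above.
# Channel coding and control overhead are ignored, so this is only indicative.
chip_rate_cps = 3.84e6

for label, rate_bps in (("384 kbit/s (FDD)", 384e3), ("2 Mbit/s (TDD hot-spot)", 2e6)):
    ratio = chip_rate_cps / rate_bps
    print(f"{label:24s}: spreading ratio {ratio:5.2f} ({10 * math.log10(ratio):4.1f} dB)")
```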

20 http://www.1394ta.org/Press/2003Press/december/12.08.a.htm
21 http://www.caba.org/standard/wimedia.html
22 http://www.3gpp.org/


Wideband CDMA (W-CDMA), supported by groups in Japan (ARIB) and Europe, and backward-compatible with GSM, has been selected for the UMTS Terrestrial Radio Access (UTRA) frequency division duplex (FDD).

Table 5: IMT2000 radio interfaces and access techniques, Source: Chr. Menzel, SIEMENS AG

The IMT2000 frequency ranges are shown in Table 6 (the exact spectrum available remains country-specific):

Frequency division duplex (FDD)
  Region 1 (e.g. Europe and Africa): 1920-1980 MHz uplink, 2110-2170 MHz downlink
  Region 2 (e.g. America):           1850-1910 MHz uplink, 1930-1990 MHz downlink

Time division duplex (TDD), uplink and downlink
  Region 1 (e.g. Europe and Africa): 1900-1920 MHz, 2010-2025 MHz
  Region 2 (e.g. America):           1850-1910 MHz, 1910-1930 MHz, 1930-1990 MHz

Table 6: IMT2000 Frequency allocations (Source: Overview of 3GPP Release 99, ETSI Mobile Competence Centre, Version xx/07/04, © Copyright ETSI 2004)

A2.1.2.2.4 Relation of UMTS/3G to other wireless technologies

For comparison, some typical data for UMTS and Wi-Fi/WiMAX are shown in Table 7: Comparison of UMTS, Wi-Fi/WiMAX, and Mobile-Fi. With respect to future "beyond-3G" systems, the recent evolution and successful deployment of WLANs has yielded a demand to integrate WLANs with 3G cellular networks. Data rates provided by the WLAN standards are far above the targeted 144 kbps of General Packet Radio Services (GPRS) and the 384 kbps to 2 Mbps of the UMTS cellular systems, making WLANs important and attractive as an add-on service to the usual 3G cellular systems.

The key goal of this integration is to develop heterogeneous mobile data networks, capable of supporting ubiquitous data services with very high data rates in strategic locations. The effort to develop such heterogeneous networks is linked with many technical challenges including seamless vertical handovers across WLAN and 3G radio technologies, security, 3G-based authentication, unified accounting & billing, WLAN sharing (by several 3G networks), consistent QoS and service provisioning, etc.

According to the report of the IST project MUSE 23 (6th FP 507295) MA 2.4, it is recognised that in general a convergence of the fixed and mobile access architectures, using 3GPP concepts, is necessary. By 2007, first trials of convergent networks based on 3GPP and next generation network approaches will be performed between fixed and mobile operators. Also trials with IEEE 802.1x focused on DSL radio interworking are possible. The main drivers are a cost-effective sharing of network resources and common platforms, as well as the offering of new converged services. By 2010, a gradual integration of fixed and mobile network internetworking and commercial deployments, depending on user demands, are foreseen.

                          3G                      Wi-Fi                 WiMAX                     Mobile-Fi
  Standard                UMTS                    IEEE 802.11           IEEE 802.16               IEEE 802.20
  Maximum speed (Mbit/s)  0.384 (FDD), 2 (TDD)    54                    10-100                    16
  Operations/Operators    Cell phone companies    Individuals, WISPs*   Individuals, WISPs        WISPs
  Coverage area           Micro-/macro cell,      100 m                 Several km, up to 40 km   Several km
                          several km
  Spectrum                Near 2 GHz              2.4 GHz, near 5 GHz   2-11 GHz, 10-66 GHz       3.5 GHz
  Mobility                High (FDD), low (TDD)   Yes                   None (mobile: 802.16e)    High

Table 7: Comparison of UMTS, Wi-Fi/WiMAX, and Mobile-Fi. *WISP: Wireless Internet Service Provider

A2.1.2.3 Middleware

In computing, middleware consists of software agents acting as an intermediary between different application components. It is used most often to support complex, distributed applications. The software agents involved may be one or many.

The ObjectWeb consortium gives the following definition of middleware: "In a distributed computing system, middleware is defined as the software layer that lies between the operating system and the applications on each site of the system."

Middleware is now used to describe database management systems, web servers, application servers, content management systems, and similar tools that support the application development and delivery process. Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture.

Middleware is the enabling technology of enterprise application integration. (Source: Wikipedia)

23 http://www.ist-muse.org/


A2.1.2.4 Connectivity

By recognizing the need to separate connectivity from applications we have the opportunity to unleash the power of the marketplace that has served so very well in computing and in the Internet. [Bob Frankston, 2002-01-29]

Source: www.satn.org

History

Like any other system, our understanding of telecommunications has evolved and changed. In engineering the phone system, there were a myriad of technical problems to be solved in order to be able to carry voice conversations over long distances or even around the world. The signal had to be delivered with precise timing, with every component of the network adjusted just right. Because the equipment was so expensive, there was great emphasis on precise planning for capacity.

Similarly, television was an amazing feat of engineering in the 1930s. It took very precise engineering to synchronize the video beam in the camera tube with the image shown in the receiver. Many technical tricks were used, including interlacing, so that successive scans filled alternating lines to produce a smoother image. People then took advantage of accidental properties, such as adding closed captioning by using the "vertical blanking interval" (the time it took to move the beam from the bottom back to the top of the screen).

The rise of the Internet in the 1990s (though the process actually started decades earlier) has demonstrated that we can now treat both telephony and television as streams of bits over a packet network. In the network itself all packets are treated the same, with no special handling for audio or video streams. The network doesn't even have the notion of a circuit, since successive packets needn't go to the same destination.

Connectivity

The pragmatic definition: connectivity is the unbiased transport of packets between two end points. This is also the essential definition of "IP" (Internet Protocol).

There is a strong boundary between the IP layer and the applications built upon it. TCP, for example, sits on the application side of that boundary in this view; in the term "TCP/IP" the slash emphasizes the separation of the two.
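The layering point above can be made concrete with a minimal socket sketch: the application picks a protocol above IP (TCP or UDP), while the IP layer underneath carries both indifferently. The example binds to the loopback address only so that it is self-contained; it is illustrative and not drawn from this report.

```python
import socket

# The application chooses a protocol above IP (TCP here, UDP below); the IP
# layer underneath carries both kinds of packets indifferently.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP over IP
tcp.bind(("127.0.0.1", 0))   # loopback, ephemeral port: self-contained example
tcp.listen(1)

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP over IP
udp.bind(("127.0.0.1", 0))

print("TCP endpoint:", tcp.getsockname())
print("UDP endpoint:", udp.getsockname())
tcp.close()
udp.close()
```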

The virtuous cycle

Since no application's packets get special treatment, the IP layer created a new commodity, connectivity, and set in motion the virtuous cycle of low prices generating new applications. These applications generated new demand. The new capacity created to meet this demand drove down the unit price but generated higher aggregate revenue for the connectivity providers.

It is still difficult for many people to grasp the power of the virtuous cycle set in motion by an effective marketplace structure. In the 1970s the military paid millions of dollars for computers that were far less powerful than the machines we use for children's video games.

It also means that telephony and television can now be treated as streams of packets built upon the connectivity layer. There is a caveat in that we need sufficient capacity in our networks to carry this traffic. Early efforts to send audio and video over the Internet were limited by the capacity of the network, but it is now becoming common and accepted to listen to live events over the Internet and, unlike radio, there is no predefined limit on the quality.

We now have home networks running at 100 megabits per second, bought along with pencils at the local stationery store. And soon, a billion bits per second will be common. We already have Internet backbones that support a trillion bits per second per strand of fiber.

A2.1.2.5 Initiatives

Internet Engineering Task Force (IETF) 24, IPv6 Forum 25, Universal Plug and Play Forum 26, Open Services Gateway Initiative (OSGi), 4DHomeNet 27.

24 IETF, http://www.ietf.org/
25 ipv6forum, http://www.ipv6forum.com/


4DHomeNet offers 4DAgent and a remote management system that supports OSGi Service Platform Release 2. The system shows how home network operators can remotely manage their OSGi-based distributed residential gateways. 4DHomeNet also shows how OSGi Frameworks and DVB-MHP can work together, providing a user-friendly interface via a TV set.

Broadband Wireless Association 28

The BWA offers members an independent voice that is heard by regulators and licence authorities, primarily throughout Europe but with strong links to North America and the Pacific Rim. It also offers essential technical and market information in its promotion and facilitation of the broadband wireless industry. Members of the Association come from all parts of the wireless industry, including operators, vendors, research groups and consultants. The association conducts a number of activities which may result in direct benefits for its members.

TDD Coalition 29

The TDD Coalition is a consortium of manufacturers and operators promoting the use of TDD duplexing technology in broadband wireless networks.

OFDM Forum 30

The OFDM Forum is a voluntary association of hardware manufacturers, software firms and other users of orthogonal frequency division multiplexing (OFDM) technology in wireless applications. The OFDM Forum was created to foster a single, compatible OFDM standard, needed to implement cost-effective, high-speed wireless networks on a variety of devices. OFDM is a cornerstone technology for the next generation of high-speed wireless data products and services for both corporate and consumer use. With the introduction of IEEE 802.11a, ETSI BRAN, and multimedia applications, the wireless world is ready for products based on OFDM technology.

Wireless Communication Association 31

The Wireless Communications Association International (WCA, founded in 1988) is the non-profit trade and professional association for the wireless broadband industry, with member companies on six continents representing the bulk of the sector's leading carriers, vendors and consultants. The WCA's mission is to advance the interests of the wireless carriers that provide high-speed data, Internet, voice and video services on broadband spectrum through land-based systems using reception/transmit devices in all broadband spectrum bands. The WCA is an established leader in government relations, technology standards and industry event organization. Its scope is general fixed wireless access (including free-space optics). The members provide services or products in spectrum bands such as UHF, 2.1, 2.3, 2.5, 12, 18, 23, and 28 GHz.

MMAC-PC 32

Japan: Multimedia Mobile Access Communication Systems Promotion Council (founded 1996). The objective of the Council is to realize MMAC as soon as possible through investigations of system specifications, demonstrative experiments, information exchange and popularisation activities, and thereby contribute to the efficient use of the radio frequency spectrum.

Wireless World Research Forum 33 (WWRF)

The WWRF is a global organisation which was founded in August 2001. Members of the Forum are manufacturers, network operators/service providers, R&D centres, universities, and small and medium enterprises. The WWRF provides a global platform for the discussion of results and the exchange of views, to initiate global cooperation towards systems beyond 3G.

26 Universal Plug and Play Forum, http://www.upnp.org/
27 4DHomeNet, www.4dhome.net
28 BWA, http://www.broadband-wireless.org/home.htm
29 TDD Coalition, http://www.tddcoalition.org/
30 OFDM Forum, http://www.ofdm-forum.com/index.asp?ID=92
31 Wireless Communication Association, http://www.wcai.com/
32 MMAC-PC, http://www.arib.or.jp/mmac/e/about.htm
33 WWRF, http://www.wireless-world-research.org/


A2.1.2.6 R&D Activities / Projects

A2.1.2.6.1 MMAC 34

Within the framework of the Japanese Multimedia Mobile Access Communication Systems (MMAC) project, multimedia information is transmitted at ultra-high speeds and with high quality "anytime and anywhere". Details of the MMAC system specifications are:

High Speed Wireless Access (outdoor, indoor): mobile communication system which can transmit at up to 30 Mbit/s using the SHF and other bands (3-60 GHz). It can be used for mobile video and telephone conversations.

Ultra High Speed Wireless LAN (indoor): wireless LAN which can transmit up to 156 Mbit/s using the millimetre-wave radio band (30-300 GHz). It can be used for high-quality TV conferences.

5 GHz Band Mobile Access (outdoor, indoor): ATM-type wireless access and Ethernet-type wireless LAN using the 5 GHz band. Each system can transmit multimedia information at up to 20-25 Mbit/s.

Wireless Home-Link (indoor): wireless home link which can transmit up to 100 Mbit/s using the SHF and other frequency bands (3-60 GHz). It can be used to transmit multimedia information between PCs and audio-visual equipment.

A2.1.2.6.2 House_n 35

This is a multi-disciplinary project led by researchers at the Massachusetts Institute of Technology, USA. The project also includes advanced communication technologies, e.g. Rondoni, J. C., "Context-Aware Experience Sampling for the Design and Study of Ubiquitous Technologies", M.Eng. thesis 36, Electrical Engineering and Computer Science, Massachusetts Institute of Technology, September 2003: "The paradigm of desktop computing is beginning to shift in favour of highly distributed and embedded computer systems that are accessible from anywhere at anytime. Applications of these systems, such as advanced contextually-aware personal assistants, have enormous potential not only to abet their users, but also to revolutionize the way people and computers interact."

A2.1.2.6.3 European projects

(Projects relevant to the activities "wireless 3G and beyond" and home networks.)

WINNER 37, Wireless World Initiative New Radio, Integrated Project (38 partners), IST-2003-507581, Action Line: Mobile and wireless systems beyond 3G

The key objective of the WINNER project is to develop a totally new concept in radio access. This is built on the recognition that developing disparate systems for different purposes (cellular, WLAN, short-range access etc.) will no longer be sufficient in the future converged Wireless World. This concept will be realised in the ubiquitous radio system concept. The vision of the ubiquitous radio system concept is to provide wireless access for a wide range of services and applications across all environments, from short-range to wide-area, with one single adaptive system concept for all envisaged radio environments. It will efficiently adapt to multiple scenarios by using different modes of a common technology basis. The concept will comprise the optimised combination of the best component technologies, based on an analysis of the most promising technologies and concepts available or proposed within the research community. The initial development of technologies and their combination in the system concept will be further advanced towards future system realisation. Compared to current and evolving mobile and wireless systems, the WINNER system concept will provide significant improvements in peak data rate, latency, mobile speed, spectrum efficiency, coverage, cost per bit and supported environments, taking into account specified Quality-of-Service requirements. The concept will provide the wireless access underpinning the knowledge society and the eEurope initiative, enabling the "ambient intelligence" vision. To achieve this impact, the concept will be derived by a systematic approach. Advanced radio technologies will be investigated with respect to predicted user requirements and challenging scenarios.

The project will contribute to the global research, regulatory and standardisation process. Given the consortium pedigree, containing major players across the whole domain, such contributions will have a major impact on the future directions of the Wireless World.

34 MMAC, http://www.arib.or.jp/mmac/e/what.htm
35 House_n, http://architecture.mit.edu/house_n/
36 Thesis, http://architecture.mit.edu/house_n/web/publications/publications.htm
37 WINNER, https://www.ist-winner.org/

FUTURE HOME 38, IST-2000-28133 (closed)

The Future Home project focuses on creating a solid, secure, user-friendly home networking concept with an open, wireless networking specification. The project introduces the use of the IPv6 and Mobile IP protocols in the wireless home network. It specifies and implements prototypes of wireless home network elements and service points. It develops new services that use the capabilities of the network and verifies the feasibility of the concept in user trials. The networking concept defines a wireless home networking platform (HNSP) with network protocols and network elements. It defines the wireless technologies and network management methods for supporting user friendliness and easy installation procedures as well as management of the wireless resources. The wireless technologies are Bluetooth, WLAN, and HiperLAN/2.

BROADWAY 39, The way to broadband access at 60 GHz, IST-2001-32686

BROADWAY aims to propose a hybrid dual-frequency system based on a tight integration of the spectrum-efficient HIPERLAN/2 OFDM technology at 5 GHz and an innovative, fully ad-hoc extension of it at 60 GHz named HIPERSPOT. This concept extends and complements existing 5 GHz broadband wireless LAN systems in the 60 GHz range, providing a new solution for very dense urban deployments and hot-spot coverage. The system is to guarantee nomadic terminal mobility in combination with higher capacity (achieving data rates exceeding 100 Mbps). The tight integration between both types of system (5/60 GHz) will result in wider acceptance and lower cost of both systems through massive silicon reuse. This new radio architecture will by construction provide backward compatibility with current 5 GHz WLANs (ETSI BRAN HIPERLAN/2). BROADWAY is clearly part of the 4G scenario, as it complements the wide-area infrastructure by providing a new hybrid air interface technology working at 5 GHz and at 60 GHz. This air interface is expected to be particularly innovative as it addresses the new concept of convergence between wireless local area network and wireless personal area network systems.

NEWCOM 40, Network of Excellence in Wireless Communications, Action Line: Mobile and Wireless Systems beyond 3G, FP6-507325

NEWCOM (more than 60 partners) aims at creating a European network that links, in a cooperative way, a large number of leading research groups addressing the strategic objective "Mobile and wireless systems beyond 3G". NEWCOM will implement an elaborate plan of initiatives which revolve around the key notion and strategic choice of a Virtual Knowledge Centre: NEWCOM will effectively act as a distributed university, organised in a matrix fashion, where the columns represent the seven NEWCOM (disciplinary) departments and the rows represent the NEWCOM projects:

Department 1: Analysis and Design of Algorithms for Signal Processing at Large in Wireless Systems
Department 2: Radio Channel Modelling for Design Optimisation and Performance Assessment of Next Generation Communication Systems
Department 3: Design, modelling and experimental characterisation of RF and microwave devices and subsystems
Department 4: Analysis, Design and Implementation of Digital Architectures and Circuits
Department 5: Source Coding and Reliable Delivery of Multimedia Contents
Department 6: Protocols and Architectures, and Traffic Modelling for (Reconfigurable / Adaptive) Wireless Networks
Department 7: QoS Provision in Wireless Networks: Radio Resource Management, Mobility, and Security

Project A: Ad Hoc and Sensor Networks
Project B: Ultra-wide Band Communication Systems
Project C: Functional Design Aspects of Future Generation Wireless Systems
Project D: Reconfigurable radio for interoperable transceivers
Project E: Cross Layer Optimisation

38 FUTURE HOME, http://future-home.org
39 BROADWAY, http://www.ist-broadway.org/description.html
40 NEWCOM, http://dbs.cordis.lu/fep-cgi/srchidadb?ACTION=D&CALLER=PROJ_IST&QM_EP_RCN_A=71453

MediaNet 41, Action Line: Networked Audio Visual Systems and Home Platforms, FP6-507452

The project (an IP with more than 30 partners) aims at developing technologies, infrastructure and service solutions enabling the easy exchange of digital content and audio-video content between creators, providers, customers and citizens. MediaNet will identify and develop a set of representative and appealing end-to-end applications, key enabling technologies, a reference architecture and its key interfaces for an easy and smooth exchange of content all along the media supply chain. By developing and maintaining a shared vision, all stakeholders are assured of effective and seamless interworking between all subparts of the media chain.

The project covers three complementary domains: media networking, multimedia services, and content engineering. The project will develop new management service platforms for broadband access networks and open home networking, and storage solutions. Wireless communication solutions for audio-video content, new end-to-end service provisioning over shared public and private infrastructures, mixing video broadcast over broadband access, content on demand, and interactive online applications will be proposed, as well as personal multimedia communications supported by portable terminals and person-to-person communication services over IP.

A common open and shared delivery platform will be created covering the broadband access and the home domains, where all devices and services will interoperate, combining advanced multimedia content, services, and communications in public and residential environments. MediaNet, which is centered at the intersection of the audio-video, PC and telecom industries, actively participates in the development of such next generation connected digital applications and devices by addressing some of the key technical issues:

• the development of an open multi-vendor/multi-service business reference architecture and a technical roadmap;
• broadband access and home networking to support multiple overlay end-to-end applications by third parties;
• shared services, infrastructures, and equipment while assuring investment protection, interoperability and competition;
• Digital Rights Management (DRM) and end-to-end content protection solutions;
• N-Services, e-Service platforms, Digital Video Broadcasting over IP, Multimedia Home Platform, gateways, wireless solutions for audio/video streaming, open distributed storage, multimedia communications over IP services and terminals, MPEG-4/AVC encoding and decoding circuits, HQ A/V streaming over IP.

MOCCA 42, The Mobile Cooperation and Coordination Action, FP6-2004-IST-2, Action Line: Programme Level Accompanying Measures

The MOCCA coordination action will facilitate collaboration between projects addressing mobile and wireless issues within the European Research Area (ERA), between projects in the ERA and research programmes in Asia and the US, and between researchers and projects in the ERA and their counterparts in the developing regions of the world. It will address this collaboration in the context of the research and development of future mobile and wireless systems, including the services and applications they serve. MOCCA will facilitate European and international collaboration regarding research on future wireless systems and their applications. It will pave the way towards harmonised international standards for future mobile and wireless systems so that the systems meet the needs of users worldwide. The MOCCA approach is open to all. All interested ERA projects will be invited to participate in the activities organised by the project. Inter-continental collaboration with all major mobile and wireless research programmes and standardisation fora will be supported. MOCCA results will lead to the development of future applications, services and wireless networks which meet the needs of users worldwide, building on Europe's strength in the mobile sector. In the long term, MOCCA results will improve the impact of the research results of the ERA wireless-related projects on global standardisation activities and in the global market.

41 MediaNet, http://www.ist-ipmedianet.org/home.html
42 MOCCA, http://mocca.objectweb.org/
43 WIND-FLEX, http://labreti.ing.uniroma1.it/windflex/

WIND-FLEX 43, Wireless Indoor Flexible High Bitrate Modem Architecture, IST-1999-10025 (closed)

A high bit-rate, flexible and configurable modem architecture is investigated, which works in single-hop ad hoc networks and provides wireless access to the Internet in an indoor environment with low terminal mobility. The main emphasis is on OSI layers 1 and 2. The best possible performance with a reasonable complexity is attained by using a jointly optimised adaptive system which includes the multiple access method, diversity, modulation and coding, and equalization and decoding.

The system is not optimised in advance but is adaptive and configurable at run time. This is a step towards a software defined radio (SDR), which is presently too far in the future due to technological problems. Bit rates from 64 kbit/s up to 100 Mbit/s are considered in the 17 GHz frequency band. The modulation is OFDM with 128 subcarriers, the channel bandwidth is 50 MHz, and the modulation schemes are BPSK, QPSK, 16QAM and 64QAM. The bit rate is variable depending on the user needs and channel conditions. Flexibility is attained by using a multicarrier modulation method, although single-carrier methods are also considered. The best possible methods are used, including joint diversity, modulation and coding (such as space-time coding) in the transmitter and joint equalization, decoding and channel estimation (such as per-survivor processing) in the receiver. The work is done by high-quality research groups using compatible simulation tools, so that almost full (off-line) simulation is possible. The driving force is research, not the existing standards.
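The quoted bit rates follow directly from the OFDM parameters above. A minimal sketch in Python, assuming an illustrative guard interval, number of data subcarriers and coding rate (these timing and coding values are assumptions for illustration, not WIND-FLEX specifications):

```python
# Illustrative OFDM bit-rate estimate (not the WIND-FLEX specification itself).
# Only the 50 MHz bandwidth, the 128 subcarriers and the modulation schemes
# come from the text above; the other parameters are assumed.

BANDWIDTH_HZ = 50e6          # channel bandwidth (from the text)
N_SUBCARRIERS = 128          # OFDM subcarriers (from the text)
N_DATA = 100                 # assumed data-bearing subcarriers
GUARD_S = 0.8e-6             # assumed cyclic prefix duration
CODE_RATE = 0.5              # assumed channel coding rate

subcarrier_spacing = BANDWIDTH_HZ / N_SUBCARRIERS        # ~390.6 kHz
useful_symbol = 1.0 / subcarrier_spacing                 # ~2.56 us
symbol_time = useful_symbol + GUARD_S                    # total OFDM symbol

for name, bits in [("BPSK", 1), ("QPSK", 2), ("16QAM", 4), ("64QAM", 6)]:
    rate = N_DATA * bits * CODE_RATE / symbol_time
    print(f"{name:6s}: {rate / 1e6:6.1f} Mbit/s")
```

The table printed by this sketch only illustrates how the rate scales with the modulation order; the project's 64 kbit/s lower bound additionally relies on using far fewer active subcarriers.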

NGN INITIATIVE 44, Next Generation Networks Initiative, IST-2000-26418 (closed)

The NGN Initiative's mission is to establish the infrastructure to operate the first open environment for research in which the whole range of Next Generation Networks (NGN) topics can be discussed, consensus achieved and collective outputs disseminated to the appropriate international standards bodies, fora, and other organisations. Being of worldwide interest, it is inevitable that some of the Internet-related topics addressed here will also be covered in the US Next Generation Internet (NGI 45) programme. The initiative provided inputs to the EU FP6 programme.

WWRI 46, Wireless World Research Initiative, IST-2001-37680 (closed)

The WWRI was an accompanying measure under the IST programme in the Fifth Framework Programme. The project started in June 2002 and ran for 10 months. Key players in the wireless sector initiated the WWRI project to provide a launch pad for the wireless community (industry and academia) towards a balanced cooperative research programme for the Wireless World. The work done in WWRI was useful for the preparation of Integrated Projects for the 6th EU Framework Programme.

A2.1.2.6.4 National projects

WIGWAM is part of the central innovation programme "Mobile Internet", which is funded by the German Ministry of Education and Research (BMBF). The objective of WIGWAM is the design of a complete system for wireless communication with a maximum transmission data rate of 1 Gbit/s. The targeted spectrum is the 5 GHz band and the extension bands at 17, 24, and 60 GHz. Depending on the mobility of the user, the data rate should be scalable.

The goal is a "1 Gbit/s component" of a heterogeneous future mobile communication system. All aspects of such a system will be investigated, from the hardware platform to the protocols, which are subject to very strong requirements given the extremely high data rate of 1 Gbit/s.

The main application area is the transmission of multimedia content in so-called hot spots, in home scenarios, and in large offices where an enormous data rate is necessary, e.g. to supply the user with short-term high data rates, or to enable true plug-and-play without any frequency planning (particularly important in home scenarios). In order to be able to include such a high-data-rate air interface in a future heterogeneous mobile communications system, high-mobility applications are also covered.

44 NGNI, http://www.ngni.org/overview.htm
45 Next Generation Internet, http://www.ngi.de/
46 WWRI, http://www.ist-wwri.org/project.html

A2.1.3 Issues and technical trends / gap analysis

A2.1.3.1 Technical Trends

A2.1.3.1.1 Cabled Home Network

While optical-fibre-based 10 Gigabit Ethernet is already running, copper-cable-based 10 Gigabit Ethernet is coming: in September 2004 the IEEE completed its 10GBASE-T draft (802.3an Draft 1.0), which will most probably be approved in mid-2006. Copper-based 10 Gbit/s Ethernet is expected to reduce the interface cost considerably compared to the optical-fibre-based version. Distances of 15 m will be supported.

A2.1.3.1.2 Wireless Home Network

In the UMTS cellular network, HSDPA (High Speed Downlink Packet Access) and HSUPA (High Speed Uplink Packet Access) are the most recent enhancements. While HSDPA has been standardized in 3GPP Release 5, HSUPA is not yet finalised.

HSDPA is a packet-based data service in the W-CDMA downlink with data transmission of up to 8-10 Mbps over a 5 MHz bandwidth. HSDPA implementations include Adaptive Modulation and Coding up to 16QAM, Hybrid Automatic Repeat reQuest (HARQ), fast cell search, and advanced receiver design. Multiple-Input Multiple-Output (MIMO) systems are a work item in the Release 6 specifications, which will support even higher data transmission rates of up to 20 Mbps.
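The quoted downlink figures can be related to the underlying W-CDMA channel parameters. A minimal sketch, assuming the standard 3.84 Mcps chip rate and spreading factor 16 for the HS-PDSCH; the number of parallel codes and the effective coding rate are illustrative assumptions, not values taken from the text:

```python
# Rough HSDPA downlink peak-rate estimate from W-CDMA parameters.
# Chip rate and spreading factor are standard W-CDMA values; the number of
# parallel HS-PDSCH codes and the effective coding rate are assumptions
# chosen for illustration.

CHIP_RATE = 3.84e6      # chips per second (W-CDMA)
SPREADING_FACTOR = 16   # HS-PDSCH spreading factor
N_CODES = 10            # assumed parallel channelisation codes
BITS_PER_SYMBOL = 4     # 16QAM
CODE_RATE = 0.75        # assumed effective coding rate

symbol_rate = CHIP_RATE / SPREADING_FACTOR          # 240 ksymbols/s per code
peak = symbol_rate * N_CODES * BITS_PER_SYMBOL * CODE_RATE
print(f"Estimated peak rate: {peak / 1e6:.1f} Mbit/s")   # ~7.2 Mbit/s
```

With more parallel codes or a higher coding rate the same calculation lands in the 8-10 Mbps range quoted above.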

HSDPA is likely to arrive in Asia in 2005, in phones, handhelds, and PC cards. It will arrive in Europe and the U.S. soon after.

In January 2004 the IEEE announced that it would develop a new standard, IEEE 802.11n, for wireless local area networks. The real throughput is expected to be 100 Mbit/s (even 250 Mbit/s at the PHY level). As projected, 802.11n will also offer a better operating distance than current networks. The standardization process is expected to be completed by the end of 2006. IEEE 802.11n builds upon previous 802.11 standards by adding MIMO (multiple-input multiple-output). The additional transmitter and receiver antennas allow for increased data throughput through spatial multiplexing, and increased range by exploiting spatial diversity through coding schemes like Alamouti coding.
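As an illustration of the space-time coding mentioned above, the sketch below shows the classic 2x1 Alamouti scheme: two symbols are sent from two antennas over two symbol periods, and the receiver recovers both symbols with simple linear combining. The channel coefficients and symbols are arbitrary example numbers, and noise is omitted.

```python
# Minimal 2x1 Alamouti space-time block code example (complex scalars only).
# h1, h2: flat-fading channel gains from the two transmit antennas;
# s1, s2: the two data symbols sent over two consecutive symbol periods.

h1, h2 = 0.8 + 0.3j, 0.4 - 0.5j        # example channel coefficients
s1, s2 = 1 + 1j, -1 + 1j               # example QPSK symbols

# Transmission: period 1 sends (s1, s2), period 2 sends (-s2*, s1*).
r1 = h1 * s1 + h2 * s2                              # received in period 1
r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()     # received in period 2

# Linear combining at the receiver recovers the symbols up to a real gain.
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / gain
s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / gain
print(s1_hat, s2_hat)                  # ~ (1+1j) and (-1+1j)
```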

A2.1.3.2 First gap analysis

In the area of wireless communication we see the following items which have to be tackled:

• Mobile networks / high capacity radio interface
  - Improving the spectral efficiency of the radio link by multi-antenna (MIMO) and adaptive signal processing techniques
  - MIMO algorithms and protocols for high mobility
  - Scalable PHY layer and protocol stack design up to 1 Gbit/s
  - Smart scheduling
  - Cross-layer design
  - Interference coordination
  - Software radio
• Ad hoc networks
• Meshed and multi-hop networks
• Sensor networks

A2.1.4 Home Networks: Roadmap

Wired home network technology presently offers data rates of up to 1 Gbit/s (see Figure 1: Wired technology roadmap), which seems to be sufficient for most applications.

Figure 1: Wired technology roadmap — Ethernet (10 Mb/s, 100 Mb/s, 1 Gb/s), Firewire/iLink (100, 200, 400, 800 Mb/s), USB (1.5, 12 and 480 Mb/s), CEBUS, HomePlug


Figure 2: Wireless technology roadmap

In the wireless area the technical problems to be overcome are much greater; the data rates of present wireless systems are therefore much lower than those of wired systems, see Figure 2: Wireless technology roadmap.

For short-range communication the trend is towards UWB; however, besides technical challenges, regulatory issues also have to be solved, because it is presently not clear whether the FCC spectral power density mask will be accepted by the European regulatory administration. At 60 GHz, which offers a better power budget than UWB due to the higher allowed transmit power, component costs have to be reduced considerably for a possible market introduction.

For WLANs, pre-IEEE 802.11n equipment is commercially available; standard-conformant equipment will most probably be available soon after completion of the standard. A Gbit WLAN may be the next step in standardization after completion of the IEEE 802.11n standard.

WiMAX is on its way to allowing some degree of mobility with 802.16e.

HSDPA and HSUPA are the current enhancements of UMTS. A beyond-3G technology offering up to 100 Mbit/s downlink and up to 50 Mbit/s uplink is expected to be available around 2010. A completely new generation system (4G) is expected not before 2015.

A2.2 CABLE

A2.2.1 Introduction

Cable networks have so far been utilised mostly for broadcast TV applications in Europe, but the high installation and maintenance costs of HFC networks make them hardly competitive with satellite if used for broadcast services only. A cable network is an ideal medium for converged service delivery as it can convey broadcast, multicast and single-cast services. Broadband access via cable has roughly a 40% worldwide market share, and now faces the challenge of remaining a viable alternative to other access technologies.

A2.2.2 Requirements

A2.2.2.1 High level requirements

Referring to cable MSO issues, the first priority is to offer more bandwidth per customer in a more efficient and cost-effective way; this can be translated into a set of technical requirements:

• Increase downstream capacity per subscriber, including both network capacity and terminal capacity
• Increase upstream network capacity, including more efficient use of the upstream
• Allow a flexible sharing ratio between upstream and downstream traffic
• Load balancing in upstream and downstream
• Evolve to a full IP architecture including video services, and supporting QoS, billing, security
• Extend the framework to the home network

Defining the requirements produced by the services, applications or regulatory constraints is necessary to build a coherent technology roadmap. This is all the more pertinent since these requirements are evolving rapidly; examples of such changes are the rapid deployment of VoIP, the explosion of high-bit-rate access with a rapid increase of the bit rate, and the increasing demand for nomadic and mobile services. The table below represents a first sketch of the requirements and their effect on the technology roadmap.

Figure 3: Requirements and roadmap

High-speed data access: both peer-to-peer applications and fast downloads, together with competition between operators, are pushing the average bit rate offered to the subscriber upwards rapidly; a basis for the average bit rate per subscriber can be taken from the MUSE project 94.

Figure 4: Bit rate evolution

VoIP is now deployed together with high-speed data. VoIP introduces requirements for a low-bit-rate QoS architecture, signalling, security and provisioning, as well as for interconnection with metro/backbone networks. VoIP can be deployed using two different paradigms (centralized or decentralized), which are developed later in this document.

Video on demand, or more generally rich media content, is a clear roadmap item for cable operators, with one short-term solution based on legacy MPEG video, and one longer-term video-over-IP paradigm.

Open access is a regulatory requirement to unbundle the network from the services and applications, and requires a clear interface to be defined between the network operator, the service provider, and the application provider.

Extension of the cable access network to the home is necessary to provide an end-to-end service to the subscriber; whether the cable operator will have control of the wireless home network is subject to regulation; the features provided to the home will include QoS, provisioning, DRM and home device management.

Hybrid broadcast/fixed applications: there is a clear trend towards applications using cooperating networks (cooperation of a broadcast and a mobile network); as the cable network can be terminated by a WLAN, a user with a mobile terminal can use the cable network as a cooperating network by itself (since the cable network supports broadcast, multicast and unicast services).

94 MUSE, http://www.ist-muse.org

A2.2.2.2 Roadmaps

Different technological choices are possible, mainly based on evolution to a centralized or a decentralized architecture:

• A centralized architecture corresponds to today's cable access network implementations, and has the following classical advantages:
  - The network is easy to maintain and reliable, as all the "intelligent" (layer 3 and above) elements are centralized
  - Cheap user terminals, as the signalling protocols are very light (like MGCP for instance)
  - The network capacity increase can be imagined under these assumptions
• The current trend is to evolve to a decentralized architecture where the network elements are placed closer to the subscriber. It is based on a peer-to-peer signalling paradigm like SIP. The current economic drawbacks of this architecture (reliability and maintainability, network element and terminal costs) will no longer apply in the future, and the complexity is compensated by the scalability of the architecture.

Figure 5: Technology roadmap related to network capacity

Different technology roadmaps can be deduced according to the two categories of paradigms which will prevail in the future (centralized or decentralized), corresponding to different application scenarios (peer-to-peer will most probably lead to decentralized architectures, whereas client-server types of applications would lead to centralized architectures); the different technology concepts are developed in the next paragraphs.

Note that upper-layer and lower-layer centralized and decentralized architectures can be uncorrelated (an upper-layer decentralized model can be implemented over a centralized architecture).

A2.2.3 HFC cable network deployment situation

Broadband access via cable is incontestably one of the two main deployed technologies; the main figures are given below.

A2.2.3.1 Cable networks current architecture

The view below summarizes the architecture of a modern HFC (Hybrid Fiber Coaxial) network. Many variants exist, but in general the architecture includes several levels:

• A main head-end (central node) where all broadcast services are aggregated. The main head-end feeds the secondary nodes (or local nodes), generally through secured fiber optic links.
• The local nodes feed medium-size cities or small regions; many variants can apply. Each local node serves a number of coaxial areas via fiber links, usually using analogue transmission. The boundary node between the fiber and each coaxial area is called a fiber node. The coaxial area size determines the ultimate traffic capacity available per user.
• The coaxial area architecture can be either a star network with different levels or, more commonly, a tree-and-branch network; this part becomes critical when very high bit rates have to be conveyed.

The HFC-specific part of the network begins at the local node (the MAN or WAN between the DN and the LNs not being specific to HFC); optical transport is usually analogue, transmitting the upstream and downstream spectrum transparently.

The RF spectrum allocations downstream and upstream are respectively 88-860 MHz and 5-65 MHz (many local variants exist); the downstream spectrum is occupied by analogue broadcast TV carriers and digital QAM64 or QAM256 carriers conveying digital TV MPEG signals or data payloads.

The ultimate cell capacity (assuming that digital switchover has occurred) can be up to 4.8 Gbps downstream and 200 Mbps upstream; this capacity can be shared between broadcast, unicast and multicast traffic. In practice, when taking into account the broadcast analogue channels, the available unicast downstream capacity is significantly lower. These figures show that very high bit rate access is possible at the expense of segmenting the network into small cells, which introduces a series of technical challenges.

In summary, cable access provides a competitive alternative to xDSL, as it offers the same kind of capacity and in addition allows the delivery of multicast/broadcast services; it also provides an interesting, cost-effective alternative to FTTH. The problem is to find good evolutionary scenarios for cable networks in order to increase their capacity significantly, in an economical way, so that they can still compete with the xDSL and FWA alternative technologies.

Figure 6: HFC architecture 1

A2.2.3.2 Current spectrum situation

Figure 7: HFC architecture 2

Most of the downstream spectrum is occupied by analogue legacy channels, as shown in the example below (Figure 8: Example of downstream spectrum allocation); in consequence the cable network paradigm is significantly different before and after the digital switchover:

• before the digital switchover the spectrum resource is limited and spread across the whole downstream spectrum (usually 88-860 MHz);
• after the digital switchover the available spectrum will enable the delivery of the capacity mentioned in Figure 9: Bandwidth allocation and capacity.

Figure 8: Example of downstream spectrum allocation

Upstream spectrum

The upstream spectrum has been analyzed by academic R&D centres and European R&D projects like Interact 95, as upstream spectrum availability can really be a bottleneck for delivering broadband access. The usable band (5-25 to 5-65 MHz, depending on the plant) is subject to a number of disturbances, like impulse and ingress noise, which can severely limit the upstream capacity by preventing the use of some bands or limiting the bandwidth efficiency.

The four main categories of disturbances that have to be considered are impulse noise, ingress noise, common path distortion, and Gaussian noise. Impulse noise and common path distortion are localized disturbances, whereas ingress and Gaussian noise accumulate additively due to the tree-and-branch architecture of the coaxial network. Another important disturbance is clipping in the upstream (and downstream), which creates bursts of errors in the digital transmission; disturbances are thus a crucial issue for upstream capacity (see 44).

95 Interact, http://www.cordis.lu/infowin/acts/rus/projects/ac086.htm

A2.2.4 Plant capacity

The total HFC plant capacity is represented below; in the downstream, efficiencies in the order of 4-5 bits/s/Hz can be achieved, whereas in the upstream 2-3 bits/s/Hz is possible.

As shown in Figure 9, most of the downstream band is used by current legacy analogue video; therefore only part of the band can be utilized for IP communication, assuming that digital broadcast video is deployed in parallel with analogue programmes. Ultimately (after 2010), the whole band can be utilized for IP unicast and multicast/broadcast communications.

Figure 9: Bandwidth allocation and capacity

Translating this into average subscriber capacity gives Figure 10:

Figure 10: Upstream and downstream subscriber capacity

The upstream capacity is clearly insufficient for the high bit rate requirements described above, whereas the downstream average capacity (before switchover occurs) is not sufficient at high penetration rates.
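The cell capacities quoted earlier (4.8 Gbps downstream, 200 Mbps upstream) and the resulting per-subscriber figures can be reproduced with a simple calculation. A minimal sketch; the ~50 Mbps payload per 8 MHz QAM256 channel, the upstream efficiency, and the homes-per-node and take-up values are illustrative assumptions, not figures from the text:

```python
# Rough HFC cell-capacity and per-subscriber estimate (illustrative figures).
# Band edges are taken from the text (88-860 MHz down, 5-65 MHz up); the
# per-channel payload, upstream efficiency, homes per fiber node and take-up
# rate are assumptions used only to reproduce the quoted orders of magnitude.

DOWN_CHANNELS = (860 - 88) // 8        # ~96 usable 8 MHz channels
MBPS_PER_DOWN_CHANNEL = 50             # assumed QAM256 payload per channel
UP_BAND_MHZ = 65 - 5                   # 60 MHz upstream
UP_EFF = 3.3                           # assumed upstream bits/s/Hz (TDMA/SCDMA)

HOMES_PER_NODE = 500                   # assumed coaxial cell size
TAKE_UP = 0.3                          # assumed active broadband subscribers

down_gbps = DOWN_CHANNELS * MBPS_PER_DOWN_CHANNEL / 1000   # ~4.8 Gbps
up_mbps = UP_BAND_MHZ * UP_EFF                              # ~200 Mbps
subs = HOMES_PER_NODE * TAKE_UP

print(f"Downstream: {down_gbps:.1f} Gbps per cell, "
      f"{down_gbps * 1000 / subs:.0f} Mbps per active subscriber")
print(f"Upstream:   {up_mbps:.0f} Mbps per cell, "
      f"{up_mbps / subs:.1f} Mbps per active subscriber")
```

Under these assumptions the upstream works out to only a couple of Mbps per active subscriber, which illustrates the bottleneck described above.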

A2.2.4.1 Alternatives for increasing the plant capacity

Changing the split between upstream and downstream

If more traffic is required for upstream communications, the upstream band can be extended to 200 MHz, at the expense of upgrading the upstream and downstream filters in the coaxial amplifiers, and assuming that an upstream digital tuner is possible in this frequency range. This solution does not solve the overall capacity issue, but allows the sharing between upstream and downstream traffic to be adapted.

The technical challenges are related to the realization of a low-cost system, i.e. a 200 MHz modem without an RF tuner.

Cell segmentation

Simple cell segmentation can be done to increase both downstream and upstream traffic; the advantage of this solution is that it keeps a centralized architecture, which presents major advantages in terms of cost and maintainability. However, keeping an analogue architecture is not a priori optimal, as it requires the use of expensive optical components (especially as CWDM or DWDM techniques have to be used if the number of fibers must be spared).

Figure 11: Cell segmentation

In the downstream, the existing RF channels (6 to 8 MHz wide) can be aggregated in order to provide higher bit rate pipes to the subscriber and maintain compatibility with legacy customer equipment. Band blocks of 100 to 144 MHz are considered achievable in the future to provide up to a 1 Gbps pipe to the user, both at the terminal and at the head-end equipment side.

The second challenge is the optical technology needed to use the fiber bandwidth as efficiently as possible and reduce the spacing between optical carriers. Spacings of 100-200 GHz can be considered in the medium term, and disruptive optical technologies may allow the spacing between carriers to be reduced to 6.25-12.5 GHz, using either analogue or digital transmission (in the latter case an A/D and a D/A converter are used in conjunction with the optical transmitter and receiver respectively; see the paragraph below).

In the case of analogue transmission, the QAM modulation used for the downstream carriers will require good C/N, CSO, CTB and cross-modulation performance, which will be limited both by the optical components and by optical phenomena in the link, such as stimulated Brillouin scattering, stimulated Raman scattering, interferometric noise and polarisation mode dispersion. These phenomena can be modelled and countermeasures can be applied (such as external phase modulation when using MZI-based external modulation).

Channel digitization

In order to overcome the cost and performance issues introduced by analogue optical components, the whole return band can be digitized, as in the diagram shown in Figure 12. The two main issues associated with this solution are the sub-optimal use of the bandwidth (as high-order constellations (up to 256QAM) and mixed TDMA and SCDMA techniques are used in the upstream, 12-bit digitization is required, leading to 2 Gb/s links for each upstream), and the short spacing needed between optical carriers in order to spare the number of fibers used (as developed above).

Figure 12: Return channel architecture

The downstream band can also be segmented and digitized, but the capacity is in this case limited to around 100-150 MHz of bandwidth, which gives a capacity of around 900 Mbps per cell. When high unicast bit rates are targeted, the challenge is to implement such interfaces for small cells, which creates both cost and environmental issues for the interface.
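The 2 Gb/s figure for a digitized return path can be checked with a simple sampling-rate calculation. A minimal sketch; the oversampling margin above the Nyquist rate and the framing overhead are assumptions:

```python
# Back-of-the-envelope bit rate for digitizing the HFC return band.
# The 65 MHz band edge and the 12-bit resolution come from the text; the
# oversampling factor and framing overhead are assumptions.

RETURN_BAND_HZ = 65e6        # upper edge of the 5-65 MHz return band
OVERSAMPLING = 1.25          # assumed margin above the Nyquist rate
BITS_PER_SAMPLE = 12         # needed for 256QAM / SCDMA upstream signals
FRAMING_OVERHEAD = 1.1       # assumed link framing overhead (10%)

sample_rate = 2 * RETURN_BAND_HZ * OVERSAMPLING          # ~162.5 Msps
link_rate = sample_rate * BITS_PER_SAMPLE * FRAMING_OVERHEAD
print(f"Digitized return-path link rate: {link_rate / 1e9:.1f} Gb/s")  # ~2.1 Gb/s
```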

FTTC / mini fiber node architecture

Classical analogue HFC has the advantages of supporting legacy broadcast video and of a centralized architecture, but it can suffer from issues such as:

• the optical transport network is analogue in both directions, leading to relatively high cost, especially if the architecture evolves from a broadcast to a narrowcast model;
• in the upstream, the cost of analogue optical return links can become significant, even if return channel digitization is used, as described above.

In the FTTC (also called mini fiber node) architecture, the HFC network is separated into two networks:

• the optical network, which ensures digital bi-directional data communication between the local node and the mini fiber node;
• the coaxial local network, using classical DOCSIS FDMA/TDMA-SCDMA access in the RF spectrum.

However, classical FTTC architectures do not scale well for HFC, as they do not support legacy broadcast video; a more scalable alternative is the hybrid architecture shown below, which preserves the legacy analogue architecture and progressively introduces broadband "islands" in the network.

Both "mini fiber node" architectures introduce important technological and cost challenges, the main one being the integration of the access node very close to the subscriber. As the access node serves a low number of subscribers (50 to 200), the product cost is critical and requires the integration of all the access node functions in a "system on chip" architecture. Recent studies and realizations show that this SoC is achievable by using the next available (0.10 µm and below) technologies and multi-CPU integration.

Note that the (Euro)DOCSIS standard was designed for large cable areas, and in this new situation, where the access node serves "micro-cells", it is appropriate to evolve the current standards; new projects are now investigating both backward-compatible solutions and new solutions (different use of the cable bands, new physical layers, baseband Ethernet, etc.).

Figure 13: Hybrid architecture with digital overlay for interactive services

Figure 14: Ethernet on cable

Use of the Ethernet MAC layer

More particularly, the low cost of Ethernet-based components makes this technology cost-attractive; moreover, when the cable network is based on a star topology it is easily possible to have baseband (10BASE-T or 100BASE-T) Ethernet transport over the last cable segment, as shown in Figure 14.

The use of baseband Ethernet is adequate in the lower band, as it has good resistance against impairments like ingress and impulse noise. Moreover, the technology is scalable, as it remains compatible with DOCSIS and EuroDOCSIS (used for the upper part of the upstream band), and allows a coaxial segment to be kept for low penetration rates, or a fiber segment to be used if a higher bit rate per cell is needed.

The solution, however, still has to be proven for tree-and-branch networks, which constitute most of the current cable architectures. Another issue is that, unlike DOCSIS 1.1, 2.0 and the future 3.0, which provide layer 2 QoS mechanisms, this solution does not provide QoS, and bandwidth over-provisioning has to be done to respect the QoS constraints introduced by voice and rich media services.

A2.2.5 Physical and MAC layers

A2.2.5.1 Current upstream physical and MAC layers

The standard of choice for the upstream physical and MAC layers is DOCSIS (standardized in the USA by CableLabs) and its European variant, EuroDOCSIS, standardized by ETSI as EN 201 488. There are three versions of the standard: 1.0, 1.1, and 2.0.

Version 2.0, which has been designed to mitigate the plant disturbances efficiently, uses both single-carrier TDMA and SCDMA access techniques, the maximum bit rate per RF carrier being 30 Mbps.

As the cable network is a shared medium, the MAC layer is of the point-to-multipoint type, where the subscribers share an upstream channel using ATDMA; the slot allocation is determined by the central station, called the AN (Access Node), which is the interface between the HFC network and the backbone.

The standard is based on IP packet transmission, but IP packet fragmentation is possible to respect the jitter constraints of services like IP telephony when mixing data and voice services. EuroDOCSIS also defines a per-flow QoS description, which allows both IntServ and DiffServ types of architecture to be supported in the access network (a mapping between the RSVP QoS parameters and the MAC layer QoS parameters is defined). Multicast connections and the mapping to IGMP are also described at the MAC level; furthermore, the standard includes layer 2 unicast and multicast encryption and authentication tools to ensure subscriber privacy and prevent terminal cloning. Security is therefore supported for both unicast and multicast sessions.

For the future capacity requirements, as 100 Mbps peak bit rates upstream and Gbps rates downstream are targeted, some adaptations and changes in the downstream and upstream physical and MAC layers are necessary.

A2.2.5.2 Downstream evolution

The DVB-C physical layer which is used in the downstream can evolve as follows.

In the downstream, higher-order single-carrier constellations (256QAM, 1024QAM) can be used if the plant is of good quality (limited CSO and CTB) and has limited clipping disturbances; however, the achievable bit rate remains limited. A better or complementary solution may be to use an assembly of multiple carriers to stay compatible with legacy DVB-C systems. A new terminal would have the capability to decode multiple single channels simultaneously (a block of 16 channels would correspond to a Gbps capacity). Issues related to this evolution are the realization of a one-chip, one-tuner solution with a 100-150 MHz channel bandwidth.

This technique, called "channel bonding", would eventually allow several models to be supported (see the sketch after this list):

• increasing the capacity to the subscriber while adopting solutions compatible with the current DOCSIS paradigm (more oriented towards a QoS-aware MAC layer and flow-by-flow admission and reservation);
• adopting alternative paradigms like Gigabit Ethernet to the subscriber; although this would decrease the overall traffic efficiency, the lower cost and the simplicity of the solution could justify this evolution.
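A minimal sketch of the channel-bonding idea: packets destined for one subscriber are spread over several legacy-width QAM channels, and the aggregate rate is the sum of the per-channel payloads. The ~50 Mbps per-channel payload and the simple round-robin distribution are assumptions for illustration; real DOCSIS 3.0 bonding adds sequence numbers and resequencing, which are omitted here.

```python
# Illustrative downstream channel bonding: N legacy QAM channels are treated
# as one logical pipe. The per-channel payload is an assumed round figure;
# a higher-order constellation would bring the 16-channel block close to 1 Gbps.

from itertools import cycle

N_CHANNELS = 16
MBPS_PER_CHANNEL = 50
print(f"Bonded capacity: {N_CHANNELS * MBPS_PER_CHANNEL / 1000:.1f} Gbps")

def bond(packets, n_channels=N_CHANNELS):
    """Spread packets over the bonded channels round-robin."""
    queues = [[] for _ in range(n_channels)]
    for ch, pkt in zip(cycle(range(n_channels)), packets):
        queues[ch].append(pkt)
    return queues

queues = bond([f"pkt{i}" for i in range(40)])
print([len(q) for q in queues])   # packets queued per bonded channel
```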

A2.2.5.3 Upstream and MAC layer evolutions

In the upstream, a single carrier will be difficult to achieve with a bandwidth exceeding 6 MHz, due to ingress and impulse noise limitations in the lower part of the band. A multi-carrier technique is the right solution to address both the ingress (frequency granularity) and impulse noise (larger symbol width) limitations; filtered multitone (FMT) techniques like DWMT or DMT are more efficient, as they limit both the ingress noise effect on adjacent carriers and the ICI of unsynchronized carriers during ranging. A total upstream channel bandwidth of 25-30 MHz may be achievable, leading to upstream peak bit rates of 100-150 Mbps. Additional clipping can be an issue compared to single carrier, but the use of non-synchronized carriers can solve this issue.

A mixed TDMA/SCDMA access scheme can be applied to bring optimum efficiency to the channel, as TDMA is well adapted to burst transmission over a clean sub-channel, whereas CDMA optimizes the throughput. Turbo codes or other capacity-approaching codes will bring an additional gain to the system.

Technically these solutions may appear optimal; however, a strong requirement is compatibility with legacy EuroDOCSIS 1.x and 2.0 systems (at least with the ATDMA option). In consequence, the preferred alternative, which is currently being examined and finalized in the DOCSIS and EuroDOCSIS 3.0 committees, is the same type of channel bonding technique as in the downstream; this would similarly allow the cost of upstream capacity to be decreased.

The MAC layer can be derived from the current DOCSIS MAC layer, with its current mechanisms (ranging, slot allocation) adapted to the new physical layer.

The MAC layer is adapted to a mix of data and VoIP services, the latter using fixed-bit-rate coding with silence suppression (G.711, G.729, G.723.1). The small bit rate required by voice telephony services does not require further optimization.

However, video services will occupy a significant part of the upstream and downstream channels, and the coding schemes are VBR-based with high short-term and long-term variations (when constant video quality is targeted). In the case of MPEG, the produced streams are VBR and bursty over different time scales. Current DOCSIS mechanisms are not optimal for these hybrid CBR/VBR streams and need some adaptation. A second issue occurs with scalable coding schemes: MPEG-2 and MPEG-4 (including H.264) schemes are not very scalable as they use block transforms, whereas wavelet-transform-based schemes enable fine-grain scalability. Where video services dominate the downstream or upstream traffic, a second parameter, the class of service (or an equivalent parameter), is needed; in case of network congestion it allows certain classes of traffic to be discarded (as in WRED). For upstream traffic, this type of mechanism has to be managed by the terminal (queue management), or better as a layer 2 mechanism by the CMTS (in the latter case, either different flows per class of service have to be available to the terminal, or special priority tags have to be set on the requests coming from the terminal within a service flow). More generally, investigation is needed to find the best overall scheme for QoS reservation in the cable network (or in any point-to-multipoint architecture).

A2.2.5.4 Data plane QoS features

As mentioned above, the cable access architecture supports an IntServ model for QoS. As it is critical to optimize downstream and upstream utilization, a generic layer 2 mechanism, Payload Header Suppression, is defined in EuroDOCSIS, which allows the traffic to be optimized on a per-session basis; some more efficient proprietary techniques (Broadcom) or TCP-related mechanisms (TI) are also used.
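A minimal sketch of the Payload Header Suppression idea: header bytes that are constant for a given session are replaced by a short rule index on the cable link and restored at the other end. The rule format below (an index plus the cached byte string) is a simplification for illustration, not the exact DOCSIS/EuroDOCSIS encoding.

```python
# Simplified Payload Header Suppression (PHS): constant per-session header
# bytes are stripped on the sending side and restored from a cached rule on
# the receiving side. The rule structure is illustrative only.

rules = {}  # rule_index -> bytes that are constant for this session

def add_rule(index: int, constant_header: bytes) -> None:
    rules[index] = constant_header

def suppress(index: int, packet: bytes) -> bytes:
    """Drop the known constant header and prepend the one-byte rule index."""
    header = rules[index]
    assert packet.startswith(header), "packet does not match the PHS rule"
    return bytes([index]) + packet[len(header):]

def restore(frame: bytes) -> bytes:
    """Re-insert the cached header on the other side of the link."""
    index, payload = frame[0], frame[1:]
    return rules[index] + payload

# Example: a 28-byte constant IP/UDP header prefix cached as rule 1.
add_rule(1, b"\x45\x00" + b"\x00" * 26)
pkt = b"\x45\x00" + b"\x00" * 26 + b"voice payload"
small = suppress(1, pkt)
assert restore(small) == pkt
print(f"{len(pkt)} bytes -> {len(small)} bytes on the cable link")
```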

Further study is certainly needed in this field, which could be done in common with other point-to-multipoint architectures (wireless, satellite, terrestrial).

A2.2.6 Open access and related issues

There is a requirement to mandate ILECs with significant market power to open their access networks to CLECs; this is called "open access", or local loop unbundling.

To give a subscriber dynamic open access to any ISP, a tunnel has to be created between the subscriber CPE and the ISP router; moreover, the subscriber has to be dynamically provisioned with the corresponding IP and QoS parameters. Several solutions are possible:

• Using PPP to establish a tunnel: a layer 2 (PPPoE) or layer 3 (L2TP) tunnel can be established, or a combination of both. PPPoE can be a logical solution, since DOCSIS is based on Ethernet; the overhead (8 bytes) introduced by this technique is limited, and multiple links with different ISPs can be established by the same subscriber, but the solution cannot cross layer 3 devices. The drawbacks of the PPPoE protocol are the following:
  - QoS and multicast are not supported;
  - in principle PPPoE has to be supported by the CPE software;
  - all communications are centralized (peer-to-peer communications between subscribers belonging to the same network have to go through the ISP);
  - it is difficult to monitor and classify the traffic.
• L2TP is another solution, which has the advantage of crossing layer 3 devices, but it is not widely supported and introduces a large overhead. L2TP can be used between the BAS (PPPoE aggregation router) and the ISP router to create aggregated ISP tunnels.
• Policy-based routing is a possible solution, where the access router checks the source address of an uplink traffic packet and routes the packet to the right ISP according to this information. The advantages are full compatibility with DOCSIS 1.1/2.0 and support of QoS and multicast; the drawbacks are scalability, complex processing at the CMTS side, and the requirement to pre-provision the cable network with public IP addresses belonging to all the ISPs.
• VLAN-based solutions combine the major advantages of the two solutions mentioned above: they are compatible with DOCSIS, introduce a low packet overhead (2 bytes), and reduce the complexity of the cable devices (CMTS). They support the QoS and multicast paradigms. Authentication and connection to the right ISP domain can be provided by 802.1x. The drawback is the number of available VLAN tags (limited to 4096), but this issue could in principle be solved by cascaded tagging.

On the cable network side, the CM can receive its VLAN tag during provisioning when connecting, avoiding the use of any particular protocol (a sketch of the VLAN tag bookkeeping is given below).
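A minimal sketch of the VLAN-based approach: each ISP on a cable segment is given an 802.1Q tag out of the roughly 4094 usable values, and a cable modem is bound to its ISP's tag at provisioning time. The data structures and the optional outer "cascaded" (Q-in-Q) tag are illustrative assumptions, not a (Euro)DOCSIS-defined mechanism.

```python
# Illustrative VLAN bookkeeping for open access on a cable segment.
# Tag values 1-4094 are the usable 802.1Q range; the per-segment outer tag
# used for cascaded tagging (Q-in-Q) is an assumption for illustration.

USABLE_TAGS = range(1, 4095)          # 0 and 4095 are reserved

class SegmentVlanMap:
    def __init__(self, outer_tag=None):
        self.outer_tag = outer_tag    # optional Q-in-Q outer tag per segment
        self.isp_to_tag = {}          # ISP name -> inner VLAN tag
        self._free = iter(USABLE_TAGS)

    def register_isp(self, isp: str) -> int:
        """Allocate a VLAN tag for an ISP on this segment."""
        if isp not in self.isp_to_tag:
            try:
                self.isp_to_tag[isp] = next(self._free)
            except StopIteration:
                raise RuntimeError("all 4094 tags used; cascaded tagging needed")
        return self.isp_to_tag[isp]

    def provision_modem(self, modem_mac: str, isp: str):
        """Return the tag stack pushed onto the CM's traffic at provisioning."""
        inner = self.register_isp(isp)
        stack = [inner] if self.outer_tag is None else [self.outer_tag, inner]
        return modem_mac, stack

segment = SegmentVlanMap(outer_tag=100)
print(segment.provision_modem("00:11:22:33:44:55", "isp-a"))
print(segment.provision_modem("66:77:88:99:aa:bb", "isp-b"))
```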

Impact of open access on layer 1 / layer 2 architectures

The layer 1 issue related to multi-ISP provisioning has two aspects. The first is related to the upstream and downstream resource limitations for a given cable area; the resource allocation can be divided into two levels, the aggregated bandwidth reserved for the ISP and the bandwidth contract for each subscriber. Additional complexity is introduced by upstream disturbances, which may require dynamic resource reallocation between RF channels. The second, simpler issue is the total bandwidth limitation in one area, which may introduce important limitations in an ISP's potential subscriber coverage. The requirement to introduce equal treatment between ISPs, and between subscribers, introduces additional complexity into the network.

In conclusion, it is difficult to segment the upstream bandwidth per RF channel and dedicate RF channels to ISPs, as dynamic RF channel reallocation is sometimes necessary; moreover, the QoS offered to a subscriber may depend on the channel used by the subscriber at a given time.

Concerning layer 2 issues, the requirement mentioned above has to be considered and likewise introduces additional complexity for admission control and MAC layer resource management. In principle the requirement to support different types of SLA is covered by a DQoS architecture, but it of course introduces additional complexity in the admission control process.

The impact on the network architecture is significant, as the operator has to set up a tunnel architecture and to provision the sets of IP addresses and service descriptions for each ISP. Once open access has been set up, the IP architecture for voice and multimedia services can be defined independently, as described below.

A2.2.7 IP architecture

A complete IP architecture is defined in the access network for voice communication, and could in general be extended to multimedia services requiring QoS.

Signalling is based on a centralized architecture and a thin-client model (MGCP), and is mostly applicable to telephony services. Two QoS models are defined: the most applicable one (called dynamic QoS) relies on IntServ and defines the access QoS architecture using RSVP; another variant assumes a DiffServ architecture. COPS is the protocol of choice for communication between the CMTS and the CMS for policy enforcement and authorization purposes.

For interdomain signalling, SIP or H.248 is defined for signalling between different domains; interdomain QoS is obviously not precisely defined yet, and is not in the scope of this chapter.

A2.2.7.1 Extension to multimedia services

Figure 15: VoIP architecture

The current HFC network QoS architecture is based on an IntServ paradigm, supporting per-flow QoS. The DOCSIS MAC layer uses a reservation scheme in which the subscriber terminal can request transmission opportunities from the access node, and can therefore support CBR and VBR types of services; moreover, a MAC service flow can be associated with a particular session or group of sessions. The initial resource reservation for a session can be made either directly via RSVP (more particularly, a variant of RSVP optimized for cable access networks) or indirectly via signalling (like SIP or RTSP), where the session description can be translated into MAC QoS parameters.
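A minimal sketch of how a voice session description could be translated into MAC-layer QoS parameters (an unsolicited grant size and interval). The codec table and the fixed header overheads are illustrative assumptions; the actual translation rules are defined by the PacketCable/EuroDOCSIS specifications and are not reproduced here.

```python
# Illustrative translation of a VoIP session description into DOCSIS-style
# upstream QoS parameters. The codec figures and the fixed 40-byte IP/UDP/RTP
# plus 18-byte Ethernet overheads are assumptions (no header compression).

CODECS = {                # payload bit rate (kbit/s), packetization interval (ms)
    "G.711": (64, 20),
    "G.729": (8, 20),
    "G.723.1": (6.3, 30),
}
IP_UDP_RTP_OVERHEAD = 40  # bytes per packet (assumed)
ETHERNET_OVERHEAD = 18    # bytes per frame (assumed)

def service_flow_params(codec: str):
    """Return (grant size in bytes, grant interval in ms) for one voice flow."""
    rate_kbps, interval_ms = CODECS[codec]
    payload = rate_kbps * 1000 / 8 * interval_ms / 1000          # bytes per packet
    grant = int(payload) + IP_UDP_RTP_OVERHEAD + ETHERNET_OVERHEAD
    return grant, interval_ms

for codec in CODECS:
    grant, interval = service_flow_params(codec)
    print(f"{codec:8s}: grant {grant:3d} bytes every {interval} ms")
```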

Current PacketCable VoIP architectures are built on a centralized model, based on a variant of MGCP signalling adapted for cable.

More generally, for multimedia services, the ongoing PacketCable Multimedia project (see Figure 16) is defining a policy architecture which recognizes that a variety of signalling protocols will be used (MGCP, SIP, proprietary) and allows a clear differentiation between the network provider and the application provider. Two distinct domains are defined:

• The Resource Control Domain (RCD), defined as a logical grouping of elements that provides connectivity and network-resource-level policy management in the access cable network domain. The Resource Control Domain includes the AN and the Policy Server (PS).
• The Service Control Domain (SCD), defined as a logical grouping of elements that offer applications and content to service subscribers. The Application Manager resides in the SCD. Note that there may be one or more SCDs related to a single RCD; conversely, each RCD may interact with one or more SCDs.




Figure 16: Packet Cable Multimedia policy architecture<br />

Fundamentally, the roles of the various PacketCable Multimedia components are the following:

• The Application Manager is responsible for application or session-level state, and for applying SCD policy.

• The Policy Server is responsible for applying RCD policy and for managing the relationships between Application Managers and the AN (the PS therefore acts as a PEP for the SCD and as a PDP for the RCD).

• The AN is responsible for performing admission control and managing network resources through DOCSIS Service Flows.

Currently, the interfaces defined by PacketCable Multimedia are related to QoS (using COPS between the AM and the PS, and between the PS and the AN) and to resource accounting (using RADIUS between the AN and the Record Keeping Server).
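To make this policy flow concrete, the sketch below (Python) models, under simplifying assumptions, how an Application Manager request travels through the Policy Server to the AN. The class and message names are illustrative and do not reproduce the actual COPS or PacketCable Multimedia message formats.

    # Simplified, illustrative model of the PacketCable Multimedia policy flow:
    # Application Manager -> Policy Server (pkt-mm-3) -> Access Node (pkt-mm-2).
    # Names and fields are assumptions, not the real COPS encodings.

    class AccessNode:
        def admit(self, subscriber, flow_spec):
            # Admission control and DOCSIS service-flow creation would happen here.
            print(f"AN: creating service flow for {subscriber}: {flow_spec}")
            return True

    class PolicyServer:
        def __init__(self, an, rcd_policy):
            self.an = an
            self.rcd_policy = rcd_policy  # e.g. per-subscriber bandwidth caps

        def request(self, subscriber, flow_spec):
            # The PS applies RCD policy (its PDP role for the RCD) before
            # pushing the decision down to the AN.
            cap = self.rcd_policy.get(subscriber, 0)
            if flow_spec["rate_bps"] > cap:
                print(f"PS: denied, {flow_spec['rate_bps']} bps exceeds cap {cap}")
                return False
            return self.an.admit(subscriber, flow_spec)

    class ApplicationManager:
        def __init__(self, ps):
            self.ps = ps

        def setup_session(self, subscriber, rate_bps):
            # SCD policy (e.g. service entitlement checks) would be applied here.
            return self.ps.request(subscriber, {"rate_bps": rate_bps})

    am = ApplicationManager(PolicyServer(AccessNode(), {"subscriber-1": 2_000_000}))
    am.setup_session("subscriber-1", 64_000)     # admitted by PS and AN
    am.setup_session("subscriber-1", 5_000_000)  # denied by RCD policy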

When looking at the PacketCable and PacketCable Multimedia architectures, and at the general trends in access, an architecture like PCMM covering a limited set of signalling and session control protocols (like SIP and RTSP for instance) has to be defined: as the client paradigm is logically shifting from a thin to a thick client, a solution based on SIP (for both telephony and multimedia) becomes more logical than MGCP. Recent analyses of different signalling evolutions for multimedia have shown that SIP or similar protocols are the most appropriate solution.

<strong>A2.</strong>2.7.2 Extension to video<br />

Legacy video architectures are based on MPEG transport and use DVB standards for transport (DVB-C, DVB-TS), encryption (DVB-CA), CA architectures and interfaces (DVB-Simulcrypt), signalling (DVB-SI) and middleware (DVB-TAM using MHP). The (E)DOCSIS protocol is used for the transport of interactive information and IP traffic.

No precise extension is defined at this stage, as digital video and data-voice architectures were defined and standardized separately. The introduction of video in IP services is still to be addressed for cable. An FP5 project (CASSIC) has begun to analyze this aspect (the project focused on the interface between the middleware (MHP) and the IPCABLECOM architecture).

Different issues can be investigated:<br />

• How to migrate video services in an IP architecture, ensuring transition paths with coexistence of MPEG and<br />

IP<br />

• Definition of a common framework for:<br />

- Content security (including protection and right management)<br />

- Network security<br />

- QoS<br />

- Signalling<br />

- Provisioning (network, service and application)<br />

- Billing<br />

• Defining an open access architecture separating network resources from application.<br />




Centralized versus decentralized architectures for video<br />

Several alternatives exist for the deployment of video services, from a centralized to a decentralised architecture.

As the cable network medium is bi-directional, one can apply the concept of a "network terminal" to a video architecture, where storage is centralized and the customer equipment has no storage, low processing power and very primitive middleware. Most of the applications are executed on the central server.

In addition, the storage capacity issue and the STB price are not necessarily major problems:

• Storage capacity: two arguments can be made in favour of local storage: the price of local storage decreases regularly and is now at a reasonable level, and the subscriber may prefer to have the content available locally.

• STB price: STBs are evolving towards a one-chip architecture including signal processing, MPEG decoding and processing, and all the broadband access functions (including routing, provisioning and signalling stacks); if a good yield is achieved, the price difference between low-end and high-end chips can be insignificant. The resulting price gap between "thin client" and "thick client" STBs can become small. The figure below represents an example of price range estimation for thin and thick client STBs, with the additional price of storage.

Another issue of this solution, as compared to hybrid architectures with storage in the terminal, is that it requires a major upgrade of the network in order to provide the required capacity.

Figure 17: Centralized architecture for VOD (broadcast-content and on-demand-content servers connected through the cable network to "thin client" STBs)

Decentralized architecture for VOD<br />

There are several options for decentralized architectures; two significantly different ones are those with and without local storage. Although local storage is not necessarily required currently, the home network segment traffic can become significantly higher, especially when HD services and user mobility are introduced. The whole network can be considered as a content delivery network with the main server in the Central Headend and proxy servers in the Local Headend and in the subscriber's Home Network.

A decentralized architecture can be set up with both thin and thick clients, and the application part can be split up between the terminal storage and the different servers.

Such an architecture can be enabled with defined frameworks for:

• Content delivery with local storage (including storage in the terminal)

• APIs for thick and thin clients.

These requirements are broader than for cable networks only, and can be applied to any access network (see 112).


<strong>A2.</strong>2.7.3 Convergence with mobile services<br />


The present paragraph does not provide a complete framework for convergence with mobile but develops the issues of QoS and session mobility.

As mentioned above, the PacketCable Multimedia architecture provides a framework for policy and resource management in the cable network. The alternative architectures for QoS in fixed access networks are developed mainly by TISPAN and the DSL Forum. The interesting feature of TISPAN is that its target is to adapt the IMS architecture developed by 3GPP.

The simplified diagram of the different architectures is given in Figure 18. As shown, these architectures are similar, whereas the data formats and signalling are different.

Figure 18: A simplified diagram of the different architectures: the generic model (AS, NRC, network elements, interfaces I1/I2), the PacketCable Multimedia model (Application Managers, Policy Server, CMTS, interfaces pkt-mm-3/pkt-mm-2) and the TISPAN model (AS, SPDF, A-RACF, network elements, interfaces Gq'/Re/Ia).
NRC: Network Resource Control
A-RACF: Access-Resource and Admission Control Function
SPDF: Service Policy Decision Function

Basically there are three layers:

• The application layer, which handles application signalling with the end-user application, and application provisioning.

• The network resource control layer, which handles network resources and policies.

• The network elements, which perform all data plane aspects like congestion control, admission, etc.

As shown in the figure, the TISPAN, IMS and PacketCable Multimedia architectures are similar; different evolution paths can be imagined, one end being convergent architectures, the other end being interoperable architectures (multi-signalling capabilities at the application and resource management levels).

A likely evolution would be interoperability between the different architectures defined (also with mobile architectures like the ones defined by 3GPP).

The table below presents a summary of the protocols used for the mentioned architectures.

GENERIC     PC MM                  TISPAN
AS          AM                     AF
NRC         Policy Server          SPDF + A-RACF
I1          COPS (PacketCable)     Diameter / H.248
I2          COPS (PacketCable)     H.248 / EMP

Table 8: Used protocols

<strong>A2.</strong>2.8 Security

There are several distinct levels of security in the HFC network:<br />

<strong>A2.</strong>2.8.1 HFC network security:<br />

DOCSIS BPI+ provides a layer 2 security mechanism which enables user terminal authentication, payload content encryption (using secret-key algorithms, e.g. DES or triple DES) and key exchange mechanisms (using public-key algorithms and hashing). This layer 2 security ensures user authentication and privacy in the HFC network (as HFC is a shared medium), and provides a reliable layer 2 mechanism which can be used by end-to-end services and applications.
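The sketch below (Python, standard library only) gives a purely structural illustration of such a layered key scheme: a terminal is authenticated, a key-encryption key is (notionally) delivered, and per-flow traffic keys derived from it protect the payload. The names, the flow and the use of a hash-based tag instead of a real DES/3DES cipher are all assumptions; this is not the BPI+ key management protocol itself.

    # Structural illustration of a layered key scheme (not the real BPI+ protocol).
    import hashlib, hmac, secrets

    def authenticate_terminal(cert_fingerprint, trusted_fingerprints):
        # Placeholder for certificate-based terminal authentication.
        return cert_fingerprint in trusted_fingerprints

    def derive_traffic_key(key_encryption_key, flow_id):
        # Derive a per-flow traffic key by hashing the KEK with the flow id.
        return hashlib.sha256(key_encryption_key + flow_id.encode()).digest()

    def protect_payload(traffic_key, payload):
        # Stand-in for the secret-key cipher: attach an integrity tag so the
        # sketch stays self-contained and runnable.
        tag = hmac.new(traffic_key, payload, hashlib.sha256).hexdigest()
        return payload, tag

    trusted = {"ab:cd:ef"}
    if authenticate_terminal("ab:cd:ef", trusted):
        kek = secrets.token_bytes(16)  # would be delivered via a public-key exchange
        tk = derive_traffic_key(kek, "flow-1")
        print(protect_payload(tk, b"subscriber traffic"))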

<strong>A2.</strong>2.8.2 Service/application level security<br />

For voice services, PacketCable defines, in summary, IPsec mechanisms for signalling and secret-key algorithms (based on RC4 and AES) for end-to-end voice content encryption.

For SD/HD broadcast video (and other services like data), conditional access systems are used according to the DVB-CA and DVB-Simulcrypt standards. Content encryption uses secret-key encryption (based on DVB-CS algorithms), whereas the key exchange and billing mechanisms are proprietary.

Interfacing between the different functional elements of the conditional access system and the video network elements (video multiplexer) is defined by the DVB-Simulcrypt standard. This allows network openness, i.e. several conditional access systems can interoperate with the same network. CA systems are designed to operate in a uni-directional system.

For multimedia services in general, different systems are defined or under definition (DVB, 3GPP, OMA, MPEG21, IETF, ISMA).

As mentioned above, whereas the network-level security is defined and stable, a variety of solutions are used for service/application-level security, each specific to the application (point-to-point or point-to-multipoint voice services do not have the same requirements as video broadcast, for instance). However, it appears possible to define a unique security system (handling both encryption and rights management) with the following requirements:

• suitable for both unicast and multicast types of applications;

• allowing different levels of security and rights management;

• linkage of security and rights management with QoS (for scalable content, rights management and encryption have to be dynamically adaptable to the QoS effectively delivered to the user);

• supporting a unidirectional medium but optimised for bi-directional transmission.



<strong>A2.</strong>2.9 Home network<br />


Home networking can be considered as a general topic, but it also has some specific aspects related to cable; part of these aspects is covered by the CableLabs CableHome project:

Figure 19: CableHome architecture

The CableHome architecture consists of network elements, and of functionality within those network elements, at the Headend, the residential gateway and the IP devices in the Home LAN. CableHome 1.0 defines functions for provisioning, management, security and packet handling within the residential gateway. CableHome 1.1 adds QoS, firewall and discovery (including home network elements and services) functionality to CableHome 1.0.

<strong>A2.</strong>2.9.1 CableHome 1.0<br />

CableHome 1.0 defines functions for provisioning, management, security, and packet handling within the<br />

residential gateway. Following are descriptions of these functions.<br />

Provisioning<br />

The CableHome 1.0 provisioning functions consist of a DHCP client, a DHCP server, configuration file processing, and a time-of-day client. CableHome 1.0 defines two provisioning modes, the DHCP provisioning mode and the SNMP provisioning mode. The DHCP provisioning mode is compatible with the DOCSIS 1.0 provisioning infrastructure and requires no authentication; in this mode, DHCP messages contain information about where the RG can find its configuration file. The SNMP provisioning mode is similar to the PacketCable provisioning process, and the RG is authenticated via a Kerberos server; in this mode, configuration file name and location information is passed to the RG via secure SNMPv3. In both modes, the RG then initiates a TFTP session to download the specified configuration file. CableHome 1.0 configuration files are composed of TLVs (like DOCSIS) and include a hash to verify file integrity.
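As an illustration of the TLV-plus-hash idea, the sketch below (Python, standard library only) parses a simple type-length-value buffer and checks an integrity hash. The type codes, the hash placement and the hash algorithm are assumptions for illustration, not the CableHome or DOCSIS file encodings.

    # Illustrative TLV configuration-file parser with an integrity check.
    import hashlib

    def parse_tlvs(buf: bytes):
        tlvs, i = [], 0
        while i < len(buf):
            t, length = buf[i], buf[i + 1]
            tlvs.append((t, buf[i + 2:i + 2 + length]))
            i += 2 + length
        return tlvs

    def verify(config: bytes, expected_digest: bytes) -> bool:
        # In this sketch the hash covers the whole TLV body.
        return hashlib.sha256(config).digest() == expected_digest

    # Toy config: type 1 = TFTP server name, type 2 = an opaque parameter.
    body = bytes([1, 4]) + b"tftp" + bytes([2, 3]) + b"abc"
    digest = hashlib.sha256(body).digest()
    print(parse_tlvs(body))      # [(1, b'tftp'), (2, b'abc')]
    print(verify(body, digest))  # True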

The DHCP client in the RG acquires IP address leases from the DHCP server in the cable operator’s data<br />

network, and the DHCP server implemented by the RG assigns private IP addresses to networked elements in<br />

the home. CableHome 1.0 defines two WAN side interfaces for the residential gateway, the WAN-Management<br />

IP interface, and the WAN-Data IP interface, each requiring a unique MAC address.<br />




Management<br />

Like DOCSIS, CableHome management is SNMP-based and consists of a wide variety of RG MIBs that allow configuration and control of the CableHome suite of functionality. Via the NMS system in the Headend, the cable operator can configure the RG functionality as most appropriate for its particular home-networking service and to satisfy specific customer needs.

Like DOCSIS 1.1 & 2.0, CableHome 1.0 defines two management modes and a number of standard SNMP traps for event reporting.

In addition, LAN IP Device connectivity and throughput test functionality has been defined for the RG, which<br />

employs ping-like exchanges between the RG and LAN IP Devices.<br />

Security<br />

CableHome 1.0 security consists of secure software and configuration file download, mutual authentication, a<br />

firewall, and secure SNMPv3 management messaging. Configuration file integrity is ensured via a hash<br />

function, and the residential gateway authenticates downloaded images using code verification checks supplied<br />

within the configuration file.<br />

The residential gateway is authenticated via KDC servers and device certificates. The keying material for<br />

SNMPv3 is provided via Diffie-Hellman in DHCP Provisioning Mode, and via Kerberos in SNMP Provisioning<br />

Mode.<br />

The firewall functionality consists of a standardized download mechanism, triggered in the configuration file, or<br />

via SNMP. The integrity of the firewall configuration files is ensured via a hash within the firewall<br />

configuration file.<br />

In addition, firewall event monitoring is provided via SNMP MIB variables and event messages, which indicate<br />

suspicious activities.<br />

Packet Handling<br />

CableHome 1.0 provides NAT and NAPT functions within the residential gateway.

These functions allow for IP address conservation and also provide a common logical IP sub-network in the home. In addition, "Passthrough" addressing is defined, whereby public IP addresses are served directly to devices in the home from the Headend DHCP server. Passthrough addressing is meant to support applications that do not work well with NAT (such as PacketCable telephony applications). Mixed-mode addressing is also supported, which allows a combination of NAT/NAPT and Passthrough addressing simultaneously. Finally, an "Upstream Selective Forwarding Switch" function is defined in the RG, which keeps home traffic local to the LAN.

<strong>A2.</strong>2.9.2 CableHome 1.1<br />

CableHome 1.1 builds upon CableHome 1.0, the primary additions being QoS, firewall, and discovery<br />

functionality. These additional CableHome 1.1 features are described below.<br />

QoS<br />

CableHome 1.1 defines a QoS system for LAN IP devices, meant for services for which quality assurances are important. CableHome 1.1 QoS employs a priority-based solution, which allows specified applications to have priority access to the home network physical media. The priorities are also used in forwarding decisions within the residential gateway.

Applications on the home network are identified by the IP address and port upon which they communicate. The cable operator assigns priorities for these applications via a QoS MIB in the RG. LAN IP devices pass a list of pertinent resident applications to the RG via SOAP/XML/HTTP messaging, and the RG replies with the assigned priorities for each of the applications advertised. The LAN IP devices use these assigned priorities when sending traffic, and the RG uses them when forwarding LAN IP traffic within the home.
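This exchange can be pictured with the small sketch below (Python); the operator-assigned priority table and the advertisement format are illustrative assumptions, not the CableHome QoS MIB or its SOAP schema.

    # Illustrative priority assignment: the RG holds an operator-provisioned
    # priority table (stand-in for the QoS MIB) and answers application
    # advertisements from LAN IP devices.
    OPERATOR_PRIORITIES = {
        ("any", 5060): 6,  # VoIP signalling gets high priority (example values)
        ("any", 554): 5,   # streaming control
    }
    DEFAULT_PRIORITY = 1

    def assign_priorities(advertised_apps):
        """advertised_apps: list of (device_ip, port) tuples sent by a LAN device."""
        reply = {}
        for device_ip, port in advertised_apps:
            prio = OPERATOR_PRIORITIES.get((device_ip, port),
                                           OPERATOR_PRIORITIES.get(("any", port),
                                                                   DEFAULT_PRIORITY))
            reply[(device_ip, port)] = prio
        return reply

    print(assign_priorities([("192.168.0.10", 5060), ("192.168.0.11", 8080)]))
    # {('192.168.0.10', 5060): 6, ('192.168.0.11', 8080): 1}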

Firewall<br />

The CableHome 1.1 firewall definition includes standardized firewall configuration, a minimum set of firewall<br />

functionality, a list of applications that are required to work through the firewall, and a set of MIBs to support all<br />

of this functionality.<br />

While CableHome 1.0 requires firewall functionality and a policy download mechanism, it does not specify the<br />

format or contents of the firewall policy file. CableHome 1.1 standardizes the firewall configuration to provide a<br />

uniform firewall management scheme.<br />




The firewall configuration is accomplished via a filter MIB in the RG, which is modelled after the DOCSIS IP filter table. IP packets are filtered based upon packet attributes, or upon the RG interface through which they arrive. The firewall configuration also allows limits to be placed on day and time, which can serve as the basis for simple parental control applications. A minimum set of filter rules is defined as a default firewall policy.
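A minimal sketch of such attribute-plus-schedule filtering is given below (Python); the rule fields and the day/time encoding are assumptions, not the actual filter MIB.

    # Minimal sketch of packet filtering on attributes plus a day/time window.
    from datetime import datetime

    RULES = [
        # (dest_port, action, allowed_weekdays, allowed_hours)
        (80,  "allow", {0, 1, 2, 3, 4, 5, 6}, range(0, 24)),
        (554, "allow", {0, 1, 2, 3, 4},       range(17, 22)),  # weekday evenings only
    ]
    DEFAULT_ACTION = "deny"

    def filter_packet(dest_port: int, when: datetime) -> str:
        for port, action, weekdays, hours in RULES:
            if port == dest_port and when.weekday() in weekdays and when.hour in hours:
                return action
        return DEFAULT_ACTION

    print(filter_packet(80,  datetime(2006, 1, 9, 10)))  # allow
    print(filter_packet(554, datetime(2006, 1, 9, 10)))  # deny (outside the time window)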

Discovery<br />

CableHome 1.1 also defines a discovery feature, whereby the cable operator is provided with information about devices and applications in the home. The information is passed from LAN IP Devices to the RG via SOAP/XML/HTTP messaging, and includes information such as device type, manufacturer, hardware revision, serial number, model name and number, software version, physical address, and resident applications (via port ID). The RG saves this information and makes it available to the MSO via MIBs.

Miscellaneous Additions

CableHome 1.1 requires the RG to receive and process SNMP traffic arriving from the LAN interfaces. Also defined is static port forwarding, which supports servers in the home. Incoming traffic that is not in response to a message initiated by a LAN IP Device is routed to a configured private LAN IP address. CableHome 1.1 also defines VPN support in the form of smart port recognition.

VPN applications typically require key exchange communications to occur on port 500, and if port translation occurs at the RG, the key exchange messaging is broken. CableHome 1.1 RGs are therefore required to recognize, and not translate, port 500. Configuration file authentication and optional encryption have also been added in CableHome 1.1 for the DHCP provisioning mode; this is accomplished via Transport Layer Security (TLS, RFC 2246).

Again, how to integrate video in this architecture is not defined yet, and the different issues related to DRM in the home, content location, storage and user mobility have still to be addressed.

Cable home network<br />

As an important proportion of home networks are cabled, and the cable bandwidth is adequate for wireless signal transmission, work is in progress in CENELEC (committee number) to define a physical interface for wireless LAN transmission through cable.

<strong>A2.</strong>2.10 Potential issues and topics to develop<br />

As a summary of the different issues and gaps identified and developed in the document, the following R&D topics have to be investigated for HFC:

• Techno-economic analysis of the HFC network architectural evolution for high downstream and upstream bit rate access: decentralized and centralized architectures;

• Related technological issues: optical components, terminals and cable routers, tuners, RF components;

• Analysis and modeling of the upstream band (5-55 MHz) for upstream capacity optimization (noise, Ingress, non-linear effects). Work has been performed on the topic, but modeling and measurement methods (i.e. how to characterize the upstream disturbance dynamically in an operational situation) have still to be investigated;

• Optimization of the upstream physical layer and dynamic adaptation to the upstream conditions (spectrum management);

• IP architecture including video services and providing QoS, provisioning, security, AAA and open access:

- Signalling architecture definition

- Interface definitions for QoS and security between the application, service and network layers: evolution of the PacketCable Multimedia architecture

• Extension of the framework to the Home environment, related to home storage and video support:

- QoS provisioning in the Home; connection between CableHome and PacketCable Multimedia

- Interconnection between Home Networks

- Device provisioning

- Terminal and session mobility with mobile networks

Evolutions for Cable Home Network<br />

• Let us first note that the CableHome architecture framework and principles could be applied in conjunction with any access technology other than cable (WiMAX, etc.).

• As far as cable is concerned, CableHome, jointly with PacketCable Multimedia, could provide a comprehensive end-to-end architecture.




<strong>A2.</strong>2.11 Appendix 1: Analysis of disturbances in cable upstream<br />

<strong>A2.</strong>2.11.1 Impulse noise<br />

Impulse noise is defined as including Impulse length of limited duration (


<strong>A2.</strong>2.11.2 Ingress noise<br />


Ingress is defined as a frequency-selective impairment, in contrast with Impulse noise, and can be categorized as follows:

• Narrowband Ingress injected into the cable network itself: the major causes are identified as AM short-wave, amateur band and maritime radio transmissions; the amplitude of the injected Ingress varies during the day according to the propagation conditions, and this slow amplitude variation can be as high as 20 dB;

• Location-specific interference: electronic equipment in the subscriber premises can inject a high level of Ingress into a poorly shielded coaxial installation.

The relative importance of these two sources of Ingress will vary according to the cable network architecture; for instance:

• A cable network with aerial cabling will be more sensitive to Narrowband Ingress, whereas man-made noise will be of primary importance in an "underground" network;

• Some networks with a passive return path (working therefore at low operating levels) will be more sensitive to Narrowband Ingress.

<strong>A2.</strong>2.11.3 Common Path Distortion<br />

Common Path Distortion (CPD) is produced by poor contacts in the cable network; these contacts create a rectifier effect, which produces mainly second-order non-linear distortion products (and to a minor extent third-order products) from the downstream carriers. The main frequencies at which CPD will occur are the multiples of the channel frequency spacing (multiples of 6, 7 or 8 MHz according to the frequency plan).

In general the CPD effects can be accurately calculated using a limited Volterra series; in practice a good simplified model has been developed, assuming that the non-linear behaviour does not depend on frequency (Taylor expansion) and that the major part of the analogue channels' energy is located at the vision and sound carrier frequencies.
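Since the CPD beats fall at multiples of the channel spacing, the affected upstream frequencies can be listed directly; the small sketch below (Python) prints the beats falling into the 5-55 MHz upstream band mentioned above, for 8 MHz and 6 MHz frequency plans.

    # List the CPD beat frequencies (multiples of the downstream channel spacing)
    # that fall inside the upstream band.
    def cpd_beats(channel_spacing_mhz=8, band=(5, 55)):
        lo, hi = band
        return [n * channel_spacing_mhz
                for n in range(1, hi // channel_spacing_mhz + 1)
                if n * channel_spacing_mhz >= lo]

    print(cpd_beats())   # [8, 16, 24, 32, 40, 48]
    print(cpd_beats(6))  # [6, 12, 18, 24, 30, 36, 42, 48, 54]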

Figure 21: Example of CPD spectrum measurement at 24 MHz (SCTE)<br />

In summary the CPD frequencies are well determined as fixed by the downstream frequency plan, and the level<br />

of CPD can vary broadly during the day.<br />

<strong>A2.</strong>2.11.4 Clipping<br />

Two non-linear devices will contribute to distortion and clipping in the upstream:

• Upstream amplifiers, which can be characterized by CSO, CTB and noise figure for second-order non-linear distortion, third-order non-linear distortion and noise respectively.




• Upstream laser transmitters can use uncooled Fabry-Perot or DFB lasers, with or without optical isolators. A detailed description is out of scope, but let us recall that:

- Laser diodes show a hard clipping behaviour below the threshold current;

- The noise and non-linear distortion behaviour of the laser diode is complex and will depend both on the amplitude and on the frequency of the incoming signals, according to the following optical phenomena:

- Discrete reflections back to the laser diode can change both the noise and distortion characteristics;

- A Fabry-Perot external cavity created by two reflections will create non-linear effects (due to the laser chirping);

- Fibre double backscattering combined with homodyne detection at the receiver will induce a 1/f frequency-dependent noise in the lower part of the spectrum;

- Mode partition noise associated with fibre dispersion and polarization mode dispersion can also affect the optical transmission characteristics.

As a result, a return path optical system is better characterized by its Noise Power Ratio (NPR), which determines the range of total input power that is acceptable for a given C/(N+I).

Figure 22: Example curves of NPR and BER vs. composite power input to the laser, for different constellations (QPSK, QAM16, QAM64)

The NPR approximates the acceptable operating level for a given spectral efficiency (via the C/(N+I)<br />

requirement).<br />
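As a worked illustration, the sketch below (Python) reads a sampled NPR curve and returns the range of composite input power over which the NPR stays above the C/(N+I) required by a given constellation. The curve samples and the C/(N+I) requirements are invented for illustration only.

    # Find the acceptable laser operating window from a sampled NPR curve:
    # the window is where NPR >= required C/(N+I).  All values are illustrative.
    NPR_CURVE = [  # (composite input power in dBmV, NPR in dB)
        (10, 20), (15, 30), (20, 38), (25, 41), (30, 37), (35, 25),
    ]
    REQUIRED_CNI = {"QPSK": 14, "QAM16": 21, "QAM64": 28}

    def operating_window(constellation):
        need = REQUIRED_CNI[constellation]
        ok = [power for power, npr in NPR_CURVE if npr >= need]
        return (min(ok), max(ok)) if ok else None

    for c in REQUIRED_CNI:
        print(c, operating_window(c))
    # QPSK (10, 35) / QAM16 (15, 35) / QAM64 (15, 30): higher-order constellations
    # leave a narrower operating window, as in Figure 22.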

The NPR estimation is accurate when the input signal to the laser is Gaussian, i.e. for instance if the signal is composed of a sufficient number of similar QAM carriers. Some slight corrections should be made in the following cases:

• The additional power introduced by Ingress and Impulse noise may be significant and may require an additional margin;

• The total signal can differ slightly from a Gaussian profile; this can be the case for a heterogeneous upstream spectrum containing different types of carriers, for example CDMA and TDMA carriers;

• The amount of error correction applied on each carrier, linked to the service availability requirement, will also require some correction.

An enhanced NPR curve can take these situations into account, and will determine more exactly the required laser operating level.

The Ingress noise power can be significantly higher than the useful carrier power; in such cases:

• The most frequent situation is that the Ingress situated between 5 and 10 MHz is the most disturbing; a low-pass filter can be placed at the input of the impaired transmitters;

• If Ingress is mainly produced by the subscriber, filters or noise blockers can be used at the subscriber premises.

In conclusion, the described disturbances affect mainly the low-frequency (5-25 MHz) part of the upstream band, and each creates a different kind of impairment. The situation is particularly critical in the 5-20 MHz band, where both Impulse and Ingress noise are important. Optimization of the upstream capacity even in that band is necessary, as it delays a costly plant upgrade for the operator (refer to the economic analysis).



<strong>A2.</strong>3 FTTx Updated 05/01/2006<br />

<strong>A2.</strong>3.1 Introduction<br />


“All-Optical” is the vision for future wired networks, where fibre is used for all wires in the WAN, MAN and access. Nowadays, optical fibres are ubiquitous in the backbone network, and an extension to access networks is the logical next step. Of all available technologies, an optical fibre based access network offers by far the highest speed and can support an unlimited set of services. FTTx (Fibre to the x, where x stands for Curb or Cabinet (C), Premises (P), Building (B), Home (H) or Desk (D)) would thus be a future-proof access solution.

With FTTH or FTTB the optical connection reaches the home or the building of the end-user, which implies a tremendous investment. Fibre to the Curb or Cabinet (FTTC) brings the optical fibre to a service node in a nearby location outside the customer premises, which is a more cost-effective solution. Combinations of a fibre optic access network with a traditional twisted pair or coax access network (HFC) or with a wireless access network (FWA) are intermediate solutions.

Traditionally, an optical fibre access network was not realistic due to the very high installation and equipment costs. The dropping cost of the end equipment and new roll-out techniques will, however, make FTTx a feasible option. At the moment, optical components are getting cheaper but are still relatively expensive, and it is important that the service fees meet consumer price points.

Finally, are there sufficient applications for the huge bandwidth which becomes available with FTTx? There are some potential drivers, but the extent of their demand is not clear: e.g. video conferencing, multimedia entertainment, long-distance learning, gaming, video on demand. And will increasing bandwidth spur new applications, or will new applications spur bandwidth? Speed seems to be the most important driver for new applications, according to the view of the leading broadband countries in Asia.

<strong>A2.</strong>3.2 State of the Art<br />

<strong>A2.</strong>3.2.1 Architecture<br />

The ultimate goal is to bring the fibre to the home user by replacing the existing copper/coax cables. The evolution of the network will therefore be limited to the part going from the central office (CO) to the home user. In an optical access network, the CO contains an optical line terminal (OLT) which provides the network interface, and this OLT is connected to one or more optical network units (ONUs) at the user side. The replacement of the links in an access network by optical cable leads to numerous possible topologies.

<strong>A2.</strong>3.2.1.1 Point-to-multipoint connections: Passive Optical Network (PON)<br />

A first option is to make use of point-to-multipoint connections, deploying a PON (Passive Optical Network).<br />

Nowadays, the most important point-to-multipoint configuration of an optical access network is a power<br />

splitting time division multiplexing (TDM)-based PON. A PON is made up of fibre optic cabling, of passive<br />

splitters and couplers that distribute an optical signal through a branched "tree" topology to connectors that<br />

terminate each fibre segment. A PON has some important advantages:<br />

• Point-to-multipoint deployment requires less fibre layout to cover a given area than its point-to-point<br />

counterpart using individual fibres to each customer (the fibre complexity for (C)WDM PON architectures is<br />

comparable).<br />

• The equipment at the CO is also lower cost since one optical interface services an entire network instead of<br />

one dedicated user.<br />

• The PON approach, with its lack of active devices along the fibre route, means that power is needed only at<br />

the fibre’s termination (home user and CO).<br />



Figure 23: Passive Optical Network (PON): OLT in the CO, 1 or 2 fibres, splitter(s) up to 1:32, up to 32 subscribers (ONUs), up to 20 km.

Advantages:
• No remote active equipment
• Fully passive network
• Downstream broadcast allows easy video and data sharing
• Implementation with the lowest possible number of transceivers
• Lower life cycle cost
• Minimum fibre

Disadvantages:
• The same bandwidth has to be divided between several users
• Optical power is split among the output ports, which limits the maximum distance
• The same optical signal is received by all ONUs, raising some concerns about network security
• Upstream bandwidth is not broadcast (less bandwidth than full P2P)
• A strong algorithm is needed to catch upstream traffic (time sharing for the upstream link)
• More complex transceivers (optical power, burst mode capability)

Table 9: Advantages and disadvantages of PONs.

Compared to other access technologies, PON eliminates much of the installation, maintenance and management expenses needed to connect to customer premises. However, a power splitting TDM-PON also has some important inherent drawbacks:

• The same bandwidth has to be divided between several users.

• The optical splitter divides the optical power among its output ports, which causes large insertion losses. This limits the maximum transmission distance possible between the OLT and the ONUs.

• All the ONUs connected to the same optical splitter receive the same optical signal. This is a benefit in case of multicast traffic, but in case of unicast traffic it raises some concerns about network security. Thus, good encryption is of great importance.

• A PON demands that only one ONU is active at a time, but a malicious user that emits light continuously can corrupt the entire upstream transmission.




• Upstream bandwidth is not broadcast; to catch upstream traffic, an intelligent time division multiple access (TDMA) protocol is needed.

Current TDM-PON standards specify line rates up to 2.5 Gb/s and a maximum link reach of 20 km, typically with a split ratio of 1:32.
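These figures can be related with a little arithmetic: an ideal 1:N power splitter costs about 10·log10(N) dB of optical budget, and the TDM line rate is shared among the N users. The sketch below (Python) illustrates this; the excess splitter loss, the 28 dB link budget and the fibre attenuation are assumptions for illustration, not values taken from the standards.

    # Rough PON arithmetic: splitter loss ~ 10*log10(N) dB plus some excess loss,
    # and the TDM line rate is shared among N users.  Budget figures are assumed.
    import math

    def splitter_loss_db(n_ports, excess_db=1.5):
        return 10 * math.log10(n_ports) + excess_db

    def avg_rate_per_user_mbps(line_rate_gbps, n_users):
        return line_rate_gbps * 1000 / n_users

    n = 32
    budget_db, fibre_db_per_km = 28, 0.35
    remaining = budget_db - splitter_loss_db(n)
    print(f"1:{n} splitter loss ~ {splitter_loss_db(n):.1f} dB")
    print(f"average rate per user ~ {avg_rate_per_user_mbps(2.5, n):.0f} Mb/s")
    print(f"budget left for fibre, connectors and margin ~ {remaining:.1f} dB "
          f"(at most ~{remaining / fibre_db_per_km:.0f} km of fibre)")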

In the first-generation optical access networks, the major thrust has been economical deployment, and a power<br />

splitting PON was the most opportune solution. Nowadays, the cost of optical devices has decreased a lot, and<br />

design considerations other than cost will become important. To overcome some of the demerits of a pure power<br />

splitting TDM-PON, some different types of PONs are also available: WDM PONs, WDM power splitting<br />

PONs, WDM PONs with overlay for broadcast. Thanks to the use of WDM, a PON can also set up a virtual<br />

point-to-point connection. WDM has been considered an ideal solution to extend the capacity of PONs without<br />

drastically changing the currently deployed fibre structure. Further, it also shares many benefits of TDM-PON:<br />

e.g. by using an arrayed waveguide grating (AWG) the signal path can still be completely passive.<br />

<strong>A2.</strong>3.2.1.2 Point-to-point connections: Active Node (Ethernet Switch)<br />

Instead of having a PON, it is also possible to deploy an active network, which looks very similar to a PON but with some important differences. The most fundamental one is the replacement of the passive, unmanageable splitters in the field by an active node. An important consequence is that a power line between the CO and the active node will be necessary. Besides a branched tree architecture as used in a PON, an active network can also be deployed with a ring or star architecture. The choice of any particular architecture depends on the type of deployment, the availability and topology of fibre, and the cost and availability of equipment.

Second, instead of sharing bandwidth among multiple subscribers, each end user is provided with a dedicated connection that offers full bi-directional bandwidth. This can be implemented using the SDM (Space Division Multiplexing) or WDM (Wavelength Division Multiplexing) technique. Because of its dedicated nature, this type of architecture is also referred to as point-to-point (P2P).

The third architectural difference between a PON and an active node is the distance limitation. In a PON, the furthest subscriber must be within 10-20 km of the CO, depending on the total number of splits (max. 1:32). An active network, on the other hand, has a distance limitation of ca. 80 km, regardless of the number of subscribers being served. The number of subscribers is limited only by the switches employed, and not by the infrastructure itself as in the case of a PON. The active node will typically be an Ethernet switch, and the available bit rate is now up to 10 Gbps.

Figure 24: Point-to-multipoint connection with an Active Node (Ethernet Switch): OLT in the CO, 1 or 2 fibres plus a power line to the powered device (Ethernet switch) up to 70 km away, and ONUs up to 10 km beyond it.



Advantages:
• Higher bandwidth
• Higher distance possible
• Greater security

Disadvantages:
• Necessity of a power line
• Cable infrastructure more complicated

Table 10: Advantages and disadvantages of using an Active Node (Ethernet Switch).

<strong>A2.</strong>3.2.1.3 Hybrid PONs<br />

Finally, hybrid PONs are also being developed; they are a literal combination of an active node and a PON architecture. The reachable distance is higher than when using a power splitting PON, with a simpler infrastructure than a completely active topology.

Advantages:
• High reachable distance
• Simpler infrastructure than in an active topology

Disadvantages:
• Necessity of a power line

Table 11: Advantages and disadvantages of hybrid PONs.

Figure 25: Hybrid PON: OLT in the CO, 1 or 2 fibres plus a power line to the active node up to 70 km away, then splitter(s) up to 1:32 serving up to 32 subscribers (ONUs) up to 10 km further.

<strong>A2.</strong>3.2.2 Transmission protocol

The transmission protocol used by an FTTx network will be either Ethernet or ATM. Ethernet represents today 90% of the installed LAN interfaces. It offers a large spectrum of data rates, from 10 Mb/s up to 10 Gb/s for the latest evolution. This predominance leads to very low cost: 100 Mb/s Ethernet interfaces are more than 10 times cheaper than the 155 Mb/s ATM interfaces used for metropolitan and core networks. Moreover, Ethernet is based on a simple protocol, while offering advanced services: quality of service (QoS), good granularity, high throughput… Systems based on ATM standards are slower (reaching only 622 Mb/s) and have more expensive components than Ethernet-based ones, but they offer higher quality of service and are connection oriented, while Ethernet is connectionless.

Using Ethernet for FTTH seems to be a good solution. Nevertheless, the current Ethernet standards do not offer a reachable distance compatible with this application (a hundred metres, while up to 20 km is needed). Aware of this problem, new fibre-based standards providing point-to-point and point-to-multipoint connections are being developed. Two groups are working on these standards: IEEE 802.3ah (EFM: Ethernet in the First Mile) and FSAN (Full Service Access Network).

<strong>A2.</strong>3.2.2.1 IEEE 802.3ah: Ethernet in the First Mile (EFM)<br />

In July 2001, the IEEE 802.3 (Ethernet) working group initiated a task force, called 802.3ah, to draft a standard addressing Ethernet in the First Mile (EFM), also promoted by the Ethernet in the First Mile Alliance (EFMA). The challenge for EFM is to enable effective Ethernet network designs for subscriber access networks that can deliver quantifiable enhancements to current offerings at a reasonable cost, including both capital and operating expenses. The EFM task force has targeted three subscriber access network topologies: point-to-point on optical fibre, point-to-multipoint on optical fibre and point-to-point on copper. The first two of these lead to two important standards for FTTH deployment.

The EFM objective of standardizing Gigabit Ethernet (1000BASE-X) optics to operate point-to-point over a single strand of single-mode fibre will enable cost-effective, high-performance broadband access to single-family homes (FTTH) and businesses (FTTB). Historically, a major barrier to the delivery of FTTH or FTTB has been the inability to achieve a low absolute cost and an appropriate price-to-performance ratio. Optical Ethernet over point-to-point fibre will be able to leverage the high-volume, low-cost advantages of 1000BASE-X transceivers. Ethernet is ideal for community networks that are deployed over point-to-point fibre topologies. Since the fibre runs all the way to the subscriber, it is possible to provide homes or businesses with a full gigabit of bandwidth. Point-to-point gigabit networks offer great flexibility and scalability for the future. When the business case exists to deliver not just high-speed Internet access but also other services such as voice and video, point-to-point Ethernet over optical fibre is an excellent solution.

The EFM objective to support passive optical networks (Ethernet PONs or EPONs) is based on a number of<br />

economic advantages. The aggregation device, called the optical line terminator (OLT) supports a minimum of<br />

16 subscribers per port by means of a passive optical splitter. Thus the Ethernet PON minimizes the number of<br />

fibres that need to be managed in the service provider’s point of presence or central office (CO), minimizes the<br />

number of central office transceivers, and reduces the rack space required in the central office, compared with a<br />

point-to-point topology. This economic benefit is significant.<br />

Additionally, the EPON topology reduces maintenance costs by removing the need for electrical power and<br />

active electronics in the field, although the diagnostic and troubleshooting overhead is increased. In addition,<br />

passive optical splitters have no need for curb side batteries or environmentally protected enclosures. The EPON<br />

physical layer specification will support distances between the OLT and ONU up to 20 kilometres depending on<br />

split ratio and optical link budgets.<br />

<strong>A2.</strong>3.2.2.2 FSAN: Full Service Access Network<br />

The FSAN (Full Service Access Network) group was founded in 1995 by a group of seven major telecommunications service providers and equipment suppliers. FSAN is not a standardization body: its mission is to drive applicable standards, where they already exist, into the services and products of the industry, while simultaneously advancing its own specifications into the appropriate standards bodies to provide further definition of the Full Service Access Network.

FSAN defined requirements for APON (ATM PON), BPON (Broadband PON) and GPON (Gigabit PON), and feeds its recommendations into the ITU-T G.983 and G.984 families.

APON systems use ATM as the bearer protocol. The transmission is based on a downstream frame of 56 ATM cells (53 bytes each) for the basic rate of 155 Mb/s, scaling up with the bit rate to 224 cells for 622 Mb/s. The upstream frame format is 53 cells of 56 bytes each (a 53-byte ATM cell + 3 bytes of overhead) for the basic 155 Mb/s rate.
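A quick check of these frame sizes is given below (Python); the nominal 155.52 Mb/s and 622.08 Mb/s line rates are the usual SDH-derived values and are an assumption here.

    # Frame-size arithmetic for the APON format described above.
    def frame_stats(cells, cell_bytes, line_rate_mbps):
        total_bytes = cells * cell_bytes
        duration_us = total_bytes * 8 / (line_rate_mbps * 1e6) * 1e6
        return total_bytes, duration_us

    ds_bytes, ds_us = frame_stats(56, 53, 155.52)  # downstream, basic rate
    us_bytes, us_us = frame_stats(53, 56, 155.52)  # upstream, basic rate
    print(f"downstream frame: {ds_bytes} bytes, {ds_us:.1f} us")
    print(f"upstream frame:   {us_bytes} bytes, {us_us:.1f} us "
          f"(overhead {53 * 3 / us_bytes:.1%})")
    print(f"622 Mb/s downstream frame: {frame_stats(224, 53, 622.08)[0]} bytes")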




The initial PON specifications defined by the FSAN committee used ATM as their layer 2 signalling protocol. As such, they became known as ATM-based PONs, or APONs. Use of the term APON led users to believe that only ATM services could be provided to end-users, so FSAN decided to broaden the name to Broadband PON (BPON). BPON systems offer numerous broadband services, including Ethernet access and video distribution.

In 2001 the FSAN group initiated a new effort to standardize PON networks operating at bit rates above 1 Gb/s. Apart from the need to support higher bit rates, the overall protocol was opened for re-consideration, and the solution sought should be optimal and efficient in terms of support for multiple services, OAM&P (Operations, Administration, Maintenance and Provisioning) functionality, and scalability. As a result of this latest FSAN effort, a new solution has emerged in the optical access marketplace: Gigabit PON (GPON), offering support for unprecedentedly high bit rates while enabling transport of multiple services, specifically data and TDM, in native formats and at an extremely high efficiency.

<strong>A2.</strong>3.3 Issues and trends<br />

<strong>A2.</strong>3.3.1 Some examples of FTTH deployment worldwide<br />

<strong>A2.</strong>3.3.1.1 Europe<br />

FTTx in Europe is today mainly concentrated in a limited number of countries: more than 95% of the current FTTx subscribers are located in only four countries: Sweden, Italy, the Netherlands and Denmark. In Sweden as well as in the Netherlands, the FTTx success can to a large degree be attributed to government support (both central government and municipalities). The participation of incumbent operators in the deployment of FTTx is not very high in Europe: the roll-out of a complete fibre network is still rather expensive (20 to 50 EUR/metre, depending on rural or urban areas) and most of them do not see a profitable advantage in deploying such a network.

Sweden<br />

• Bredbandsbolaget (http://en.bredband.com/en/index.jsp): founded in July 1999 and today the largest FTTH deployment in Sweden. It offers 10 Mbps bi-directional services. Its network is based on point-to-point Ethernet connections over optical fibre (IEEE 802.3ah). In September 2005, 350 000 Internet customers were connected via the Bredbandsbolaget FTTH network.

• Stokab (http://www.stokab.se): dark fibre provider in Stockholm county. The company is wholly owned by the City of Stockholm. The purpose of Stokab's operations and of the infrastructure provided by the company is to stimulate positive growth in the Stockholm region by creating favourable conditions for IT development.

An important FTTH developer, PacketFront (http://www.packetfront.com) is also based in Sweden (Stockholm).<br />

PacketFront is a world-wide leader in FTTH technology and next generation broadband aggregation. They<br />

develop and market leading, purpose-built and intelligent solutions for True Broadband Networks. PacketFront<br />

is involved in many Swedish projects:<br />

• Malarenergi Stadnat (Swe)<br />

http://www.packetfront.com/malarenergi.php<br />

http://www.packetfront.com/snews.php?id=40<br />

• OresundsKraft (Swe)<br />

http://www.packetfront.com/oresundskraft.php<br />

• Hammarby Sjostad (Swe)<br />

http://www.packetfront.com/viaeuropa.php<br />

• Pite Energi (Swe)<br />

http://www.packetfront.com/piteenergi.php<br />

Next to the above Swedish FTTH initiatives, PacketFront is also active in The Netherlands, Denmark and Canada:

• Nuenen project (NL)<br />

http://www.packetfront.com/snews.php?id=46<br />

• Rentre' Wonen (NL)<br />




http://www.packetfront.com/snews.php?id=38<br />

• NESA (DK)<br />

http://www.packetfront.com/nesa.php<br />

• CMOM (Canada)<br />

http://www.packetfront.com/snews.php?id=33<br />

In September 2005, there were 1 682 397 broadband connections in Sweden, and 360 000 (or 21.4%) of them were realized via FTTx. As already stated, Bredbandsbolaget delivers most of them (350 000; source: Point Topic, http://www.point-topic.com).

Italy<br />

• FastWeb (http://www.fastweb.it/): founded in September 1999 in Milan. The company's first operational objective was to lay a widespread fibre optic network covering the metropolitan area of Milan. It launched services in 2000, and currently serves several Italian cities as well as Hamburg, Germany. FastWeb offers fibre connections as well as DSL: in the areas where it deploys fibre, after the fibre rings are installed, customers are offered DSL until fibre is laid to each building; when fibre reaches a DSL customer, he is switched to fibre. In September 2005, FastWeb served 231 840 customers with fibre optic connections. The FastWeb network provides a variety of value-added services to its subscribers: data (10 Mbps bi-directional), VoIP, broadcast video, VoD and pay-per-view services.

• Acantho (http://www.acantho.it/): FTTH network in the Bologna region. Offers Ethernet connections over fibre between 10 Mbps and 100 Mbps, VoD, gaming, videophones and VoIP.

In the statistics for September 2005, the number of broadband connections in Italy was 5 904 825, of which 231 840 (or 3.93%) were FTTH connections delivered by FastWeb. Other FTTH providers were lacking in this information (source: Point Topic).

The Netherlands<br />

Different FTTH pilots in The Netherlands:<br />

• Eindhoven: “Kenniswijk” (www.kenniswijk.nl), about 3500 FTTH connections in October 2004, and about<br />

1700 subscribers (= ca. 50%).<br />

Kenniswijk is an initiative of the Dutch General Directorate of Telecommunication and Post (DGTP) of the<br />

Ministry of Economics. It is an experimental environment in the Eindhoven area where consumers have<br />

access to innovative products and services in the area of computers, (mobile) communication and internet.<br />

The intention is that the developments within the Kenniswijk-area are, on average, two years ahead of the<br />

rest of the Netherlands in 2005, resulting in a "consumer market of the future".<br />

• Almere: “UNET, First Mile Ventures” (www.unet.nl), about 1200 homes connected; the municipality participates in the physical layer.

• Amsterdam: “Citynet” (www.citynet.nl), the fibre optic network in Amsterdam has still to be installed, but at the moment the construction is getting closer to realization. The municipality participates in the physical layer, and the ambition is to cover the entire city with FTTH.

• Appingedam: “Damsternet” (www.damsternet.nl), the municipality also invests in the passive infrastructure, but a cable company (Essent) filed legal proceedings for unfair competition. The municipality of Appingedam has asked the European Commission for permission to install its fibre network (November 2004).

Dutch governments actively promote broadband: Kenniswijk is supported by the central government, and a lot of municipalities also invest in FTTH realizations. Next to the three examples above (Almere, Amsterdam and Appingedam), there are also many other fibre pilots in the Netherlands (Enschede, Rotterdam, Amersfoort, Nuenen, Groningen, The Hague, Dordrecht). In the broadband statistics from Point Topic, FTTH in the Netherlands is still very limited, which can be explained by the fact that all the FTTH projects are very recent. In September 2005, the Netherlands counted 3 830 000 broadband connections, and only 50 000 (or 1.31%) of them were FTTH connections.

Germany<br />

More than 1.8 million subscriber terminals are installed that use fibre technologies in the access network. The vast majority of these are based on narrow-band PON solutions, realized as non-standardized, vendor-specific FTTC and FTTB implementations. Some new city carriers (e.g. wilhelm.tel in Norderstedt, MDCC in Magdeburg, EWETel in Oldenburg) use, or plan to use, fibre technologies in the access network.




France<br />

PBC (Pau Broadband Country): launched in April 2002. Unsatisfied demand for broadband led to community-based projects, and the municipal/regional governments decided to invest in fibre (FTTH). The PBC network (belonging to the local authority) offers FTTH connections of 100 Mbit/s for € 30/month to the inhabitants of Pau.

Austria<br />

Wienstrom (the largest utility company in Austria, a part of Wien Energie) is connecting 5000 homes, with another 30 000 forecast over the next two years.

<strong>A2.</strong>3.3.1.2 Other European countries<br />

Other important countries where FTTH is already available in some cities or regions are Denmark, Finland, Norway, Estonia and Romania.

<strong>A2.</strong>3.3.1.3 Asia (leading FTTH countries)<br />

Japan, South Korea and Hong Kong are the world leaders in the field of FTTx, and there the incumbents are heavily involved in FTTx. FTTx is mainly Fibre to the Building (FTTB) in these cases (in combination with UTP5 cable), and owing to the concentration of population in a few urban centres and in multi-dwelling units, the investment in a fibre network is cheaper per resident.

Japan<br />

A lot of (incumbent) telecom operators provide their clients with FTTB/H: e.g. USEN, NTT West, NTT East, etc. Japan is the world leader in FTTB/H deployment: of the 20 912 900 broadband connections, 3 155 000 (15.09%) were delivered by FTTB/H in September 2005. In November 2004, NTT set a target of moving 30 million customers to FTTH by 2010.

Interestingly, in Japan the optical fibre is not always buried underground but often reaches the homes over aerial poles (lower installation cost), and multiple fibres pass the same homes (see Figure 26).

Figure 26: Japan: optical fibres also reach the homes over aerial poles.

South Korea

In South Korea too, many service providers offer FTTB/H to their clients: KT, Hanaro Telecom, Thrunet, Onse Telecom, Dreamline, Dacom, etc. In September 2005, South Korea had 11 993 610 broadband connections, of which 1 422 333 (11.86%) were FTTB/H connections. KT (Korea Telecom), the incumbent telecom operator in South Korea, delivers about 50% of these FTTB/H connections.

Hong Kong

An important factor in Hong Kong is the population density: in some buildings more than 400 families live. Because of this concentration of population, the investment in a fibre network is cheaper per resident. City Telecom, for example, offers 10 Mbit/s for less than 10 euro/month and 100 Mbit/s for 20 euro/month, and today they already offer 1 Gbit/s. In September 2005, Hong Kong had 1 610 500 broadband connections, of which 310 000 (19.25%) were FTTB/H connections. There are five carriers, of which City Telecom, with a market share of 45%, is the most important today.

A2.3.3.1.4 Other countries (North America, Australia)

US

FTTH deployment in the USA takes place mainly in rural areas and less in major cities; 94 communities have lit services (e.g. iPROVO (http://www.iprovo.net/), Palo Alto, etc.). The difference between the USA and the rest of the world can be explained by the fact that the size and population distribution of the USA are distinct from those of other countries. Subscriber loop lengths are typically longer in the USA (see also Figure 32), and consequently the cost per resident to roll out an FTTH network is higher. This explains why in the USA FTTx is mainly attractive in rural areas, especially in greenfield situations where a complete new network still has to be deployed, but not in more densely populated areas.

Canada

CANARIE (http://www.canarie.ca/about/index.html): CANARIE's mission is to accelerate Canada's advanced Internet development and use by facilitating the widespread adoption of faster, more efficient networks and by enabling the next generation of advanced products, applications and services to run on them.

Australia

COLT (Collaborative Optical Leading Testbed) (http://www.mmv.vic.gov.au/colt) is Australia's most advanced fibre optic network, launched in July 2004. The network is built on a Gigabit-rate EPON. Students and academics at the University of Ballarat, Victoria, Australia, are the first users to experience FTTH technology; COLT will initially serve over 700 users at the University of Ballarat and the Ballarat Technology Park.

A2.3.3.2 Research projects

FP5 project GIANT: GIgaPON Access NeTwork
(http://www.alcatel.be/Giant/)

In the GIANT project, a next-generation optical access network optimised for packet transmission at Gigabit/s speed will be studied, designed and implemented. The resulting GigaPON will cope with future needs for higher bandwidth and service differentiation in a cost-effective way. The studies take into account efficient interworking at the data plane and control plane with a packet-based metro network. The activities encompass extensive studies defining the new GigaPON system; innovative transmission convergence and physical medium layer subsystems will be modelled and developed. An important outcome of the system research will be the selection of a cost-effective architecture and its proof of concept in a lab prototype. Recommendations will be given for the interconnection between a GigaPON access network and a metro network, and contributions will be made to relevant standardisation bodies.

FP5 project TONIC: TechnO-ecoNomICs of IP optimised networks and services
(http://www-nrc.nokia.com/tonic/)

TONIC is a project that concentrates on the techno-economic evaluation of new communication networks and services, in order to identify the economically viable solutions that can make the Information Society really take shape.

TONIC's main objectives are:

• to assess the new business models associated with offering IP-based mobile services in a competitive context;

• to evaluate the costs and benefits of providing broadband access in both competitive and non-competitive areas, and to determine the most appropriate network infrastructure from an economic viewpoint;

• to analyse the results of the above studies in order to formulate pertinent recommendations to policymakers, network operators and service providers regarding communications investment strategies.

FP6 project MUSE: Multi Service access Everywhere
(http://www.ist-muse.org)

MUSE is a large integrated R&D project on broadband access. Within the 6th Framework Programme, MUSE contributes to the strategic objective "Broadband for All" of IST (Information Society Technologies). The overall objective of MUSE is the research and development of a future low-cost, full-service access and edge network, which enables the ubiquitous delivery of broadband services to every European citizen.

MUSE also contains techno-economic activities whose objective is to validate the different architectural and technological choices developed within the MUSE subprojects. These techno-economic activities develop further the methodology of the former IST TONIC project.

FP6 project E-NEXT
(http://www.ist-e-next.org)

E-NEXT is an FP6 Network of Excellence that focuses on Internet protocols and services. The general objective of E-NEXT is to reinforce European scientific and technological excellence in the networking area through a progressive and lasting integration of the research capacities existing in the European Research Area (ERA).

German project MaiNet: Multimedia Access and Indoor Networks

MaiNet is part of the MultiTeraNet national research initiative on optical communication technologies in Germany (http://www.multiteranet.de/). The focus of MaiNet is the development of new concepts for broadband access and indoor networks.

A2.3.3.3 Standardization initiatives and technical initiatives

In the part about the state of the art of FTTx, two important standardization bodies have already been considered: the Ethernet in the First Mile Alliance (EFMA) and the Full Service Access Network (FSAN) group.

• EFMA (http://www.efmalliance.org/) promotes standards-based Ethernet in the First Mile technology and encourages the utilization and implementation of Ethernet in the First Mile as a key networking technology for local subscriber access networks. EFMA especially promotes and supports the IEEE 802.3ah standard (http://www.ieee802.org/3/efm/index.html), whose focus is to bring Ethernet technologies into the access area.

The IEEE 802.3ah standard consists of four parts:

o Ethernet in the First Mile over point-to-point Fibre (EFMF)

o Ethernet Passive Optical Network (EPON)

o Ethernet in the First Mile Operations, Administration and Maintenance (EFM OAM)

o EFM over Copper (EFMC)

• FSAN (http://www.fsanweb.org/) is not a standardization body. The members of FSAN are telecommunications service providers and equipment suppliers. The mission of FSAN is to drive applicable standards, where they already exist, into the services and products of the industry, while simultaneously advancing its own specifications into the appropriate standards bodies to provide further definition to the Full Service Access Network.

FSAN defined requirements for APON (ATM PON), BPON (Broadband PON) and GPON (Gigabit PON), and feeds its recommendations into the ITU-T G.983 and G.984 family (http://www.itu.int/).

Next to EFMA and FSAN, two other important bodies in the field of FTTx are the PON Forum and the FTTH Council.

• The PON Forum (http://www.ponforum.org) is focused on the business and marketing aspects of the PON industry. Technical issues related to PON are handled by a number of industry standards and advocacy groups and are not addressed by the PON Forum. The PON Forum liaises with these groups and offers business and market information to help drive their efforts. Thus, the PON Forum has two primary goals: to evangelize the PON market, agnostic of technical variations of PON, and to identify market needs to be fed back to the various technical and advocacy bodies (EFMA, FSAN, ITU, IEEE, etc.).

• The FTTH Council (www.ftthcouncil.org/) is a market development organization whose mission is to educate, promote, and accelerate FTTH and the resulting quality-of-life enhancements. The Fibre to the Home (FTTH) Council is a non-profit organization established in 2001 to educate the public on the opportunities and benefits of FTTH solutions. FTTH Council members represent all areas of the broadband industry, including telecommunications, computing, networking, system integration, engineering, and content-provider companies, as well as traditional telecommunications service providers, utilities and municipalities. There is also an FTTH Council specifically for Europe: FTTH Council Europe (http://www.europeftthcouncil.com/).

A2.3.3.4 Introduction to trends and issues

A2.3.3.4.1 FTTx situation in Europe

FTTx deployments

In June 2004, the situation in Europe (EU 25 + Norway and Iceland) was as follows [3]: FTTx initiatives had been launched in 167 locations, with 103 players involved (see Table 12). Among them, nearly 70% are municipalities or power utilities. Furthermore, approximately 60% of these deployments are in a commercial phase, 20% are pilots and 20% are in a project phase.

Incumbent operators                 8     7.8%
Municipalities / power utilities   72    69.9%
Alternative operators / ISPs        9     8.7%
Housing companies & other          14    13.6%

Table 12: Players involved in FTTx initiatives in Europe

By the end of June 2004, there were approximately 547 900 subscribers (source: IDATE; this corresponds with the numbers from Point Topic for the European countries at the same time). Roughly 1.96 million homes/buildings were passed, showing a penetration rate of 28%. More than 95% of these FTTx subscribers are concentrated in four countries (Sweden, Italy, Denmark and the Netherlands).
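For readers who want to reproduce the penetration figure quoted above, it is simply subscribers divided by homes/buildings passed. A minimal sketch of that arithmetic in Python, using the June 2004 figures from the text (the function name is illustrative, not from the cited sources):

```python
def penetration_rate(subscribers: int, homes_passed: int) -> float:
    """Share of passed homes/buildings that actually subscribe."""
    return subscribers / homes_passed

# June 2004 European FTTx figures quoted in the text (IDATE / Point Topic)
subscribers = 547_900
homes_passed = 1_960_000

print(f"Penetration: {penetration_rate(subscribers, homes_passed):.1%}")  # ~28.0%
```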

Figure 27: FTTx subscribers in Europe by the end of June 2004 (source: IDATE; corresponds with the figures from Point Topic).

Figure 28 is an update of the previous graph; by the end of September 2005, the number of FTTH subscribers amounted to approximately 752 000 (an increase of 27%).

Figure 28: FTTx subscribers in Europe by the end of September 2005 (update of Figure 27, with data from Point Topic).

In Sweden, the FTTx success can to a large degree be attributed to government action plans and national and regional funding schemes. The government is also actively investing in fibre infrastructure: Stokab, for example, a dark fibre provider in Stockholm county, is wholly owned by the City of Stockholm; its purpose is to stimulate positive growth in the Stockholm region by creating favourable conditions for IT development. In the Netherlands too, there is a lot of government support, from the central government as well as from municipalities. Kenniswijk is an experimental environment in the Eindhoven area where FTTH connections are deployed, and the project is supported by the central government.

The intention is that developments within the Kenniswijk area are, on average, two years ahead of the rest of the Netherlands, resulting in a "consumer market of the future". Besides this, many municipalities also invest in FTTH realizations, and today there are many fibre pilots: Almere, Amsterdam, Appingedam, Enschede, Rotterdam, Amersfoort, Nuenen, Groningen, The Hague, Dordrecht, etc.

The success of FTTH in Italy can mainly be attributed to FastWeb, a joint venture between the publicly owned Milan gas and electric utilities and a private group known as e.Biscom. Originally it operated only in the Milan region, but nowadays it deploys its fibre network in several other Italian cities as well.

In Europe, the participation of incumbent operators in the deployment of FTTx is not very high. The roll-out of a complete fibre network is still rather expensive, and most incumbents do not see a profitable business case for deploying such a network. Finally, in many European countries (UK, France, Spain, Belgium), FTTH deployment is very low or not available at all.

FTTx forecast

Figure 29 presents the forecast for both FTTH roll-out (homes passed) and subscribers over FTTH networks (source: Yankee Group [4]). The forecast proposes three potential scenarios: expected case, best case and worst case. To estimate these numbers, four different indicators have been taken into account:

- Competitive considerations (presence of competing technology platforms).

- Advanced services (e.g. consumers' interest in emerging video services).

- Regulatory factors (effect of regulatory uncertainty or its removal).

- Operational factors (e.g. degree of urbanization, degree of municipality, ...).

Figure 29: FTTx forecast between 2004 and 2008: possible scenarios.

A2.3.3.4.2 Worldwide situation

FTTx deployment

Outside Europe, the important FTTx countries are Japan, South Korea and Hong Kong; they are the world leaders in the field of FTTx, and there the incumbents are heavily involved in FTTx. FTTx in these cases is mainly Fibre to the Building, and because the population is concentrated in a few urban centres and in multi-dwelling units, the investment in a fibre network is cheaper per resident.

The next graph (Figure 30) shows the FTTx deployment in the most important countries in the field of FTTx (September 2005).

Figure 30: FTTx, number of customers in the most important countries in the field of FTTx (source: Point Topic).

Below is an overview of the different broadband connection types in some countries where FTTx is already an important market player (also September 2005).

Figure 31: Division of the broadband connections in Japan, South Korea, Sweden and Italy, four important FTTx market players (source: Point Topic).

Market growth

Worldwide, the number of FTTx users is growing. Like any new or advanced method, Fibre to the Home started out as "something the service providers can do, but it's really expensive". However, as already mentioned in the introduction, an optical fibre access network is getting cheaper. The research firm IDC (http://www.idc.com) expects 20.6 million FTTx connections out of 208 million broadband connections in total (approximately 10%). Because of the growing bandwidth demand of all customers, the fibre will also come closer (FTTC, FTTB) to the remaining 90% of customers, who use copper-based or wireless access technologies.

In the United States, one forecast expects FTTH systems to reach 2.65 million homes by 2006, after starting from just 89 000 homes in 2001. As a result, the FTTH equipment market would grow from just $100 million in 2001 to nearly $1 billion in 2006. Factors that may accelerate this growth are faster-than-expected cost reductions or applications that drive bandwidth demand beyond what non-fibre-based technologies can provide. Conversely, factors that may hinder the roll-out of deep fibre solutions include slower cost reductions or improvements in non-fibre-based technologies that keep pace with future bandwidth demands.
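To relate such market forecasts to an annual growth figure, the implied compound annual growth rate (CAGR) can be derived from the start and end values quoted above; the short sketch below is only an illustration of that arithmetic, not part of the cited forecast:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by a start value, an end value and a horizon."""
    return (end_value / start_value) ** (1 / years) - 1

# FTTH equipment market figures quoted above: $100 million (2001) -> ~$1 billion (2006)
print(f"Implied CAGR 2001-2006: {cagr(100e6, 1e9, 5):.0%}")  # roughly 58% per year
```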

2001 was a historic year in the FTTH business because, for the first time, FTTH became less expensive to deploy than copper twisted pair or coax. There are, however, two caveats: it needs to be in a converged-services environment with two or more of voice, video and data, and it needs to be a greenfield application. FTTH has predominantly been going into rural areas and small cities because it is less expensive there: there are fewer roads and fewer driveways to cross, so installation costs can be lower.

Worldwide trends

The two most important technologies within the scope of FTTx are Ethernet in the First Mile over point-to-point fibre and Passive Optical Networks (EPON, APON, BPON, GPON). Outside the USA, PON deployment is very low; in many other countries (Japan, Sweden, Italy, ...) the FTTx networks are mainly based on EFM over point-to-point connections. The difference between the USA and the rest of the world can be explained by the fact that the size and population distribution of the USA are distinct from those of the countries with greater per-capita broadband deployment. Network construction costs in South Korea, Taiwan and especially Hong Kong are reduced by the concentration of population in a few urban centres and in multi-dwelling units (MDUs). In Europe too, the FTTH deployments are installed in densely populated cities (e.g. Sweden: Stockholm; Italy: Milan; the Netherlands: Eindhoven, Amsterdam). The longer subscriber loop lengths in the USA compared to other countries are shown in Figure 32 [2].


Figure 32: International subscriber access loop lengths.

It is remarkable that FTTH deployments in the USA are mainly in rural areas, while in Japan and Europe FTTH appears in densely populated areas.

As can be seen in Figure 30, Japan and South Korea are leaders in FTTH deployment; in these countries there is a lot of activity by the incumbents. In Europe (Sweden, the Netherlands, ...) incumbent participation is not very high, but the success there is the consequence of strong government support (central government as well as municipalities). In the USA, both incumbent activity and government activity are lacking, and as a consequence FTTH deployment is very low, especially in the cities.

A2.3.3.5 Gap analysis / key issues

Technologically, there are two important challenges for the future:

• To increase capacity.

• To extend the distance between the CO and the user.

The extension to WDM-PONs is gaining interest: WDM efficiently exploits the large capacity of optical fibre without much change in infrastructure. Gigabit-capable passive optical networks (GPON) are now standardised and commercially available, which means that PONs are approaching bit rates where one could consider using them for metro as well as access applications. Using a PON for long-reach access requires three new features: reach extension to ~100 km, a high split ratio (>64), and ideally the use of WDM. There is currently a lot of research in this domain.
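To see why reach extension and high split ratios are challenging, it helps to look at a simple optical power budget. The sketch below uses commonly assumed values (roughly 0.25 dB/km fibre attenuation at 1550 nm, about 3.5 dB of loss per 1:2 splitting stage, a few dB of connector/splice margin); these figures and function names are illustrative assumptions, not taken from the cited studies.

```python
import math

def splitter_loss_db(split_ratio: int, loss_per_stage_db: float = 3.5) -> float:
    """Approximate loss of a 1:N power splitter built from cascaded 1:2 stages."""
    return math.log2(split_ratio) * loss_per_stage_db

def pon_budget_db(reach_km: float, split_ratio: int,
                  fibre_loss_db_per_km: float = 0.25, margin_db: float = 3.0) -> float:
    """Rough end-to-end loss a long-reach PON link would have to bridge."""
    return reach_km * fibre_loss_db_per_km + splitter_loss_db(split_ratio) + margin_db

# Conventional GPON-style reach vs. the long-reach case discussed above
print(f"20 km, 1:32 split : {pon_budget_db(20, 32):.1f} dB")    # ~25.5 dB
print(f"100 km, 1:64 split: {pon_budget_db(100, 64):.1f} dB")   # ~49.0 dB
```

The jump from roughly 25 dB to almost 50 dB illustrates why long-reach proposals typically add optical amplification or WDM rather than relying on passive splitting alone.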

Another important topic is the cost of the optical components, especially for the ONUs. The maturity of optical components and the large-scale integration and manufacturing of ONUs have brought prices down. To keep the cost of the ONU low, there is also a lot of interest in colourless ONUs: the ONU contains no laser; instead, an optical carrier for the upstream signals is generated in the OLT, and upstream transmission is provided by externally modulating this carrier.

Finally, considering the deployment of FTTx, the following factors are essential:

• Policy/regulatory support, either active participation by the government through financial support (cf. Sweden, the Netherlands) or passive support by creating a positive regulatory climate (cf. Japan, South Korea).

• The interest of the incumbent telecom operators (cf. Japan, South Korea); this can be stimulated by a clear government policy.

• Advanced (new) services that can create a clear revenue opportunity.
A2.3.4 Roadmap

FTTB and FTTH can be regarded as the logical endpoint of an ongoing evolution, shown in Figure 33. In the pre-fibre days, telco COs were interconnected by coax and microwave, while cable head ends were fed by microwave or satellite. The first step in "fiberizing" the entire system was Hybrid Fibre Coax (HFC), widely deployed by the cable industry today, in which the node becomes an optical network unit. HFC serves several hundred homes per fibre end, each using copper (coax) in both directions between "node" and subscriber, but with limited bit rate and very demanding design rules.

Figure 33: FTTx networks [1]. (HFC: 200-500 homes per fibre; FTTC: 10-100 homes per fibre; FTTH/B: 1 home/building per fibre. CO: Central Office; HE: Head End; OLT: Optical Line Termination; ONU: Optical Network Unit.)

The slowly emerging Fibre to the Curb (FTTC) systems split each upstream or downstream fibre into 10-100 subscriber copper paths. It is not clear that FTTC systems offer any economic advantage over full FTTH/B, which is simply the logical extension of HFC and FTTC down to a single subscriber per fibre. Because of its non-optimum economics, FTTC is likely to be overtaken by the clean passive-all-the-way FTTH option, i.e. the PON.

Current PON technology mainly uses power-splitting PONs, but (C)WDM is another important PON approach, which will gain more and more interest in the future.

When deploying a PON, a technological choice has to be made between ATM-based systems (APON and its successors BPON and GPON) and Ethernet-based systems (EPON). Today, both APONs and EPONs have their enthusiastic partisans.

APON promoters argue that the FSAN standard is approaching maturity faster than that for EPONs (802.3ah), that significant APON volumes have already been achieved, that the significance of the infamous "cell tax" overhead of ATM (with its processor interrupts every 53 bytes) is partially mitigated by the large IP and Ethernet header overheads, that ATM software and hardware is already qualified with the ILECs (incumbent local exchange carriers), and that quality-of-service guarantees (e.g. constant and low latency for POTS) are established attributes. EPON partisans, on the other hand, argue that Ethernet (with 350 million ports already installed) will always be the lower-cost solution, that POTS traffic (for which ATM was invented) constitutes a negligible part of the PON traffic anyhow, that any remaining quality-of-service issues have been resolved by small architectural changes to Voice over IP (VoIP), that the addressing limitations of the old IPv4 are being fixed in IPv6, that the ATM cell tax and segmentation-and-reassembly processing overheads are significant, that time jitter does not occur anyhow since there is no packet/cell queuing between subscriber and CO, only at the CO, that the offered bit rates are higher, that Ethernet is easier to manage, that APONs are seen nowhere except North America, with Ethernet the norm elsewhere, and that most data begins and ends its life as IP/Ethernet traffic anyhow, so why interpose yet another protocol encapsulation? If the world is not all-IP today, it is certainly rapidly moving in that direction.
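The "cell tax" invoked by both camps can be quantified with standard ATM/AAL5 framing arithmetic: each 53-byte cell carries 48 bytes of payload, and AAL5 adds an 8-byte trailer plus padding to the last cell of every packet. The sketch below, with illustrative packet sizes, shows why the overhead matters more for small packets:

```python
import math

def atm_cells_for_packet(payload_bytes: int) -> int:
    """Number of 53-byte ATM cells needed to carry one packet over AAL5
    (48-byte cell payload, 8-byte AAL5 trailer plus padding)."""
    return math.ceil((payload_bytes + 8) / 48)

def cell_tax(payload_bytes: int) -> float:
    """Fraction of line bytes that is ATM/AAL5 overhead rather than packet payload."""
    wire_bytes = atm_cells_for_packet(payload_bytes) * 53
    return 1 - payload_bytes / wire_bytes

for size in (64, 576, 1500):  # illustrative Ethernet/IP packet sizes
    print(f"{size:5d}-byte packet -> {cell_tax(size):.1%} overhead")
# ~39.6%, ~16.4% and ~11.6% respectively
```

Even for large packets the overhead stays above 10%, which is the figure EPON advocates point to; APON advocates counter, as noted above, that IP and Ethernet headers impose overhead of their own.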

According to [1], it makes little difference architecturally whether the internals of the PON speak Ethernet or ATM, since the interfaces to the user and to the CO equipment are identical in both cases. The important issues will be the cost to the end user and the speed of acceptance. In view of the lower component cost of Ethernet chips compared to ATM chips, perhaps the only thing that could hand the race to APONs is that, if the large incumbent local exchange carriers ever get moving with PONs, they may cling to ATM because of its familiarity and its existing certifications. The prediction in [1] is that in the long run APONs will go the way of ISDN: not completely dead, but a minority player.

Another all-fibre solution is the active architecture, which is gaining more and more interest. Active Ethernet is quietly becoming the preferred choice among leading service providers worldwide for their fibre deployments. From an economic point of view, PONs were the preferred choice in the past, but fibre costs have now dropped to a fraction of what they were just a few years ago. Two technological developments also have an important influence:

• The completion of the IEEE 802.3ah Ethernet in the First Mile (EFM) standard, which defines, among other things, a method for delivering Ethernet over a single strand of fibre.

• The evolution of environmentally hardened Ethernet devices that can be placed in the outside plant. Prior to the availability of this type of gear, network operators had to pull the fibre from every subscriber all the way back to their CO, and at the CO a large electro-optic port count was needed.

An active architecture has one arguable drawback from a deployment perspective: the requirement for power in the outside plant. However, active electronics in the field are nothing new, and many telcos already have powered electronics in the field.

Figure 34 [1] takes a different cut at the evolution: it shows the historical and predicted percentage penetration of all fibre carrying both telco and cable traffic whose ultimate terminus is at residences.

Figure 34: Depth of fibre penetration to node, curb and residences. NAP = network access point (pedestal).

Most FTTx products today use point-to-point or power-splitting PON solutions. Some also believe the PON approach to deep fibre deployment is geared more toward telcos, who have traditionally not had power in the field; for players like cable operators and utility companies that already have power in the field, it is not a big issue.

A2.3.5 References

[1] Paul E. Green, "Fiber-to-the-Home white paper", 2003.

[2] John A. Jay, "An Overview of International Fiber to the Home Deployment", 2002.

[3] IDATE, "FTTH situation in Europe", 2005 (http://www.europeftthcouncil.com/extra/Articles/IDATE_study.pdf).

[4] Yankee Group, "Mass-Market Fiber Remains Distant on the European Horizon", 2005 (http://www.europeftthcouncil.com/extra/Articles_Yankee_study.pdf).

A2.4 HAP (updated 01/06)

A2.4.1 Introduction

High altitude platforms (HAPs) have the potential to deliver broadband services cost-effectively. A HAP is an airship or plane operating 17-20 km above the earth's surface that provides a platform for communications. HAPs are planned to provide high capacity to users (as with terrestrial networks) while providing wide coverage (as with satellite systems). The CAPANINA project intends to demonstrate systems capable of providing wireless bursts of data of up to 120 Mbit/s to approximately 1000 times the number of users per unit area served by satellite systems. The cost is eventually expected to be in the region of 10% of that of a satellite system, and the low amount of infrastructure, coupled with its potential amortisation across thousands of users, will also make these systems cost-competitive with terrestrial systems.

In the first instance it is expected that the services will be available to geographically fixed users. Developments will rapidly allow access by mobile users travelling at hundreds of km/h, e.g. on trains.

The wide coverage, coupled with a low infrastructure cost, will enable cost-effective provisioning for remote and rural areas. Wireless remote sensing (e.g. of atmospheric CO2 conditions) will be possible, as will navigation and surveillance applications. Dynamic provisioning of services and bandwidth is an important facet and will allow novel uses, such as the provision of disaster relief communications.

Expectations are that HAPs will be communicating with fixed users in the 3 to 5 year time frame, and with mobile users about 2 years after that.

A2.4.2 Platforms

Figure 35: Line-of-sight coverage of a 20 km HAP situated over London (from [2]).

The HAP will be a solar-powered airship or plane operating in the stratosphere, 17-20 km above the earth's surface. The platform will be quasi-stationary: an airship will be kept within a cube of side 1 km, while a plane will orbit a fixed point with a radius of 1-3 km.

The platforms will be unmanned but, unlike satellites, may be returned to earth for periodic maintenance. Thus the reliability constraints (and costs) on components may be less stringent than those for satellites in orbit.

The 17-20 km altitude allows a wide coverage area: Figure 35 shows that the line-of-sight radius of coverage of a HAP situated 20 km over London is greater than the distance to Land's End (approximately 500 km in total).
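The quoted coverage radius can be checked with simple geometry: for a platform at height h above a smooth earth of radius R, the line-of-sight horizon distance is approximately sqrt(2Rh). A small illustrative sketch (R = 6371 km is the mean earth radius; atmospheric refraction and terrain are ignored):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean earth radius

def line_of_sight_radius_km(platform_height_km: float) -> float:
    """Approximate ground distance to the horizon seen from a platform at the given height."""
    return math.sqrt(2 * EARTH_RADIUS_KM * platform_height_km)

for h in (17, 20):
    print(f"HAP at {h} km: line-of-sight radius ~{line_of_sight_radius_km(h):.0f} km")
# ~465 km at 17 km altitude and ~505 km at 20 km, consistent with the ~500 km figure above
```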

Some developers are also proposing the use of tethered balloons as platforms. Although these are not, strictly speaking, high altitude, the technology is similar and should be considered as an alternative approach. The available discussion implies that this may be an interim step which is realisable in a shorter term than the stratospheric solution.

The environment in the stratosphere will have an impact on the requirements of the payloads. As an example, CAPANINA (see below) is paying attention to avoiding discharge in RF antennas caused by low air pressure. Similarly, the thermal design of the payload must take into account the low air temperatures and the wide variations in solar radiation from one side of the payload to the other.

For the HAP solution, the predictable challenges include power supply budgeting and unmanned navigation (ascent, station maintenance and descent):

• Although solar-powered platforms are intended for long-term deployment, early trials for short-term deployment may involve a platform taking up its own power supply. Although a stepping stone, this solution may have some practical applications, such as short-term (~1 month) deployment for disaster recovery communications. Since the power supply is required both for flight/station keeping and for the payload, power budgeting needs to take a holistic view.

• Although unmanned flight/station keeping has been demonstrated, there have been several mishaps, e.g. [3]. These are considered to be learning opportunities, but are expensive in terms of lost resources.

A2.4.3 Connections

Figure 36: Vision of HAP connections (from [2]).

Figure 36 shows the predicted connections for a HAP. A variety of beams will be available from the earth-facing antennas. For broadcast services such as HDTV, low-gain, low-directivity antennas will be employed to provide maximum coverage. Medium-gain antennas will be employed to provide bi-directional broadband fixed wireless access cells for fixed users. Steerable high-gain antennas will allow dynamic allocation of large capacity for purposes such as emergency communications during a disaster, and will also serve high-speed mobile users.

mm-wave links are expected for platform-to-user connections. As well as broadcast, these may provide burst data connections of up to 120 Mbit/s. Free-space optical communications allow higher capacity (622 Mbit/s) in clear conditions and will be used for inter-HAP communications and to supplement mm-wave platform-to-ground backhaul communications.

Connections to satellites and to ground-based hubs for backhaul purposes will be provided. Inter-HAP connections will reduce the demand for terrestrial backbone network capacity and will also allow high-bandwidth optical connections to the ground to migrate from links that are obscured by rain or clouds to links with clear weather. The lack of clouds at stratospheric altitudes prevents inter-HAP optical connections from being obscured.

Some of the major technological challenges here are the development of the steerable antennas and the ability to point, acquire and track the links, both for optical and for mm-wave transmission. The likely solutions appear to be a mixture of mechanical and phased-array techniques.

A2.4.4 CAPANINA Project

CAPANINA is a European project supported by the European Framework 6 initiative (www.capanina.org). The project consists of 13 European partners, supplemented and strengthened by collaboration with the National Institute of Information and Communications Technology of Japan. It is a €5.9M project involving 60 people and has the aim of providing "Broadband for All" from high-altitude aerial platforms. The project started in November 2003 and is due to run for 3 years.

The research is divided into four areas, namely:

• Applications and services

o This workpackage evaluates candidate applications and operating scenarios for delivery by broadband HAPs. The study will provide marketing and business models and will generate information on network requirements. The most suitable applications will be selected, e.g. those that require high capacity in both directions. Cost and revenue analysis will be performed on these applications, as will risk analysis.

o Currently the project has identified five candidate applications:

- Broadband internet access for the residential/SOHO market

- Broadcast-based broadband, e.g. HDTV

- Special events and disaster recovery broadband connections

- WiFi on trains and bus-coaches; this is seen as "a compelling argument", providing up to 120 Mbit/s to a high-speed train

- Internet backhauling

These applications and associated business models form the basis of an issued deliverable.

• Communications Links and Networking

o This workpackage addresses all aspects of the physical communication links, including HAP-HAP, HAP-satellite and HAP-ground node links. The work builds on previous studies in HELINET to identify suitable access standards and available equipment, extending coverage to new requirements such as high-speed mobile access. Changes to support the new high-mobility application and architecture, and other aspects, are being fed back to the relevant standards bodies (e.g. already to OFCOM in the UK and, in December 2005, to CEPT).

Propagation studies will complement existing applicable measurements with new HAP-specific environmental measurements. Path impairments, such as Doppler effects, rain outage and multipath interference, and their mitigation will be investigated. Test bed measurements are contributing to this work.

Advanced signal processing techniques will be developed to minimise power drain while mitigating environmental effects and managing the high aggregate data rates. Resource allocation strategies will mitigate the mobility and interference issues, while making efficient use of the spectrum, maintaining QoS, and sharing the spectrum with terrestrial and satellite operators.

End-to-end networking and interworking with other technologies will be of paramount importance.

o Progress in this area includes:

- Demonstration of both the RF and the optical links in trials 1 and 2 (described below)

- Modelling of same-spectrum reuse for cell coverage by different HAPs; here, different HAPs would broadcast to a cell using the same frequency, and the signals would be separated by the directional nature of the ground antenna and the physical separation of the HAPs

- Modelling of interference to ground BFWA base stations; this work is being fed into standards bodies as a discussion document on methodologies for standards

- Accelerometer studies to understand stability and its impact on tracking technology

- Doppler studies and the mitigation of obstructions, e.g. tunnels, for the high-speed rail application

• Communications Nodes

o This workpackage generates the equipment and techniques for the HAP and the ground nodes. The equipment is being employed in the test bed and in other trials. The workpackage will also develop techniques such as smart antennas and the signal processing for beam forming and optical links.

The work covers mm-wave transmission at around 28 GHz and free-space optical links. The HAP antennas will need to be stabilised and to use pointing, acquisition and tracking techniques.

The RF antenna work will investigate controlling beam shape and direction. The high-speed vehicle connection is anticipated to be the most stringent requirement, with both mechatronic and electronic (phased-array) steering anticipated to be required. Beamforming algorithms will be developed; again, low power consumption will be important here.

Free-space optical links will allow high-capacity connections. These will be used to augment connections to ground-based backhaul stations (in clear air conditions). Optical inter-platform links will not suffer rain and cloud outages, as they are well above cloud level, and can replace terrestrial links (where none are available, or to lower the infrastructure needed) and provide spatial diversity and backhaul availability, e.g. if it is raining at one ground site, an optical connection (and its capacity) can be provided at another site. The project will design and produce HAP and ground optical terminals, and make measurements of the communication channels. The workpackage includes work to design and prototype a mechanical optical beam steering unit, and also to simulate non-mechanical methods such as 2D laser phased arrays.

o As described above, the equipment has been demonstrated successfully in the first and second trials. The payload shape for the second and third trials is different from that for the first. Other progress includes:

- Lens antenna demonstration

- Modelling of a smart antenna to reuse a frequency for several users

• System Test Bed

o The system test bed works in collaboration with the other workpackages. The applications and services identified in WP1 will be trialled with fixed users. Measurement data will be generated for the high-speed train applications, and the propagation environment for the fixed user will be accurately characterised.

Three platforms will be used:

1. A 15 m tethered aerostat at 300 m altitude, which will demonstrate:

• Broadband FWA (BFWA) of up to 120 Mbit/s to a fixed user (28 GHz) and the associated propagation measurements

• End-to-end network connectivity

• Services

• Suitability of the tethered aerostat capability in its own right

• Optical backhaul communications (622 Mbit/s)

2. A stratospheric balloon:

• A selection of the BFWA tests from 1 (28/32 GHz or 47/48 GHz): station keeping, payload, make-up of equipment

• Propagation measurements

• Optical communications

• Backhaul link (up to 622 Mbit/s): point, acquire and track

• Comparison with the Japanese trial

3. Trials with the Japanese partners in Hawaii:

• Details have been agreed for this trial, with the platform being the NASA Pathfinder Plus aircraft. The trial will include a free-space optical pod. The Japanese partners NICT and Japan Stratosphere Communications are providing the vehicle and logistics for the third trial.

o Achievements here include:

- The first set of test bed trials has already been completed, using a low-altitude tethered balloon at Pershore, UK. Bi-directional RF transmission was demonstrated and several broadband applications were trialled. Optical tracking tests were also carried out. The weather was unusually bad for summer in the UK; however, this provided useful stability data, with the wind moving the aerostat 5-10 times more than expected for a stratospheric platform. Tracking could still be achieved.

- While the CAPANINA project uses this trial as a stepping stone towards the stratospheric platform vision, some organisations, including the CAPANINA partner SkyLINC, are developing solutions using tethered aerostat platforms similar to that used here.

- The second trial was completed in August 2005. Successful wireless (4 Mbit/s) and optical (1.25 Gbit/s) links were demonstrated, with the free-space optical link believed to be the first demonstration of such a stratosphere-to-earth link. The wireless link used a wide-beam antenna on the platform and a tracking antenna on the ground, while the optical link used tracking antennae at both ends of the link. The trial allowed evaluation of the effect of the low-temperature, low-pressure payload environment, along with an assessment of the effect of atmospheric turbulence.

- The details of the collaborative trials with the Japanese have been agreed.

CAPANINA does not include research into the platforms themselves. There appear to be difficulties in generating funding for research projects along these lines within the EU framework because of the way funding is organised, which appears to be a frustration for those involved in the project. Many of the interesting issues to be solved in these types of projects are aeronautical, and some interested parties consider it a pity that the EU is not taking a lead in developing these technologies.

A2.4.5 Roadmap

Dr David Grace of CAPANINA has projected the following timeline for deployment:

Near term

Suitable platforms have already been developed, such as NASA's Pathfinder Plus aircraft. The initial deployments are expected to be temporary (...). Once the capacity of one platform is used up, another could be deployed covering the same service area and sharing the same spectrum, and so on [14]. This should be attractive to a network supplier.

Long term

The most interesting and "compelling" application for Western markets is the provision of high-speed broadband to fast-moving mobile users. Current train WiFi systems only provide about the same bandwidth as a dial-up connection to individual users; HAPs would allow well over 2 Mbit/s to each individual.

This is the application that requires the most progress before it can be realised. The main developments to be made here concern tracking antennas. Other issues which need to be solved are the Doppler shifts of the signal, the mitigation of obstructions (tunnels etc.) and potential multipath fading. Various methods might be used to solve these: for example, for broadcast the train may cache enough data for a tunnel passage, while for interactive communications the HAP might talk to a fixed receiver which then uses WiFi to re-broadcast along the tunnel.
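To give a feel for the Doppler issue mentioned above, the worst-case Doppler shift seen by a train-mounted terminal is roughly f_d = (v/c)·f_c. A small sketch, assuming the ~28 GHz mm-wave band used elsewhere in CAPANINA and a 300 km/h train (both values are illustrative, not project results):

```python
SPEED_OF_LIGHT = 3.0e8  # m/s

def max_doppler_hz(speed_kmh: float, carrier_hz: float) -> float:
    """Worst-case Doppler shift for a terminal moving directly towards or away from the HAP."""
    return (speed_kmh / 3.6) / SPEED_OF_LIGHT * carrier_hz

print(f"{max_doppler_hz(300, 28e9) / 1e3:.1f} kHz")  # ~7.8 kHz at 300 km/h and 28 GHz
```

In practice the shift is smaller because the HAP sits at a high elevation angle, but receivers and resource-allocation schemes still have to track it.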

Dr Grace believes that HAPs would integrate with other transmission methods. For example, in large cities like London the trains may receive their signals from ground-based transmitters, whereas once the train leaves the city the HAP may take over. In this case handover strategies would be necessary.

A2.4.6 Other research

There are a number of other research projects in this area:

• USE-HAAS is an EC Framework 6 project which started on 1 March 2005. Its objectives are to develop a roadmap and research strategy for high-altitude airships and high-altitude aircraft.

• NASA has just announced the continuation of its research in the High Altitude Long Endurance Remotely Operated Aircraft project. This project uses the lessons learned from the Environmental Research and Sensor Technology project (1994 until recently), which has recently concluded. The project is planned to last 15 years [3].

• The Korea Aerospace Research Institute is in the 5th year of a 10-year programme to develop a stratospheric airship [4].

• The Japanese "SkyNet" project [5], [6] is a large (€100M to date) project for the delivery of broadband and 3G communications. Working in conjunction with a NASA spin-off, they have recently demonstrated communication provision from their Pathfinder Plus aircraft.

• An ESA study into the delivery of broadband from HAPs [7].

• A British National Space Centre contract for the study of V-band for HAPs and satellites.

• An EPSRC study contract into the delivery of 3G from HAPs.

• A similar Korean 3G delivery project.

• An Indonesian Post and Telecoms HAPs project.

• The UK companies Advanced Technologies Group (Stratsat) [8] and Lindstrand Balloons [9] are developing HAPs.

• There are several initiatives in the US. SkyStation and Angel Technologies [10] have carried out a number of studies; the Angel Technologies solution uses a piloted aircraft. Sanswire [11] has announced that it will develop HAPs for high-speed internet delivery, has demonstrated its "stratellite" and has announced an agreement to build and launch "stratellites" in South America. There are also several projects being financed for "Homeland Security" purposes; Boeing is involved in these activities.

• As mentioned above, SkyLINC Ltd [12] and Platforms Wireless International [13] are developing tethered aerostat solutions.

• The ITU has had wide-ranging activities related to HAPs.

A2.4.7 References

[1] http://www.capanina.org/ (most information in this report is gathered from this web-site)

[2] http://www.capanina.org/CAPANINA-Overview.pps

[3] Del Frate J.H., "Developing Technologies for High Altitude and Long Endurance", 5th Stratospheric Platform Systems Workshop, Tokyo, Japan, Feb 23-24, 2005.

[4] Kim D-M., Lee Y-G., Lee S-J., and Yeom C-H., "Research Activities for the Development of Stratospheric Airship Platform in Korea", 5th Stratospheric Platform Systems Workshop, Tokyo, Japan, Feb 23-24, 2005.

[5] http://www2.nict.go.jp/mt/b181/english/spf/strat-e.htm

[6] http://www.tele.soumu.go.jp/e/system/satellit/skynet.htm

[7] http://telecom.esa.int/telecom/www/object/index.cfm?fobjectid=8188#3

[8] http://www.atg-airships.com/prod/stratsat_frames.htm

[9] http://www.lindstrand.co.uk/

[10] http://www.angeltechnologies.com/

[11] http://www.sanswire.com/

[12] http://www.skylinc.co.uk/

[13] http://www.plfm.net/

[14] D. Grace, J. Thornton, G. Chen, G.P. White, T.C. Tozer, "Improving the System Capacity of Broadband Services Using Multiple High Altitude Platforms", IEEE Transactions on Wireless Communications, to appear, Spring 2005.

A2.5 Mobility

A2.5.1 Seamless Mobility: Convergence in networks and services

Seamless mobility is expected to be achieved through IP-based networks that support portability for seamless interoperation across converged networks.

A2.5.1.1 WLAN services

Through the deployment of WLAN services, users can be provided with the same level of multimedia service in a wireless environment as they have been provided in a wired environment. However, despite its high data transmission rate and low price, WLAN service can only be provided within a limited coverage area, so the possibility for the user to be provided with data services such as Internet access at any time and anywhere is limited.

Interest in providing service continuity through mutual interworking between mobile and WLAN systems is growing, in line with the evolution towards all-IP networks in mobile communications, to enable services that can operate across different networks.

A2.5.1.2 Horizontal and vertical mobility

A multimedia service that provides voice, data and image services through wired, wireless and satellite environments, and a global roaming service that can reach all over the world, are the concepts behind the 3G mobile communication system. The future broadband mobile communication system aims to achieve a seamless global roaming service through inter-system handover (vertical handover), designed to enable handover between different technological systems and different frequency bands.

One of the biggest problems in providing wireless service to fast-moving subscribers is service continuity. Unlike the 3G mobile communications developed to date, future broadband mobile communications set the goal of ensuring high spectral efficiency and a high transmission rate to a moving terminal while providing various types of QoS, etc. The development of future broadband mobile communications is expected to centre around micro/pico cells.

In the existing mobile communications system, the transmission of voice traffic is most important. Therefore, services are provided on the basis of protocols for voice calls, and the handover technology is limited to handovers between cells or between mobile switching centres (MSCs). However, as the demand for data traffic increases and connections to wired network servers for World Wide Web (WWW) or FTP services become more frequent, not only is handover between cells and between MSCs needed, but so is mobility in the upper layers. In addition, because all systems on the mobile communications network are moving to an all-IP or pure-IP structure, mobility in the IP layer and handover have become major issues requiring consideration.

The Internet Engineering Task Force (IETF) has defined Mobile IP to support host mobility. Using the concepts of the home agent (HA) and the care-of address (CoA), it enables a packet destined for a moving terminal to be forwarded to the CoA via the HA through tunneling. Further enhancements are being discussed in the IETF to improve routing and security with Mobile IPv6.

To support mobility of a terminal in an IP network, the easiest and most hierarchical method is to provide handover that is transparent to the upper layers by using Mobile IP, which supports mobility in the IP layer.
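As a purely conceptual illustration of the home agent / care-of address mechanism described above, the toy sketch below mimics the binding-and-tunnelling step in a few lines of Python. All names and addresses are invented for illustration; real Mobile IP additionally involves registration signalling, IP-in-IP encapsulation, binding lifetimes and security, none of which are modelled here.

```python
from dataclasses import dataclass, field

@dataclass
class HomeAgent:
    """Toy model of a Mobile IP home agent: it keeps a binding from a mobile node's
    permanent home address to its current care-of address and 'tunnels' traffic
    accordingly. This is a conceptual sketch, not a protocol implementation."""
    bindings: dict = field(default_factory=dict)  # home address -> care-of address

    def register(self, home_addr: str, care_of_addr: str) -> None:
        self.bindings[home_addr] = care_of_addr

    def forward(self, dest_home_addr: str, payload: str) -> str:
        coa = self.bindings.get(dest_home_addr)
        if coa is None:
            return f"deliver locally to {dest_home_addr}: {payload}"
        # Tunnel: wrap the original packet and send it to the care-of address
        return f"tunnel to {coa} [inner dst {dest_home_addr}]: {payload}"

ha = HomeAgent()
ha.register("10.0.0.7", "192.0.2.45")   # mobile node has moved to a foreign network
print(ha.forward("10.0.0.7", "hello"))  # packet is forwarded to the care-of address
```

The upper layers keep addressing the terminal by its home address, which is what makes the handover transparent to them.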

A2.5.2 Broadband mobile convergence network

With 3G networks, connectivity of up to 2 Mbit/s, terminal mobility and multimedia services can be provisioned. Mobile operators and application developers continue to seek new applications and services that could generate additional revenues and increase usage. The vision for future mobile networks in the converging environment is the provision of broadband access, seamless global roaming, widely available multimedia, and the utilization of the most appropriate connectivity technology.



A2.5.2.1 Perspectives on mobile convergence


In general, there is a movement towards integration and convergence of heterogeneous wireless access<br />

networks. This includes cellular networks but also emerging systems such as WLAN, WPAN, wireless sensor networks (WSN), mobile ad hoc networks, digital broadcasting networks and the Internet, which will complement or expand current and next-generation cellular networks. It is envisioned that the network environment for future broadband mobile communications will consist of an IP-based packet network

infrastructure offering converged services.<br />

Figure 37: Evolution of seamless services across heterogeneous Networks<br />

The convergence can ultimately provide seamless and high-quality broadband mobile communication<br />

service and ubiquitous service through wired and wireless convergence networks without spatial and temporal<br />

constraints, by means of connectivity for anybody and anything, anytime and anywhere.<br />

Various mobile wireless access systems will coexist to provide integrated services. Satellite, cellular, WLAN,<br />

digital broadcast, and other access systems will be connected to provide integrated and seamless services via a<br />

common IP-based core network.<br />

Research bodies such as the Wireless World Research Forum (WWRF) 96 are working on the direction of future strategic research in the wireless field, generating, identifying and promoting research topics and technical trends for mobile wireless system technologies. The forum is thus intended to contribute to the work done within the UMTS Forum, ETSI, 3GPPx 97 , IETF, ITU and other relevant bodies on commercial and standardization issues deriving from this research.

96 http://www.wwrf.org<br />

97 http://www.3gpp.org<br />


A2.5.2.2 Technical approaches to mobile convergence networks

From the service perspective, mobile network architecture will become flexible and versatile, and new services<br />

will be easy to deploy. The new architecture would be based on IP-based core network components that are

access technology independent, and access network components which are technology dependent, consisting of<br />

functional components such as control functions and transport functions.<br />

The next-generation convergence network incorporates the provision of a common, unified, and flexible service<br />

architecture that can support multiple types of services and management applications over multiple types of<br />

transport networks. Essential attributes of this next-generation service architecture include a layered architecture, open service interfaces and distributed network intelligence. It is characterized by an open network architecture for ease of deploying new services, all-IP based integrated transport networks, integrated services and billing management, heterogeneous access networks and multi-function terminals.

Figure 38: Convergence Network scenario<br />

A2.5.2.2.1 Integration, interworking and interoperability

Future broadband mobile communications require interoperation not only with WPANs or WLANs, but also with 2G and 3G mobile communication systems. Interoperability means the availability of well-defined

gateway points and functions between networks. Interoperability is a key technical issue to ensure widespread<br />

adoption of services. From the perspective of system interoperability, the future mobile network will be required<br />

to support global standards such as standardized interfaces between networks, effective and user-friendly<br />

operation, administration, maintenance and provisioning (OAM&P) facilities, and backward compatibility with<br />

existing legacy mobile networks.<br />

Next-generation broadband mobile communications, beyond 3G (4th generation), will be able to provide diverse

multimedia convergence and ubiquitous service. The new generation of mobile multimedia applications in IP<br />

environments will provide synergy between the mobile world and the Internet.<br />

The WWRF is driving a single open mobile wireless Internet architecture that enables seamless integration of<br />

mobile telephony and Internet services, meeting the needs of network operators and Internet service providers.<br />

Although IP is a widely accepted protocol, it still has weaknesses, such as limited address space, lack of mobility and QoS mechanisms, and poor performance over wireless links. The move to next-generation IPv6 may resolve some of these problems.

Cellular network operators are integrating WLAN into their cellular data networks to exploit the bandwidth and roaming facilities of both, in order to provision cost-effective seamless services.


A2.5.2.2.2 Digital broadcasting networks

Convergence of communications and broadcasting is occurring as the demarcation between these services becomes blurred. Cooperation of mobile network operators with the broadcasting network will open up new applications

such as navigation, traffic information, and interactive multimedia services. A performance improvement by the<br />

broadband mobile convergence network will create the basis for providing various multimedia contents in the<br />

future and lay the groundwork in advance for a converged mobile communications broadcasting service. The<br />

convergence technology also enables interactive data broadcasting services in terrestrial and satellite DMB systems. It is suitable for transmitting entertainment and information programmes, as well as traffic information and events, to cars, buses or trains.

A2.5.2.2.3 Mobile ad hoc networks

Ubiquitous computing can be realized with mobile ad hoc networks, which are currently being experimented with. A mobile ad hoc network does not rely on existing network infrastructure, and it can be constructed even in extraordinary conditions such as a disaster. In a mobile ad hoc network, nodes communicate with each other without the help of any pre-existing structure. The network is formed autonomously among many nodes, such as PDAs and laptops, with varying functionalities and power levels. It will be an enabler for ubiquitous computing and can also perform significant functions during natural disasters, where pre-existing infrastructure may be destroyed. The most representative ad hoc network is the IP-based multi-hop mobile ad hoc network (MANET) being standardized by the IETF. The most conspicuous feature of the mobile ad hoc network is the dynamic change of network topology associated with node mobility, which directly affects the routing protocol that manages the routes.

A2.5.3 Extra information:

• ITU-R; Broadband Mobile Communications towards a converged world, March. 2004<br />

• P. S. Henry, “Wi-Fi: What’s Next ?”, IEEE Communications Magazine December 2002.<br />

• Theodore Zahariadis and Demetrios Kazakos, “(R)Evolution toward 4G Mobile Communication<br />

Systems”, IEEE Wireless Communications, August 2003.<br />

• Yungsoo Kim et al: “Beyond 3G: Vision, Requirements, and Enabling Technologies”, IEEE<br />

Communications Magazine, March 2003.<br />

• Willie W. Lu, “Fourth-Generation Mobile Initiatives and Technologies”, IEEE Communications<br />

Magazine, March 2002.<br />

• Theodore B. et al., “Global Roaming in Next-Generation Networks”, IEEE Communications Magazine, Next-Generation Broadband Wireless Networks and Navigation Services, February 2002.

• Johan de Vriendt, Philippe Laine, Christophe Lerouge, and Xiaofeng Xu, “Mobile Network Evolution :<br />

A Revolution on the Move”, IEEE Communications Magazine, April 2002.<br />

• Kyungsoo Jeong, Kimo Chung, Youngho Jo, and Jaehwang Yu, “Convergence Technologies of Mobile<br />

Communications and Broadcasting over Mobile Communication Networks”, SK Telecommunications<br />

Review, August 2003.<br />

• Mehmet Ulema and Barcin Kozbe, “Management of Next-generation Wireless Networks and Services”,<br />

IEEE Communications Magazine, February 2003.

• Sandy Teger and David J. Waks, “End-User Perspective on Home Networking”, IEEE<br />

Communications Magazine, April 2002.

• Ian F. Akyildiz, Weilian Su, Yogesh Sankarasubramaniam, and Erdal Cayirci, “A Survey on Sensor<br />

Networks”, IEEE Communications Magazine, August 2002.

• Alexander Linden, “Emerging Technology Scenario”, Gartner Symposium ITXPO 2003, March 2003


A2.6 VIDEO in ALL-IP BROADBAND NETWORKS

A2.6.1 Audio Video Coding

The field of audio-visual coding has seen, over the last twenty years, a significant evolution leading to the emergence of a large number of international standards. In 1984, CCITT Study Group XV specified a standard, H.261, for videophony and videoconferencing applications operating at integral multiples of 64 Kbits/s in the range of 64 Kbits/s to 1.92 Mbits/s. In parallel, the CCIR issued Recommendations 721 and 723, specifying the coding of CCIR 601 signals with contribution quality at 140 Mbits/s for transmission over H4 channels, and between 30 and 45 Mbits/s with distribution quality for transmission over H3 channels. The CCITT (which later became the ITU-T) Recommendation H.261 evolved into the H.263 standard for videophony, videoconferencing and mobile communications. In 1998, the ITU added 12 extra modes to the H.263 standard, leading to H.263 version 2 (or H.263+). These extra modes were introduced to improve compression performance, to provide some error resilience and to offer some degree of scalability or layered representation.

In 1992, the Joint Photographic Experts Group (JPEG), a joint group between ISO/SC2/WG8 and CCITT SG VIII, specified the standard ISO 10918 for still image coding. At the same time, the Moving Picture Experts Group (MPEG) defined the MPEG-1 coding standard for digital storage media with access rates up to about 1.5 Mbits/s. The advances in digital signal compression rapidly made it possible to consider the distribution of several TV programmes over a single transmission channel (satellite, cable or terrestrial). In the early 1990s, ISO thus initiated a new standardization phase which led, in 1995, to the MPEG-2 standard for digital TV and HDTV distribution.

The emergence of the Internet and of mobile applications over narrowband networks gave the impulse for pursuing this standardization trend, giving rise to MPEG-4 version 1 in 1999, version 2 in 2000 and, more recently, to MPEG-4 Part 10, also known as H.264 102 , the result of a joint effort between ISO and the ITU. The performance gap between H.264 and MPEG-4 versions 1 and 2 is such that it is likely to impose itself on the market for all applications in need of compression efficiency.

Notwithstanding this large number of solutions, compression at low bit rates remains a widely sought capability<br />

for audiovisual communication over voice-band and wireless networks. However, even if compression remains a<br />

key issue, this is not the only one that has to be taken into account. Scalability and error resilience for<br />

transmission in heterogeneous environments with non-guaranteed delivery QoS have become important features

of compression solutions for a large number of applications (see below and annex 1). However, scalability<br />

versus simulcast is still at the core of the debates in some standardization bodies (ISO).<br />

A2.6.1.1 Scalable audio-visual content representation

Scalable AV coding has become a very active field of research. It is basically motivated by the following application benefits:

• Flexible and seamless adaptation to heterogeneous receiving devices;<br />

• Optimized delivery QoS, with optimized protection and flexible, seamless dynamic adaptation to varying bandwidth in wired and wireless networks. A scalable solution would allow the server to adapt dynamically to the network load by reducing or increasing the amount of data per client (with a guarantee on a minimum quality). E.g., in UMTS (this is also true for GPRS), the available bandwidth on a node (384 kbps targeted) has to be shared between several users. Storing several files on the server is not a good solution for bandwidth adaptation. A broad-range scalability from 32 to 384 kbps with a granularity of 32 kbps (mainly temporal and SNR scalability) would be highly beneficial for UMTS. In the context of GPRS, a granularity of 10 kbps would be appreciated in order to fit the time slot size (see the extraction sketch after this list).

• Easier roaming support in home networks or outdoor mobile applications: changing device or network would<br />

not imply changing the source file/bit-stream, hence would make the synchronisation more natural.<br />

102 T. Wiegand, ed., “Version 3 of H.264/AVC,” JVT-L012d2<br />


• Optimize the trade-off between the number of users served and the QoS perceived by each of them; the<br />

bandwidth available for a customer may be 2.5 Mbps instead of the 3 Mbps necessary to receive the service<br />

(TV over DSL). Truly scalable coders would allow serving this group of customers even though their<br />

capacity is below the requested capacity.<br />

• Extract multiple resolutions for a single target bit-rate: for instance, 300 kbps for mobile terminals and for<br />

TV applications. For the mobile terminal we would use a limited spatial resolution, while for the TV the<br />

spatial resolution would be higher. In such a case, a predefined encoding at 300kbps is obviously not a<br />

suitable solution. A fully scalable solution allowing for multiple paths in the spatial, temporal and quality<br />

resolutions is the appropriate way forward.

• Allow for seamless network and service offer evolutions: for example, for ADSL, a three-level solution based on 500 kbps / 1 Mbps / 6 Mbps could be acceptable one year from now, but what about in 3 or 5 years' time? Would the most appropriate offer be 2 Mbps / 5 Mbps / 10 Mbps, 2.5 Mbps / 7 Mbps / 13 Mbps, or 250 kbps / 1 Mbps / 4 Mbps? With a coder providing only three or four levels of scalability, all the content may need to be re-encoded each time the network capability or service offer evolves. In contrast, with a truly scalable coder the content is encoded only once, and the appropriate bitstream (rate, resolution) is extracted from the scalable bitstream.

• Minimize production costs: in the context of TV over ADSL, the MPEG-2 bit-rate is set to a predefined value. The same content (TV bouquets) is also broadcast over DVB channels, with a different bit-rate. This implies multiple encodings of the same content. It is also worth noting that DVB is working on new mobile DVB-H devices that will be able to receive TV content over DVB-T networks at a different bit-rate. The emerging Wimax networks will also be able to broadcast TV over wireless. Future GE networks and intelligent DSLAMs will probably be able to send the same content over Wimax and ADSL. It would therefore be highly desirable for content providers to use the same compressed TV content for DSL, Wimax and DVB networks.

• Easier DRM support: different DRM schemes can be applied to different layers or packets to enable

differential services. In addition, transcoding of content can alter the watermarks present in the video stream.<br />

Alternatively, if scalable coding is used, then only a ‘baseline’ scalable layer can be watermarked and<br />

adaptation of content can be done easily without removing the watermarks.<br />

• Would allow taking full advantage of the MPEG-21 DIA framework.
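The bandwidth-adaptation benefit referred to in the list above can be illustrated with a minimal sketch of bitstream extraction (layer names, rates and resolutions below are hypothetical, not taken from any standard): the content is encoded once as cumulative layers, and the server extracts, per client, the highest operating point that fits the measured bandwidth.

```python
# Minimal sketch of extraction from a layered/scalable stream. Layer names,
# rates and resolutions are illustrative placeholders, not standard values.
from dataclasses import dataclass

@dataclass(frozen=True)
class Layer:
    name: str
    cumulative_kbps: int    # rate needed when this and all lower layers are kept
    resolution: str
    fps: int

# One encoding, several extractable operating points (lowest to highest).
LAYERS = [
    Layer("base",      32, "QCIF", 7),
    Layer("temporal", 128, "QCIF", 15),
    Layer("spatial",  384, "CIF",  30),
    Layer("quality",  768, "CIF",  30),
]

def extract(available_kbps: int) -> Layer:
    """Pick the highest operating point that fits the available bandwidth."""
    chosen = LAYERS[0]                       # always keep at least the base layer
    for layer in LAYERS:
        if layer.cumulative_kbps <= available_kbps:
            chosen = layer
    return chosen

# The same stored stream serves a GPRS, a UMTS and a DSL client.
for bw in (40, 350, 2500):
    point = extract(bw)
    print(f"{bw:>5} kbps -> {point.name} layer, {point.resolution} @ {point.fps} fps")
```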

Amendment to MPEG-4 Part 10: SVC<br />

An activity aiming at the specification of a scalable video coding standard was launched in the context of MPEG-21. An overview of the MPEG-21 standard is given in Annex 1. This activity was intended to lead to a standard to be included in MPEG-21 as MPEG-21 Part 13. However, the emerging solution being essentially an extension of MPEG-4 Part 10 (MPEG-4 AVC, or H.264), it was decided in January 2005 to move this activity to the JVT group (the Joint Video Team between the ITU and MPEG, the group which specified MPEG-4 AVC/H.264). It is now a JVT (ITU+MPEG) amendment to MPEG-4 Part 10 (H.264/AVC). The scalable video model (SVM) is now called the JSVM.

The MPEG video coding standards (essentially MPEG-1 and MPEG-2) have enjoyed very large success over the past decade. However, the existing standards show several limitations, e.g. in terms of scalability, loss and error resilience, that have become increasingly apparent with the growing and evolving needs and requirements of multimedia applications (e.g., Internet and mobile video). In early 2003, the MPEG committee began to

investigate the possibility of creating a scalable video coding standard that would provide a range of features in<br />

a single compressed bitstream. The objective was to specify a coding scheme allowing for reliable delivery of<br />

video to diverse clients over heterogeneous networks using available system resources, particularly in scenarios<br />

where the downstream client capabilities, system resources, and network conditions are diverse and not known<br />

in advance.<br />

A call for evidence was issued in March 2003, and proposals were submitted for consideration by the committee in July 2003. Their evaluation led to the launch of a call for proposals in October 2003, to which further proposals were submitted. The calendar of the corresponding work item is as follows

(N6003):<br />

• WD - January 2005<br />

• CD - October 2005<br />

• FCD - March 2006<br />

• FDIS - July 2006<br />


The SVC standard was initially targeting the following set of requirements. The first requirement targeted is coding efficiency: the embedded bitstream shall not incur a coding penalty larger than 10% in bit-rate, for the same perceived quality, compared with the bitstream produced by state-of-the-art non-scalable coding schemes under error-free conditions. The reference system considered is H.264.

A set of requirements with respect to scalability has also been defined, with varying degrees of importance:

• Spatial scalability: The standard should be able to support a variety of resolutions, including QQQCIF,<br />

QQCIF, QCIF, CIF, SD, and HD, and also higher resolutions such as 1436 x 1080 to 3610 x 1536. The same<br />

stream should enable more than 3 levels of spatial resolution to be viewed on a variety of devices having<br />

different spatial resolutions.<br />

• Temporal scalability: The temporal scalability should support decoding of moving pictures with frame<br />

rates up to 60 Hz. The same stream should enable at least 3 levels of temporal resolutions. For example,<br />

7.5Hz, 15Hz and 30Hz should be supported for certain applications.<br />

• SNR Scalability : The codec shall support a mechanism that enables quality (SNR) scalability. The SNR<br />

scalability refers to decoding of moving pictures having quality varying progressively between acceptable<br />

and visually lossless in fine-grained steps. A potential application of this functionality is network adaptation.<br />

• Complexity scalability: The coded bitstream shall be adjustable to the complexity levels and power<br />

characteristics of the receiving devices. A device could then trade off quality against complexity and/or longer battery life.

• Combined scalability: This refers to combined quality (SNR), temporal, spatial and complexity scalability.<br />

Also, from a user experience perspective, scalable coding should provide a gradual change (fine-granularity)<br />

in quality (SNR), temporal and spatial scalability. When a device moves from a high bandwidth to a low<br />

bandwidth connection, the change in the user experience should be gradual.<br />

• Multiple adaptations: The bitstream shall permit multiple successive extractions of lower quality bitstreams<br />

from the initial bitstream. When mobile video is being streamed over a 3G IP network with Multimedia<br />

Broadcast/Multicast Services (MBMS), the video bitstream can potentially be ‘transcoded’ at various points<br />

in the network.<br />

In addition to being compression-efficient and allowing for flexible scalable content representation, the compressed bitstream should also:

• Be robust to different types of transmission errors: For transmission over error-prone networks (wireless,<br />

Internet), the scalable coding should provide acceptable quality video under different types of error patterns<br />

(burst, independent, uniformly distributed, etc.).<br />

• Be robust under “best-effort” networks and in the presence of server and path diversity: this is an

important feature for streaming over best-effort Internet or for video over 802.11b. The distributed<br />

infrastructure of a content delivery network may be used to stream video from multiple servers and over<br />

multiple paths to each client, thereby overcoming problems afflicting a single server or a single path.<br />

• Allow for graceful degradation, i.e. give acceptable quality that degrades gracefully under different transmission error conditions.

Additional targeted functionalities are:<br />

• Colour depth: The standard shall support coding of moving pictures containing up to 10 bits per pixel<br />

component (linear and logarithmic). The standard should support coding of moving pictures containing up to<br />

12 bits per pixel component (linear and logarithmic). The standard shall also support coding of moving<br />

pictures in YCbCr formats. For the YCbCr format, 4:4:4, 4:2:2 and 4:2:0 samplings should be supported if<br />

the source is in this format. The standard should support coding of moving pictures in RGB formats.<br />

• Base-layer compatibility: The standard should be able to build the scalable coding on top of existing base-layer standards (MPEG-4 AVC or H.264).

• Low complexity codecs: shall enable low complexity implementations for encoding as well as decoding.<br />

• End-to-end delay: shall support a low delay mode with a maximum end-to-end delay of 150ms for the video<br />

encoding and decoding, e.g., for conversational services.<br />

• Random access capability: should provide random access at any scalability layer (e.g. spatial, temporal,<br />

quality).<br />


• Support for coding of interlaced signals: For certain resolutions (SD, HD) scalable coding shall provide a<br />

mechanism for coding interlaced material. For these given resolutions and applications, the final bitstream<br />

should permit scalability between interlaced and progressive formats (e.g. ITU-R 601 interlaced material and<br />

CIF progressive material).<br />

• System interface to support quality selection: shall define a uniform way to manipulate and adapt scalable<br />

streams that can be mapped easily to the protocols in use. E.g., the industry requires a standard interface to

manipulate the scalable video contents created by the proposed technology via popular system layers and<br />

control protocols, such as MPEG-4 systems, MPEG-21 DIA, RTSP/SDP, and SIP.<br />

Emerging technologies<br />

Two main technologies were at first investigated in parallel:

• MCTF 103 /Wavelet based solutions producing fully scalable and embedded bitstreams, at the expense of some<br />

compression vs quality penalty at low resolutions/rates;<br />

• MPEG-4 AVC-based, with a conformant AVC base layer and with scalability extensions.<br />

The technology based on a scalable extension to MPEG-4 AVC has been retained as the basis for the definition<br />

of the new standard. It is likely to be introduced as a JVT (ITU+MPEG) amendment to MPEG-4 Part 10

(H.264/AVC). The reference software is called the JSVM (JVT Scalable Video Model).<br />

An ad hoc group continues to explore wavelet-based technologies within MPEG. This group is expected to

identify the requirements not fulfilled by the forthcoming MPEG-4 part 10 amendment, requirements which<br />

could then justify starting yet another standardization phase. This activity might again lead to a new call for<br />

evidence if the technology shows sufficient maturity.<br />

A2.6.1.2 New audio-visual concepts

A2.6.1.2.1 3D A/V

3D A/V refers to technology in support of interactive and 3D TV. Interactivity is understood in the sense that<br />

the user can navigate within real-world audio-visual scenes and freely choose a viewpoint and/or view direction. Different set-ups and technologies are envisaged for interactive and 3D TV: omni-directional video, free

viewpoint video and stereoscopic or multi-view video. The “omni-directional video” set-up refers to 360-degree<br />

view from one single viewpoint or spherical video. The notion of “free viewpoint video” refers to the<br />

possibility for the user to choose an arbitrary viewpoint and/or view direction within a visual scene, creating an<br />

immersive environment. Stereoscopic video is composed of two-view videos, the right and left images of the<br />

scenes which, combined, can recreate the depth aspect of the original scene. The user can also get a depth<br />

impression of the scene by generating separate views for each eye.<br />

In multi-view set-ups, multiple synchronized video streams depicting the same scene from different viewpoints<br />

must be encoded. This represents a huge amount of raw image data. By applying standard compression<br />

technology to each stream individually, the redundancy among the streams is not exploited. New algorithms<br />

must be developed for encoding and transmitting multi-view sequences. Navigation within 3D scenes requires<br />

an image based representation and a 3D reconstruction and representation of the scene. This requires capturing<br />

the 3D geometry of the scene. Technology to reconstruct 3D objects and geometries from real captured imagery<br />

and 3D audio information from real captured sound needs to be developed. The 3D geometry model can be used<br />

to relate the video images recorded from different viewpoints. View-dependent geometry and mesh coding,<br />

scalable representation for geometry and 3D meshes are required for efficient delivery with low latency of the<br />

3D scene over networks. Since Dec. 2001 MPEG has been exploring 3D audio-visual data representation and<br />

coding technology.<br />

Multiple Description Coding (MDC) and Joint Source-Channel Coding<br />

Most compression systems are based on transform coding, i.e., rely first on a linear transform that projects the<br />

time or spatial signal representation into another space of representation. In theory, this projection should aim at<br />

first concentrating the signal energy on a restricted number of close to independent (or at least de-correlated)<br />

samples (or coefficients). However, the optimal transform being signal distribution dependent, most practical<br />

systems are based on sub-optimal approaches, and the resulting coefficients are not independent. The transform<br />

is often chosen so that the resulting signal representation makes it possible to best exploit the residual redundancy and

103 MCTF : Motion-Compensated Temporal Filtering<br />


possibly end-user perceptual characteristics. The redundancy is then removed by using prediction techniques<br />

and/or using signal statistics dependent entropy codes.<br />

The considerations above are completely independent of possible degradations induced by the transport of the<br />

compressed streams over the networks. Compression systems aim at removing as much correlation as

possible, i.e., at producing a sequence of independently and uniformly distributed bits.<br />

However, the counterpart is an increased sensitivity to transmission noise, i.e. to both erasures and errors. The<br />

underlying assumption is that the lower layers (transport, link and physical layers) support erasure and error<br />

recovery, via appropriate ARQ and/or FEC mechanisms. However, this separation principle suffers from well-known limitations: delay and/or reduced throughput for the higher layers. Considering FEC, Reed-Solomon codes are often used at the packet level, i.e., after encapsulation in RTP packets, to recover some of the erasures. The granularity of code-rate adaptation is restricted, as it depends on the packet size. The induced delay can also be very high. In addition, all data in a packet are protected with the same code rate. Given that the data transported in a packet have different levels of priority (e.g., for video, motion information is more important than texture),

this is sub-optimal. This has motivated recent research trends in the direction of joint source-channel coding<br />

(JSCC).<br />
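To make the packet-level FEC idea concrete, the following is a minimal sketch of single-parity erasure protection over a group of packets, a deliberately simplified stand-in for the Reed-Solomon codes mentioned above (it can recover at most one lost packet per group and assumes packets of equal size).

```python
# Minimal sketch of packet-level erasure protection: one XOR parity packet per
# group of k source packets. A simplified stand-in for Reed-Solomon FEC: it can
# recover at most one erased packet per group; equal packet sizes are assumed.
from functools import reduce

def make_parity(packets: list[bytes]) -> bytes:
    """XOR all source packets together to build the parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received: dict[int, bytes], parity: bytes, k: int) -> dict[int, bytes]:
    """Rebuild a single missing packet; erasure positions are known to the receiver."""
    missing = [i for i in range(k) if i not in received]
    if len(missing) == 1:
        received[missing[0]] = make_parity(list(received.values()) + [parity])
    return received

# Usage: protect a group of 4 packets, lose packet 2 in transit, recover it.
group = [bytes([i]) * 8 for i in range(4)]
parity = make_parity(group)
arrived = {0: group[0], 1: group[1], 3: group[3]}      # packet 2 erased
print(recover(arrived, parity, k=4)[2] == group[2])    # True
```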

The JSCC paradigm aims at coupling the compression and protection functions (against both types of impairments, i.e. erasures and errors) in a very flexible and efficient way. The approach is fully compatible with 3GPP and IETF trends exemplified by UDP-lite and ROHC. The goal is to provide an optimum trade-off between compression efficiency and resilience, with dynamic adaptation based on information about the transmission conditions.

Let us, for example, consider the problem of erasures or packet losses. In this type of degradation, the receiver<br />

either receives the message (packet) correctly or does not receive it at all. In addition, it knows the locations of<br />

the erased samples (the lost packets). Compression standards, such as MPEGx and H26x, support mechanisms<br />

such as prediction mode restriction and coding mode selection taking into account the signal distortion induced by

the erasures 104 105 . The adaptation of coding modes to the loss characteristics of the network is a first approach<br />

in the direction of joint source-channel coding. Choosing an Intra coding mode rather than a temporal predictive<br />

mode amounts to keeping some temporal correlation in the signal representation in order to avoid or restrict loss

propagation on consecutive images. One can however push further the idea in the direction of solutions allowing<br />

a finer control of the rate-distortion performance of the end-to-end chain. Rather than trying to de-correlate the signal samples to eventually produce close to independent samples, in the presence of erasures it may be preferable

to maintain or introduce redundancy in the compressed source representation. These considerations have<br />

motivated research in the area of frame expansions as joint source-channel codes.<br />
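The loss-aware mode selection just described can be sketched as follows (the distortion and rate figures are hypothetical placeholders, not any standard's rate-distortion model): for each block, the encoder compares the expected end-to-end cost of Inter and Intra coding under the current packet loss rate, so that a higher loss rate pushes more blocks towards Intra refresh.

```python
# Minimal sketch of loss-aware coding mode selection. The distortion and rate
# figures below are hypothetical placeholders, not a real encoder's RD model.

def expected_cost(d_ok: float, d_loss: float, rate: float,
                  p_loss: float, lmbda: float) -> float:
    """Lagrangian cost J = E[D] + lambda * R under a given packet loss rate."""
    expected_distortion = (1.0 - p_loss) * d_ok + p_loss * d_loss
    return expected_distortion + lmbda * rate

def choose_mode(p_loss: float, lmbda: float = 0.1) -> str:
    # Inter prediction: cheap in rate, but loss propagation makes errors costly.
    inter = expected_cost(d_ok=2.0, d_loss=40.0, rate=10.0, p_loss=p_loss, lmbda=lmbda)
    # Intra refresh: costs more bits, but a loss only affects the current block.
    intra = expected_cost(d_ok=2.0, d_loss=12.0, rate=30.0, p_loss=p_loss, lmbda=lmbda)
    return "intra" if intra < inter else "inter"

for p in (0.0, 0.05, 0.2):
    print(f"loss rate {p:.2f} -> {choose_mode(p)}")   # more intra as losses grow
```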

To make an optimal use of the redundancy introduced in the compressed stream, one can in addition exploit<br />

channel diversity. This amounts to transmitting the data in such a way that the probability of corrupting correlated

data is low. This way, one can then on the receiver side estimate corrupted information from correlated received<br />

information. In source coding, this is referred to as multiple description coding. Several correlated coded<br />

representations of the signal are created and transmitted on different channels. Note that, alternately to making<br />

use of different channels, one can design a packetization scheme (with interleaving) so that the probability of<br />

having two packets transporting correlated information corrupted at the same time would be low. Multiple<br />

description coding principles originated back in the 1970s with the idea of channel splitting of odd and even speech samples to cope with phone line outages. Multiple description coding has since been formalized as a

generalization of source coding subject to a fidelity criterion for communication systems that use diversity to<br />

overcome channel impairments. In other words, the question is how to achieve the best average rate-distortion<br />

performance when all the channels work, subject to constraints on the average distortion when only a subset of<br />

channels is received correctly. This question has led to the definition of theoretically optimal achievable rate-distortion regions 106 107 and to a significant research effort dedicated to the design of practical systems for

generating descriptions that would best approach these theoretical bounds. An overview of research directions in<br />

this area is given in 108 .<br />
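As a toy illustration of the odd/even channel-splitting form of MDC mentioned above (a sketch of the principle only, not a rate-distortion-optimized MDC design): each description alone yields a degraded but usable signal, and a lost description is concealed by interpolating from the one that was received.

```python
# Toy sketch of two-description MDC by odd/even sample splitting. Not a
# rate-distortion-optimized MDC scheme: only the channel-splitting principle.

def split(samples: list[float]) -> tuple[list[float], list[float]]:
    """Description 1 carries even-indexed samples, description 2 the odd ones."""
    return samples[0::2], samples[1::2]

def reconstruct(even, odd) -> list[float]:
    """Merge both descriptions when available; interpolate a missing one."""
    if odd is None:                     # description 2 lost: conceal by interpolation
        odd = [(a + b) / 2 for a, b in zip(even, even[1:] + even[-1:])]
    if even is None:                    # description 1 lost
        even = [(a + b) / 2 for a, b in zip([odd[0]] + odd[:-1], odd)]
    merged = []
    for e, o in zip(even, odd):
        merged.extend([e, o])
    return merged

signal = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
d1, d2 = split(signal)
print(reconstruct(d1, d2))      # both descriptions received: exact reconstruction
print(reconstruct(d1, None))    # one description lost: degraded but usable signal
```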

104 R.O.Hinds, T.N. Pappas, and J.S. Lim, “Joint block-based video source/channel coding for packet-switched networks,” in Proceedings<br />

SPIE Visual Communication and Image Processing, Feb. 1997, vol. 3309, pp. 124–133<br />

105 V. K. Goyal, “Multiple description coding: compression meets the network,” IEEE Signal Processing Magazine, vol. 18, no. 5, pp. 74–93, Sept. 2001

106 L. Ozarow, “On a source coding problem with two channels and three receivers,” Bell Syst. Tech. J., vol. 59, pp. 1909–1921, Dec. 1980.

107 A.A. El Gamal and T.M. Cover, “Achievable rates for multiple descriptions,” IEEE Trans. on Information Theory, vol. IT-28, no. 6, pp.<br />

851–857, Nov. 1982.<br />

108 C. Guillemot and P. Christ, “Joint source-channel coding as a framework for 4G wireless multimedia”, Eurasip computer communication journal, special issue on “Research directions for 4th generation networks”, vol. 27, no. 8, pp. 762–779, May 2004

A2.6.2 Emerging transport protocols


With interactivity requirements precluding the use of TCP, UDP and RTP have become the protocols of choice for the transport of continuous multimedia streams on the Internet. They are widely used in existing products for multimedia transport (see Apple’s QuickTime Streaming Server, Microsoft Windows Media Server, RealNetworks Helix Universal Server, etc.). Multimedia streaming over RTP/UDP has also been widely investigated in past IST FP5 projects and is being retained by initiatives such as ISMA (Internet Streaming Media Alliance).

A2.6.2.1 RTP/RTCP

The real-time transport protocol (RTP) provides end-to-end delivery services for data with real-time<br />

characteristics, such as interactive audio and video. Those services include payload type identification, sequence<br />

numbering, timestamping and delivery monitoring. Applications typically run RTP on top of UDP to make use<br />

of its multiplexing and checksum services; both protocols contribute parts of the transport protocol functionality.<br />

However, RTP may be used with other suitable underlying network or transport protocols. RTP supports data<br />

transfer to multiple destinations using multicast distribution if provided by the underlying network.<br />

However, RTP itself does not provide any mechanism to ensure timely delivery or provide other quality-of-service guarantees, but relies on lower-layer services to do so. It does not guarantee delivery or prevent out-of-order delivery, nor does it assume that the underlying network is reliable and delivers packets in sequence. The

sequence numbers included in RTP allow the receiver to reconstruct the sender’s packet sequence, but sequence<br />

numbers might also be used to determine the proper location of a packet, for example in video decoding, without<br />

necessarily decoding packets in sequence. While RTP is primarily designed to satisfy the needs of multi-participant multimedia conferences, it is not limited to that particular application. Storage of continuous data,

interactive distributed simulation, active badge, and control and measurement applications may also find RTP<br />

applicable. RTP consists of two closely-linked parts:<br />

• the real-time transport protocol (RTP), to carry data with delay constraints;

• the RTP control protocol (RTCP), to monitor the quality of service and to convey information about the<br />

participants in an on-going session.<br />

RTP is intended to be malleable to provide the information required by a particular application and is often<br />

integrated into the application processing rather than being implemented as a separate layer. RTP is a protocol<br />

framework that is deliberately not complete. Unlike conventional protocols in which additional functions might<br />

be accommodated by making the protocol more general or by adding an option mechanism that would require<br />

parsing, RTP is intended to be tailored through modifications and/or additions to the headers as needed.<br />

Therefore, a complete specification of RTP for a particular application will require one or more companion<br />

documents:<br />

• a profile specification document, which defines a set of payload type codes and their mapping to payload<br />

formats. A profile may also define extensions or modifications to RTP that are specific to a particular class<br />

of applications. Typically an application will operate under only one profile.<br />

• payload format specification documents, which define how a particular payload, such as an audio or video<br />

encoding, is to be carried in RTP.<br />

RTP Data Transfer Protocol<br />

The RTP header consists of a fixed part, a variable part and an extension. The fixed part of twelve octets is present in every RTP packet, while the variable part, which contains a list of 0 to 15 CSRC identifiers, is present only when inserted by a mixer. The extension is optional; its usage, length and contents depend upon individual implementations. See RFC 1889.
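For illustration, a minimal sketch of packing and unpacking the twelve-octet fixed part described above is given below (field layout as in RFC 1889; the example values are arbitrary, and the CSRC list and header extension are not handled):

```python
# Minimal sketch: pack/unpack the 12-octet fixed RTP header (RFC 1889 layout).
# Example values are arbitrary; no CSRC list or header extension is handled.
import struct

def pack_rtp(pt: int, seq: int, ts: int, ssrc: int, marker: bool = False) -> bytes:
    version, padding, extension, csrc_count = 2, 0, 0, 0
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (int(marker) << 7) | (pt & 0x7F)
    return struct.pack("!BBHII", byte0, byte1, seq, ts, ssrc)

def unpack_rtp(header: bytes) -> dict:
    byte0, byte1, seq, ts, ssrc = struct.unpack("!BBHII", header[:12])
    return {
        "version": byte0 >> 6,
        "csrc_count": byte0 & 0x0F,
        "marker": bool(byte1 >> 7),
        "payload_type": byte1 & 0x7F,
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

hdr = pack_rtp(pt=96, seq=4711, ts=160000, ssrc=0x1234ABCD, marker=True)
print(len(hdr), unpack_rtp(hdr))    # 12 {'version': 2, ...}
```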

In RTP, multiplexing is provided by the destination transport address (network address and port number), which defines an RTP session. It is not intended that audio and video be carried in a single RTP session and

demultiplexed based on the payload type or SSRC fields. Interleaving packets with different payload types but<br />

using the same SSRC would introduce several problems:<br />

If one payload type were switched during a session, there would be no general means to identify which of the<br />

old values the new one replaced.<br />

An SSRC is defined to identify a single timing and sequence number space. Interleaving multiple payload types<br />

would require different timing spaces if the media clock rates differ and would require different sequence<br />

number spaces to tell which payload type suffered packet loss.<br />


The RTCP sender and receiver reports can only describe one timing and sequence number space per SSRC and<br />

do not carry a payload type field.<br />

An RTP mixer would not be able to combine interleaved streams of incompatible media into one stream.<br />

Carrying multiple media in one RTP session precludes: the use of different network paths or network resource<br />

allocations if appropriate; reception of a subset of the media if desired, for example just audio if video would<br />

exceed the available bandwidth; and receiver implementations that use separate processes for the different<br />

media, whereas using separate RTP sessions permits either single- or multiple-process implementations. Using a<br />

different SSRC for each medium but sending them in the same RTP session would avoid the first three problems<br />

but not the last two.<br />

The existing RTP data packet header is believed to be complete for the set of functions required in common<br />

across all the application classes that RTP might support. However, in keeping with the ALF design principle,<br />

the header may be tailored through modifications or additions defined in a profile specification while still<br />

allowing profile-independent monitoring and recording tools to function.<br />

The header extension mechanism is provided to allow individual implementations to experiment with new<br />

payload-format-independent functions that require additional information to be carried in the RTP data packet<br />

header. This mechanism is designed so that the header extension may be ignored by other interoperating<br />

implementations that have not been extended.<br />

RTP Control Protocol - RTCP<br />

The RTP control protocol (RTCP) is based on the periodic transmission of control packets to all participants in<br />

the session, using the same distribution mechanism as the data packets. The underlying protocol must provide<br />

multiplexing of the data and control packets, for example using separate port numbers with UDP. RTCP<br />

performs four functions:<br />

• Provision of feedback on the quality of the data distribution<br />

• Carriage of a persistent transport-level identifier for an RTP source (the CNAME)

• Observation of the number of participants for control of sending rate<br />

• Optional function to convey minimal session control information<br />

There are five types of RTCP packets defined in RFC 1889 for carrying a variety of control information:

• SR: Sender report, for transmission and reception statistics from participants that are active senders<br />

• RR: Receiver report, for reception statistics from participants that are not active senders<br />

• SDES: Source description items, such as CNAME, NAME and EMAIL<br />

• BYE: Indicates end of participation

• APP: Application specific functions<br />

Each RTCP packet begins with a fixed part similar to that of RTP data packets, followed by structured elements<br />

that may be of variable length according to the packet type but always end on a 32-bit boundary. The definition<br />

of packet format for each type of RTCP packets can be found in RFC 1889.<br />

Multiple RTCP packets may be concatenated without any intervening separators to form a compound RTCP<br />

packet that is sent in a single packet of the lower layer protocol, for example UDP. There is no explicit count of<br />

individual RTCP packets in the compound packet since the lower layer protocols are expected to provide an<br />

overall length to determine the end of the compound packet. Each individual RTCP packet in the compound<br />

packet may be processed independently with no requirements upon the order or combination of packets.<br />

RTP is designed to allow an application to scale automatically over session sizes ranging from a few participants<br />

to thousands. It is assumed that the data traffic is subject to an aggregate limit called the "session bandwidth" to<br />

be divided among the participants. The control traffic should be limited to a small and known fraction of the<br />

session bandwidth: small, so that the primary function of the transport protocol, to carry data, is not impaired. It is

suggested that the fraction of the session bandwidth allocated to RTCP be fixed at 5%. While the value of this<br />

and other constants in the interval calculation is not critical, all participants in the session must use the same<br />

values so the same interval will be calculated. Therefore, these constants should be fixed for a particular profile.<br />

Calculation of the RTCP packet interval depends upon an estimate of the number of sites participating in the<br />

session. A participant may mark another site inactive, or delete it if not yet valid, if no RTP or RTCP packet has<br />

been received for a small number of RTCP report intervals (5 is suggested).<br />


When a site is later marked inactive, the state for that site should still be retained and the site should continue to<br />

be counted in the total number of sites sharing RTCP bandwidth for a period long enough to span typical<br />

network partitions. A timeout of 30 minutes is suggested.<br />
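A minimal sketch of the report-interval calculation that these rules imply is given below (simplified from RFC 1889; the 5% fraction and 5-second minimum follow the suggested values quoted above, and the randomization spreads report times across participants):

```python
# Simplified sketch of the RTCP report-interval calculation (after RFC 1889):
# share 5% of the session bandwidth among the estimated members, enforce a
# minimum interval, and randomize so that reports are not synchronized.
import random

RTCP_FRACTION = 0.05     # fraction of the session bandwidth given to RTCP
MIN_INTERVAL_S = 5.0     # commonly suggested minimum interval, in seconds

def rtcp_interval(session_bw_bps: float, members: int, avg_rtcp_bytes: float) -> float:
    rtcp_bw_bytes_per_s = RTCP_FRACTION * session_bw_bps / 8.0
    deterministic = max(MIN_INTERVAL_S, members * avg_rtcp_bytes / rtcp_bw_bytes_per_s)
    return deterministic * random.uniform(0.5, 1.5)    # jitter each participant's timer

# Example: 256 kbit/s session, 20 participants, 120-byte average RTCP packet.
print(round(rtcp_interval(256_000, 20, 120.0), 2), "seconds until the next report")
```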

A2.6.2.2 DCCP

The DCCP group within the IETF aims at designing a new protocol, DCCP, the Datagram Congestion Control Protocol, which implements congestion-controlled, unreliable transport of flows of datagrams. DCCP is intended

for applications which prefer timely delivery of data over in-order delivery or reliability. Many of

the applications targeted by DCCP (e.g., streaming multimedia) currently use RTP over UDP. DCCP is<br />

intended to be a standard way to implement congestion control and congestion control negotiation.<br />

This protocol is intended to be used by applications such as streaming multimedia. It can be regarded as a mix of<br />

UDP and TCP with the following features:<br />

• Unreliable flow of datagrams with acknowledgements<br />

• Reliable handshake for connection set-up and teardown<br />

• Reliable negotiation of features<br />

• Choice of TCP-friendly congestion control mechanisms<br />

• Incorporation of ECN (Explicit Congestion Notification)<br />

• Path MTU discovery<br />

Note that RTP-over-DCCP may however lead to overhead, e.g., duplicated sequence numbers, duplicated<br />

acknowledgement information.<br />

DCCP provides many features to users, and has been criticized for the resulting complexity. A simplified<br />

version of DCCP, called DCCP-Lite, is also under specification.

A2.6.2.3 UDP-lite

For UDP, the default operating mode is to discard erroneous packets – in IPv6 the checksum is even mandatory<br />

and cannot be switched off. This leads to a waste of bandwidth in wireless networks for multimedia<br />

applications. In order to save bandwidth, the error-detection mechanism of the transport layer must be able to<br />

protect vital information such as headers, but also to optionally ignore errors in the payload best dealt with by<br />

the application. What should be verified by the checksum is best specified by the sending application.<br />

The possibility of passing erroneous SDU from a link layer through the error-ignorant IP layer to the application<br />

layer is also worked on in the IETF under the name of UDP-lite. From the draft, the UDP-lite motivation can be<br />

summarized as follows:<br />

• Some applications prefer erroneous data over lost ones;<br />

• Radio links are typically and naturally characterized by possibly high and varying error rates;

• Intermediate layers should not prevent error-tolerant applications.<br />

UDP-lite is a lightweight version of UDP. UDP-lite provides increased flexibility in the form of a partial and<br />

adaptable checksum. Accordingly, the data in a packet can be divided into two parts, a sensitive and an<br />

insensitive part. Bit errors in the sensitive part of a packet will cause packets to be discarded at the receiving<br />

side, while errors in the insensitive part will be ignored. While providing this extra flexibility, the UDP-lite<br />

protocol is compatible with the classic UDP protocol. If the length of the insensitive part is zero, i.e., the length of the sensitive data equals the packet length, the value of the ‘coverage’ field is replaced by the classic UDP length. In other words, UDP becomes a special case of UDP-lite.
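A minimal sketch of the partial-coverage idea follows (an illustration of the checksum-coverage concept only, not a wire-accurate UDP-Lite implementation: the IP pseudo-header and the real header fields are omitted):

```python
# Simplified sketch of UDP-Lite style partial checksum coverage. Illustrative
# only: the IP pseudo-header and the real UDP-Lite header fields are omitted.

def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum of the kind used by UDP and UDP-Lite."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def accept(packet: bytes, coverage: int, expected: int) -> bool:
    """Verify only the first `coverage` bytes; errors beyond that are tolerated."""
    return internet_checksum(packet[:coverage]) == expected

sent = b"HDRHDRHD" + b"voice-frame-payload"        # 8 sensitive bytes + payload
csum = internet_checksum(sent[:8])
corrupted = sent[:12] + b"X" + sent[13:]           # bit error inside the payload
print(accept(corrupted, coverage=8, expected=csum))    # True: delivered anyway
print(accept(b"XDRHDRHD" + sent[8:], 8, csum))         # False: header damaged
```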

A2.6.2.4 ROHC: Robust header compression

The Robust Header Compression (ROHC) working group of the IETF addresses key requirements in existing<br />

and future mobile and wireless networks, namely spectrum efficiency and robustness against errors. The<br />

incentive arose from links with significant error rates, long round-trip times, and bandwidth limited capacity.<br />

The goal was to design robust and efficient header compression based upon a flexible and extensible framework.<br />

The ROHC WG has developed a header compression framework on top of which various profiles can be defined<br />

for different protocol stacks, or for different compression strategies. Compressor and decompressor are defined<br />

as finite state machines operating in one of three modes. The first is the unidirectional mode (U-mode), which uses periodic refreshes and timeouts in order to keep the context valid.


The second mode, bi-directional optimistic (O-mode), makes use of the feedback channel to send recovery requests. The last mode, bi-directional reliable (R-mode), utilises the feedback channel to a greater degree

in order to best prevent loss of context synchronization. All context updates are acknowledged in this mode.<br />

ROHC allows different compression schemes to be used for the different fields, in order to tailor the compression method to the characteristics of each field. ROHC also specifies the format of the corresponding compressed

packets.<br />

Due to the demands of the cellular industry for an efficient way of transporting voice over IP over wireless,<br />

ROHC has mainly focused on compression of IP/UDP/RTP headers. ROHC RTP has become a very efficient,<br />

robust and capable compression scheme, able to compress the headers down to a total size of one octet only. An<br />

Internet draft has been submitted, specifying a ROHC profile for UDP-lite. One of the UDP-lite authors has shown that a combination of ROHC-like compression (he used ROCCO, a precursor of ROHC) plus UDP-lite can cope with bit error rates up to 10⁻³ with a packet loss rate of less than 1%. The authors claim that this

would be sufficient to provide good speech quality.<br />

A2.6.3 Application-layer QoS mechanisms

The MDC and JSCC techniques described above aim at adapting the compressed signal representation to network characteristics which may vary over time, in order to minimize the impact of losses on the quality of the reconstructed signal and hence to optimize the end-to-end QoS. Adaptation of the transmitted flows to the network characteristics can be pushed further at the level of the streaming functions, and experiments have already shed light on the benefits of such adaptation mechanisms. By handling delay jitter (delay adaptation), packet loss (forward error correction, error concealment) and variable source-rate availability, application-layer congestion control, rate control, forward error correction and scheduling improve the quality of multimedia communications over best-effort networks. They can be regarded as alternative or complementary solutions to the problem of optimally adapting streams to varying network bandwidth, loss rate or delay characteristics.

A2.6.3.1 Congestion control (CC)

Due to their real-time nature, the delivery of multimedia streams cannot make use of responsive and reliable protocols such as TCP. For this reason, they have so far had to make use of unresponsive transport protocols, e.g., the User Datagram Protocol (UDP) and/or the Real-time Transport Protocol (RTP). The fact that these protocols do not embed any congestion control mechanism raises major concerns: not only do multimedia sessions such as video sessions require more bandwidth, but they are also unresponsive, i.e., they do not back off their rate when congestion occurs as TCP does. First, a fair share of the network resources is no longer maintained. Second, as

more and more greedy sessions are established across the network, the goodput of the network decreases<br />

because unresponsive sessions typically send data packets at full rate even if these packets are later dropped<br />

inside the network. While the first point is a threat to TCP-based applications, the second one may potentially<br />

lead to network collapse.<br />

To cope with the above issues, it has been envisaged to enhance UDP-based video communications with some<br />

kind of congestion control. Congestion control strategies dedicated to continuous streams have been designed.<br />

The goal is to best reflect multimedia QoS requirements, yet at the same time remain sufficiently reactive in order to maintain some fairness between traditional data exchanges and multimedia sessions. Note that

multimedia QoS requirements are in sharp contrast with those of traditional computer communication. Smooth<br />

rate variations are a prerequisite for an acceptable quality. Furthermore, end-to-end delay variations have a<br />

greater impact on continuous data streams.<br />

Congestion control relies on bandwidth prediction mechanisms coupled with rate control performed either by a<br />

real-time encoder or by the streaming server.<br />


A number of studies have been devoted to TCP-compatible congestion control schemes in the past few years, both in unicast 109 110 111 and in multicast 112 113 114 115 settings. A TCP-compatible flow is defined as a flow that, in steady state, uses no more bandwidth than a conformant TCP connection under similar loss and round-trip time conditions. The idea consists in adjusting the transmission rate of the senders in a way that is fair to TCP connections, yet without using TCP's window-based algorithms. The goal is thus to predict the bandwidth that TCP would use under the same transmission conditions (loss rate, delay).

Congestion control in Unicast<br />

Early works, based on steady-state analysis of the TCP congestion avoidance mechanism, have shown that the stationary throughput of TCP connections varies inversely with the square root of the loss rate observed by the connection. However, above a 5% loss rate, this model gives a predicted rate which diverges from reality. The analytical formula also shows a built-in inability to handle rate control in the absence of packet loss events (below 1%). A more accurate model, capturing both the behaviour of TCP's fast retransmit mechanism and the effect of TCP's timeouts, has been proposed in 116 .

Adopting the TCP response equation to derive the allowed sending rate, a control protocol called TFRC (TCP-friendly rate control protocol) has been introduced in order to avoid the oscillations in the sending rate that result from TCP's AIMD (additive increase, multiplicative decrease) congestion control mechanisms. A key element of the approach is the measurement of a loss event rate p instead of a packet loss rate, to best model the reaction to multiple losses of the congestion mechanisms used by most TCP variants; a loss event consists of several consecutive lost packets. In order to improve stability, the changes in measured RTTs are smoothed by an EWMA (exponentially weighted moving average), and possible overshoots during the slow-start phase are limited by restricting the sending rate to the minimum of twice the previous actual sending rate and twice the previously received rate. The sending rate (or, conversely, the inter-packet spacing) is in addition smoothed by a weighting factor given by the ratio of the square roots of the RTTs divided by the square root of the most recent RTT sample.

However, this protocol still suffers from some limitations: 1) it assumes packets of constant size, which is not the case for multimedia streams such as video streams; 2) the packet losses within a round-trip time are considered as a single congestion event, while the loss event fraction is calculated as the ratio of congestion events to the number of received packets, which leads to an over-estimation of the loss event rate. To obtain a more reliable estimation of the losses, the packet size has to be taken into account in the loss rate computation. New protocol features have been introduced in 117 to cope with the above limitations.

These mechanisms are placed in the application layer, tightly coupled with the rate control mechanism of a real-time encoder or with the streaming mechanism of the streaming servers. In order to promote and facilitate the use of congestion control, the IETF is currently working towards the specification of the Datagram Congestion Control Protocol (DCCP, formerly known as DCP) 118 .

Congestion control in Multicast<br />

The usage of feedback schemes in a multicast scenario faces two major issues. The first one is the so-called feedback implosion, which results from a straightforward re-use of a unicast feedback scheme in a multicast framework.

109 M. Mathis, J. Semke, J. Mahdavi, and T. Ott, “The macroscopic behaviour of the TCP congestion avoidance algorithm,” Computer<br />

Communication Review, vol. 27, no. 3, pp. 67–82, July 1997.<br />

110 J. Mahdavi and S. Floyd, “TCP-friendly unicast rate-based flow control,” Technical note sent to the end2end-interest mailing list, Jan.<br />

1997<br />

111 J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, “Modeling TCP throughput: a simple model and its empirical validation,” in ACM<br />

SIGCOMM, Aug. 1998<br />

112<br />

S. McCanne, M. Vetterli, and V. Jacobson, “Low-complexity video coding for receiver-driven layered multicast,” IEEE Journal on<br />

Selected Areas In Communications, vol. 15, no. 6, pp. 983–1001, Aug. 1997<br />

113<br />

L. Vicisano, L. Rizzo, and J. Crowcroft, “TCP-like congestion control for layered multicast data transfer,” in INFOCOM (3), 1998, pp.<br />

996–1003<br />

114<br />

B. J. Vickers, C. Albuquerque, and T. Suda, “Source adaptive multi-layered multicast algorithms for real-time video distribution,”<br />

IEEE/ACM Trans. on Networking, vol. 8(6), pp. 720–733, Dec.2000<br />

115<br />

D. Sisalem and A. Wolisz, “Mlda: A tcp-friendly congestion control framework for heterogeneous multicast environments,” Tech. Rep.,<br />

GMD FOKUS, Berlin, Germany, 2000<br />

116 J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, “Modeling TCP throughput: a simple model and its empirical validation,” in ACM<br />

SIGCOMM, Aug. 1998<br />

117 J. Viéron and C. Guillemot, “Real-time constrained TCP-compatible rate control for video over the Internet”, IEEE Transactions on<br />

Multimedia, vol. 6, No. 4, August 2004

118 E. Kohler, M. Handley, J. Padhye, and S. Floyd, “Datagram congestion control protocol,” http://www.icir.org/kohler/dcp/, May 2002<br />


As the number of participants in the multicast session increases, so does the number of receiver reports that must<br />

be carried by the network and processed by the source. Moreover the arrivals of receiver reports may well be<br />

synchronised, raising the source's burden to an unbearable level. A number of schemes have been devised to<br />

alleviate this problem.<br />

Probabilistic approaches require each receiver to observe some random delay before sending a receiver report.<br />

This makes it possible to assign unequal "importance" to receivers by changing some weights in the random<br />

process, but the delays before a congestion indication is taken into account by the source may be prohibitive.<br />

Another approach is for the source to perform progressive, topological polling of the receivers, using for<br />

instance Time To Live (TTL) mechanisms to control the number of responses. The main drawback of this<br />

method lies in its inaccuracy, as a modest TTL increase may yield a drastic increase of the number of answers,<br />

thus causing a feedback implosion if the source is overwhelmed. A third alternative based on progressive polling<br />

has been proposed. It relies on random keys computed by each receiver. Progressive polling is done by varying<br />

the number of significant bits, thus allowing the source to poll quite accurately the number of receivers it

wants. This number is further reduced by filtering the answers: a new poll embeds the maximum congestion<br />

state already known by the source, so that only receivers incurring worse conditions have to reply.<br />
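A minimal sketch of this key-based progressive polling is given below; the 32-bit key length, the helper names and the congestion-state comparison are illustrative assumptions rather than the exact rules of the proposed scheme.

    import random

    KEY_BITS = 32

    def make_key():
        # Each receiver draws a random key once, at session start (assumed 32 bits here).
        return random.getrandbits(KEY_BITS)

    def should_reply(receiver_key, poll_prefix, prefix_len, receiver_congestion, worst_known):
        """A receiver replies only if the top prefix_len bits of its key match the poll,
        and only if its congestion state is worse than the worst already known by the source."""
        shift = KEY_BITS - prefix_len
        if (receiver_key >> shift) != (poll_prefix >> shift):
            return False
        return receiver_congestion > worst_known

    # The source widens the expected answer set by shrinking prefix_len poll after poll:
    # prefix_len = KEY_BITS matches roughly one receiver, KEY_BITS - k matches about 2**k of them.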

The other issue associated with multicast feedback is that of aggregating heterogeneous reports into a consistent<br />

view of the communication state. In some cases this considerably restricts the usefulness of multicast feedback<br />

schemes. Consider for instance an adaptation process where receivers send back to the source the bandwidth<br />

availability they infer. Then the source faces a problematic trade-off as it has to determine a single transmission<br />

rate that is bearable to all (or at least to a majority of) receivers. In most cases this translates into selecting the<br />

lowest common rate between all indicated values, or a rate low enough that a vast majority of receivers will not<br />

experience congestion. The corollary is a reduced quality for all receivers, whether they actually experience<br />

congestion or not. When the feedback scheme is devoted to error protection, or to efficient splitting of the<br />

bandwidth between raw data and protection, then the source has to take into account the worst error conditions encountered by the different receivers. The corollary in this case is a suboptimal rate/distortion trade-off, since most

forward error correcting data is useless to all but a few receivers.<br />

Layered and scalable coding and transmission alleviate this problem by making it possible to adapt the rate or<br />

amount of error control data on a layer basis. A variety of multicast schemes making use of layered coding for<br />

audio and video communications have been proposed: Receiver-driven approaches consist in multicasting<br />

different layers of video using different multicast addresses and let the receivers decide which multicast group to<br />

subscribe to. Receiver-driven rate control mechanisms - such as RLM 119 - suffer from several limitations:<br />

• The first limitation concerns the trade-off between the granularity in the rate adaptation and the extra<br />

complexity and the traffic overhead induced by a large number of layers. A small number of layers will lead<br />

to a coarse rate adaptation. On the other hand, a large number of layers will lead to extra complexity in<br />

multicast address management, in additional state information to be maintained by the receivers, as well as to<br />

a potential waste of bandwidth. In addition to the penalty in terms of compression performances induced by<br />

scalable representations, a large number of layers will lead to traffic overhead due to IGMP messages<br />

exchanged for dynamic multicast tree management, and to additional signalling information (e.g., announcing

the rate of each layer).<br />

• A second limitation concerns the degree of reactiveness of the application which may be impacted by the<br />

latency of the control mechanisms, such as the join-experiments in RLM. Failed join-experiments will create<br />

additional traffic congestion, during transition periods corresponding to the latency for pruning a branch of<br />

the multicast tree.<br />

A source adaptive multi-layered multicast - SAMM – algorithm based on hierarchical aggregation of feedback is<br />

described in 120 . Feedback packets contain information on the estimated bandwidth available on the path from<br />

the source. Feedback mergers are assumed to be deployed in the network nodes to avoid feedback implosion.<br />

Given a maximum number of layers for the source, the rate of each layer is chosen in order to maximise a<br />

combined "goodput" measure of video traffic received by all downstream receivers. The goodput is defined as<br />

the maximum rate that can be received without any loss. A mechanism based on "partial suppression" of<br />

feedback, avoiding feedback aggregation in the network nodes, is proposed in 121 .<br />

119 S. McCanne and M. Vetterli and V. Jacobson, « Low-Complexity Video Coding for Receiver-Driven Layered Multicast », IEEE Journal<br />

on Selected Areas In Communications, August, No. 6, vol. 15, pp. 983-1001, 1997<br />

120 C. Albuquerque and B. J. Vickers and T. Suda, «An End-to-End Source-Adaptive Multi-Layered Multicast ({SAMM}) Algorithm »,<br />

Proceedings of the packet video workshop, PVW'99, April 1999<br />

121 D. Sisalem and A. Wolisz, « MLDA: A {TCP}-friendly congestion control framework for heterogeneous multicast environments »,<br />

technical report, GMD FOKUS, Berlin, Germany, 2000<br />


The receivers estimate a TCP-friendly rate (or their throughput if not required to be TCP-compatible) and return the interval this rate belongs to. If a report from another receiver indicating a rate in the same interval has been seen during a given time period, the receiver does not transmit its report. The receivers send their reports only on the highest layers they are listening to. This approach avoids the deployment of aggregation

mechanisms in the network nodes, as required by SAMM, but on the other hand, the partial feedback<br />

suppression may induce a flat distribution of the requested rates, hence lead to sub-optimality. An alternative<br />

approach combining a multi-layered representation of the source and a mechanism for aggregating feedback<br />

information in order to have a hybrid sender-receiver driven approach is described in 122 . Congestion control in<br />

multicast remains however an open issue which requires further investigation.<br />

<strong>A2.</strong>6.3.2 Multi-rate switching and scalable coding<br />

In streaming servers, so far, simple strategies such as multi-rate switching are generally considered (see e.g., the Windows Media server). The multimedia content is pre-encoded at a selection of bit rates. The different

streams are stored on the streaming server. In streaming solutions supporting multi-rate switching, the streaming<br />

server automatically detects the user’s Internet connection speed, e.g., using bandwidth prediction techniques<br />

described in the above section, and suitably adapts on the fly the bit rate, hence the quality of the media stream,<br />

by selecting the appropriate encoded stream. Multi-rate switching is implemented e.g., in the Microsoft Windows Media server and in the SureStream technology of the RealNetworks Helix media server. Alternative

solutions to multi-rate switching are based on scalable representations of the multimedia signals.<br />
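The stream-selection step of multi-rate switching can be sketched as follows; the pre-encoded rates and the safety margin are illustrative values, and the function name is not taken from any particular product.

    def select_stream(encoded_rates_bps, estimated_bandwidth_bps, safety_margin=0.9):
        """Pick the highest pre-encoded bit rate that fits below the estimated
        connection speed, keeping a safety margin; fall back to the lowest rate."""
        budget = estimated_bandwidth_bps * safety_margin
        candidates = [r for r in sorted(encoded_rates_bps) if r <= budget]
        return candidates[-1] if candidates else min(encoded_rates_bps)

    # Example: streams encoded at 300 kbit/s, 700 kbit/s and 1.5 Mbit/s, 1 Mbit/s estimated
    print(select_stream([300_000, 700_000, 1_500_000], 1_000_000))   # -> 700000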

<strong>A2.</strong>6.3.3 Relay/caching functionality<br />

The deployment of relay or caching media servers at the network edges is also among the features considered<br />

for avoiding overload of the core network. Streaming servers are thus positioned at the edge of the distribution<br />

network near the end-users. These proxy or caching servers can then fetch the requested media and serve it as if<br />

the content was locally available. This contributes to removing load from the central server. The architecture of relay servers then implements what is commonly referred to as Application Layer Multicast, leading to

overlay content delivery architectures.<br />

CDNs can thus be seen as overlay networks of content servers distributed geographically to enable the end-users

to have rapid and reliable access to the content. CDNs use technologies such as caching to place replicated<br />

content close to the network edges. Load balancing can then ensure that the users are transparently routed to the<br />

“best” server. Several practical CDN solutions already exist and are listed below.<br />

<strong>A2.</strong>6.3.4 Client side buffer management<br />

Mechanisms such as buffer management on the receiver, by allowing for local caching and playback (see e.g.<br />

123 ), are also envisaged to reduce the impact of network congestion on the quality of the rendered signals. Client<br />

side buffering coupled with intelligent pre-fetching and playback provides a means to “absorb” varying network bandwidth conditions. Media data is buffered at the client to protect against playout interruptions due to

packet losses and random delays. However, while the likelihood of an interruption decreases as more data are<br />

buffered, the latency increases. In today’s streaming technologies, buffering delays often range from 5 to 15<br />

seconds for a good balance between delay and playout reliability.<br />
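The trade-off between buffering delay and playout reliability can be illustrated with the toy playout-buffer simulation below; the pre-roll value and the time step are illustrative parameters, not figures from the cited work.

    def simulate_playout(arrival_bps, media_rate_bps, preroll_s, step_s=0.1):
        """Count rebuffering events for a given pre-roll (initial buffering) delay.
        arrival_bps: list of instantaneous network rates, one per time step."""
        buffered_s = 0.0          # seconds of media currently held in the client buffer
        playing = False
        rebufferings = 0
        for rate in arrival_bps:
            buffered_s += rate * step_s / media_rate_bps   # media received during this step
            if not playing and buffered_s >= preroll_s:
                playing = True                             # pre-roll reached: start playback
            if playing:
                buffered_s -= step_s                       # media consumed by the decoder
                if buffered_s <= 0:
                    buffered_s, playing, rebufferings = 0.0, False, rebufferings + 1
        return rebufferings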

<strong>A2.</strong>6.3.5 Scheduling on the sender side<br />

Solutions for rate-distortion optimized streaming of media packets are also studied (see e.g. 124 ). The streaming<br />

system decides which packet to transmit based on the packet deadline, the channel statistics, the feedback information, the inter-dependencies between packets and the distortion reduction resulting from a correct reception and decoding of the packet. Optimized packet schedules can be computed at the sender, the receiver or at a proxy server on the network edge.
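A greedy sketch of such a scheduling decision is given below: among the packets whose deadline can still be met, it sends the one offering the largest distortion reduction per transmitted byte. The Packet fields and the one-way delay model are illustrative; actual rate-distortion optimized schedulers such as 124 solve a more elaborate optimization.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        size_bytes: int
        deadline_s: float             # latest useful arrival time at the receiver
        distortion_reduction: float   # expected quality gain if received and decoded in time
        deps_sent: bool               # True if the packets it depends on have been sent

    def next_packet(pending, now_s, bandwidth_bps, one_way_delay_s):
        """Pick the most valuable packet that can still arrive before its deadline."""
        feasible = [p for p in pending
                    if p.deps_sent
                    and now_s + one_way_delay_s + 8 * p.size_bytes / bandwidth_bps <= p.deadline_s]
        if not feasible:
            return None
        return max(feasible, key=lambda p: p.distortion_reduction / p.size_bytes)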

122<br />

J. Viéron, T. Turletti, K. Salamatian and C. Guillemot, “Source and channel adaptive rate control for multicast layered video<br />

transmission based on a clustering algorithm”, Eurasip journal on applied signal processing, special issue on Multimedia Over IP and<br />

Wireless Networks, No.2, 158-176, Feb. 2004<br />

123<br />

B. Girod, M. Kalman, Y. Liang, and R. Zhang, "Advances in Channel-adaptive Video Streaming," Wireless Communications and<br />

Mobile Computing, vol. 2, no. 6, pp. 549-552, September 2002<br />

124<br />

P. A. Chou and Z. Miao, “Rate-distortion optimized streaming of packetized media”, Technical Report MSR-TR-2001-35, Microsoft<br />

Research, Redmond, WA, February 2001.<br />



<strong>A2.</strong>6.3.6 Loss control<br />


The transmission of encoded video and audio data over IP-based networks causes primarily packet erasures in<br />

current network architectures. Several techniques can be considered to combat packet losses, i.e., to improve the<br />

apparent quality of the transmission: forward error/erasure correction, retransmission schemes (e.g., ARQ - Automatic Repeat Request - or a mix of the two techniques, i.e., ARQ used in combination with forward

error/erasure correction), and interleaving. The latency that the application can tolerate is a key element in the<br />

choice of a loss control or repair technique. The efficiency of the loss control scheme depends on the tolerable<br />

latency.<br />


Retransmission schemes<br />

Some applications (e.g., streaming) can tolerate an initial latency of a few seconds. At session set-up, data is stored in the receiving buffer to accommodate end-to-end delays of a few seconds. In that case, retransmission

schemes can be envisaged to increase the transmission reliability.<br />

Reliable RTP<br />

A set of features can be added in the application layer to the RTP protocol in order to cope with packet losses<br />

and network congestion: retransmissions and congestion control. ACK packets with bit masks to signal the<br />

acknowledged packets (e.g., using RTCP APP packets) can be transmitted from the client to the server 125 . The<br />

mask allows acknowledging a number of RTP packets, limiting the amount of feedback. The server must also<br />

incorporate a windowing and congestion control mechanism. As in TCP congestion control, the transmission of<br />

RTP packets can be constrained by the size of a congestion window. Expiration and timeout mechanisms can<br />

also be incorporated. This amounts to mimicking the behaviour of TCP. Notice that these features have been added

to the Apple Darwin server. The parameters of the mechanism are negotiated out-of-band via RTSP.<br />
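The ACK bit mask can be illustrated as follows: the receiver reports a base sequence number plus one bit per following packet, so that a single report acknowledges a whole window of RTP packets. The 32-packet window and the helper names are illustrative assumptions.

    def build_ack(received_seqs, base_seq, window=32):
        """Return (base_seq, bitmask): bit i is set if packet base_seq + 1 + i was received."""
        mask = 0
        for i in range(window):
            if base_seq + 1 + i in received_seqs:
                mask |= 1 << i
        return base_seq, mask

    def missing_from_ack(base_seq, mask, window=32):
        """Server side: derive which packets after base_seq are still unacknowledged."""
        return [base_seq + 1 + i for i in range(window) if not (mask >> i) & 1]

    base, mask = build_ack({100, 101, 103, 104}, base_seq=100)
    print(missing_from_ack(base, mask))   # 102, 105, 106, ... within the reported window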

RTP retransmissions<br />

The IETF has chartered a work item to standardize RTP retransmissions schemes and profiles for Unicast<br />

applications and for small Multicast groups, to be used in combination with the extended RTP profile for RTCP<br />

feedback, AVPF 126 . The AVPF profile defines general-purpose messages such as ACKs and NACKs. Some of

the RTCP timing rules have also been re-defined in this extended profile to allow for more frequent feedback.<br />

The retransmission protocol can be configured to trade latency against reliability. A payload format for<br />

retransmitted RTP packets is also proposed in 127 to the IETF. Original and retransmitted packets can be sent<br />

either in two separate RTP sessions or in a single session using SSRC multiplexing.<br />

The receiver detects packet losses from the gaps received in the sequence numbers of both streams. It signals<br />

lost packets to the sender via the NACK message defined in the AVPF profile, taking into account the sender<br />

retransmission buffer length that has been signalled via SDP. The sender can, before re-transmitting a packet,

account for the network congestion state and the importance of the considered packet. The receiver may also<br />

request a retransmission based on the estimated importance of the packet or on application level QoS. ACK<br />

messages are used by the sender to delete acknowledged packets from the retransmission buffer.<br />

Retransmission techniques, based on feedback messages, are well suited for Unicast applications and for small<br />

multicast groups. They do not scale to larger multicast sessions. Retransmissions also pose a risk of increasing network congestion, as they can introduce a large bandwidth overhead. Congestion control mechanisms should in addition account for the retransmitted packets: the

predicted rate should include the retransmitted packets, hence the packet rate of the original stream should be<br />

reduced.<br />

FEC: Forward error correction<br />

Methods based on forward error correction can be used instead for applications with delay constraints. With<br />

open loop mechanisms such as FEC, redundant data is transmitted along with the original data, so that some of<br />

the lost original data can be recovered from the redundant information. The FEC information can also be sent in<br />

response to retransmission requests, allowing a single retransmission to potentially repair several losses.<br />

The redundant information can be generated by error correcting codes, such as parity codes or codes verifying the MDS - Maximal Distance Separable - property (e.g., Reed-Solomon codes). Since the exact positions of the missing data are known, a good correction capacity can indeed be obtained with systematic MDS codes. An (n,k) MDS code takes k data packets and produces n-k redundant packets; the MDS property allows the recovery of up to n-k losses in a group of n packets.
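As a minimal illustration of this principle, the sketch below uses a single XOR parity packet, i.e. the simplest (k+1,k) case in which any one lost packet of the group can be rebuilt; general (n,k) MDS codes such as Reed-Solomon extend the same idea to the recovery of up to n-k losses.

    def xor_parity(packets):
        """Build one parity packet as the byte-wise XOR of k equal-length data packets."""
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, b in enumerate(pkt):
                parity[i] ^= b
        return bytes(parity)

    def recover_single_loss(received):
        """received: the k packets that arrived, out of a (k+1)-packet group
        (k data packets plus the parity packet). The single missing packet
        is simply the XOR of everything that did arrive."""
        return xor_parity(list(received.values()))

    data = [b'AAAA', b'BBBB', b'CCCC']
    group = data + [xor_parity(data)]
    lost = 1                                        # pretend packet 1 was dropped
    received = {i: p for i, p in enumerate(group) if i != lost}
    assert recover_single_loss(received) == data[lost]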

To apply these protection mechanisms, a protocol support is required. RFC 2733 128 and RFC 3009 129 of the<br />

IETF define such protocols, later on extended in 130 to allow for unequal loss protection.<br />

125 extensions of RTCP with bit mask<br />

126 J. Ott, S. Wenger, N. Satto, C. Burmeister and J. Rey, “Extended RTP profile for RTCP-based feedback”, draft-ietf-avt-rtcp-feedback-<br />

11.txt, work in progress, Aug. 2004<br />

127 J. Rey, D. Leon, A. Miyazaki, V. Varsa and R. Hakenberg, “RTP retransmission Payload format”, draft-ietf-avt-rtp-retransmission, work<br />

in progress in the IETF AVT group, Jan. 2004<br />

128 J. Rosenberg and H. Shulzrinne,”An RTP Payload Format for Generic Forward Error Correction”, IETF Request for comments, RFC<br />

2733, Dec. 1999<br />

129 J. Rosenberg and H. Shulzrinne,”Registration of parityfec mime types”, IETF Req. for Comments, RFC 3009, Nov. 2000<br />

130 A. Li (Ed.), “An RTP payload format for generic FEC”, draft-ietf-avt-ulp-10.txt, work in progress, July 2004<br />


Different levels of protection can thus be applied to data with different levels of importance, leading eventually<br />

to a more efficient bandwidth usage. The user or application partitions the message (e.g. the audio-visual data)<br />

into segments and assigns to each segment a priority value. Then, the segments are encoded into a set of packets<br />

with the error protection matching their priority value and the required associated QoS. The FEC data is<br />

transmitted in separate RTP packets using a specific payload format (RTP payload format for generic FEC). The<br />

payload format contains information that allows the sender to tell the receiver which media packets are protected<br />

by the FEC packet. An unequal loss protection technique has also been defined for <strong>Annex</strong> I of H.323 in the<br />

context of wireless packet switched networks.<br />

Besides the error control strategy to retain, the control of the amount of redundant information added at the<br />

source, or optimal splitting of the available bandwidth between raw and redundant data, is a major concern. It<br />

can be based on feedback information about the loss process measured at the destination, i.e. using QoS<br />

reporting mechanisms of RTCP.<br />

<strong>A2.</strong>6.3.7 Congestion Control issues for wireless access<br />

Most of the CC approaches have been designed for the wired Internet, without taking into account the<br />

characteristics of wireless links. The data link layer at the end nodes of the wireless links provides only partial<br />

reliability. Packets may be discarded by the PDCP (embedding ROHC) or by the transport (UDP-lite) layers:<br />

packets with corrupted headers must be discarded. Congestion control mechanisms have been designed so far<br />

with the implicit assumption that any loss is a congestion loss. As a result, the sending rate is reduced not only<br />

in response to congestion, but also in response to wireless losses.<br />

In order to prevent this faulty behaviour in network topologies including radio links, the congestion control<br />

mechanism must have the capability to differentiate between wireless and congestion losses. Loss differentiation<br />

schemes can either be implicit or explicit. Explicit schemes are those that make use of agents, e.g. snoop agents,<br />

deployed on intermediate network nodes 131 . Implicit or end-to-end schemes try to differentiate losses at the<br />

receiver without any intermediate nodes exploiting measures such as one-trip time 132 or packet inter-arrival time<br />

133 . The approach based on packet inter-arrival time and de-sequencing is difficult to apply in the presence of several

competing flows, packets from different flows getting interspersed. The approach in 134 assumes that spikes of<br />

relative one-trip time (ROTT) are observed in periods of congestion. The ROTT is used to identify the state of the current connection. If the connection is in the spike state, losses are assumed to be due to congestion. Otherwise, losses are assumed to be wireless. The difficulty of the approach is, given fixed temporal windows, to catch the spikes in the delay distribution over time, which depends on whether congestion occurs on one or several nodes of the path.
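A simplified sketch of such a spike-based classification is given below; the threshold rule and the spike_factor parameter are illustrative assumptions and not the exact criterion of the cited scheme.

    def classify_losses(rott_samples, spike_factor=1.5):
        """Label the connection state sample by sample from the relative one-way trip time.
        A sample well above the minimum observed ROTT is treated as a congestion spike,
        so losses seen in that state are attributed to congestion, otherwise to the wireless link.
        spike_factor is an illustrative tuning parameter."""
        base = min(rott_samples)
        states = []
        for rott in rott_samples:
            in_spike = rott > spike_factor * base
            states.append('congestion' if in_spike else 'wireless')
        return states

    # Example: a delay spike in the middle of the trace
    print(classify_losses([0.050, 0.052, 0.120, 0.130, 0.051]))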

131<br />

H. Balakrishnan, S. Seshan, and R.H. Katz, “Improving reliable transport and handoff performancein cellular wireless networks,” ACM<br />

Wireless Networks, vol. 1, no. 4, pp. 469–481, Dec.1995.<br />

132<br />

Y. Tobe, Y. Tamura, A. Molano, S.Ghosh, and H. Tokuda, “Achieving moderate fairness for UDP flows by path status classification,” in<br />

IEEE Conf. on Local Computer Networks (LCN), Nov. 2000, pp. 252–261<br />

133<br />

S. Biaz and N.H. Vaidya, “Discriminating congestion losses from wireless losses using interarrival times at the receiver,” in IEEE<br />

Symposium on Application-specific systems and softwareEng. and Tech., March 1999.<br />

134<br />

J. Rosenberg and H. Shulzrinne,”An RTP Payload Format for Generic Forward Error Correction”, IETF Request for comments, RFC<br />

2733, Dec. 1999.<br />



<strong>A2.</strong>6.4 Network QoS for Internet multimedia<br />


The application-layer mechanisms above are intended to dynamically adapt the flow to the network

characteristics, which may vary in time. The goal is eventually to avoid a possible network collapse and<br />

minimize the impact of the network impairments on the quality of the received signal. Application-layer<br />

mechanisms are not by themselves sufficient to guarantee a given end-to-end QoS. Network mechanisms must<br />

also evolve in order to take into account multimedia continuous flow characteristics. As mentioned above,<br />

multimedia requirements are in sharp contrast with those of traditional data communication. Smooth rate<br />

variations are a prerequisite for an acceptable quality. Furthermore, end-to-end delay variations have a greater impact on continuous data streams, although, in contrast with data communications, such streams can tolerate some impairment.

Two network QoS architectures have been defined by the IETF: the Integrated Services (IntServ) 135 and the<br />

Differentiated Services (DiffServ) 136 .<br />

<strong>A2.</strong>6.4.1 Network QoS Models<br />

Integrated services (IntServ)<br />

IntServ has been proposed by the IETF to reserve resources in advance, so that selected flows can be treated<br />

with guaranteed resources. The goal was in particular to enable real-time applications as well as bandwidth<br />

sharing between different traffic classes. Two types of services have been defined for IntServ and can be<br />

requested via RSVP 137 :<br />

• Guaranteed Service 138 : This service is intended for real-time intolerant applications. It offers a strict<br />

guaranteed service providing firm bounds on end-to-end delay, assured bandwidth for traffic conforming to<br />

reserved specifications.<br />

• The second type is a controlled load 139 service providing a better-than-best-effort and low-delay service under light to moderate network loads. It is intended for applications that can tolerate a limited amount of

losses and delay. The traffic parameters of importance are the average rate, peak rate and burst size. The<br />

end-to-end delay cannot be determined in a deterministic manner.<br />

The Resource Reservation Protocol (RSVP) has been designed as a signalling protocol for setting the traffic<br />

parameters, i.e. for resource reservation and for admission control. The application must know the characteristics<br />

of its traffic beforehand. RSVP can then be used to signal the application traffic parameters to the intermediate network elements. Depending on the availability of resources, the network either reserves the resources and sends back a positive acknowledgement, or sends a negative acknowledgement. Network resources are reserved

for each flow, i.e. for each uni-directional data stream, uniquely identified by the source IP address, the source<br />

port number, the destination IP address, the destination port number, and the transport protocol. The decisions to<br />

grant resources are made by admission control mechanisms (see section 3.3 below). If the end application sends<br />

out-of-profile traffic, then the data is given best-effort service, which may cause the packets to be dropped.

IntServ allows per-flow QoS but at the expense of per-flow state and signalling at every hop. As a consequence,<br />

IntServ presents the following limitations:<br />

• Every network and end device along the path needs to support RSVP. IntServ makes routers complicated.

Intermediate routers require modules to support RSVP reservations and to treat flows according to the<br />

reservations. In addition they have to support RSVP messages and coordinate with policy servers;<br />

• it is not scalable with the number of flows. As the number of flows increases, the amount of per-flow state and processing grows accordingly, and backbone core routers become slow when they try to accommodate an increasing number of RSVP flows;

• RSVP imposes maintenance of soft states at the intermediate routers. This implies that routers have to<br />

constantly monitor and update states on a per-flow basis. Reservations along a path need to be refreshed periodically, which adds to the network traffic and introduces a risk of reservation timeout;

• The maintenance of state information for each reservation introduces some scalability problems.<br />

135 R. Braden, D. Clark and S. Shenker, “Integrated Services in the Internet Architecture: An Overview”, IETF RFC 1633, June 1994<br />

136 S. Blake, D. Black, M. Carlson, E. Davis, Z. Wand and W. Weiss, “An Architecture for Differentiated Services,” RFC 2475,Dec. 1998<br />

137 R. Braden, L. Zhang, S. Berson, S. Herzog and S. Jamin nd «Resource reSerVation Protocol – RSVP – Version 1 functional<br />

specification », IETF Request for Comments RFC2205, http://www.ietf.org/rfc/rfc2205.txt, Sept. 1997.<br />

138 S. Shenker, C. Partridge, R. Guerin, “Specification of Guaranteed Service”, IETF Request for Comments, RFC 2212, Sept. 1997<br />

139 J. Wroclawski, “Specification of the Controlled Load Network element Service”, IETF Request for Comments, RFC 2211, Sept. 1997.<br />


Differentiated services (DiffServ)<br />

The DiffServ architecture 140 has been introduced to solve the IntServ scalability problem. It defines classes of<br />

services (CoS) corresponding to aggregates of flows. The key idea advocated by the DiffServ group of the IETF consists in offering, by means of a priority mechanism, services with gradually increasing performance.

In the DiffServ architecture, traffic is classified, metered and marked at the edge of the network. Traffic with<br />

similar QoS requirements is marked, by setting the diffserv code point (DSCP) in the header of each IP packet,<br />

and is treated in an aggregated fashion in the core routers. The DSCP specifies a per-hop behaviour (PHB) to be<br />

applied to the packets of a given class within a provider’s network (in the core network). A PHB denotes a<br />

combination of forwarding, classification, scheduling, and drop behaviours at each hop. Two PHB’s have been<br />

standardized so far: the Expedited Forwarding (EF) PHB 141 for support of delay and jitter sensitive traffic and<br />

the Assured Forwarding (AF) set of PHBs 142 . The set of AF PHBs is intended to provide different levels of<br />

forwarding assurances for IP packets at a node and to be used to support multiple priority service classes.<br />

Therefore, the core routers do not need to maintain per-flow state, but instead use scheduling and buffer management for aggregates of flows. PHBs provide means for allocating buffer and bandwidth resources among competing traffic at each node. The PHB is the externally observable forwarding behaviour applied at a DS-compliant node.

The deployment of DiffServ architectures involves many issues such as dimensioning, pricing, admission control policies, etc. The use of DiffServ concepts also requires identifying the QoS requirements of the different types of applications. The main QoS characteristics of interest for multimedia applications are the packet end-to-

end delay and the packet loss rate. Each flow is characterised by different delay and loss requirements, which<br />

depend on the application characteristics. Some data flows require reliable transport, while for others, to meet<br />

real-time constraints, unreliable and unresponsive transport protocols, such as UDP, have to be used. Note that,<br />

in 3GPP, four traffic or QoS classes have been identified 143 : conversational, streaming, interactive and

background classes. Typical application examples would be respectively voice with stringent and low delay<br />

requirements, streaming video with requirements comparable to voice except for delay, Web browsing requiring<br />

the ‘preservation of the payload content’ and download of e-mail for which the recipient would expect the data<br />

within a certain time, but with the payload content preserved. The above QoS or traffic end-to-end classes are<br />

parameterized by sets of QoS attributes of different applicability and/or values at different layers. The<br />

application QoS characteristics must then be mapped into appropriate classes of services via appropriate DSCP<br />

(DiffServ Codepoints) marking strategies.<br />
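Such a marking strategy can be sketched as follows, using the standard EF and AF DiffServ codepoints; the association of the four traffic classes with these particular DSCPs is only an illustrative policy, not one mandated by 3GPP or by this deliverable.

    # Standard DiffServ codepoints (decimal values of the 6-bit DSCP field)
    EF = 46                                   # Expedited Forwarding
    AF41, AF31, AF21, AF11 = 34, 26, 18, 10   # one AF class per line, lowest drop precedence
    BE = 0                                    # default / best effort

    # Illustrative mapping of end-to-end traffic classes to DSCPs
    CLASS_TO_DSCP = {
        'conversational': EF,     # voice: stringent delay requirements
        'streaming': AF41,        # streaming video: delay-tolerant but loss-sensitive
        'interactive': AF21,      # e.g. Web browsing
        'background': BE,         # e.g. e-mail download
    }

    def mark(packet_tos_byte, traffic_class):
        """Rewrite the 6 DSCP bits in the (former ToS) byte, keeping the 2 ECN bits."""
        dscp = CLASS_TO_DSCP[traffic_class]
        return (dscp << 2) | (packet_tos_byte & 0b11)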

<strong>A2.</strong>6.4.2 QoS Signalling<br />

QoS signalling is a key component of an end-to-end QoS architecture. It is necessary to establish, maintain, and<br />

remove reservation states in network nodes. The goal of QoS signalling protocols is to provide packet-switched<br />

networks with a similar behaviour as in circuit-switched networks.<br />

RSVP 144 has been designed for QoS signalling in an IntServ architecture and not for general-purpose signalling.<br />

However, RSVP is known to suffer from excessive transport and processing overhead and from some weaknesses in terms of mobility. Extensions to RSVP have been defined in order to cope with these limitations and in order to

meet requirements of other applications which also require state-establishment protocols.<br />

The problem of signalling and in particular of QoS signalling is currently being addressed by the IETF NSIS<br />

(Next Steps In Signalling) working group. The NSIS group is in particular defining requirements as well as an<br />

architecture and a next-generation signalling protocol. The goal is “to develop a transport layer signalling

protocol for the transport of upper layer signalling” and when doing so “to re-use, where appropriate, the<br />

protocol mechanisms of RSVP, while at the same time simplifying it and applying a more general signalling<br />

model.”<br />

140<br />

S. Blake, D. Black, M. Carlson, E. Davis, Z. Wand and W. Weiss, “An Architecture for Differentiated Services,” RFC 2475,Dec. 1998<br />

141<br />

V. Jacobson, K. Nichols and K. Poduri, “An Expedited Forwarding PHB”, IETF Request for Comments, RFC 2598, June 1999<br />

142<br />

F. Baker, J. Heinanen, W. Weiss and J. Wroclawski, “Assured Forwarding PHB group”, IETF Request for comments RFC 2597, June<br />

1999<br />

143<br />

Third generation partnership project, “Technical specification group services and system aspects; QoS concept and architecture; 3GPP<br />

TS 23.107 v5.4.0,” Tech. Rep., 3GPP, March 2002<br />

144<br />

R. Braden, L. Zhang, S. Berson, S. Herzog and S. Jamin nd «Resource reSerVation Protocol – RSVP – Version 1 functional<br />

specification », IETF Request for Comments RFC2205, http://www.ietf.org/rfc/rfc2205.txt, Sept. 1997<br />


To allow for a more general signalling protocol which could be used to accommodate different services or

resources, such as NAT & firewall traversal and QoS resources, a two layer model separating the signalling<br />

transport from the application signalling is considered. The general-purpose CASP (Cross-Application<br />

Signalling Protocol) protocol 145 has been proposed in this context to accommodate general signalling<br />

applications.<br />

RSVP<br />

RSVP (Resource Reservation Protocol) 146 has been standardized in 1997 to provide end-to-end QoS signalling<br />

services for application data streams, i.e., to ask for a specific QoS from the network. More precisely, it has been<br />

initially designed to establish and maintain resources for end-to-end real-time sessions over the Internet based on<br />

Integrated Services. RSVP is a message-based signalling protocol which can carry a variety of information, e.g.,<br />

QoS information, authentication, accounting information, etc …<br />

RSVP is receiver oriented, i.e., the receiver indicates the bandwidth, latency and other QoS characteristics<br />

needed during the exchange. The sender then initiates a reservation process by sending a PATH message. The PATH message, containing information about the source of the flow and the characteristics of the traffic to be transmitted, finds the path through the network and binds that route to the RSVP session. Upon reception of the

PATH message, the receiver generates reservation messages indicating the QoS needed. This eventually leads to<br />

some resource reservations (e.g., reserving buffer space) along the route established by the PATH message.<br />

An RSVP analysis 147 carried out by the IETF NSIS group reveals a certain number of limitations of this

protocol:<br />

• RSVP lacks a modular framework which could make it amenable for supporting other signalling<br />

applications, including inter-domain signalling or QoS signalling in access networks (e.g., typical DiffServ<br />

edge router resource reservation);<br />

• the built-in multicast support leads to excessive overhead: It has been designed specifically for multicast;<br />

• weaknesses with respect to reliability (due to limited signalling message size), authentication and<br />

authorization;<br />

• difficulty to support host mobility since RSVP identifies signalling sessions by IP addresses; when the host<br />

moves, states previously established may remain for a rather long period of time (until they time out) leading<br />

to inefficient resource utilization.<br />

This has motivated the NSIS group in chartering a work item on the design of a more general signalling<br />

protocol. In that context, RSVP extensions have been studied to cope with the above limitations. RSVP-MIP<br />

has been designed for mobile IP QoS signalling. A layered light-weight RSVP protocol (RSVP-lite) 148<br />

removing the multicast features with extensions such as refresh reduction extension has also been proposed.<br />

When used for QoS signalling, RSVP messages carry QoS related information. However, RSVP can be used for<br />

other applications than QoS signalling by creating new messages. A number of extensions have thus been introduced, such as diagnosis messages to be used in IP tunnels and DiffServ networks, and RSVP traffic engineering extensions to be used in MPLS (Multi-Protocol Label Switching – see below) to signal MPLS explicit routes, and in generalized MPLS networks. We come back later in this section to the use of RSVP in the context of MPLS traffic engineering.

Despite these extensions, the overhead resulting from message reliability and from large size signalling<br />

messages remains an issue.<br />

CASP<br />

CASP (Cross-Application Signalling Protocol) 149 is intended to be a general purpose signalling protocol based<br />

on two layers: a messaging layer delivering the signalling messages, and a client layer which consists of a next-hop discovery client and any number of specific protocols performing the actual signalling functions. The client

145<br />

H. Schulzrinne, H. Tschofenig, X. Fu and A. Macdonald, “CASP: Cross-Application Signalling Protocol”, draft-schulzrinne-nsis-casp-<br />

01.txt, work in progress, March. 2003<br />

146<br />

R. Braden, L. Zhang, S. Berson, S. Herzog and S. Jamin nd «Resource reSerVation Protocol – RSVP – Version 1 functional<br />

specification », IETF Request for Comments RFC2205, http://www.ietf.org/rfc/rfc2205.txt, Sept. 1997<br />

147<br />

J. Manner, X. Fu and P. Pan, “Analysis of existing quality of service signalling protocols”, draft-ietf-nsis-signalling-analysis-04.txt,<br />

work in progress carried out by the IETF NSIS group, May 2004<br />

148<br />

X. Fu and C. Kappler, “Towards RSVP-lite: Light-weight RSVP for generic signalling”, 17th Intl. Conf. on Advanced Information<br />

Networking and Applications, IANA 2003, March 2003<br />

149<br />

H. Schulzrinne, H. Tschofenig, X. Fu and A. Macdonald, “CASP: Cross-Application Signalling Protocol”, draft-schulzrinne-nsis-casp-<br />

01.txt, work in progress, March. 2003<br />


specific data are transported in objects encapsulated in the signalling message. The client layer protocols specify<br />

the application specific signalling messages.<br />

Two client layer protocols have been defined so far: a QoS resource allocation client (CASP-QoS) and a client<br />

for opening firewall ports. CASP is thus more modular, hence better suited to supporting signalling for other

applications than QoS signalling.<br />

Another feature of CASP is that the path discovery is separated from the signalling message delivery. This<br />

allows both in-path and out-of-path signalling, while RSVP provides in-path signalling only. A CASP session is<br />

established along a chain of CASP nodes. The CASP server determines the next node along the path, and<br />

establishes – if needed – the transport connection to that node. Transport connections for signalling message delivery in CASP, unlike in RSVP, are reliable (e.g., using TCP). The signalling message delivery then inherits the flow and congestion control features of the underlying reliable transport protocol that has been retained.

The separation of the path discovery from the signalling messages and the use of hop-by-hop addressing instead<br />

of end-to-end addressing (as in RSVP) make possible the use of security protocols such as TLS and IPsec, as

well as inter-domain signalling.<br />

Both soft-state and hard-state operation modes are feasible. In the soft-state mode, the state is removed after a<br />

given time interval unless it has been refreshed. The hard state is maintained until an explicit state removal message dismisses the session. A cryptographically random session identifier is selected by the initiator of the signalling. In addition, an identifier is assigned to the flow whose transmission required the signalling procedure. The use of both session and flow identifiers and of explicit state removal or teardown

messages contributes to solving the problems of double reservations or of releasing abandoned paths in the<br />

context of mobility.<br />

This protocol is still under investigation considering different transport protocols and routing interfaces with<br />

mobile IP.<br />

<strong>A2.</strong>6.4.3 QoS Mechanisms<br />

In order to guarantee the different services, the network must implement a certain number of functions:<br />

• Admission control<br />

• Congestion Avoidance<br />

• Congestion Management<br />

• Traffic shaping and policing<br />

• Maintaining per-flow-state<br />

• Link Efficiency handling<br />

Network policies, COPS and admission control<br />

In IntServ, the policies to be applied to flows can be stored in directory or policy servers. When receiving an RSVP message, the RSVP module asks the LPM (Local Policy Module) for a decision to be taken on the

request. The LPM is the module responsible for enforcing policy driven admission control on any policy aware<br />

node. The LPM interacts with a PEP (Policy Enforcement Point), which in turn contacts a PDP (Policy<br />

Decision Point) with a request for a policy decision on the packet and then sends the packet to the RSVP<br />

module. The PEP is a network device that is capable of enforcing a policy. It may be inside the source or<br />

destination, or on any node in between. The logical entity which interprets the policies pertaining to the RSVP requests and formulates a decision is called the PDP (Policy Decision Point).

PDP and PEP communicate with a standard protocol called COPS (Common Open Policy Service). It is a<br />

request-response protocol which relies on TCP for reliable delivery, and possibly IPSec for security. There are<br />

three types of requests:<br />

• Admission control requests: when a packet is received by a PEP, it asks the PDP for an admission control

decision on it.<br />

• Resource Allocation request: The PEP requests the PDP for a decision on whether, how and when to reserve<br />

local resources for the request.<br />

• Forwarding request: the PEP asks the PDP how to modify a request and forward it to the other network<br />

devices.<br />


When a PEP sends a policy decision request, i.e., a COPS Request message, the PDP may reach a decision by processing the request and send the PEP its decision in a COPS Decision message. Keeping track of previous decisions, it can later send the PEP another COPS Decision message asynchronously.

The PEP may send an explicit COPS Delete message to the PDP to remove the state associated with the request and stop any further decision making at the PDP on that request. If a path or reservation state times out or an RSVP Tear message arrives, a COPS Delete message is issued to the PDP to remove the state.


Congestion avoidance mechanisms<br />

Congestion avoidance mechanisms are intended to anticipate and avoid congestion. The main strategies are tail<br />

drop and random early dropping (RED). Tail drop simply drops an incoming packet if the output queue for the<br />

packet is full, and enqueues the packet otherwise. The main disadvantage is possible TCP global synchronisation: tail drop can drop packets from many hosts at the same time. A RED gateway detects

congestion by monitoring the average queue size and randomly discards packets if the average queue size<br />

exceeds some lower bound so as to notify connections of incipient congestion. The dropping probability<br />

increases with the average queue size. RED has been shown to successfully prevent global synchronisation and to reduce packet loss ratios given sufficient buffer size. A queue management algorithm called RIO - RED with IN

and OUT – has been defined, as an extension of RED, in order to discriminate low priority (out-of-profile)<br />

packets from high priority (in-profile) packets in times of congestion. By supporting two RED algorithms with<br />

different levels of dropping probability, RIO allows preferential dropping of out-of-profile packets

over in-profile packets.<br />
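A minimal sketch of the RED dropping decision is given below; the thresholds, the maximum dropping probability and the EWMA weight are illustrative configuration parameters, and the count-based correction of the full RED algorithm is omitted.

    import random

    class RedQueue:
        """Random Early Detection sketch: drop incoming packets with a probability that
        grows linearly with the EWMA of the queue size between min_th and max_th."""
        def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
            self.min_th, self.max_th, self.max_p, self.weight = min_th, max_th, max_p, weight
            self.avg = 0.0
            self.queue = []

        def enqueue(self, packet):
            # Exponentially weighted moving average of the instantaneous queue size
            self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
            if self.avg >= self.max_th:
                return False                       # forced drop
            if self.avg >= self.min_th:
                p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
                if random.random() < p:
                    return False                   # early (probabilistic) drop
            self.queue.append(packet)
            return True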

WRED – Weighted RED – is a RED strategy which in addition preferentially drops low-priority packets over high-priority ones when the output interface starts getting congested. In DiffServ environments WRED looks at the IP precedence bits to decide priorities and hence which packets to selectively drop. WRED is usually configured at the core

routers since IP precedence is set only at the core-edge routers. A variant of WRED is called multi-level RED.<br />

Multi-level RED is a statistical approach which relies on the usage of a random dropping function with different<br />

dropping probabilities together with a shared buffer mechanism, as illustrated in Figure 39. For each virtual<br />

queue (e.g for each subclass AFn in a DiffServ network), the thresholds are estimated from the average queue<br />

size computed as a sum of the current stored average queue size of all the virtual queues (or the AF subclasses<br />

AFn ) with a higher priority.<br />

[Figure 39: Multi-level RED for differentiation. A virtual-queue selector directs incoming packets to per-subclass RED dropping functions (AFn1, AFn2, AFn3), each with its own dropping probability Dp up to MAX_p driven by the shared average queue size (avg_queue); packets that are not dropped are stored in the shared AF-class physical queue feeding the output link.]

The multi-level dropping probability mechanism with a shared buffer can be used for differentiating packets<br />

within a given stream. The service differentiation (the dropping probability function) can be made to depend on the semantics of the information transported in the packet and will rely on source (sender) marking.



Congestion management mechanisms<br />

Congestion management mechanisms are methods implemented in core routers in support of the different classes<br />

of service. They include<br />

• the different queues assigned to the different classes of traffic;
• the algorithms for classifying incoming packets and assigning them to the different queues;
• the scheduling of packets out of the various queues and their preparation for transmission.

Different types of queuing techniques exist. In FIFO (First In First Out) queues, packets are transmitted in the<br />

order in which they arrive. Packets are stored in a unique queue when the network is congested and forwarded<br />

when there is no congestion. If the queue is full then packets are dropped. In Weighted Fair Queuing packets are<br />

assigned to different queues according to their ToS value, destination and source port number, destination and<br />

source IP address etc. Each queue has some priority value. After accounting for high priority traffic the<br />

remaining bandwidth is divided fairly among multiple queues of low priority traffic. In Custom Queuing,<br />

separate queues are maintained for separate classes of traffic. A byte count is set per queue. This ensures that the<br />

minimum bandwidth requirement by the various classes of traffic is met. CQ round robins through the queues,<br />

picking the required number of packets from each. If a queue is of length 0 then the next queue is serviced. In<br />

Priority Queuing, 4 traffic priorities - high, medium, normal and low - are established. Incoming traffic is

classified and enqueued in either of the 4 queues. Classification criteria are protocol type, packet size, ….<br />

Unclassified packets are put in the normal queue. The queues are emptied in the order of - high, medium, normal<br />

and low. In each queue, packets are in the FIFO order. During congestion, when a queue gets larger than a<br />

predetermined queue limit, packets get dropped. The advantage of priority queues is the absolute preferential<br />

treatment to high priority traffic. The disadvantage is that it is a static scheme and does not adapt itself to<br />

network conditions.<br />
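The Priority Queuing discipline described above can be sketched as follows; the queue limit and the way priorities are passed in are illustrative assumptions.

    from collections import deque

    PRIORITIES = ('high', 'medium', 'normal', 'low')

    class PriorityQueuing:
        def __init__(self, queue_limit=50):
            self.queues = {p: deque() for p in PRIORITIES}
            self.queue_limit = queue_limit

        def enqueue(self, packet, priority='normal'):
            # Unclassified packets go to the normal queue; a full queue drops the packet.
            q = self.queues.get(priority, self.queues['normal'])
            if len(q) >= self.queue_limit:
                return False
            q.append(packet)
            return True

        def dequeue(self):
            # Queues are always served in strict order: high, medium, normal, low.
            for p in PRIORITIES:
                if self.queues[p]:
                    return self.queues[p].popleft()
            return None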

Traffic conditioning in edge routers<br />

Edge routers perform traffic conditioning and assign the DSCPs on the basis of an SLA – Service Level<br />

Agreement – negotiated between the customer and the network provider. The classifier reads the DSCP and/or<br />

other field (source IP, destination IP, source port, destination port, etc.), selects and routes packets to a Traffic<br />

Conditioner (TC). The role of the traffic conditioner is to ensure that the flows are in line with the SLA, by<br />

• monitoring the temporal traffic flow of each packet stream (meter) to see if it is within the required profile<br />

and by triggering re-marking, dropping, or shaping, if it is out of profile.<br />

• changing the DSCP, if necessary, in order to change the forwarding behaviour (marker).<br />

• delaying packets of an out-of-profile flow, in order to cause it to conform to the agreed traffic profile (shaper).

• dropping the packet, if allowed (dropper).<br />

[Figure 40: Traffic conditioning building blocks. At the domain boundary, incoming packets pass through a classifier, then through the meter, marker and shaper/dropper of the traffic conditioner, and leave as classified packets.]



The building blocks of a traffic conditioner are shown in Figure 40.<br />

Classifier<br />

Packet classifiers select packets in a traffic stream based on the content of some portion of the packet header<br />

such as DSCP, source or destination IP address, etc. Classifiers are used to "steer" packets matching some<br />

specified rule to an element of a traffic conditioner for further processing. Behaviour Aggregate (BA) or Multi-field (MF) classification can be performed. A Behaviour Aggregate classifier selects packets based only on the contents of the DSCP field, while a Multi-field classifier selects packets based on the contents of an arbitrary number of header fields, typically some combination of source address, destination address, DS field, protocol ID, source port and destination port. Usually BA classification is performed at the interior nodes of a DiffServ domain, while MF classifiers are located at the ingress or egress nodes of a DiffServ domain.
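The difference between the two classifier types can be sketched as follows; the packet is represented as a plain dictionary with illustrative field names.

    def ba_classify(packet, dscp_to_class):
        """Behaviour Aggregate classification: look only at the DSCP field."""
        return dscp_to_class.get(packet['dscp'], 'best-effort')

    def mf_classify(packet, rules):
        """Multi-field classification: match an arbitrary combination of header fields.
        rules is an ordered list of (field_filter, class) pairs; the first match wins."""
        for field_filter, traffic_class in rules:
            if all(packet.get(field) == value for field, value in field_filter.items()):
                return traffic_class
        return 'best-effort'

    pkt = {'src': '10.0.0.1', 'dst': '10.0.0.2', 'proto': 17, 'sport': 4000, 'dport': 5004, 'dscp': 46}
    print(ba_classify(pkt, {46: 'EF'}))
    print(mf_classify(pkt, [({'proto': 17, 'dport': 5004}, 'EF')]))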

Meter<br />

The meter function will be governed by a policing function intended to assure that each class is in conformance<br />

with the SLA. In the platform, conformance to the SLA will be verified using a single or multiple token bucket approach. The exponential token bucket will also be used as a means of metering for the SLAs based on the respective PHBs.
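A single token bucket meter of the kind mentioned above can be sketched as follows; the committed rate and bucket depth are illustrative SLA parameters.

    class TokenBucketMeter:
        """Mark each arriving packet as in-profile or out-of-profile against an SLA
        described by a committed rate (bytes/s) and a bucket depth (burst size, bytes)."""
        def __init__(self, rate_bytes_s, depth_bytes):
            self.rate = rate_bytes_s
            self.depth = depth_bytes
            self.tokens = depth_bytes
            self.last_time = 0.0

        def conforms(self, packet_size, now):
            # Refill tokens for the elapsed time, capped at the bucket depth.
            self.tokens = min(self.depth, self.tokens + (now - self.last_time) * self.rate)
            self.last_time = now
            if packet_size <= self.tokens:
                self.tokens -= packet_size
                return True      # in-profile: forward unchanged
            return False         # out-of-profile: re-mark, shape or drop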

Marking in the ingress node<br />

Packet markers set the DS field of a packet to a particular codepoint, classifying the marked packet to a<br />

particular DS behaviour aggregate. The marker may be configured to mark all packets which are steered to it to<br />

a single codepoint, or may be configured to mark a packet to one of a set of codepoints used to select a PHB in a<br />

PHB group, according to the state of a meter.<br />

Re-Marking in the ingress/core nodes<br />

When a marker changes the codepoint of a packet that the sender has already marked, it is said to have "re-marked" the packet. DSCP re-marking may be performed in egress and core nodes, according to the SLS.

Per Hop Behaviours in core routers<br />

CoS to PHB mapping<br />

A DiffServ domain is characterized by an ingress and an egress point, with Service Level Specifications (SLS). An SLS corresponds to a CoS and to a corresponding amount of resources (bandwidth). Each CoS has to be assigned a certain per-hop behaviour or forwarding treatment.

Class of Service            Per Hop Behaviour
Virtual Leased Line         EF
Controlled Delay – Low      Exp-TBF
Controlled Delay – Medium   Exp-TBF
Assured Forwarding Group    AF group
Best Effort                 BE

Table 13: Mapping of Classes of Service to Per Hop Behaviours.

Quantitative CoS (virtual leased line, controlled delay – low or medium) require admission control to the network's resources, possible network reconfiguration when resources are not sufficient for a new flow to be accepted, as well as explicit path routing for end-to-end delay control. Per-flow admission control to the core DiffServ network resources can be performed at the edge routers using a signalling protocol, e.g., RSVP. The resources available for each Class of Service and source-destination pair are determined by the DiffServ Service Level Specification (SLS), which specifies the resources to be reserved for each CoS.




PHB implementation<br />

Each CoS offered by the DiffServ domain has to be mapped to a PHB. Per Hop Behaviours correspond to different forwarding treatments:

• Expedited Forwarding (EF). The EF PHB can be used to build a low-loss, low-latency, low-jitter, assured-bandwidth, end-to-end service through DiffServ domains. Such a service appears like a point-to-point connection or a "virtual leased line" 150. Several types of queue scheduling mechanisms may be employed to implement the EF PHB. A possible implementation is a CBQ 151 scheduler that gives the EF queue the highest priority, providing bounded and isolated bandwidth up to the configured rate. CBQ, by building up queues of different priorities, allows explicit service discrimination between flows, each flow being composed of homogeneous aggregates of packets. This queuing mechanism makes it possible to separate streaming video applications from other applications which do not have the same characteristics and do not require the same level of QoS. Each class of a CBQ-based packet scheduler can implement a different Per Hop Behaviour.
• Another way of implementing an EF PHB is to use a simple priority queue, which gives the appropriate behaviour as long as there is no higher priority queue that could preempt the EF queue for more than a packet time at the configured rate. This can be accomplished by having a rate policer, such as a token bucket, associated with each priority queue to bound the incoming rate (a minimal sketch of this combination is given after this list). It is also possible to use a single queue in a group of queues serviced by a weighted round robin scheduler, where the share of the output bandwidth assigned to the EF queue is equal to the configured rate.

• Assured Forwarding (AF). The AF PHB group is a means for a DiffServ domain to offer different levels of forwarding assurance for IP packets. Four AF classes are defined. At each DiffServ node, each AF class is allocated a certain amount of forwarding resources (buffer space and bandwidth). Within each AF class, a packet is assigned one of three levels of drop precedence. A congested DS node tries to protect packets with a lower drop-precedence value from being lost by preferentially discarding packets with a higher drop-precedence value. An AF class is configured with a maximum rate and a threshold at which the classifier begins to assign a higher drop precedence to the packets 152. Classification into the appropriate classes is performed using packet filtering. Meters are attached to each class and, if the agreed rate is exceeded, policing or dropping occurs.
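The sketch referred to in the list above is given here: a strict-priority EF queue whose incoming rate is bounded by a token-bucket policer, with best-effort traffic served only when the EF queue is empty. It is an illustrative simplification, not the implementation used in the project; the rates, burst sizes and packet sizes are invented.

```python
from collections import deque
import time

class EfPriorityScheduler:
    """Strict priority for EF, policed by a token bucket; best effort below it."""
    def __init__(self, ef_rate_bps: float, ef_burst_bytes: float):
        self.ef_queue = deque()
        self.be_queue = deque()
        self.rate = ef_rate_bps / 8.0      # the token bucket works in bytes/s
        self.burst = ef_burst_bytes
        self.tokens = ef_burst_bytes
        self.last = time.monotonic()

    def enqueue_ef(self, pkt_bytes: int):
        # Police the EF aggregate so it cannot monopolize the link
        # beyond the configured rate and burst.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            self.ef_queue.append(pkt_bytes)
        # else: the out-of-profile EF packet is dropped (policer decision)

    def enqueue_be(self, pkt_bytes: int):
        self.be_queue.append(pkt_bytes)

    def dequeue(self):
        # EF is always served first; best effort only when the EF queue is empty.
        if self.ef_queue:
            return "EF", self.ef_queue.popleft()
        if self.be_queue:
            return "BE", self.be_queue.popleft()
        return None

# Example: EF bounded to 2 Mbit/s with a 3 kB burst.
sched = EfPriorityScheduler(ef_rate_bps=2_000_000, ef_burst_bytes=3_000)
sched.enqueue_ef(1500)
sched.enqueue_be(1500)
print(sched.dequeue())
```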

150 V. Jacobson, K. Nichols and K. Poduri, “An Expedited Forwarding PHB”, IETF Request for Comments, RFC 2598, June 1999<br />

151 Floyd, S., and Jacobson, V., “Link-sharing and Resource Management Models for Packet Networks”. IEEE/ACM Transactions on<br />

Networking, Vol. 3 No. 4, pp. 365-386, August 1995<br />

152 F. Baker, J. Heinanen, W. Weiss and J. Wroclawski, “Assured Forwarding PHB group”, IETF Req. for comments RFC 2597, June 1999<br />



A2.6.5 MPLS and traffic engineering


The growing demand for bandwidth, together with the need for efficient bandwidth utilization and QoS support, also requires high-performance switching and routing solutions. This has motivated the design of simple forwarding methods that combine the traffic management features of traditional switches with routing functions, as exemplified by MPLS.

A2.6.5.1 MPLS

MPLS (Multiprotocol Label Switching) 153 is an IETF initiative that aims at integrating Layer 2 information about network links (bandwidth, latency, utilization) into Layer 3 (IP) in order to simplify and improve IP packet exchange. The goal is eventually to provide efficient switching and give flexibility to route traffic around link failures, congestion and bottlenecks. From a QoS standpoint, MPLS should make it possible to manage different kinds of data streams based on priority and service plan. It gives the appearance of a connection-oriented service.

When packets enter an MPLS-based network, Label Edge Routers (LERs) give them a label (identifier). The layer-3 header is analyzed and mapped into a fixed-length, unstructured value called a label. Different headers can be mapped to the same label, provided that these headers result in the same choice of next hop. These labels not only reflect information based on the routing table entry (i.e., destination, bandwidth, delay, and other metrics), but can also refer to IP header fields (source IP address), Layer 4 socket number information, and differentiated services. The labels correspond to a forwarding equivalence class; this association is called the label binding. The assignment of a label may also be based on a forwarding policy. A short label header is then added at the front of the layer-3 packet and is, in this way, carried across the network with the packet. At the next hops, the forwarding decision is based on the label value.
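The label-swapping forwarding decision can be pictured with the minimal sketch below (the table entries and interface names are invented for illustration): the LSR looks up the incoming top label, swaps it for the outgoing label bound by the downstream neighbour and forwards the packet on the corresponding interface.

```python
# Incoming label -> (outgoing label, outgoing interface); entries are illustrative.
lfib = {
    17: (24, "if-1"),   # label binding advertised by the next hop reachable via if-1
    18: (99, "if-2"),
}

def forward(label: int, payload: bytes):
    """Forward an MPLS packet based only on its top label."""
    out_label, out_if = lfib[label]
    # Swap the label and send the packet on the selected interface.
    return out_label, out_if, payload

print(forward(17, b"ip-packet"))
```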

Once this classification is complete and mapped, packets are assigned to the corresponding Label Switched Paths (LSPs), on which Label Switch Routers (LSRs) place outgoing labels on the packets. Each LSR informs its neighbours of the label bindings it has made via the Label Distribution Protocol (LDP). Note that RSVP-TE is also envisaged for label distribution; RSVP-TE is defined as a set of tunnelling extensions to the original RSVP protocol, essentially for traffic engineering. The tunnelling extensions allow the creation of LSPs and provide smooth rerouting or pre-emption. Label-switched paths can be implemented over various packet-based link-level technologies (layer-2 technologies), such as Packet-over-SONET, Frame Relay, ATM, and LAN technologies (e.g. all forms of Ethernet, Token Ring, etc.). Provisioning and management, in this case, need to deal only with MPLS rather than with multiple layer-2 technologies.

MPLS can be regarded as a technology forcing application flows into connection-oriented paths and providing mechanisms for traffic engineering and bandwidth guarantees. To enable traffic engineering, MPLS must be combined with technologies enabling class-specific treatment (e.g. RSVP with tunnelling extensions, DiffServ). A key advantage of MPLS is thus its close integration with the IP networking stack.

A2.6.5.2 Generalized MPLS

Generalized MPLS extends the MPLS control plane protocols to control several types of switches, namely Time Division Multiplex (TDM), Lambda Switch (LSC) and Fiber-Switch (FSC), and not only packet interfaces and switching. When the optical cross-connect is realized by wavelength switching, the protocol is called Multi-Protocol Lambda Switching. The label definition is thus extended to include time slots, wavelengths and port numbers. A functional description of the extensions to MPLS signalling is provided in 154.

A2.6.5.3 DiffServ aware MPLS traffic engineering

Service level agreements (SLAs) define the service quality offered in terms of latency, jitter, bandwidth guarantees, etc., which translate into a variety of scheduling, queuing and drop policies. However, DiffServ marking alone is not sufficient to guarantee strict scheduling. If the traffic follows a path whose resources are not sufficient to meet the jitter or latency requirements, or if congestion is caused by a link or node failure, the SLA cannot be met. Hence, DiffServ provides a QoS treatment to traffic aggregates but does not guarantee a given QoS.

Congestion might be avoided by optimizing the routing of traffic.

153 E. Rosen, A. Viswanathan and R. Callon, "MPLS: Multi-Protocol Label Switching Architecture", IETF Request for Comments, RFC 3031, Jan. 2001
154 E. Rosen, A. Viswanathan and R. Callon, "MPLS: Multi-Protocol Label Switching Architecture", IETF Request for Comments, RFC 3031, Jan. 2001




Since the current Internet routing protocols make routing decisions based only on the shortest path to the destination, traffic will be aggregated towards the core of a network, even if alternative routes of higher capacity exist. Traffic engineering is the process of distributing the load on a network in order to achieve optimal utilization of the available bandwidth. An important mechanism for automating traffic engineering is constraint-based routing (CBR). CBR extends shortest-path routing algorithms to take resource availability and flow requirements into consideration when computing routes. Thus, a CBR algorithm might select an alternative path to the destination if it provides more bandwidth than the shortest path. This leads to a more effective utilization of network resources.
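A minimal way to picture constraint-based routing is to prune the links that cannot satisfy the bandwidth constraint and then run a shortest-path computation on what remains, as in the illustrative sketch below (the topology, costs and requested bandwidth are invented; real CBR implementations consider further constraints):

```python
import heapq

# graph[node] = list of (neighbour, cost, available_bandwidth_mbps); illustrative topology.
graph = {
    "A": [("B", 1, 100), ("C", 1, 10)],
    "B": [("D", 1, 100)],
    "C": [("D", 1, 10)],
    "D": [],
}

def constrained_shortest_path(src, dst, min_bw):
    """Dijkstra over the subgraph of links offering at least min_bw."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost, bw in graph[node]:
            if bw < min_bw:          # constraint: prune links without enough bandwidth
                continue
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# A flow requesting 50 Mbit/s avoids the low-capacity A-C-D branch.
print(constrained_shortest_path("A", "D", 50))
```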

In contrast, MPLS together with constraint-based routing can guarantee bandwidth for a forwarding equivalence class but does not support class-based treatment of flows (i.e., forwarding or discarding and scheduling). The DiffServ-aware MPLS traffic engineering framework 155 defined by the IETF aims at combining these complementary technologies towards QoS-enabled networks.

MPLS DiffServ-TE makes MPLS aware of pre-defined classes of service (CoS) for the different types of traffic and applications, allowing resource reservation for the different CoS. When MPLS supports DiffServ, traffic flows can go through class-based admission, differentiated queue servicing, pre-emption priority, etc., to ensure that the traffic complies with the negotiated service level agreement. In other words, the DiffServ functions are added to MPLS.

RFC 3270 155 describes how the Per-Hop Behaviours (PHBs) for the different CoS can be inferred from the MPLS header and label, depending on the number of different PHBs to be supported.

A2.6.5.4 RSVP-TE (Traffic Engineering)

MPLS separates data forwarding from the control and signalling mechanisms used for routing and label<br />

management. A variety of control and signalling mechanisms can thus be considered for MPLS. The IETF has<br />

defined two signalling mechanisms for establishing LSPs in an MPLS environment: LDP with some extensions<br />

156 , and RSVP-TE 157 .<br />

LDP (Label Distribution Protocol) is a signalling protocol designed for setting up LSPs on a hop-by-hop basis. It also allows information about label bindings to be exchanged. Each MPLS node selects the next hop according to the information in its label information table. LDP defines a number of messages used to establish, maintain and terminate sessions between MPLS nodes. The extensions of LDP defined for explicit routing are referred to as CR-LDP (constraint-based routing LDP).

RFC 3209 157 of the IETF defines extensions of RSVP to establish explicitly routed LSPs. This protocol, referred to as RSVP-TE, defines procedures and messages to allocate and bind labels, to exchange information about label bindings and to establish LSPs. The RSVP-TE PATH message conveys QoS parameters such as bandwidth, burst limits, delay, jitter, etc., for each link along the LSP.

155 F. Le Faucheur (Ed.), et al., "MPLS Support of Differentiated Services", IETF Request for Comments, RFC 3270, May 2002
156 L. Andersson et al., "LDP Specification", IETF Request for Comments, RFC 3036, Jan. 2001
157 D. Awduche, L. Berger, D. Gan, T. Li, S. Srinivasan, and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP Tunnels", IETF Request for Comments, RFC 3209, Dec. 2001



A2.6.6 Session and application level signalling

A2.6.6.1 SDP


SDP (Session Description Protocol) 158 is used to describe multimedia sessions (i.e. the media streams). It is actually a structured, text-based media-description format. SDP messages include information such as the session name and purpose, the time during which the session is active, the bandwidth used, the type of media, the transport protocol (RTP, H.320), the media encoding format, and the multicast address and transport port for the media. It is also used to specify the client capabilities (e.g. support for the MPEG-1 video codec and MP3 audio codec). SIP, MGCP (Media Gateway Control Protocol), SAP (Session Announcement Protocol) and RTSP (Real-Time Streaming Protocol) all use SDP. SDP also allows session start and stop times to be scheduled and recurring sessions to be described. SDP information can be carried within other protocol packets, e.g., in the SIP message body.
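To give a flavour of the information listed above, the sketch below assembles a minimal SDP body for a single RTP audio stream; the addresses, ports and codec choice are illustrative placeholders, not values prescribed by this deliverable:

```python
def make_sdp(session_name: str, origin_ip: str, media_ip: str, audio_port: int) -> str:
    """Build a minimal SDP description for one RTP audio stream (PCMU/8000)."""
    lines = [
        "v=0",                                            # protocol version
        f"o=- 2890844526 2890844526 IN IP4 {origin_ip}",  # origin and session id
        f"s={session_name}",                              # session name
        f"c=IN IP4 {media_ip}",                           # connection address
        "t=0 0",                                          # unbounded session time
        f"m=audio {audio_port} RTP/AVP 0",                # media line, payload type 0
        "a=rtpmap:0 PCMU/8000",                           # codec mapping
    ]
    return "\r\n".join(lines) + "\r\n"

print(make_sdp("BREAD demo call", "192.0.2.10", "192.0.2.10", 49170))
```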

A2.6.6.2 RTSP

The Real Time Streaming Protocol is an application level protocol to control the delivery of data with real time<br />

properties. RTSP is a protocol for initiating and directing delivery of streaming multimedia from media servers.<br />

RTSP is designed to work with time-based media, such as streaming audio and video, as well as any application<br />

where application controlled, time-based delivery is essential. It has mechanisms for time-based seeks into<br />

media clips, with compatibility with many timestamp formats, such as SMPTE timecodes. It can be regarded as<br />

some sort of "Internet VCR remote control protocol". In addition, RTSP is designed to control multicast delivery<br />

of streams.<br />

RTSP does not deliver data, though the RTSP connection may be used to tunnel RTP traffic for ease of use with<br />

firewalls and other network devices. RTP and RTSP will likely be used together in many systems, but either<br />

protocol can be used without the other. RTSP is complementary to protocol suites such as H.323 or SIP: H.323 is useful for setting up audio/video conferences in moderately sized peer-to-peer groups (e.g., videoconferencing groups), whereas RTSP is useful for large-scale broadcasts and audio/video-on-demand streaming. RTSP provides "VCR-style" control functionality such as pause, fast forward, reverse, and absolute

positioning, which is beyond the scope of H.323. Many major firewall vendors are listed as supporters of RTSP.<br />

Being an open standard, RTSP may allow the industry to concentrate its efforts on a single streaming<br />

infrastructure, with clients, servers, and proxies all using RTSP as a single unifying protocol.<br />

A2.6.6.3 SIP

SIP 159 is an application-layer transactional request-response protocol designed to establish, modify and terminate unicast and multicast multimedia (voice and video) sessions in an IP network, for a large range of multimedia

applications. A session can be for example a telephone call (IP telephony) or a collaborative multimedia<br />

conference session. SIP sessions can also include information retrieval or broadcast sessions, depending on the<br />

session description. The protocol has been specified by the MMUSIC WG as a proposed standard in 1999 (IETF<br />

RFC 2543) and was updated by the SIP WG in 2002 (IETF RFC 3261). Possible SIP applications and usage<br />

scenarios are investigated in the Session Initiation Proposal Investigation (SIPPING) WG. SIPPING describes<br />

the requirements for any extension to SIP. SIP is neither a session description protocol nor a resource<br />

reservation protocol. These functions are provided by other protocols such as SDP and RSVP respectively (see<br />

above). SIP has been designed to co-exist and inter-operate with the other Internet protocols. In contrast with<br />

H.323 which is an entire suite of protocols or technologies including codecs, call control, conferencing, in one<br />

integrated stack, SIP has been designed to work with a broad spectrum of existing and future protocols. In that<br />

sense, SIP is more flexible. It provides four basic functions: establishment of user location, i.e. translating from<br />

the user name to the corresponding network address; feature negotiation; call management, i.e., adding,<br />

dropping or transferring participants; changing features of a session in progress.<br />

SIP provides a small number of text-based messages to be exchanged in separate transactions between the SIP<br />

peer entities (the SIP user agents in the user terminals). The architecture of SIP is based on HTTP, with its advantages of easy extensibility and text-based messaging: much of the syntax and semantics are borrowed from HTTP, and SIP messages look like HTTP messages, with similar message formatting, headers and MIME support.
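The HTTP-like, text-based nature of SIP can be seen in the sketch below, which assembles a minimal INVITE request carrying an SDP body; all addresses, tags and identifiers are illustrative placeholders and several headers a complete implementation would add are omitted:

```python
def make_invite(caller: str, callee: str, caller_ip: str, sdp_body: str) -> str:
    """Assemble a minimal SIP INVITE with an SDP payload (illustrative values)."""
    headers = [
        f"INVITE sip:{callee} SIP/2.0",
        f"Via: SIP/2.0/UDP {caller_ip}:5060;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        f"To: <sip:{callee}>",
        f"From: <sip:{caller}>;tag=1928301774",
        f"Call-ID: a84b4c76e66710@{caller_ip}",
        "CSeq: 314159 INVITE",
        f"Contact: <sip:{caller.split('@')[0]}@{caller_ip}>",
        "Content-Type: application/sdp",
        f"Content-Length: {len(sdp_body)}",
    ]
    return "\r\n".join(headers) + "\r\n\r\n" + sdp_body

sdp = ("v=0\r\no=- 0 0 IN IP4 192.0.2.10\r\ns=call\r\n"
       "c=IN IP4 192.0.2.10\r\nt=0 0\r\nm=audio 49170 RTP/AVP 0\r\n")
print(make_invite("alice@example.org", "bob@example.net", "192.0.2.10", sdp))
```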

158 M. Handley and V. Jacobson, "SDP: Session Description Protocol", IETF Request for Comments RFC 2327, http://www.ietf.org/rfc/rfc2327.txt, Apr. 1998
159 J. Rosenberg et al., "SIP: Session Initiation Protocol", IETF Request for Comments RFC 3261, Jun. 2002. Replaces RFC 2543




The session itself is described at two levels: the SIP protocol carries the parties' addresses and the protocol processing features, while the description of the media streams exchanged between the parties of a multimedia session is defined by another protocol; the IETF suggests the Session Description Protocol (SDP, IETF RFC 2327). The message body is transparent to SIP.

The basic call control functionality is provided by one signalling transaction using the INVITE request message. Other transactions complement the basic call, e.g., explicit call release. Network entities such as proxy servers or redirect servers can be traversed by the messages and can be used for support functions, e.g., address resolution. SIP supports user mobility by proxying and redirecting requests to the user's current location. The protocol works on a client-server model. The entities in SIP are user agents, formed by a user agent client (UAC) and a user agent server (UAS), proxy servers and redirect servers. The signalling path is independent from the data path.

The user agent client initiates a SIP request (i.e., is the end system which sends the SIP requests). The user agent<br />

server returns the responses (i.e., listens to the call requests and prompts the user or executes a program to<br />

determine the response) on behalf of the end user. The redirect servers are network servers which redirect users<br />

to try other servers, i.e., they return the destination addresses to the receiving client. Proxy servers are<br />

intermediate entities receiving requests from the clients and forwarding or re-initiating the requests (acting as a<br />

Client) to the other servers. A proxy server can either be stateful or stateless. When stateful, it remembers the<br />

incoming requests and the associated outgoing requests and co-ordinates the responses accordingly.<br />

A location server is used by the SIP redirect or proxy server to obtain information about the called party's possible locations. This information may be obtained through SIP or, when the location service is external, through non-SIP protocols. Users must register their current location. This is handled by registrars, typically co-located with a proxy or redirect server, which accept the registration requests from the users. The proxy servers can thus forward calls to the user's current location. Proxying and redirecting requests to a user's current location enables user mobility.

The SIP protocol and its extensions can be used to support a variety of functions, e.g.,<br />

• Interface to SIP-enabled endpoints;<br />

• Call/session control, for example, interface to media gateway controllers, proxies;<br />

• Media service control, for example, interface to media gateways and media servers.<br />

• Service control, for example, interface to application servers and services mediation functions.<br />

• Intelligent network/Internet protocol (IN/IP) internetworking, for example, the SIP-based Services in the<br />

PSTN/IN Requesting Internet Services<br />

A2.6.6.4 SAP

For large multicast conferences where synchronous invitation of all prospective participants is not viable, a protocol called the Session Announcement Protocol (SAP) has been proposed. SAP was designed at around the same time as SIP. It is used to announce multimedia sessions, e.g., to users in a multicast group, and can be used in phones and gateways in conjunction with SIP. Session announcement packets containing SDP descriptions are periodically transmitted to a multicast address and port. Announcements can contain the start time of the session, its duration, etc. Specialized session directory tools listen to session announcements and inform the user about active and upcoming sessions, their addresses and ports.




A2.6.7 Session signalling and QoS networks: Interaction and Integration

This section gives an overview of how the main session signalling protocols and QoS-enabled networks may interact and inter-operate in QoS-enabled broadband multimedia systems.

The first problem to be addressed in the design of multimedia systems is the establishment of audio and video streams and the negotiation of QoS parameters. The IETF has defined the Session Initiation Protocol (SIP) as a standard protocol for session initiation and for negotiating its parameters in the Internet. The OMG and the IMTC are also promoting standards for audio/video stream control and transmission. The OMG has defined the AVSC specification for the control and management of audio/video streams; it defines a model for implementing an open distributed multimedia streaming framework based on the CORBA environment and inherits the features of CORBA as a middleware. The IMTC committee has proposed H.323. The resulting diversity in system architectures and control protocols raises questions about the interoperability between such applications. We come back to this point below. Here we focus on the interaction of SIP with network and end-to-end QoS mechanisms.

Figure 41: SIP distributed architecture and session establishment walk-through (SIP user agents exchanging requests and responses for a call to sip:toto@xxx.xx via SIP proxies, with a location service reached through a non-SIP protocol; the numbers in the original figure indicate the ordering of the exchanges).

The SIP endpoints, also called user agents, initiate requests and respond to requests. The user agents send<br />

registration messages to the SIP registrar which stores the corresponding information in a location service via a<br />

non-SIP protocol. Once the information is stored, the registrar sends back the appropriate response.<br />

For the session establishment, user agents communicate with other User Agents directly or via intermediate<br />

servers (Figure 41). SIP intermediate servers act as proxy or redirect servers. SIP Proxy Servers forward<br />

requests from the User Agent to the next SIP server within the network and also retain information for<br />

billing/accounting purposes. SIP supports address resolution, name mapping and call re-direction: SIP Redirect<br />

Servers respond to client requests and inform them of the requested server’s address. SIP servers can contact<br />

external location servers in order to determine user or routing policies. This enables to determine the location of<br />

target points. Several non-SIP schemes can be used to locate users. To avoid scalability problems, SIP servers<br />

can either maintain state information or forward requests in a stateless fashion.<br />

To carry information about the type of session, the media capabilities of the end systems, the negotiating terms<br />

and conditions, SIP uses SDP (Session description Protocol). This information is carried as attachment. Upon<br />

receiving an INVITE message to join a session, a party can either accept or reject the invitation. In case of<br />

rejection, the SIP agent returns a message indicating why the remote party cannot be contacted. If the invitation<br />

has been accepted, the inviting party receives an indication that the called party has been located. In addition to<br />


establishing a session between two points, SIP also supports mid-call changes, changes of media characteristics<br />

or codec.<br />

Interoperability between multimedia applications built on different system platforms and using other signalling protocols can in practice be achieved using a SIP-controlled gateway. Gateways provide call control and, where needed, translation of messages between SIP conferencing end points and other terminal types. They act both as servers and as clients, and can perform functions such as authentication, authorisation, network access control and routing. Requests are serviced internally or passed on, possibly after translation, to other servers.

In order to provide complete services (e.g., IP telephony, IP videoconferencing, etc.), SIP must be used in conjunction with other IETF protocol standards, e.g.:
• with RSVP, which reserves the required network resources for a given targeted/negotiated QoS, or with the COPS (Common Open Policy Service) protocol for supporting policy control over QoS;
• with RTP (Real Time Transport Protocol) for the transport of real-time data;
• with RTSP (Real Time Streaming Protocol) for controlling the delivery of streaming media;
• and with SAP (Session Announcement Protocol) for announcing multimedia sessions via multicast.

A lot of effort has already been devoted to the interaction of SIP with the QoS mechanisms in IP networks. This includes SIP and SDP (Session Description Protocol) extensions, QoS extensions and resource reservation interactions with SIP, in order to improve QoS in SIP-based networks. One characteristic of SIP that actually helps QoS is the ability to indicate in a SIP INVITE, via SDP, which codecs are supported and which QoS is desired before a session begins.

When considering QoS with SIP, one has to distinguish between the service quality of session establishment, which includes aspects such as call completion probability and call establishment time, and the delivery quality of the RTP media stream. In the remainder of this section we focus on the interaction of SIP with media delivery QoS, or on augmenting SIP with it.

Indeed, the SIP and data paths are disjoint and SIP cannot reserve resources. Hence, RTP media stream quality does not depend on the SIP protocol but on more general IP QoS mechanisms. To date, SIP applications over IP networks have functioned as best-effort services: their media packets are delivered with no performance guarantees. However, SIP gateway support of RSVP can allow quality of service (QoS) to be supported by coordinating SIP call signalling and RSVP, e.g., for DiffServ resource management. Resource reservation on SIP gateways can synchronize RSVP procedures with SIP call establishment procedures, ensuring that the required QoS for a call is maintained across the IP network. This can be done on the basis of the QoS information and media codec types provided by the SDP messages carried with SIP INVITE requests. This means integrating two forms of signalling: the signalling to set up the call using SIP and, once the media addresses and codecs are agreed upon via SDP, a second signalling exchange for setting up QoS using RSVP.

Different SIP-QoS architectures can be envisaged:
• either an end-to-end approach, which places additional complexity on the terminals: the user application must be aware of the QoS mechanisms used in the access network and of the corresponding QoS signalling protocol (e.g. RSVP, COPS, or other);
• or scenarios based on QoS-aware SIP servers, which amounts to moving QoS-related functions to SIP servers that control both call setup and resource reservation, thus relieving the terminals of the extra complexity.

A2.6.7.1 End-to-end architecture:

In the end-to-end architecture, the terminals (the SIP user agents) start an RSVP-based bandwidth reservation during the SIP call setup. The SIP terminals are connected through access networks to a core network with QoS support. The QoS provided in the core network is accessed via the network edge routers. When the calling user agent wants to establish a QoS call, it sends the SIP INVITE message to the callee, specifying that a bandwidth reservation is requested. Upon receiving the INVITE message, a "session progress" response is sent by the callee and the resource reservation procedure can then start.

Depending on the QoS model (IntServ or DiffServ), the caller and/or the callee starts an RSVP session by sending PATH messages to the peer party or within the access network only. RSVP signalling can indeed be used within the access networks only, with DiffServ mechanisms used in the core network: the bandwidth reservation is still requested by the terminals by means of RSVP signalling, but the resources in the core network are handled with DiffServ mechanisms. Upon reception of the RESV messages, each user agent knows that the reservation has been successfully set up and the SIP call setup can continue.




Admission control within the Diffserv network is enforced at the edge of the network by the Edge Routers<br />

(ERs). Different architectures can be used for the resource management.<br />

A possible solution is to rely on a Bandwidth Broker – BB - in charge of managing resources of the whole<br />

Diffserv network. The BB may act as an Admission Control Server. The BB can be regarded as a server<br />

dedicated to policy control, accounting and billing aspects. At the reservation setup time, the ingress Edge<br />

Router queries the BB and rejects or admits the new flow depending on its response.<br />

This approach requires user agents to be aware of both SIP and RSVP signalling; therefore no generic SIP client can be used. Supporting both SIP and RSVP may be too demanding for very lightweight terminals, and a large number of signalling messages have to be sent and processed by terminals, servers and routers.

A2.6.7.2 Architecture based on QoS aware SIP servers:

In the second approach, the SIP client sends SIP messages to its proxy server, which handles both outgoing and incoming calls, and receives the messages from that server. The SIP servers can read and add QoS-related information in the SIP messages. The SIP server extracts the QoS signalling parameters from the SIP message body and interacts with the network QoS mechanisms. If the terminating SIP server is able to handle the requested QoS, it answers with the proper information in the response SIP messages. The QoS scenario can be based on COPS as the protocol for QoS reservations.

The caller SIP client starts a SIP call setup session through the SIP proxy server by sending a SIP INVITE message. The message carries the called URI in the SIP header and the session specification (media, codecs, source ports, etc.) within the SDP body. Based on this session information, the QoS-aware SIP server can start a QoS session, interacting with a remote QoS-aware SIP server and with the QoS provider for the backbone network (i.e. the access edge router). The QoS requests can be made by the SIP server to the edge router using COPS. To handle the QoS requests, the edge router should support all the mechanisms needed to perform admission control, as well as appropriate policing functions.

The SIP server can then insert the required descriptors within the INVITE message and forward it towards the called server, possibly via the callee's QoS-aware SIP proxy server, which then has all the information needed to request a specific QoS reservation from the edge router on the called access network for the called-to-caller traffic flow. When the called QoS-aware SIP server receives the response to the QoS reservation request and this response is positive, it stores the QoS information and sends it within the OK message towards the caller.

The QoS provided by the QoS enabled network is accessed by QoS Access Points located in the edge routers.<br />

The setup of a QoS session in such a scenario is composed of two phases: the end-to-end signalling mechanism<br />

to exchange QoS information and the QoS negotiation between the SIP agents and the QoS network, e.g. using<br />

COPS as shown in Figure 42.<br />

Figure 42: Interaction of SIP with QoS enabled IP networks (SIP user agents and proxies exchange SIP requests and responses for a call to sip:toto@xxx.xx; PEP/PDP functions at the edge DiffServ routers interact via COPS with a PDP/Bandwidth Broker, while the data flow crosses the edge and core routers).



This separation of session establishment and QoS reservation could lead to the following situation: one may succeed (namely, the call setup) while the other (the resource reservation) fails. To avoid this problem, the SIP INVITE (specifically, the SDP) can contain indicators that tell the called user agent not to "ring" until sufficient resources have been reserved (using RSVP or some other mechanism). Of course, if the QoS reservation fails, the call can still proceed on a best-effort basis.

In the scenario depicted in Figure 42, the client does not need to be COPS or RSVP aware; it needs only to be SIP aware, which may be more appropriate for lightweight devices. Except for RSVP, which may not be needed in the above signalling/QoS architecture, the protocol stack to be implemented on the end systems, including the end-to-end mechanisms for improving the QoS of multimedia sessions, is depicted in Figure 43.

Figure 43: End-systems protocol stack (application and session signalling: H.323, MGCP gateway control, SDP, SIP; QoS signalling in the control plane: RTSP, RSVP, RTCP; media and transport in the data plane: RTP, A/V codecs, congestion control, loss control, buffer management; all over TCP/UDP and IPv4/IPv6).


A2.6.8 Content adaptation


To cope with the growing diversity and heterogeneity of networks, devices and content consumption, content may need to go through adaptation processes: transcoding, scaling, content adaptation to small devices (e.g. markup language transformation for wireless devices such as PDAs and cell phones), content filtering, personalization (e.g. human language translation, modality transformation), location-aware data insertion, prioritisation, virus scanning, ad insertion. Content adaptation in open environments requires the definition of protocols, interfaces, and syntax and semantics for capability exchange and for specifying the adaptation that is requested, feasible or allowed. This section gives an overview of emerging technologies in this area.

A2.6.8.1 MPEG-21 DIA (Digital Item Adaptation) (MPEG-21 Part 7)

The goal of MPEG-21 DIA is "enabling universal multimedia access", i.e. interoperable and transparent access to multimedia content. The DIA concept relies on a resource adaptation engine and a descriptor adaptation engine. The scope of the standard is to specify tools that assist the adaptation process, not to specify the adaptation engines themselves. These tools include usage description tools and bitstream syntax description tools.

Usage Description tools<br />

The usage description tools refer to the physical environmental conditions including the terminal capability, the<br />

network characteristics, user characteristics, natural environment characteristics. The terminal capability<br />

description tools include the following classes of description tools:<br />

• Codec capabilities<br />

• Input-output capabilities including display, audio output capabilities and properties of input devices<br />

• Device properties, e.g. power related and storage characteristics<br />

The network characteristics include<br />

• Network’s static attributes, e.g. maximum capacity or minimum guaranteed bandwidth<br />

• Network dynamic and time-varying conditions, e.g., available bandwidth, error (packet loss or bit error rate),<br />

delay (one way or two-ways packet delay, jitter).<br />

The user characteristics refer to
• User information, preferences and usage history, the description of which relies on the MPEG-7 description scheme

• Audiovisual presentation preferences, e.g., display brightness, contrast, or audio power.<br />

• Accessibility characteristics, e.g., description of auditory or visual impairments<br />

• Location characteristics, e.g., user’s movement over time, destination information which can be used in a<br />

context of adaptive location-aware services.<br />

The natural environmental characteristics include<br />

• Location and time of usage making use of MPEG-7 description schemes (Place DS and time DS)<br />

• Audiovisual environment, e.g., environment noise level and frequency, illumination characteristics<br />

The standard also specifies a set of possible constraints, of adaptation operations, and utility metrics in the form<br />

of subjective or objective measures. It also specifies the relationships between constraints, feasible adaptation<br />

operations, and utilities.<br />

Bitstream syntax description tools<br />

The bitstream syntax description can be seen as a high level description of the bitstream syntax. The goal is to<br />

allow intermediate network nodes or proxies to perform media adaptation without having to understand every<br />

media format. The bitstream description makes use of the Extensible Markup Language (XML) and specifies<br />

how the data layers are organized in the bitstream. An adaptation engine can then perform transformations of the<br />

description with an Extensible Stylesheet Language Transformation (XSLT). An adaptation engine should then<br />

be able to re-generate a transformed bitstream from the transformed description.<br />

Two tools have been specified as part of MPEG-21 DIA: the bitstream syntax description language (BSDL), used to describe the syntax of a particular coding format, and the generic bitstream syntax schema (gBS), used to describe any binary resource and to associate semantic labels with the elements of the syntax. They can be seen as tools to create high-level descriptions of the bitstream syntax in order to allow for format-independent adaptation.
adaptation.<br />




Metadata adaptation<br />

The standard also includes tools to support metadata adaptation for two main classes of applications: the filtering and scaling of large descriptions (metadata), and the integration of several description instances. Filtering and adapting the metadata can help in situations of limited processing power, or limited bandwidth and memory. It is intended to specify attributes such as the metadata size, the number of elements, the number of occurrences of some elements in the metadata description, the invariance of the descriptors to particular adaptations or transformations, etc.

Other tools related to content adaptation<br />

The standard should also eventually specify<br />

• Tools needed for transferring state information related to the consumption of a digital item on one device to a<br />

second device.<br />

• Tools assisting modality conversion<br />

• Tools to describe the rights that a user has to perform adaptation<br />

A2.6.8.2 Dynamic content

Dynamic content can be regarded as a form of content adaptation. Dynamic content includes HTML and XML pages generated on the fly. To manage dynamic Web content, a new markup language, ESI (Edge Side Includes), has been developed. It facilitates the breakdown of Web pages into fragments, separate entities which can be cached independently. ESI makes it possible to reduce bandwidth requirements and to increase CDN performance.

A2.6.8.3 ICAP protocol

Content adaptation was traditionally done on web servers, at the expense of significant processing load on those servers. Three alternative approaches are considered: server-side adaptation, proxy-based adaptation, and adaptation paths. In server-side adaptation, the servers deliver the adapted documents, either through on-demand dynamic adaptation or by keeping a repository of pre-adapted documents. With proxy-based adaptation, the documents are provided by the server in a generic form and the adaptation is performed on demand by intermediary proxies close to the clients. In the adaptation-path concept, the document passes through several distributed adaptation proxies which perform the adaptation in a step-by-step manner.

In CDNs, surrogate servers placed at the edge nodes can be extended for content-oriented processing. The<br />

adaptation tasks can be shifted to dedicated specialized application servers, leading to a service overlay network<br />

on top of a CDN. This is sometimes referred to as CSN (Content Service Network). Content adaptation is<br />

performed by adaptation services on behalf of the proxies. CSNs can be regarded as another network<br />

infrastructure built on top of CDNs to provide services thru interactions with web servers, surrogate servers, and<br />

ISP proxies.<br />

The ICAP (Internet Content Adaptation Protocol) is a point-to-point protocol designed to distribute the corresponding processing load among specialized, dedicated application servers. This in turn makes it possible to scale up the web servers to meet the increasing demand.

ICAP specifies how to make a request for content adaptation. It is a protocol resembling HTTP, used between caching proxies and the adaptation servers where the content is modified, and it communicates via TCP sessions. The ICAP-enabled caching proxy is the ICAP client; the adaptation server is the ICAP server. The decision about content adaptation for a request from a client is taken by the ICAP client, which encapsulates the HTTP request into an ICAP request and sends it to the ICAP server for adaptation. The surrogate server is configured with the IP addresses of the ICAP servers and has information about the adaptation services they offer. This mechanism can also be used for content filtering or for restricting access to certain content: the ICAP server checks whether the URL belongs to the list of URLs that are prohibited for this client.
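The sketch below assembles what a request-modification (REQMOD) message sent by a caching proxy to an adaptation server roughly looks like. The host names are placeholders and the message is deliberately simplified (no preview negotiation, no response handling), so it should be read as an illustration of the encapsulation idea rather than a complete ICAP client:

```python
def make_icap_reqmod(icap_host: str, http_request: str) -> str:
    """Encapsulate an HTTP request in a simplified ICAP REQMOD message."""
    icap_headers = [
        f"REQMOD icap://{icap_host}/reqmod ICAP/1.0",
        f"Host: {icap_host}",
        # The Encapsulated header gives the byte offsets of the embedded sections.
        f"Encapsulated: req-hdr=0, null-body={len(http_request)}",
    ]
    return "\r\n".join(icap_headers) + "\r\n\r\n" + http_request

http_req = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.org\r\n"
    "\r\n"
)
# The ICAP client (the caching proxy) would send this over a TCP session to the
# ICAP server, which returns the possibly modified HTTP request.
print(make_icap_reqmod("icap.example.net", http_req))
```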

A2.6.8.4 Content negotiation – media feature sets

Syntax and semantics to express user preferences and device capabilities have to be specified. The specification of the corresponding syntax and protocols is being done within the IETF ConNeg working group. The IETF Content Negotiation working group has developed the "Media Feature Sets" standard to allow for protocol-independent content negotiation. This standard provides means to specify device capabilities, preferred content representations and user preferences. Preference priorities can also be expressed. Attached to the request for




content, the client device provides a description of its media handling capabilities. Its capabilities can be<br />

expressed with the IETF Media Feature Sets.<br />



A2.6.9 Content Delivery Networks


A CDN can be regarded as an overlay network of strategically placed surrogate servers (or edge servers) built upon an IP network. The surrogate servers store the frequently requested content closer to the clients. CDNs are deployed on behalf of the web site providers and can be hosted in the networks of possibly different ISPs. By bringing the content near the network edges, CDNs aim to reduce bandwidth consumption, network traffic (including upstream bandwidth usage), congestion, response time and origin server load, and to improve reliability in situations where the remote server may be unavailable. The content load is distributed among servers close to the clients. CDNs thus improve web performance beyond what proxy caching achieves. In addition, CDNs give access to content that is usually un-cacheable in classical caching proxies, such as secured content, streaming content or dynamic content. CDNs are accessed through application-specific proxies using HTTP for web traffic and RTSP for streaming traffic. They thus contribute to bringing value-added applications to the edge of the network, i.e. at the backbone NAP (Network Access Point) or at the ISP POP (Point of Presence).

CDNs are key components in multimedia streaming QoS support, i.e., in providing better performance in the<br />

sense of faster response time, more efficient bandwidth utilization through caching and/or replicating content.<br />

The trend in the CDN is however to evolve from caching and replication only to more added value services such<br />

as localized and personalized content, secure access, and content adaptation.<br />

A2.6.9.1 CDN components and architecture

CDNs are complex systems with many distributed components that collaborate:<br />

• Servers with cached or replicated content,<br />

• A management entity which monitors the whole system, the status of the various components, the<br />

addition/deletion of a content provider or of an ISP, …<br />

• A request routing engine: in case the content is not present in the cache, it is the responsibility of the request routing engine to get the content from a surrogate server. A user is connected to the nearest surrogate server. When a client requests a static object, it is served from the internal cache of the surrogate server, if present. If there is a cache miss or if the object is dynamic, it is pulled from the origin server or from another surrogate server. Metrics used to redirect requests include proximity derived from network routing tables, topological proximity and load balancing between servers. This leads to efficient bandwidth utilization and faster access to content.

• A request routing peering system which deals with the routing of request between peered CDNs if the<br />

content is present on none of the surrogate servers of the current CDN. The request routing peering system<br />

maintains a database called content topology database with information on the objects in peer CDNs, address<br />

of surrogate servers containing these objects, topology information like number of hops, load, latency.<br />

• Accounting mechanisms which provide logs and accounting information to the origin server. An<br />

accounting/billing system collects information from various components of the CDN and from the peer<br />

CDNs. The information is processed to bill the customers.<br />

• Distribution components<br />

• Internet cache protocol (ICP); Using ICP messages, neighbouring caches exchange information about data<br />

present. ICP is thus used to locate objects in neighbouring caches.<br />

CDN administrators must determine the optimal number of surrogate servers as well as their locations. Several<br />

placement algorithms exist: tree-based, greedy, hot spot, etc ….<br />

A2.6.9.2 CDN peering

CDN peering allows multiple CDN solutions to inter-operate and expands the reach of a CDN to a larger client population. It is possible that the request-routing infrastructure (RRI) does not find the requested object on any surrogate server of the CDN. In this case, the request can be re-routed to a peer CDN through the request-routing peering system module.

The interconnection of CDNs is done through CDN peering gateways (CDPG). The IETF has created the CDI working group with the goal of defining protocols to allow the interoperation of separately administered content networks (see IETF draft draft-green-cdnp-gen-arch-03.txt, www.content-peering.org/ietf-cdi.html).




Initiated by Cisco Systems in August 2000, the Content Alliance gathers service providers, content providers and technology vendors with the goal of fostering interoperability of Content Delivery Networks (CDNs) and of supporting the development of open standards for the advancement of content networking.

The technologies at stake with respect to CDN interconnections are:<br />

• Request-routing peering systems which deal with the selection of the delivery CDN;<br />

• Distribution peering systems which control the distribution between peering CDNs;

• Accounting peering systems which control the usage accounting and billing.<br />

There is a need to develop protocols between peering systems like distribution peering systems and routing<br />

peering systems.<br />

A2.6.9.3 Related technologies and R&D activities

Caching technologies<br />

Efficient caching strategies are essential for high CDN performance. Compared with traditional proxy caching<br />

for Web pages (HTML pages, GIF images), streaming media presents new challenges for caching technologies<br />

due to the huge data volume of the audio-visual content, the intensive bandwidth usage, and the high<br />

interactivity. Caching the audiovisual content entirely at a web proxy is not practical; one solution is to cache only portions of an object, in which case a client's playback session needs to communicate with both the proxy and the origin server. Minimizing bandwidth consumption becomes a primary consideration for proxy cache management, in many cases even taking precedence over reducing access latencies. Multicast delivery and cooperation among proxies thus become very attractive for media streaming applications.

Effective management of proxy cache resources (space, disk I/O and network I/O) for streaming content is more challenging. Most existing caching algorithms focus on homogeneous clients, which have identical or similar configurations and capabilities behind a proxy; in that case a single version of an object matches the bandwidth and format demands of all requests for the object. Nevertheless, what to cache (which portions of which objects) and how to manage the cache (i.e., cache placement and replacement) at the proxy remain difficult problems. Effective caching strategies involve finding which content needs to be present in the cache and which does not. Most algorithms take into account the probability of the requested content having been accessed a number of times in the recent past. According to how the cached portions are selected, the main categories of caching algorithms are sliding-interval caching, prefix caching, segment caching and rate-split caching.

Finding the most appropriate surrogate server<br />

The most appropriate surrogate server is the one with the closest topological proximity to the client in the sense<br />

of physical distance, speed, reliability and transmission cost. To select the most appropriate surrogate server,<br />

most CDN providers use DNS (Domain Name System) redirection, others use URL re-writing.<br />

In DNS redirection (the most widely used approach), a DNS query is sent to the local DNS server, which forwards the query to the CDN request-routing infrastructure (RRI). The RRI asks each surrogate server to examine its route to the local DNS server; each surrogate server sends its measurement results to the local DNS server and to the RRI, so that the topological proximity of each server to the local DNS server can be compared on the basis of latency, packet loss and router hops between the surrogate servers and the client. The RRI compares these measurements, selects the most appropriate server and sends a DNS response to the client's local DNS server. DNS redirection can, however, increase latency due to the lookup times.
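The selection step described above can be pictured with the sketch below, which picks the surrogate reporting the best combination of latency, loss and hop count towards the client's local DNS server; the server names, weights and measurement values are invented for illustration and real CDNs use richer policies:

```python
# Measurements reported by each surrogate towards the client's local DNS server.
# Values are illustrative: latency in ms, loss as a fraction, hops as a count.
measurements = {
    "surrogate-eu-1": {"latency": 12.0, "loss": 0.001, "hops": 5},
    "surrogate-eu-2": {"latency": 35.0, "loss": 0.000, "hops": 9},
    "surrogate-us-1": {"latency": 95.0, "loss": 0.010, "hops": 14},
}

def select_surrogate(reports, w_latency=1.0, w_loss=1000.0, w_hops=2.0):
    """Return the surrogate with the lowest weighted proximity score."""
    def score(r):
        return w_latency * r["latency"] + w_loss * r["loss"] + w_hops * r["hops"]
    return min(reports, key=lambda name: score(reports[name]))

# The request-routing infrastructure would return this server's address
# in the DNS response sent back to the client's local DNS server.
print(select_surrogate(measurements))
```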

In URL re-writing, the origin server re-directs clients to different surrogate servers by re-writing the<br />

dynamically generated pages URL links. E.g., with a web page containing an HTML file and embedded objects,<br />

the web server would modify the links of embedded objects so that they can be retrieved from the best surrogate<br />

server. CDNs usually make use of scripts to parse the pages and replace the embedded URLs. The drawback is<br />

that the scripts must be continuously executed.<br />
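
A minimal sketch of such a re-writing script is given below; the origin host name, the surrogate host name and the set of rewritten object types are illustrative placeholders.<br />

import re

# Embedded objects served from a placeholder origin host are re-pointed to a surrogate.
EMBEDDED = re.compile(r'(src|href)="https?://origin\.example\.com/([^"]+\.(?:gif|jpg|png|css|js))"')

def rewrite(html, surrogate_host):
    return EMBEDDED.sub(lambda m: f'{m.group(1)}="https://{surrogate_host}/{m.group(2)}"', html)

page = '<img src="https://origin.example.com/logo.gif">'
print(rewrite(page, "edge3.cdn.example.net"))   # the image is now fetched from the surrogate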



<strong>A2.</strong>6.10 Roadmap<br />


Introduction<br />

The demand for – and the new revenue streams potentially created by – high-end applications for the<br />

residential, enterprise and hotspot markets, and for triple play offerings bundling voice, data and video, is a key<br />

driver of broadband network developments. Accommodating higher-end consumer applications and hot spots in<br />

various spaces such as airports, hotels, cafes, etc., with increased numbers of connections, obviously requires raising<br />

the effective throughput of core and access networks by several orders of magnitude. This nomadic vision of “any<br />

content, anywhere, anytime”, fostering fixed/mobile convergence, has obvious impacts on the physical and MAC<br />

layer technologies of core and access networks. Elements of service accessibility and reliability for data-intensive<br />

or high capacity multimedia applications also translate into a number of QoS-related technological requirements,<br />

i.e., requirements on speed, latency and jitter when dealing with delivery aspects.<br />

Ubiquitous access to integrated media services is thus a key driver in the development of broadband network<br />

technologies, creating the need for<br />

• bigger and faster pipes based on optical technology in core/access/metro networks;<br />

• beyond 3G wireless technologies allowing for ubiquitous mobile access, i.e., seamless and multi-access<br />

services via the most suitable network according to place, time, cost, etc,<br />

• high capacity wireless access networks based on high spectral efficiency digital communication technology,<br />

such as multiple antenna (smart antenna, diversity techniques), adaptive multi-carrier modulation, new<br />

multiple access schemes, efficient channel codes, etc.<br />

• integrated fixed and mobile Broadband networks. The foreseen challenge for 2008-2010 is the provision of<br />

fully integrated IP-based networks offering 50 Mbps-1 Gbps for fixed access and 10-100 Mbps for mobile access.<br />

• the integration and convergence of multiple services via end-to-end IP with QoS support, combining the<br />

reliability and quality of voice networks with the flexibility of data packet networks.<br />

• the convergence and integration of fixed and mobile services in a comprehensive system approach allowing<br />

broadband multimedia services (voice, data, broadcasting) through various mobile and fixed network access<br />

from various integrated terminals.<br />

• the capability for roaming between different mobile networks (GSM/GPRS/UMTS, WLAN-802.11a,<br />

802.11b, etc.), between different access technologies requiring terminal support of software radio capability<br />

for flexible and configurable radio access, and between different operators.<br />

In turn, the advent of broadband connectivity, of home networking, or of nomadic access through wireless<br />

networks, should lead to significant changes in multimedia production and consumption with enriched content<br />

evolving from voice and data to triple play and 3D. This in turn should lead to enhanced and new interactive and<br />

personalized audio-visual services of different kinds.<br />

This document identifies emerging opportunities in terms of markets, applications and technological gaps in content<br />

access and delivery services, over a time horizon of 10-15 years. It identifies the key technical elements and<br />

research required to develop products, services and infrastructures.<br />

Technology roadmap for content delivery services<br />

One of the challenges for the years to come in the broadband multimedia arena is the capability for delivering<br />

streaming and/or live media with high QoS guarantees, easy (anywhere, anytime) access, and with the<br />

appropriate level of security. Content delivery services are to benefit from advances in technologies for<br />

• capturing, creating, coding of A/V and multimedia content,<br />

• distributing stored or live content in real-time,<br />

• advanced indexing and retrieval methods allowing for services beyond basic access,<br />

• content adaptation and content protection (DRM – Digital Right Management),<br />

• network management and/or intelligent use of network and terminal resources for end-to-end<br />

optimized QoS.<br />



Figure 44: Evolution of audio-visual broadcasting, wireless and fixed services over the 2000-2010 period: audio-visual<br />

broadcasting evolving from digital and satellite broadcasting towards HDTV, digital media broadcasting and 3D<br />

broadcasting; wireless services evolving from voice and SMS/e-mail towards multimedia and mobile interactive TV<br />

services; fixed services evolving from voice and data over wired services (Internet, TV, entertainment, games) towards<br />

triple play (voice, data, video), 3D and virtual reality; all converging into fixed-mobile multimedia services.<br />

Capturing and content creation for added value services<br />

New business opportunities and revenue streams in the broadband multimedia sector are dependent on the<br />

capacity to create bandwidth demanding and attractive content and services, among those:<br />

• home cinema experience requiring e.g. multi-channel audio and high definition video coding and delivery.<br />

• interactive and “personalized” TV, with content enhanced with graphical overlays, video blending,<br />

simultaneous display of multiple cameras with the ability of the viewer to select among them, to continue the<br />

coverage from the new viewing perspective. Interactivity also includes the possibility for viewers to obtain information,<br />

to customize the way in which they watch events/scenes, and to be presented with augmented views of events or<br />

scenes. The notion of personalization also refers to the possibility of filtering the content according to the viewer’s<br />

preferences.<br />

• 3D interactive TV transforming TV watching into an immersive interactive experience with technology<br />

capitalizing on advances in Digital TV broadcast, 3D-visualisation, image processing, and efficient<br />

communication of rich interactive multimedia material. A business sector that significantly influences the<br />

developments in this sector is computer games. Many gaming companies develop new software applications<br />

that provide their customers with realistic animations. The game industry produces 3D games allowing the<br />

player to explore new interactive worlds. The Internet has also seen the emergence of 3D portals or web<br />

sites with interfaces allowing for increased interactivity and more user-friendly presentations. Virtual studios<br />

where real and virtual characters can interact, leading to a certain mixed reality experience where synthetic<br />

and real environments are mixed, have also emerged in the TV production sector.<br />

However, a certain number of technological and economic barriers still need to be overcome before the user can<br />

really experience watching real dynamic scenes on TV interactively from different perspectives, i.e., before<br />

interactive TV comes to the market. It requires the development of efficient capturing and authoring tools, of<br />

new representation and compression formats and of low cost rendering techniques. Despite advances in the I/O,<br />

capturing and display devices, creating 3D environments and playing 3D scenarios remain costly. The creation<br />

of 3D environments is still largely manual. Playing 3D scenarios also requires high-end systems not<br />

presently available to end-users.<br />


Content representation and coding<br />

Compression is an important factor in the successful uptake of audio-visual services over telecommunication<br />

networks, ranging from traditional TV broadcasting on xDSL and Home networks, to mobile and Internet<br />

multimedia services. The joint ITU/ISO H.264 standard (also called MPEG-4 part 10) is expected to be the<br />

successor of MPEG-2 for TV and conversational applications. This standard should allow simultaneous<br />

transport of several TV programs to the end user, including HDTV with uptake foreseen with the availability of<br />

HD-DVD on the market.<br />

Products are still to be developed. This requires investigation of complexity reduction in order to allow the<br />

development of real-time encoders at reasonable costs and with reasonable power consumption on power-constrained<br />

devices. The coding gain obtained so far is indeed at the expense of high computational complexity.<br />

Also, this technology is particularly efficient in lower bit rate ranges, but further advances are to be expected for<br />

high-quality, high bit rate content for Digital Cinema applications, which should take off soon. Candidate<br />

solutions for this application sector considered so far are based on Motion JPEG-2000. Further developments<br />

with possible compatibility with this solution are likely to happen.<br />

Although compression efficiency remains a key feature of audio-visual content (including 3D content)<br />

representation, this is not the only one required, given the diversity and heterogeneous nature of the distribution<br />

means and receiving devices. It is widely acknowledged that QoS support, as well as re-use of created content in<br />

heterogeneous and evolving environments also require coding and representation solutions amenable to<br />

seamless dynamic adaptation to use context and user preferences, to wireless and mobile devices and networks.<br />

One important direction is thus the development of scalable audio-visual coding solutions.<br />

The goal of scalable coding is to produce streams embedding different bit rates - possibly corresponding to<br />

different spatial and/or temporal resolutions - in one single encoding pass. Scalable video coding should allow<br />

efficient delivery of bandwidth demanding content via infrastructures integrating heterogeneous networks, or<br />

addressing different categories of consumers. It can provide an alternative to stream switching or stream<br />

thinning for adaptation to heterogeneous network and terminal environments. Scalable coding working in<br />

conjunction with congestion control or with streaming and pre-fetching strategies, can meet adaptation requirements in<br />

highly heterogeneous environments. Scalable coding both for audio and video, used in conjunction with error<br />

correcting codes and with QoS protocols, can provide a solution for robust transport over packet loss channels.<br />
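
The adaptation benefit of an embedded stream can be illustrated with the following sketch, in which the layer names and cumulative rates are hypothetical: a sender or intermediate node simply keeps the longest prefix of layers that fits the currently available bandwidth.<br />

# One hypothetical embedded encoding: (layer name, cumulative rate in kbps).
LAYERS = [
    ("base  QCIF/15Hz",   200),
    ("enh-1 CIF/30Hz",    750),
    ("enh-2 SD/30Hz",    2500),
    ("enh-3 HD/60Hz",    8000),
]

def layers_for(bandwidth_kbps):
    """Keep the longest prefix of layers whose cumulative rate fits the available bandwidth."""
    chosen = [name for name, cumulative in LAYERS if cumulative <= bandwidth_kbps]
    return chosen or [LAYERS[0][0]]          # the base layer is always required

print(layers_for(3000))   # base + enh-1 + enh-2 for a 3 Mbps client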

Delivery of delay-constrained content on networks with no guaranteed transport services (e.g. IP wired and<br />

wireless environments) requires solutions in support of loss resilience. A direction under investigation in the<br />

research community is multiple description coding.<br />

Although H.264/MPEG-4 AVC brings an answer for audiovisual services over error-free channels, the<br />

development of services with end-to-end Quality of Service over error-prone networks such as mobile access<br />

networks requires specific solutions, i.e., tools in support of bit error resilience such as error-resilient source codes<br />

or joint source-channel codes, or coding solutions that would best make use of channel diversity in multi-channel<br />

transmission environments. This finds strong synergy with emerging transmission solutions for wireless<br />

LAN and/or broadcast channels aiming at increased diversity gains, which should contribute to increase access<br />

capabilities by increasing spectrum efficiency. Developments in this area are also important to target increased<br />

storage capacity in storage devices such as DVD or HD-DVD.<br />

Advanced services beyond basic access<br />

Introducing new added-value services for content access, distribution, adaptation and negotiation is of<br />

paramount importance for the success of broadband multimedia services. The development of appropriate<br />

content delivery architectures exemplified by content delivery networks is strategic. Vendors are already<br />

implementing content service networks (CSN) as another layer built on top of CDNs to provide added-value<br />

services through interaction with web servers, surrogate and application servers, …<br />

Fast access to content requires developing strategies for using caching in conjunction with content adaptation to<br />

allow maximum cache efficiency. This includes increasing the cache-ability of dynamic content, e.g., by<br />

decomposing complex objects into fragments with different cache-ability in the spirit of ESI, or exploiting<br />

caching hierarchies to benefit from different representations of objects at different hierarchy levels.<br />
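
The fragment-caching idea can be sketched as follows; the fragment names and time-to-live values are illustrative assumptions and do not reproduce the ESI specification itself.<br />

import time

# Assumed per-fragment time-to-live values (seconds); 0 means "never cache".
FRAGMENT_TTL = {"header": 3600, "navigation": 3600, "article": 300, "stock_ticker": 0}

cache = {}   # fragment name -> (rendered_html, expiry timestamp)

def get_fragment(name, render):
    """Return a cached fragment while it is fresh, otherwise re-render and re-cache it."""
    html, expiry = cache.get(name, (None, 0.0))
    if html is None or time.time() >= expiry:
        html = render(name)
        cache[name] = (html, time.time() + FRAGMENT_TTL.get(name, 0))
    return html

page = "".join(get_fragment(n, lambda frag: f"<div>{frag}</div>") for n in FRAGMENT_TTL)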

Another cornerstone of fast and easy access to content and more generally of ease of exploitation at the different<br />

stages of the media value chain is the possibility of extracting metadata which will enrich the content when<br />

attached to it, and/or allow structuring the databases containing large volumes of – generally compressed -<br />

content. Note that compression must not be an impediment to services such as content protection and indexing.<br />

However, from a technical point of view, compression and indexing are somewhat antagonistic.<br />


To be more explicit, the transforms used in compression systems, being critically sampled, do not allow the extraction<br />

of invariant signal descriptors to be used for content indexing purposes, which makes it difficult to extract<br />

descriptors in the compressed domain. They therefore have to be extracted in the signal domain, hence<br />

requiring decompression, which may be very time consuming in the context of a search in very large databases.<br />

Reciprocally, the transforms amenable to extraction of such invariant descriptors are over-complete and not<br />

amenable to compression. Effort is being dedicated in the research community to signal representations<br />

amenable to both compression and descriptor extraction.<br />

Finally, security and digital rights management (DRM) in CDNs remain open and difficult issues to address.<br />

DRM is especially difficult to handle in CDNs going further than caching only and including, for example,<br />

content adaptation functionalities. If the stream has to be “adapted”, hence modified, it first has to be de-scrambled<br />

and re-scrambled (e.g. after transcoding, if transcoding is the adaptation made). The CDN must then hold<br />

the corresponding keys, etc. This raises essential security issues.<br />

Finally, components of rich multimedia content may be produced separately and may exist in several versions<br />

intended for different end-users and distribution channels. These different components and versions must be<br />

identified and bundled together, with metadata attached to them, to allow seamless and wide access<br />

and exposure at the different stages of the media chain. This falls within the objectives of the MPEG-21<br />

standard. Access to multimedia content is expected to be facilitated by enrichment of its description with<br />

semantic information, leading eventually to semantic multimedia as part of semantic web services. This requires<br />

the development of tools able to extract semantic information from multimedia content and to semi-automatically<br />

or automatically annotate multimedia data with semantic information. Effort is being made in this<br />

direction to bridge the gap between low level multimedia content description and semantic information with the<br />

help of ontologies and reasoning technologies.<br />

Delivery with intelligent use of network resources<br />

The QoS architecture in Broadband networks is expected to rely on the following components:<br />

• A core backbone infrastructure based on high-speed fibre with dense Wavelength Division Multiplexing<br />

(DWDM).<br />

• MultiProtocol Label Switching (MPLS) in core routers, allowing packets to be switched at very high speed and<br />

enabling traffic engineering and QoS-based routing. ATM switches can also be used inside the core to<br />

complete the switching process.<br />

• Integrated services (IntServ) at the edges: All policy related processing such as classification, metering,<br />

marking etc. would take place at the edge routers. Resource provisioning at the edge can be done with RSVP<br />

and IntServ.<br />

• Differentiated Services (DiffServ) in the core to keep the number of flows manageable, to remove as many<br />

computationally intensive functions as possible from the backbone routers, and to push these functions<br />

towards the edges. The Diffserv architecture relies on a notion of prioritization and aggregation of flows in<br />

order to limit the number of traffic classes in the backbone. MPLS is halfway between IntServ and DiffServ<br />

and supports IP routing at the edges where IntServ can be used, and switching at the core where DiffServ<br />

techniques can be used. QoS descriptors are usually associated with media content. The exact translation of<br />

the QoS parameters of each medium to network QoS is left to the network providers.<br />

Further R&D effort is required before the technology reaches a sufficient level of maturity, leaving aside the<br />

corresponding business models, regulation and pricing issues. A challenge for telecom operators will be to<br />

provide a vast suite of services in an open multi-vendor, multi-domain, multi-technology environment.<br />

In the wireless arena, the development of "Beyond 3G" - B3G – or "4G" networks and applications, with all-IP<br />

based ubiquitous and seamless service provisioning across heterogeneous infrastructures, presents a number of<br />

challenges beyond existing capabilities conceived so far by the IETF for Mobile IP and by the ITU for Third<br />

Generation networks. Management, signalling and transport functions must evolve from different link layer<br />

technologies, uniformly and end-to-end, to the packet switched IP-network layer.<br />

Distributing stored or live content with QoS<br />

End-to-end QoS support in stored or live content delivery requires from the application some capability to adapt<br />

to the varying characteristics of the networks. The applications must in particular support congestion control,<br />

buffer management (or local caching) or, for delay-tolerant applications (e.g., VOD), mechanisms such as pre-fetching<br />

and optimum playback, allowing for adaptation of AV streaming to non-stationary network conditions.<br />

Appropriate streaming strategies including scheduling in the application layer must also be designed.<br />

Preliminary solutions to the above problems can already be found in existing or emerging media streaming<br />

products.<br />


Features such as relay or caching can also be found in some CDN (Content Delivery Network) solutions or<br />

overlay architectures. Some of these issues are at the core of research and standardization initiatives. Others<br />

are still in their infancy.<br />

The QoS requirements of AV flows (and applications) must be mapped to the different DiffServ classes of<br />

service. For this, appropriate DSCP (DiffServ Codepoints) marking strategies must be designed. The main QoS<br />

characteristics of interest for multimedia applications are the packet end to end delay and the packet loss rate.<br />

Each flow is characterised by different delay and loss requirements, which depend on the application<br />

characteristics. Some data flows require reliable transport, while for others, to meet real-time constraints,<br />

unreliable and unresponsive transport protocols, such as UDP, have to be used. One can thus differentiate<br />

between the following categories of applications:<br />

• “Elastic” Applications: the only requirement is the delivery of the packets. Applications over TCP fall into<br />

this category, since TCP guarantees that the packets will be delivered. There is no demand on delay bounds<br />

or bandwidth requirements, e.g. web browsing and email.<br />

• Real Time Tolerant Applications: they demand weak bounds on the delay of delivery by the network;<br />

occasional packet loss is acceptable, e.g. streaming applications.<br />

• Real Time Intolerant Applications: this class demands minimal latency and jitter, e.g. audio/videoconferencing.<br />

Delay above a maximum bound is unacceptable and communicating parties should be brought<br />

as close as possible.<br />

These categories will be delivered with different classes of service. E.g., in 3GPP, four traffic or QoS classes have been<br />

identified: conversational, streaming, interactive and background classes. Typical application examples would be<br />

respectively voice with stringent and low delay requirements, streaming video with requirements comparable to<br />

voice except for delay, Web browsing requiring the 'preservation of the payload content' and download of email<br />

for which the recipient would expect the data within a certain time, but with the payload content preserved.<br />
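
By way of illustration, the sketch below maps such traffic classes onto DiffServ code points; the chosen assignments (EF for conversational traffic, AF classes for streaming and interactive traffic, best effort for background traffic) follow common practice but are an assumption of this illustration, not a recommendation of the report.<br />

# Assumed mapping of traffic classes to DiffServ code points (6-bit DSCP values).
DSCP = {
    "conversational": 0b101110,   # EF   - voice, videoconferencing
    "streaming":      0b100010,   # AF41 - real-time tolerant streaming
    "interactive":    0b010010,   # AF21 - web browsing
    "background":     0b000000,   # best effort - e-mail download
}

def tos_byte(traffic_class):
    """Return the IP ToS / Traffic Class byte with the DSCP placed in its upper six bits."""
    return DSCP.get(traffic_class, 0b000000) << 2

print(hex(tos_byte("conversational")))   # 0xb8, i.e. the EF code point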

The above QoS or traffic end-to-end classes are parameterized by sets of QoS attributes of different applicability<br />

and/or values at different layers. The mapping of these parameters onto the lower layer mechanisms remains an<br />

open issue.<br />

Beyond the obvious concepts of QoS mapping from application layer parameters to lower layer parameters,<br />

cross-layer design has recently gained interest especially for wireless networks to satisfy QoS requirements in a<br />

more unified way. A stack will adapt dynamically taking into account information provided by other layers.<br />

E.g., Medium Access Control (MAC) schemes and routing entities become aware of PHY-layer parameters. The<br />

MAC layer can make use of appropriate services provided by the physical layer, and this way enhance the<br />

system performance. This may require revisiting the MAC schemes, e.g., designing new MAC protocols.<br />


<strong>A2.</strong>6.11 Appendix 1: Applications that would benefit from scalable AV coding<br />

A set of applications that would highly benefit from scalable and reliable video coding is given below:<br />

Broadband video distribution, Wireless LAN video including in home networks, Wireless Multimedia Broadcast<br />

Multicast Services (MBMS), Mobile wireless video for conversational services and VOD, live broadcasting,<br />

end-to-end Internet/wireless video delivery, multi-channel content production and distribution, Storage<br />

applications, layered protection of content, Multi-point surveillance systems.<br />

Broadband video distribution<br />

The delivery of advanced video services over DSL (TV broadcasting, video-on-demand, news-on-demand,<br />

interactive video, business TV, distance learning) represents a significant business opportunity for service<br />

providers. Fast internet access has been the major driver for DSL deployment up to now. Telecom operators<br />

intend to propose multi-service delivery on copper cable.<br />

HDTV delivery is considered as an opportunity for developing the DSL market. However, HDTV over ADSL is<br />

impossible today: the existing standards are not efficient enough to achieve the required compression ratios. Other<br />

DSL (ADSL2, ADSL2+, VDSL) solutions offer more bandwidth. Multi-level offers, depending on the client<br />

bandwidth capabilities, e.g., simultaneous offers of HD and SD TV content are very likely.<br />

Compared to usual broadcast applications (satellite, cable or terrestrial), multi-service provisioning on IP<br />

networks has to face different client capabilities (e.g., spatial resolutions, complexity) and bandwidth<br />

capabilities. The bandwidth available depends on the network infrastructure deployed (e.g., DSL bandwidth<br />

decreases with distance). The customer can also choose/pay for different bandwidths. Different users will<br />

consume different services. If the network is designed for a certain level of occupancy, it must still deliver video<br />

services with a decreased quality level in case of over-occupancy.<br />

To cope with these multiple service/display/channel combinations, current solutions (based on non scalable<br />

technologies such as MPEG-2 or MPEG-4 AVC) require content multiplication (encoding and delivery of the<br />

same content at different bit-rates) as well as trans-rating to handle the multiple combinations. A scalable<br />

solution presents the following advantages: operating costs reduction and universal / seamless availability. In<br />

terms of operating costs, scalability provides the following economical assets:<br />

• Decoupling of the encoding process from the streaming, which reduces the number of processes (encoding,<br />

re-encoding, trans-rating) necessary for universal video delivery.<br />

• Saving storage space and facilitating management at the video server (one unique stream instead of multiple<br />

versions of the same content).<br />

• Saving of bandwidth and increasing the channel capacity on a given infrastructure. Associated with this point,<br />

the network traffic is reduced, with a better use of the backbone and client bandwidths.<br />

• Leverage, with a reduced cost, of the content already created for broadband or broadcast, on other<br />

distribution media. For instance, in the case of broadcasting a live event on DSL, the content can be delivered<br />

without additional cost via other channels (web, mobile phone).<br />

Also, in the near future, with the advent of Gigabit Ethernet (GE) networks, the DSLAM capabilities are very likely<br />

to come closer to those of Internet routers. In that context, scalable bit-streams would allow full use<br />

of Multicast join/prune capabilities of the network. The base and enhancement layers would be distributed to<br />

different IP multicast groups. Only useful IP streams will be routed to the DSLAM; currently all the streams for<br />

all TV channels are down-routed to each DSLAM. This would also allow the use of the Internet differentiated<br />

services approach (DiffServ) to provide several classes. The base and enhancement layers may be tagged with<br />

different traffic classes and drop precedence levels. When necessary, the DSLAM may drop the packets with the<br />

lowest classes/drop levels in case of congestion.<br />
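
A schematic illustration of this layered marking and selective dropping is given below; the precedence values and the congestion-level parameter are simplifying assumptions.<br />

# Assumed drop precedence per layer: the lower the value, the longer the layer is kept.
PRECEDENCE = {"base": 1, "enh1": 2, "enh2": 3}

def forward(queue, congestion_level):
    """congestion_level 0..2: shed the most droppable enhancement layers first."""
    keep_up_to = len(PRECEDENCE) - congestion_level   # 0 keeps all layers, 2 keeps the base layer only
    return [pkt for pkt in queue if PRECEDENCE[pkt["layer"]] <= keep_up_to]

packets = [{"layer": "base"}, {"layer": "enh1"}, {"layer": "enh2"}]
print(forward(packets, congestion_level=1))   # enh2 packets are dropped, base and enh1 survive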

Wireless LAN video<br />

The use of IEEE 802.11 Wireless Local Area Networks (WLANs) as an extension to the existing wired<br />

infrastructure, offering the convenience of mobility and portability in the enterprise environment, is growing at a<br />

rapid pace. Although currently most WLANs are predominantly used for data transfer, the higher bandwidth<br />

provided by new WLAN technologies such as IEEE 802.11a, IEEE 802.11g and the future IEEE 802.11n will ultimately<br />

lead to their increasing use for multimedia transmission.<br />

The key requirements of this type of application in relation to audio-visual coding are:<br />

• Adaptability to bandwidth variations, which may be due to interference, competing traffic, mobility, multi-path<br />

fading, etc. Rate switching would require storage of the content at different bit-rates at the wireless server.<br />

On-the-fly transcoding would be too costly.<br />


• Viewing on various wireless devices having different capabilities and display sizes.<br />

• Robustness to data losses, since depending on the channel condition, or during handover between access<br />

points, partial data losses may occur.<br />

• Prioritized video that takes advantage of the 802.11e QoS MAC specification that provides prioritized<br />

transmission of video using different priority queues.<br />

• Support for bandwidth and device scalability, since various clients may be connected at different data rates.<br />

• Scalable power requirements since most portable wireless video devices are battery-powered and tradeoffs<br />

should be possible between longer battery life and lower quality video.<br />

Video in home networks (including Wireless LAN)<br />

A lot of new services are being introduced thanks to DSL networks, data rates increase, terminals<br />

become more and more efficient and interoperable, and the cost of WLAN products is falling. All these factors<br />

lead to the increased use of WLAN products and services in the enterprise environment and in consumer homes.<br />

After the video stream is delivered to the home, it may be stored on a home gateway (ADSL-connected PC). A<br />

major goal of home networks is to provide access to different video services from the home gateway in any<br />

room, with any terminal. This implies on-line adaptation at the home gateway to different client capabilities and<br />

access conditions, both in terms of display (e.g., multimedia and web-browsing PCs, TVs with set-top-boxes,<br />

digital PVRs, networked mp3 players, gaming consoles) and bandwidth.<br />

The first interest of scalability is the adaptation to a variety of devices, which requires spatial, temporal,<br />

SNR and complexity scalability. A typical scenario would be to consider three basic configurations, implying<br />

combined scalability: HD / 60Hz / 8-10Mbps; SD / 30Hz / 2-3Mbps; CIF / 15Hz / 500-1000kbps. Compared to<br />

other possible solutions (storage of multiple versions of the same content, or real time re-encoding/trans-rating<br />

of the content for devices not compliant with the original content), scalability greatly simplifies the storing and<br />

distribution processes. Scalable coding would avoid the simultaneous storage of video content at different bit rates<br />

at the wireless home server, which is heavy, and would avoid on-the-fly transcoding, which is costly and<br />

penalizing in terms of quality. Instead, pulling parts of the bitstream thanks to parsing and MPEG-21<br />

stream description would be more efficient and less complex.<br />

Depending on the modulation, total bit rates - as opposed to useful bit rate - provided by 5 GHz technology<br />

range from a few Mbit/s to 54 Mbit/s. In addition to the chosen modulation mode, there is also a wide variability<br />

in the available bit-rates depending on the location in the home. The available bandwidth depends as well on the<br />

distance and on the obstacles between the 2 communicating nodes (masking effects). Other parameters such as<br />

competing traffic, interference and multi-path fading (echoes) also have an impact on the bandwidth. It is<br />

considered that typical average useful bit rate in a wireless network is in the 10 - 30 Mbps range. Bandwidth<br />

adaptation can of course be managed using multiple versions of the content at different bit-rates, but this<br />

solution does not ensure smooth adaptation to bandwidth variations. Fine-grain SNR scalability is actually the<br />

ideal solution to address this issue. Fully embedded FGS can improve the content storage management, thanks to<br />

an easy re-encoding (actually stream truncation) of the content, for instance to free memory space. Functions of<br />

browsing, searching and even video editing on a low resolution version of the content are directly provided by<br />

spatial/SNR scalability.<br />
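
The truncation property can be sketched as follows; the header size and stream sizes are arbitrary assumptions, and a real embedded format would of course require cutting at valid truncation points rather than at an arbitrary byte.<br />

HEADER_BYTES = 1024   # assumed size of the sequence headers / metadata kept in full

def truncate_fgs(stream: bytes, target_bytes: int) -> bytes:
    """Return a lower-rate version of an embedded (FGS) stream by simple truncation."""
    if target_bytes <= HEADER_BYTES:
        raise ValueError("budget smaller than the stream headers")
    return stream[:target_bytes]      # an embedded stream stays decodable after truncation

full_stream = bytes(8_000_000)                     # stand-in for an 8 MB embedded stream
half_rate = truncate_fgs(full_stream, 4_000_000)   # roughly half the rate, no re-encoding needed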

Mobile streaming<br />

Scalable coding would be very beneficial in a roaming context, i.e. moving from a 2.5G to a 3G network and<br />

vice-versa. When the mobile device moves from a UMTS-covered area to an uncovered area, it has to switch to<br />

GPRS, which implies, if the session is not lost, resuming the streaming in a 2.5G format. File and/or rate switching<br />

may be envisaged to solve this problem: for one given content, several (at least 2) files are systematically encoded and<br />

stored on the server (e.g. one at 100 kbps for the 3G network, another one around 40 kbps for the 2.5G network).<br />

This file switching technique is currently used for GPRS applications. Alternatively, the contents are encoded at<br />

different bit-rates and stored in the same file (e.g. intelligent streaming with multiple bit-rates, from Microsoft or Real). These solutions are<br />

very expensive in terms of production and storage. SVC would allow lower storage requirements and lower<br />

production costs as well as faster switching (drop one or several enhancement layers if needed, do not swap<br />

files).<br />

MBMS Wireless video<br />

MBMS (Multimedia Broadcast Multicast Services) are planned for GERAN (GSM/EDGE Radio Access<br />

Network) and UTRAN (UMTS Radio Access Network).<br />


MBMS targets simultaneous distribution of identical content, of real-time and non real-time services, from a<br />

single base station to a large number of mobile stations over one common radio resource. An example of real-time<br />

services would be to allow 10,000 spectators in a stadium to have access to instantaneous replays<br />

of the most important scenes on their cell phones.<br />

The following 3GPP TR 25.992 V6.0.0 (2003-09) requirements for UTRAN/GERAN MBMS are relevant to<br />

audio-visual coding. 3GPP release 6 should allow for<br />

• “simultaneous reception of more than one MBMS services, or MBMS and non MBMS services<br />

• Reception of MBMS shall not be guaranteed at RAN level<br />

• MBMS does not support individual re-transmissions at the link layer<br />

• In the case of UTRAN, guaranteed ‘QoS’ linked to a certain downlink power setting is not required<br />

• MBMS multicast transmission mode should use dedicated p-t-p or common p-t-m resources.<br />

• MBMS solutions should minimize the impact on the RAN physical layer and maximize re-use of existing<br />

physical layer and other RAN functionality”.<br />

In this context, the number of served users depends on the chosen operation point of the system’s lower layers. If<br />

the system is configured in order to be adapted to the worst class of users in terms of channel quality, then high<br />

redundancy may be needed in the lower layers, inducing a low effective data rate. If the system is configured to be<br />

adapted to channels with high quality, this will allow higher data rates; however, a large number of users with<br />

worse channel conditions will not be supported. Service outages due to channel non-stationarity may also<br />

happen.<br />

SVC would allow the operation point to be chosen such that the minimum tolerable IP packet loss rate is satisfied<br />

for most users in a serving area. A large portion of users with channel conditions below the operation point may<br />

still be supported with a lower application quality. Complete service outages would then be avoided; gradual<br />

increase/decrease in QoS would be supported.<br />

SVC would thus allow the trade-off between the desired QoS and the number of supported users to be optimized, and<br />

guaranteed service to be supported in an MBMS scenario. The same applies to other multicast/broadcast services on<br />

error-prone channels (other than 3GPP).<br />
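
The choice of operation point can be sketched as follows; the user statistics, the tolerable loss threshold and the 95% coverage target are illustrative assumptions only.<br />

def choose_operation_point(users, candidate_rates_kbps, max_loss=0.01, coverage=0.95):
    """users: list of dicts {'rate_kbps': sustainable rate, 'loss': expected loss at that rate}."""
    best = None
    for rate in sorted(candidate_rates_kbps):
        served = [u for u in users if u["rate_kbps"] >= rate and u["loss"] <= max_loss]
        if len(served) / len(users) >= coverage:
            best = rate               # highest base-layer rate still covering enough users
    return best

sample_users = [{"rate_kbps": r, "loss": 0.005} for r in (64, 128, 128, 256, 384, 384)]
print(choose_operation_point(sample_users, [64, 128, 256, 384]))   # 64 kbps with this sample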

Multi-channel content production and distribution<br />

With existing coding technology, to address the variety of user devices (PDA, laptop, desktop, set-top box, TV), with<br />

different screen resolution, memory and computational power capabilities, or the different distribution networks,<br />

the content has to be encoded into a number of different bitstreams to support the different network<br />

connections and device types. Being able to use the same digital services and content across a broad<br />

range of available and emerging channels and a wide variety of user platforms would allow production costs to be<br />

shared, and hence reduced. This should enable enhanced services and new market opportunities, as well as<br />

integration of previously unconnected or fragmented markets. E.g., an interactive content developed for a DVD<br />

disk (the channel) to be operated from a DVD player (the platform) could be deployed on other channels (HFC,<br />

telephone line, DAB, DVB-T/S/C, etc.) and platforms (personal computers, set-top boxes, mobile phones,<br />

personal digital assistants, etc.) with different communication and interaction characteristics. The above<br />

requirements cannot be fulfilled with current (standardised) technologies.<br />

Video Streaming over Heterogeneous IP Networks and in CDN<br />

There are many challenges in streaming video over IP networks which may be highly heterogeneous (wireless<br />

including GSM, GPRS, 3G, WLAN, Bluetooth, and wired including dial-up, ISDN, cable, xDSL, fiber, LAN,<br />

WAN), with connection bandwidth ranging from 9.6kbps to 100Mbps and above. In addition, IP networks do<br />

not provide QoS guarantees: bandwidth, packet loss rate, delay and jitter may vary over time. Finally, the convergence<br />

of Internet and wireless networks is creating a whole new level of heterogeneity in multimedia communications.<br />

For instance, while the Internet is a “best-effort” network, the emerging 802.11 wireless LAN standards can<br />

provide different Quality-of-Service (QoS) levels. This increased level of heterogeneity emphasizes the need<br />

for flexible video coding and streaming algorithms that are able to adapt on-the-fly to the various network<br />

conditions and device characteristics.<br />

In this context, SVC allows for seamless rate adaptation and congestion control: The bit rate can be adjusted on<br />

the fly to adapt to the channel bandwidth and help reduce channel congestion; there is no need for<br />

switching and re-buffering. Compared to stream switching, fine grain SNR scalability may bring significant<br />

improvement of video quality for a given bit rate (no need to compromise for preset rates). This is especially<br />

true for VOD applications, where FGS allows for smooth transitions between successive qualities. For instance,<br />

let us consider a non-scalable solution using a 3 streams switching solution (1Mbps, 2Mbps, 3Mbps).<br />


A client with a bandwidth of 1.9 Mbps only has access to the 1 Mbps stream. With FGS, it can access a stream<br />

with a bit-rate (and quality) close to 2 Mbps.<br />

Scalability leads to a natural prioritization of the data for loss protection, and is thus amenable to Unequal Error Protection<br />

or to DiffServ marking, which in turn allows graceful recovery from losses. The enhancement layer can be<br />

truncated intentionally or unintentionally due to packet losses or errors without causing catastrophic distortion<br />

in the video quality. The more important layer, for example the base layer, can be turbo-streamed (streamed at a<br />

higher speed than its actual bit rate) to bring a fast start-up user experience without the need for more bandwidth, and<br />

it can also enable non-stopping video playback.<br />

Scalable coding can address some of the problems associated with the CDN characteristics such as network<br />

heterogeneity, bandwidth and device variations, and some degree of losses.<br />

Professional video production<br />

Editing and content manipulation inside professional studios are based today on the management of two<br />

versions of the videos: a low resolution (LR) version (e.g. a CIF version encoded with MPEG-1 at 1.5 Mbps), and a high<br />

resolution (HR) version that will be the final targeted resolution (e.g. SDI encoded with MPEG-2 or DV at 25 or<br />

50Mbps). The low resolution version is used in steps involving video searching, browsing, editing. In particular,<br />

the creation of the editing list from the input rushes is achieved on the low resolution. Once this step is achieved,<br />

the true editing on the high resolution is performed, with possible modifications of the initial editing based on<br />

the viewing at the high resolution.<br />

In film production and management, there is also the need to store and manage several versions of the same<br />

content. Usually only a low-resolution proxy copy of digital cinema content is stored (usually using MPEG-1,<br />

WM or RealVideo) for desktop review and full-resolution versions are stored on external devices or a shelf<br />

(tapes, …). At the end of the film route, films are mastered and distributed in many different formats (film:DPX<br />

4K/2K/1K, HD/SD-DVD, VHS, internet), which requires the management of multiple different versions.<br />

The first interest of scalability for this application would thus be workflow simplification, which would in turn<br />

speed up the whole process of video production. The first point is the possibility to upload a unique high-resolution<br />

version of the video onto a server. The low-resolution proxy can be directly deduced from this high-resolution<br />

version without having to manage two versions of the content. Additionally, as the editor can at any<br />

time work on the low or high resolution, the final montage can be created in one single editing process: rushes<br />

selection and browsing can be achieved on the proxy version, and the precise editing can be achieved without<br />

additional manipulations on the high resolution version.<br />

Concerning digital movie content production and distribution, spatial scalability with many levels of scalability<br />

may be useful for facilitating the mastering at any output format (for instance SD-DVD mastering from a native<br />

2K film), which, thanks to the spatial scalability, could be achieved at a spatial resolution close to the final<br />

output resolution (in our example, SD resolution).<br />

A very important requirement of film production is lossless coding for archiving. The possibility of getting a lossy<br />

version from the lossless archive without trans-coding or trans-rating is an additional advantage for simplifying<br />

the workflow.<br />



<strong>A2.</strong>6.12 Appendix 2: List of relevant standardization bodies<br />


The following list is not exhaustive. It gives the names of the main standardization groups concerned by the<br />

technological development described in this document.<br />

Transport and delivery aspects:<br />

• IETF AVT (Audio-Visual Transport) working group dealing with the specification of RTP payload formats;<br />

• IETF DCCP (Datagram Congestion Control Protocol) working group aims at defining a new protocol called<br />

DCCP;<br />

• IETF ROHC (Robust Header Compression) working group;<br />

QoS and routing aspects:<br />

• IETF DiffServ (Differentiated Services) working group;<br />

• IETF RPS (Routing Policy System) working group related to routing policy; the goal is to define a language<br />

called Routing Policy Specification Language (RPSL) to describe routing policy constraints.<br />

• IETF Policy Framework (policy) working group,<br />

• IETF QoS Routing (qosr) working group,<br />

• IETF MPLS (Multiprotocol Label Switching) working group.<br />

QoS signalling:<br />

• IETF NSIS (Next Steps In Signalling) working group: The goal of the NSIS group is to define a next-generation<br />

signalling architecture and a generic signalling protocol;<br />

• IETF RAP (Resource Allocation Protocol) working group proposed a policy framework which intends to<br />

"establish a scalable policy control model for RSVP.";<br />

Session and application signalling:<br />

• IETF MMUSIC WG<br />

• IETF SIP (Session Initiation Protocol) working group.<br />

• IETF SIPPING (Session Initiation Proposal Investigation): SIPPING describes the requirements for any<br />

extension to SIP.<br />

• ICAP (Internet Content Adaptation Protocol) working group.<br />

• IETF Media Feature Sets working group;<br />

• IETF CDI working group: defining protocols to allow the interoperation of separately-administered content<br />

networks<br />

Coding, streaming, content adaptation:<br />

• ISMA (Internet Streaming Media Alliance): Initiative aiming at specifying complete solutions of multimedia<br />

streaming (see http://isma.tv/index.html);<br />

• ISO/MPEG-21 DIA (Digital Item Adaptation): Specifies the syntax and semantics of tools that may be used<br />

to assist the adaptation of Digital Items. The tools could be used to satisfy transmission, storage and<br />

consumption constraints, as well as Quality of Service management by the various Users;<br />

• ISO/MPEG-21 SVC (Scalable Video Coding): New MPEG initiative aiming at defining a fine-grain scalable<br />

coding solution for video signals, amenable to network or terminal adaptation.<br />

• The World Wide Web Consortium (W3C);<br />

• The Web3D consortium: It produced the X3D specification. X3D is an extensible open file format standard<br />

for 3D visual effects, behavioural modelling and interaction. It provides an XML-encoded scene graph and a<br />

language-neutral Scene Authoring Interface (SAI). The XML encoding enables 3D to be incorporated into<br />

web services architectures and distributed environments, and facilitates moving 3D data between<br />

applications. The Scene Authoring Interface allows real time 3D content and controls to be easily integrated<br />

into a broad range of web and non-web applications.<br />



<strong>A2.</strong>6.13 Appendix 3: Overview of MPEG-21<br />


The goal of MPEG-21 is to improve interoperability across applications while supporting creation at all points in the<br />

distribution and consumption chain as well as intellectual property protection. The tools defined should be valuable<br />

in support of end-to-end QoS, session mobility, adaptation control, value-added content, content fragment<br />

usage, etc.<br />

MPEG-21 comprises multiple parts specifying methods for<br />

• content declaration, identification and description (Parts 2 and 3)<br />

• intellectual property rights management and protection (Parts 4, 5 and 6)<br />

• content adaptation and processing (Parts 7 and 10)<br />

• evaluating the appropriateness of persistent association of information (Part 11)<br />

Part 8 comprises the reference software of the standard, i.e. a reference implementation of all the MPEG-21<br />

normative components. Part 12 contains a testbed architecture for MPEG media streaming applications<br />

comprising a player, a server and a network emulator.<br />

Part 1 describes the MPEG-21 vision and strategy in the direction of transparent and easy access to multimedia<br />

content. It describes the multimedia framework targeted by the standard together with its main architectural<br />

elements.<br />

Part 2 “digital item declaration (DID)” specifies an abstraction schema for declaring the structure and<br />

composition of digital items. Using a so-called digital item declaration language (DIDL), a digital item is<br />

specified by its resources (individual assets such as a video or audio clip), metadata and inter-relationships. The<br />

digital item is declared with a set of abstract terms and concepts which constitute the so-called DID model. Each<br />

digital item is described with XML using the DID grammar.<br />
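
As a purely schematic illustration, the fragment below builds a simplified DID-like declaration; the element names follow the DIDL flavour described above but do not reproduce the normative schema, and the resource URL is a placeholder.<br />

import xml.etree.ElementTree as ET

didl = ET.Element("DIDL")
item = ET.SubElement(didl, "Item")

# A descriptor carrying a simple textual statement about the item.
descriptor = ET.SubElement(item, "Descriptor")
ET.SubElement(descriptor, "Statement", mimeType="text/plain").text = "Trailer, English audio"

# A component pointing to the actual audio-visual resource (placeholder URL).
component = ET.SubElement(item, "Component")
ET.SubElement(component, "Resource", mimeType="video/mp4",
              ref="http://example.com/content/trailer.mp4")

print(ET.tostring(didl, encoding="unicode"))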

Part 3 “digital item identification and description” specifies the way to identify digital items, their related<br />

intellectual property, description schemes, the relationship between digital items and existing identification<br />

systems (e.g., international standard recording code, book number, serial number, etc.), and the relationship between<br />

digital items and description schemes (e.g., international standard musical work code, text code, etc.). The<br />

identifiers can be included in a statement in the DID.<br />

Part 4 “intellectual property management and protection” is still in progress. It should provide specifications in<br />

support of the declaration of IPMP processing for given components of a digital item, and in support of secure peer-to-peer<br />

and intra-peer communications. It targets a trust management architecture or framework.<br />

Part 5 “Rights expression language (REL)” specifies the syntax and semantics of a language to express the rights<br />

of users acting on digital items. It actually specifies a set of actions that can be taken on a DI. The REL declares<br />

rights using the terms defined in the RDD (part 6). Mechanisms for extending the language itself and for<br />

creating a new dictionary have been foreseen.<br />

Part 6 “Rights data dictionary (RDD)” specifies the structure and core terms for expressing the rights related to a<br />

digital item. It also specifies how further terms may be defined under the governance of a registration authority.<br />

Part 7 “Digital Item Adaptation (DIA)” is still in progress (see section xxx for a description of this part of the<br />

standard). It can be regarded as a set of metadata describing the context of delivery of the resources (terminal,<br />

network, natural environment, user preferences…). These metadata can be used as inputs to signal processing<br />

algorithms that will adapt the content itself. They can alternatively be used by higher level software that will<br />

select appropriate media streams based on the content and context-related metadata.<br />

Part 9 “File format” (in progress) provides a normative method to include a composite digital item in a single<br />

file. It is based on the MP4 file format.<br />

Part 10 “Digital Item Processing” provides methods to declare possible actions on a given DI, i.e. how the<br />

information should be processed. The DID is a static declaration of the information only, i.e. of its structure,<br />

resources and metadata. Unlike in a HTML-based web page, where the presentation of the information is mixed<br />

with the information itself, the DID does not state the processing of that DI information. The information is<br />

decoupled from the processing such as downloading, or the processing required for its presentation. This allows<br />

for example different users to have different presentations of the same content. The user would have a list of<br />

methods or processes that can be applied to the item.<br />

This part of the standard specifies digital item methods (DIM), i.e. a list of possible actions on the content, a<br />

digital item method language (DIML), i.e. a way to specify new methods or actions, a digital item method<br />

engine (DIME) which supports standard base operations, digital item base operations (DIBO) which can be<br />

regarded as a programming language’s standard library of functions, and digital item extended operations<br />

(DIXO).<br />


Part 11 “Evaluation tools for persistent association” is in progress. It should specify a set of tools for the<br />

evaluation of different technologies (e.g., watermarking, fingerprinting) used for persistent association of<br />

information with a given DI or one or more of its components.<br />

Part 13 “Scalable Video Coding”: see annex II.<br />

MPEG-21 is also planning to specify event reporting mechanisms to monitor and communicate among peers and<br />

users events related to the processing of a DI. This would allow, for example, the use of DIs to be monitored and recorded.<br />

MPEG-21 also explores requirements and technology for highly scalable audio and video coding, and looks at<br />

how these developments can be optimally aligned with MPEG-21 and in particular with DIA.<br />


<strong>A2.</strong>6.14 Appendix 4: Main streaming media products with their characteristics<br />

The most widely adopted streaming media products are:<br />

Apple’s QuickTime Server (http://www.apple.com/quicktime/products/qtss/): Apple’s server solution,<br />

QuickTime Streaming Server (QTSS) is a standard component of MacOS X Server. While designed for MacOS<br />

it is also available via the Darwin open source project as Darwin Streaming Server (DSS). Ready-made versions<br />

are available for Linux, Windows, and Solaris. QTSS facilitates live broadcasting, simulated live, as well as end-to-end<br />

on-demand streaming. Its technical characteristics can be summarized as follows:<br />

• Supported media formats are QuickTime media and MPEG-4; therefore, it is compatible with all ISO-compliant<br />

MPEG-4 media players and QuickTime players.<br />

• Delivery over RTP, streaming control with RTSP, RTP/RTSP tunnelling over HTTP;<br />

• Supports fast start delivery over HTTP; In a fast start mode, the server sends the video to the client at a<br />

higher rate. This is a type of pre-fetching to be used jointly with appropriate buffer management and<br />

playback on the client side;<br />

• Supports stream relay functionality.<br />

Microsoft’s Media Services: complete end-to-end multimedia system including encoders, video-streaming<br />

servers and multimedia players. Every Microsoft OS (from desktop to PDAs and Smartphones) includes<br />

Windows Media Player. Its main technical characteristics can be summarized as follows:<br />

• The protocols supported are UDP, in Unicast and Multicast, RTP/RTCP, RTSP, and HTTP/TCP.<br />

• Supports its own encoder and video-audio file format (Windows Media Format). Its encoder is based on the<br />

MPEG-4 standards, but is not compatible with MPEG-4;<br />

• Multi-rate encoding and corresponding intelligent streaming; Depending on the network state (congestion<br />

state, bandwidth), the server decides which bit rate to stream;<br />

• Supports stream thinning: This is a process that eliminates video frames from a video feed in order to protect<br />

the audio feed;<br />

• The windows media SDK provides developers with the ability to build streaming cache/proxy solutions. A<br />

programming interface is provided to aid developers to implement relay or caching functions.<br />

RealNetworks’ Helix Universal Server<br />

(http://service.real.com/help/library/guides/helixuniversalserver/realsrvr.htm): RealNetworks’ Helix Universal<br />

Server comes in 4 different versions: Standard, Enterprise, Internet and Mobile. Its main technical characteristics<br />

can be summarized as follows:<br />

• Depending on its license, the server is capable of live and on-demand content delivery of most major file<br />

formats, including RealMedia, Windows Media, QuickTime and MPEG-4.<br />

• It supports UDP, TCP, RTP, IP-multicast, the MMS (Microsoft proprietary) protocol, RTSP, PNA<br />

(Realnetworks proprietary protocol);<br />

• It supports Microsoft’s Multiple Bit Rate (MBR) encoding technology.<br />

• Encoder, transmitter, receiver support the relay functionality;<br />

• Incorporates SureStream technology that allows changes in bandwidth or packet loss conditions to be detected<br />

and translated into stream switching.<br />

PacketVideo (http://www.packetvideo.com/) develops solutions for transmitting multimedia streams over low-bitrate wireless networks (mainly to smartphones and PDAs). The leading product is pvServer. Its main technical characteristics can be summarized as follows:

• supports MPEG-4 and Windows Media formats. Players are available for most mobile devices under<br />

Symbian and Microsoft’s PocketPC.<br />

• supports RTP/RTCP, RTSP over UDP and HTTP;<br />

• supports FastTrack, FrameTrack mechanisms.<br />

Envivio (http://www.envivio.com/) maintains a collection of media delivery products. The 4Sight streaming<br />

server is an MPEG-4 server conforming to both ISO and ISMA standards. Its characteristics are as follows:

• The software version is available for Microsoft Windows, Linux and Irix. End-user clients can use either the<br />

standalone EnvivioTV player or install the EnvivioTV plug-in into RealNetworks Player, Windows Media<br />

Player or Apple’s QuickTime.<br />




• Supported protocols are standard RTP/RTCP, RTSP over UDP (Unicast/Multicast) or through HTTP<br />

tunnelling.<br />

• The hardware version of Envivio’s streaming server supports stream switching and includes load-balancing

mechanisms.<br />

Dicas (http://www.mpegable.com/) specializes in developing MPEG-4 related products for the Microsoft<br />

Windows platform. Its product line includes an encoder, a streaming server and a player. Characteristics:<br />

• Mpegable Broadcaster can take either live input or any Windows supported media format and output MPEG-<br />

4 compliant streams.<br />

• Protocols supported are RTP/RTCP, RTSP. Multicast is also supported.<br />

• A software development kit allows external developers to extend the products.

<strong>A2.</strong>6.15 Appendix 5: Some content delivery networks (CDN) providers<br />

Multimedia streaming in a CDN is a difficult problem due to the significant traffic and bandwidth consumption, the necessity for the CDN provider to ensure proper AAA (authentication, authorization and accounting) for each client, and the need to guarantee quality for live video or VoD for each client according to their preferences and demands. Some current CDN service providers (listed below) already offer streaming services, but a lot remains to be done in this field.

• Adero (http://www.webvisions.com)<br />

• Akamai (http://www.akamai.com)<br />

• Mirror Image (http://www.mirror-image.com)<br />

• ActiVia Networks (http://www.activia.net)<br />
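As a minimal illustration of one of the problems mentioned above — directing each streaming client to a suitable replica — the sketch below picks the closest edge server that still has spare streaming capacity. The server names, RTTs and capacities are invented for the example and do not describe any of the providers listed.

```python
# Minimal sketch of the request-routing decision a streaming CDN must take for
# every client: pick a replica server that is close (here: lowest RTT) and has
# spare streaming capacity.  All values are invented for the example.

edge_servers = [
    {"name": "edge-paris",  "rtt_ms": 12, "active_streams": 480, "max_streams": 500},
    {"name": "edge-madrid", "rtt_ms": 35, "active_streams": 120, "max_streams": 500},
    {"name": "edge-warsaw", "rtt_ms": 55, "active_streams": 40,  "max_streams": 500},
]

def route_request(servers):
    """Return the lowest-RTT server that still has streaming capacity."""
    candidates = [s for s in servers if s["active_streams"] < s["max_streams"]]
    if not candidates:
        return None                      # all replicas saturated: refuse or go to origin
    return min(candidates, key=lambda s: s["rtt_ms"])

if __name__ == "__main__":
    chosen = route_request(edge_servers)
    print("serve client from", chosen["name"] if chosen else "origin")
```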



<strong>A2.</strong>7 OPTICAL METRO / CWDM<br />

<strong>A2.</strong>7.1 Introduction<br />


The “Broadband for All” concept emerges as a substantial element of the Ambient Intelligence scenario<br />

(partially explored in the FP5 IST-OPTIMIST project 160 161 ): In the future environment people will be<br />

surrounded and supported by a network structure, which will support all their communication needs. This<br />

structure can be considered as offering an ambient intelligence linking people and services. Ambient intelligence<br />

requires the embedding of technology in natural surroundings such as buildings, providing constant access to a

variety of services such as entertainment, personal communications, tele-education, etc. Realization of an<br />

ambient intelligence requires a flexible network providing large capacity links, which can be supplied at low<br />

cost only through the connection of diverse access physical media to a photonic network. The Optical or<br />

Photonic Network 162 163 164 165 in conjunction with appropriate control software, provides an intelligent network,<br />

which can interconnect the ambient intelligent environment (through wireless link for example), with large data<br />

and computing resources. The high capacity and low latency of photonic networks make them ideal for this purpose.

Figure 45: Intelligent Optical network enabling ambient intelligence 162<br />

The Metropolitan Area Network (MAN) 166 lies in between two dominant networking domains, namely the<br />

access network and the backbone. It must interface with the ultrahigh-bandwidth long-haul core network while<br />

addressing the growing connectivity of the access infrastructure. The Metro Access Network is spread over<br />

distances of tens of kilometers while the Metro Core Network can reach up to 200 to 300km. Both include<br />

numerous nodes in order to supply the connectivity demanded by the internal and through traffic of the<br />

structure. Confronted with the expansion of access traffic and solutions (wireless, ADSL, xDSL, FTTx…) on one side, and with the huge under-exploited transmission capacity of the backbone (whose capital investment needs to be justified) on the other, the Metropolitan Area Network lies at the centre of operators’ and vendors’ interests. CAPital and OPerational EXpenditure (CAPEX and OPEX) considerations for the overall network (at installation and over the years) are strongly related to a proper design (in terms of architecture, protocols, subsystems and components) of this key network element.

160 FP5-IST OPTIMIST consortium: EU Photonic Roadmap, on line http://www.ist-optimist.org/
161 FP5-IST OPTIMIST consortium: EU Photonic Roadmap, Key Issues for Optical Networking, January 2004, on line http://www.ist-optimist.org/
162 FP5-IST OPTIMIST project, http://www.ist-optimist.org/
163 FP6-IST e-Photon/ONe NoE, on line http://e-photon-one.org/
164 J.E. Berthold, « SONET and ATM », in Optical Fiber Communications IIIA, Academic Press, San Diego, 1997
165 Strand, « Optical Network Architecture Evolution », in Optical Fiber Communications IVB, Academic Press, San Diego, 2002
166 IEEE 802 LAN/MAN Standards Committee, on line http://www.ieee802.org/




In addition, metropolitan network design is strongly influenced by a strong legacy, particularly in terms of the protocols used for transport and services, which tend to pile up into a complex structure (IP, ATM, SDH, WDM...) 167 168 169 170 171 172.

In this report, we analyze the current and future keys to the evolution of the metropolitan network towards a<br />

fully connected, high data and switching rate, protected network. The latter will match the future demand for<br />

voice, data, video services while allowing minimal and progressive evolution of CAPEX and OPEX for the<br />

vendors and service providers.<br />

[Figure: a WDM Metro Core Ring (50 – 200 km circumference) with a hub (H) and OADM nodes interconnects WDM Metro Access Rings (10 – 50 km circumference) and the WAN; Customer Premises Networks/Enterprises (CPN/E) connect to the access rings.]
Figure 46: Metro core and access networks (Customer Premises Network/Enterprise)

<strong>A2.</strong>7.2 The Metropolitan Optical Networks<br />

Traditionally, the telecommunication network is segmented in three parts. The access network collects and<br />

delivers the voice or data signals from and to the end users. Long distance information transfers are taken care of<br />

by the backbone network. In between, the metropolitan area network aggregates the access data signals in order<br />

to deliver them at the entrance node of the backbone or transfer it directly to other access nodes of the same<br />

regional area. The rapid penetration of broadband in the landscape of the “information society” both in the<br />

domestic domain and for professional applications induces an increasing demand on the Metropolitan<br />

infrastructure. Telecommunication traffic has undergone an important mutation from a “voice centric” structure towards a “data centric” one supporting internet traffic. Home services concentrate traffic via access networks of increasing efficiency into the metropolitan edge rings, whereas large industrial premises may

have direct access to metropolitan nodes. Traffic in the MAN network is growing rapidly due to increased<br />

broadband penetration, and also the deployment of regional content servers (such as video (VoD) 173 and storage<br />

networks (SAN) 174, Figure 47), which must be accessed via the network access points. As peer-to-peer networking becomes a reality, dynamic and rapid optimization of resources becomes necessary, especially in the MAN area. In the next few years, this traffic growth is bound to follow an ever-increasing expansion rate as a consequence of the fast evolution of access technology (availability of the fibre access solutions FTTx, FTTH 175) coupled with the bandwidth demand of application-driven services.

On the other side, the evolution of optical transmission technologies in the 1990s and early 2000s has delivered a huge capacity for long-haul data transmission that is still under-utilized, creating a very strong commercial incentive towards traffic growth in order to justify the capital expenditure. In addition, the present emphasis on the metropolitan network results from an evolution of traffic from a majority of long-distance exchanges towards very dense local (metropolitan) traffic.

167 FP5-IST OPTIMIST consortium: EU Photonic Roadmap, Technology trends: T1-Optical network architecture, T2-Optical network systems and sub-systems, T3-Optical network components, 2004, on line http://www.ist-optimist.org/
168 « Introduction to DWDM for metropolitan networks », on line http://www.cisco.com/univercd/cc/td/product/mels/cm1500/dwdm/
169 N. Ghani, J-Y Pin, X. Chen, « Metropolitan Optical Network », in Optical Fiber Communications IVB, Academic Press, San Diego, 2002
170 IEEE 802.3 CSMA/CD (Ethernet), on line http://www.ieee802.org/3/
171 F. Bruyère, « Metro WDM », Alcatel Telecommunications Review, 1Q2002, pp. 7-25, 2002.
172 S. Wright et al., « Deployment challenges for Access/Metro Optical Networks and Services », J. Lightwave Technol. - Special issue on Metro & Access Networks, Vol. 22, N°11, pp. 2606-2616, Nov 2004.
173 refer to the present BREAD deliverable, Video chapter, March 2005
174 J. Elmirghani and I. White, “Optical Storage Area Networks” and included articles, IEEE Communications Magazine, pp. 70-99, March 2005
175 refer to the present BREAD deliverable, FTTH chapter, March 2005



An Alcatel perspective foresees 176 that a change in content-based networks will reverse the backbone versus<br />

metropolitan shares from 80%-20% in 1998 to 10%-90% in 2005. “Hence the metropolitan network will, in<br />

future, have a dominant role in terms of capacity and infrastructure requirement”.<br />

Figure 47: Future Services and bit-rates for residential customers, SOHOs and SMEs (left)<br />

and for large enterprises (right)<br />

176 C. Coltro and J. van Bogaert, « Global Alcatel metropolitan solutions », Alcatel Telecommunications Review, 1Q2002, pp. 27-32, 2002




The metropolitan network exhibits a highly connected structure, including many nodes interconnecting it with<br />

various access points, with the backbone network and possibly directly with other metropolitan rings.<br />

Consequently, the transmission rate along the metropolitan links and the switching rate at metropolitan nodes could rapidly reach very high values (Tb/s). The MAN will need to be able to provide easy and

dynamic routing capabilities through high capacity bandwidth pipes and at an adequate granularity. All these<br />

factors together need to ensure that a high mix of services can be provided to all the different customers<br />

connected to the MAN.<br />

The metropolitan network is presently structured according to a strong legacy and is generally subdivided into a<br />

Metro access network (or metro edge), which spreads over distances of tens of kilometres and the Metro core<br />

network expanding over 200 to 300 km. This architecture implies that a metropolitan network must<br />

accommodate very varied applications (voice, video, internet…), must collect traffic from diverse access sources<br />

and technologies (LANs, PONs, Wireless, Hybrid fibre-coax (HFC), etc.) and should provide services with<br />

adequate quality-of-service, reliability or security level. The bit rates of the data tributaries accessing the<br />

“metro-access” network can also vary quite significantly (e.g. from OC-3 to OC-192 for SONET and from 100 Mb/s to 10 Gb/s for Ethernet traffic) 177 178. Therefore, network nodes at the edges of the metro network should also perform traffic aggregation to improve the efficiency of the transport network by combining the low bit-rate access signals into a high bit-rate wavelength channel. Moreover, the network should be able to support

bandwidth provisioning with various levels of granularity. This very vast heterogeneity produces a strong<br />

demand on the network performance and management in the metropolitan area. C. Mendler 179 proposes a<br />

checklist for service providers choosing a vendor partner for metro infrastructure: “Must provide integrated<br />

management, must be scalable, must support next generation services, must support legacy services and<br />

infrastructures, must demonstrate value for money.”<br />

Thus, the emerging requirements of Ambient Intelligence today send new challenges to the metropolitan<br />

networks because the latter operate at the limits of the presently implemented technologies, both in terms of<br />

transmission capacity and access traffic. It is a reasonable vision to consider that the challenge will be met<br />

thanks to a renewed Metropolitan Area Network widely including optical technologies. Worldwide optical<br />

network hardware market revenue hit $2.18 billion in 3Q04, a 3% decrease from 2Q04, with slow growth<br />

projected to 2007, according to Infonetics Research’s quarterly worldwide market share and forecast report.<br />

Metro makes up 75% of all optical network hardware revenue; long haul makes up 25% 180. This demonstrates operators’ interest in this application area. Furthermore, worldwide metro WDM revenue hit $294 million in 2Q04, and is projected to grow 32% to $388 million by 3Q05, and to just under $2 billion by 2007. The

development of WDM technology is driven by the requirement for Metro capacity enhancement.<br />

177 Tomkos, I.; Vogiatzis, D.; Mas, C.; Zacharopoulos, I.; Tzanakaki, A.; Varvarigos, E.;, « Metropolitan Area Optical Networks » , Circuits<br />

and Devices Magazine, IEEE , Volume 19, Issue: 4, pp. 24-30, July 2003.<br />

178 I. Tomkos, « Performance engineering of metropolitan area optical networks through impairment constraint routing », Communications Magazine, IEEE, Volume 42, Issue 8, pp. s40-s47, Aug. 2004.
179 C. Mendler, « The metamorphosis of Metropolitan Networks », Alcatel Telecommunications Review, 1Q2002, pp. 2-4, 2002.

180 « 3Q04 Optical Network Hardware Market Share and Forecasts » December 2004, on line<br />

http://infonetics.com/resources/purple.shtml?ms04.opt.nr.3q.shtml<br />



<strong>A2.</strong>7.3 The vision<br />


Broadband rapid expansion in the information society has rapidly introduced a new “data-centric”<br />

telecommunication network concept. This evolution from voice-based transport requires a redefinition of the<br />

network infrastructure and management capabilities. The metropolitan network represents a keystone in this<br />

evolution being situated in between the high capacity wide area network and the profusion of access solutions.<br />

The wide area network has been the great benefactor of the 1990’s optical telecommunication development<br />

exhibiting point-to-point impressive performances thanks to WDM technology 181 182 . State-of-the-art<br />

transmission over a single fibre using WDM reaches several Tb/s over distances of several thousands of<br />

kilometres (optical transport is treated in the “Core network” chapter of the present deliverable, 182 ). On the<br />

other hand, local traffic and access solutions have boomed following the development of Internet services, both for home and business applications (e.g. more peer-to-peer networking, access to content servers such as video on demand (VoD)). Storage area networks (SAN) and disaster recovery are viewed as a major demand for the future metropolitan network, requiring secured transfers of huge quantities of information. In addition, large enterprises

are usually directly connected to the MANs. One of the highest growths of optical networking markets is related<br />

to carrier-delivered wavelength services for enterprise applications.<br />

The metro network has to deal with the huge amount of data, which must be routed and delivered. It must<br />

provide a very high connectivity, with numerous nodes of very high capacity. Clearly, optical technology and<br />

networking is the only solution for providing such a service at the metro level (just as it is at the core and<br />

WAN level). In the near future, metro architectures will have to deal with fibre links carrying 100 to 1000 optical channels at 2.5/10 and possibly 40 Gb/s, requiring Tb/s add/drop capacity.
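The Tb/s add/drop figure follows directly from multiplying the channel count by the per-channel line rate; the short sketch below (illustrative only) spells out this arithmetic for the channel counts and bit rates mentioned in the text.

```python
# Back-of-the-envelope arithmetic behind the Tb/s add/drop figure quoted above:
# aggregate capacity of a metro fibre link = number of WDM channels x per-channel
# bit rate.  Channel counts and line rates are the ones mentioned in the text.

def aggregate_tbps(n_channels: int, rate_gbps: float) -> float:
    return n_channels * rate_gbps / 1000.0

if __name__ == "__main__":
    for n in (100, 1000):
        for rate in (2.5, 10, 40):
            print(f"{n:5d} channels x {rate:>4} Gb/s = {aggregate_tbps(n, rate):8.1f} Tb/s")
```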

Because of mission critical applications, only the so-called carrier grade MANs are acceptable. Carrier Grade<br />

mainly means high reliability/availability (> 99.999 %) and QoS differentiation.

Today, Synchronous Digital Hierarchy/Synchronous Optical Network (SDH/SONET) 183 184 remains the<br />

dominant transport technology in the metropolitan area network. It has been the foundation for the MANs<br />

over the last decade, serving as the fundamental transport layer both for TDM-based circuit switched network<br />

and most overlay data networks 185 . SONET/SDH has evolved into a very resilient technology. This transport<br />

protocol was developed in the voice centric telecom environment of the 80’s. The necessity for data<br />

transmission has resulted in superposing protocols in order to allow for internet packet based architecture to fit<br />

into the SDH format physically transported over a WDM optical layer, (IP over ATM over SDH over WDM).<br />

SDH is fairly expensive to implement. The position of SONET/SDH into the metro domain is thus strongly<br />

supported by the necessity of supporting legacy. SDH also remains very attractive thanks to its ability to<br />

support advanced services such as QoS, priority control… However, the circuit-switching oriented protocol developed for voice applications is rather inefficient and consequently expensive when confronted with data transport. The evolution of SDH toward what is already denominated Next Generation SDH (NG-SDH) is the

major issue concerning the future life of the technology.<br />

Improving the bandwidth occupation efficiency and the related cost efficiency in the MAN will impose a<br />

reduction of this protocol stack. This necessitates some development in terms of systems, sub-systems and<br />

components in order to provide the required “agility” directly onto the physical optical layer. On the other side,<br />

coming from the local area network world (LAN) the Ethernet protocol 186 tends to spread its simplicity<br />

towards the metropolitan world. The question here is whether evolution of Ethernet (10GbE) 187 may offer the<br />

reliability and quality of service of SDH and support traffic over the long distances of the MAN. The introduction of MAC protocols dedicated to packet transport protocols (GMPLS) such as Ethernet or IP in the metropolitan network is under study. The development of the resilient packet ring (RPR) concept 188 offers an interesting means of interfacing Ethernet technology within the metro domain.

181 FP5-IST OPTIMIST consortium: EU Photonic Roadmap, Technology trends: T1-optical network architecture, T2-Optical network<br />

systems and sub-systems, T3-optical network components, 2004, on line http://www.ist-optimist.org/<br />

182 refer to the present BREAD deliverable core network chapter, March 2005<br />

183 J.E. Berthold « SONET and ATM » in Optical Fiber Communications IIIA, Academic Press San Diego, 1997<br />

184 ITU (CCITT) standard defining SDH protocols (G707 03/96), on line http://www.itu.int/ .<br />

185 « Introduction to DWDM for metropolitan networks » on line http://www.cisco.com/univercd/cc/td/product/mels/cm1500/dwdm/<br />

186 IEEE 802.3 standard defining CSMA/CD (Ethernet), on line http://www.ieee802.org/3/
187 IEEE standard defining the 10 Gigabit Ethernet (10GbE), on line http://www.ieee802.org/3/
188 standard defining Resilient Packet Ring (RPR), on line http://www.ieee802.org/17/




Introduction of enhanced WDM service in the networking architecture requires development of new optical<br />

functionalities such as Optical Add&Drop Multiplexers (OADM) to allow the routing of wavelength channels<br />

to and from particular access nodes, and to the WAN network. Initially these OADMs will be fixed in relation to<br />

the wavelength channels added and dropped. They will eventually move towards dynamically reconfigurable<br />

OADMs which allow much greater routing flexibility 189 190 . For MANs with many fibres in the ring-topology,<br />

(the likely topology in the near term), Optical Cross-Connects (OXC) may be necessary if full flexibility of add-drop and fibre switching is required. Also, as the MAN grows (in both number of access nodes and geographical

size), optical amplification is required to make up for the losses involved in both OADMs and fibre. Ideally low<br />

cost amplifiers are needed, and both semiconductor and fibre are being considered. It is likely that 10 Gb/s<br />

channels will dominate, but 40 Gb/s technology is available and suited (from a transmission viewpoint) to the<br />

relatively short distances associated with a MAN. This high single-wavelength-channel bit-rate will be<br />

dedicated to the MAN internal traffic. Interface with the backbone will probably conserve the single-channel<br />

bit-rate in order to avoid electrical demultiplexing at the interface while using O/E/O conversion in order to<br />

generate good quality high bit-rate optical signals (low chirp) within the backbone for long-haul transmission. At a later stage, 2R/3R regeneration may be implemented at the metro/backbone node interface, requiring good

quality optical signal within the metro network. The technologies associated with reconfigurable OADMs and<br />

OXCs will allow switching times of



Figure 48: Evolution of Metro Core Networking and Metro Access Networking for Enterprises<br />

Figure 48 illustrates the evolution scenarios for the services supported by the metro area network for enterprises.<br />

The economic equation is a difficult aspect that will influence greatly the network deployment. Installed<br />

systems must imperatively support present legacy (SDH) services, which represent a very high investment.<br />

On the other hand they need to be able to evolve towards new service in a smooth manner allowing a regular<br />

expansion of Capital expenditure (CAPEX). This implies a modular and scalable structure for the equipment.<br />

Scalable node throughput, at costs strongly dependent on size and capacity and starting with a low initial cost (pay-as-you-grow), leads to lower CAPEX. In addition, on a system component scale, there is an obvious

requirement for low cost, user friendly, compact and encapsulated devices. Furthermore new networking<br />

approaches/standards such as Coarse WDM (CWDM) 193 are being proposed. CWDM uses fewer wavelengths<br />

than DWDM, with greater channel spacing, thus enabling the network equipment providers to make use of<br />

lower cost un-cooled devices (e.g. lasers). As the network capacity grows, CWDM may be upgraded to DWDM; this is more profitable than installing DWDM directly from the start.

<strong>A2.</strong>7.4 Gap analysis<br />

The evolution of the optical metropolitan network will be very highly related to the economic investment while<br />

the service requirements will constitute the pulling force of this evolution. The latter will determine the interrelated network parameters: transmission capacity, switching granularity (at the optical and at the electrical levels), transparency, flexibility, supported applications and the related quality of service.

Fast reconfigurability and high granularity (sub-lambda channel capacity) allow fast and easy (point and click)<br />

provisioning and efficient use of MAN infrastructure, which reduces the operating expenditure (OPEX).<br />

Scalable node throughput, at costs strongly dependent on size and capacity and starting with a low initial cost (pay-as-you-grow), leads to lower CAPEX.

193 ITU standard defining CWDM, on line http://www.itu.int/<br />




Because of mission critical applications, only the so-called carrier grade MANs are acceptable. Carrier Grade<br />

mainly means high reliability/availability (> 99.999 %) and quality-of-service (QoS) differentiation.

The accommodation of the traffic evolution in the MAN implies a reduction of the protocol stack over the<br />

years. The protocol stack of an optical (D/C)WDM metro network has to provide at least the following basic<br />

functions:<br />

• Fast protection switching means fast (< 50 ms) and automatic switching of the traffic over to a pre-established recovery path in case of a failure of the main path.

• Sub-lambda multiplexing is a mechanism that is capable of partitioning the bandwidth of a wavelength<br />

channel.<br />

• Traffic Engineering is the ability to adopt different routing configurations in order to optimize the traffic<br />

flow.<br />

• Services differentiation is necessary to enable different QoS levels.<br />

• Packet forwarding means that the metro network nodes may be used as transmission nodes for IP data

packets.<br />

Another important issue is multi-service capability: support of multiple services, including legacy services, means that the network has to carry new (IP packet oriented) services as well as the existing ones.

By far the biggest part of the installed metro networks today is based on a four-layered protocol stack (IP over<br />

ATM over SDH/SONET over Optical layer), mainly for historical reasons. The evolution goes in the direction<br />

of reducing the number of protocol layers as shown in Figure 49.<br />

[Figure: for each protocol-stack variant, the original figure shows how the basic functions (packet forwarding, services differentiation, traffic engineering, sub-lambda multiplexing, fast protection switching, transmission) are distributed over the layers. The variants compared are: the legacy 4-layer stack (IP / ATM / SDH over a circuit-switched WDM optical layer), a 3-layer stack with MPLS (IP+MPLS over NG SDH over WDM), a 3-layer stack with a thin multiplexing adaptation layer ("Slim SDH" / "Thin Mux Blade"), and a 2-layer stack (IP+GMPLS over a burst-switched or packet-switched optical layer).]
Figure 49: Evolution towards a reduced (“collapsed”) protocol stack

Particularly the metro domain is dominated by SDH (ring systems) and it still represents a dominant (yet<br />

declining) portion of incumbent metro expenditures. The major indispensable functions performed by the SDH<br />

protocol layer are rapid protection, multiplexing/demultiplexing of sub-lambda rates and performance<br />

monitoring. To overcome the major weakness of SDH - the low efficiency for bursty traffic (packets) - three<br />

complementary protocols have been developed, namely virtual concatenation (VC), link capacity adjustment<br />

scheme (LCAS) 194 and generic framing procedure (GFP) 195 . New generation (NG) SDH - which is traditional<br />

SDH combined with VC, LCAS and GFP - shows a significantly improved packet transport efficiency. The<br />

protocol stack of some of the Multi Services Platforms for metro and access networking available today already<br />

include NG SDH.<br />
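As an illustration of the sub-lambda sizing that virtual concatenation performs in NG-SDH, the sketch below chooses the number of concatenated containers (VC-n-Xv) needed to carry a packet client, using approximate nominal container payload rates; the helper function and the client examples are assumptions made for illustration, not a description of any specific product.

```python
import math

# Sketch of the sizing step behind virtual concatenation (VCAT) in NG-SDH:
# a packet client is carried in X concatenated containers (VC-n-Xv), with X
# chosen so that the group just exceeds the client rate.  Approximate container
# payload rates (Mb/s) are used; GFP performs the actual framing.

VC_PAYLOAD_MBPS = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.760}

def vcat_group(client_mbps: float, container: str):
    rate = VC_PAYLOAD_MBPS[container]
    x = math.ceil(client_mbps / rate)
    efficiency = client_mbps / (x * rate)
    return f"{container}-{x}v", efficiency

if __name__ == "__main__":
    for name, mbps in (("Fast Ethernet", 100.0), ("Gigabit Ethernet", 1000.0)):
        group, eff = vcat_group(mbps, "VC-4" if mbps > 150 else "VC-3")
        print(f"{name:17s} -> {group:9s} (fill ratio {eff:.0%})")
```

For example, Gigabit Ethernet comes out as a VC-4-7v group with a fill ratio of roughly 95%, which is the kind of packet transport efficiency improvement the text attributes to NG-SDH.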

Current product developments are going in two directions: multi-service, multi-technology platforms (MSP) and<br />

predominantly (next generation) Ethernet-based (E-MAN), highly integrated, one-box, low cost solutions,<br />

concentrating on packet services. The MSPs are evolving from the legacy four-layered architecture. They are<br />

based on WDM optics, SONET/SDH as a robust service aggregation layer (sub-lambda multiplexing), and<br />

packet based technologies (ATM, MPLS, IP) for data services. The technologies are integrated under a common<br />

management umbrella for modular, controlled network evolution. The most prominent E-MAN architecture is<br />

the 196 Resilient Packet Ring (RPR), which is being standardized.<br />

194 link capacity adjustment scheme (LCAS) , on line http://www.itu.int/<br />

195 generic framing procedure (GFP), on line http://www.itu.int/<br />

196 standard defining Resilient Packet Ring (RPR), on line http://www.ieee802.org/17/



The RPR is intended to be a carrier grade solution supporting QoS differentiation (Medium Access Control<br />

(MAC) protocol, different priority queues) and fast protection switching. The core of 802.17 is a MAC layer<br />

protocol, which controls the access to an underlying SDH/SONET ring or Ethernet link.<br />

In these scenarios, the optical layer is occupying an important portion of the networking activity and requires<br />

new capabilities such as burst or packet switching 197. Optical packet switching can offer the desired flexible and bandwidth-efficient architecture since it provides smaller granularity at the optical layer, allowing a

high degree of statistical multiplexing 198 . The gap analysis in this field can be found in the core network<br />

chapter of this deliverable. Recent studies have demonstrated the viability of an optical packet switching based<br />

architecture in metropolitan networks, provided the MAN capacity increases as it is likely to do 198 199. Active and passive nodes can both offer a positive evolution of OPEX and CAPEX, depending on the price evolution of the optical components.

High-capacity traffic, wide regional expansion and reasonable cost considerations for MANs have important consequences on component and system development. High-functionality networking requires more wavelength-agile components (tunable lasers, tunable receivers) for wavelength-switched networks 200. On the

other hand, CWDM technology allows relaxed operating procedure (un-cooled sources) but requires new in-line<br />

components (very large band amplifiers). Dispersion management may become necessary in the metro domain.<br />

Further traffic increases will later lead to DWDM in the network.<br />

<strong>A2.</strong>7.5 Network architecture key issues<br />

<strong>A2.</strong>7.5.1 Physical and logical network topology<br />

The optical networks in the metro core and access domain show physical topologies that are usually composed<br />

of the basic elements (star, bus, tree, ring, mesh) in a hierarchical structure. Basically we can distinguish<br />

between the metro core and the metro access sub domain. The metro access sub networks collect traffic streams<br />

from different sites and concentrate them towards the nodes of the metro core sub-network. Therefore, the logical topology (traffic pattern) of the metro access sub-networks is typically star-shaped, supporting hubbed traffic.

The metro core sub network interconnects (meshes) the metro access networks and connects them to the WAN.<br />

Consequently the metro core sub network shows mainly a logical mesh topology, which is superimposed by a<br />

star topology supporting hubbed traffic towards the WAN. Figure 46 shows today’s most prominent physical

topology of the metro network, namely ring access domains interconnected by a ring core with some extended<br />

spurs to connect distant sites. The physical ring topology provides the advantage of a relatively simple<br />

protection mechanism with good performance together with relatively low cabling costs and efficient use of the<br />

network elements.<br />

The long-term evolution seems to go toward meshed metro (core and access) networks. Mesh networks with<br />

multiple wavelengths offer more flexibility in designing protection and restoration mechanisms, as well as more flexibility in traffic engineering and scaling.

<strong>A2.</strong>7.5.2 Key architectural issues<br />

Protection<br />

The carrier-grade reliability (the famous > 99.999 % availability or < 6 minutes average down time per year) can<br />

only be achieved by redundancy. Particularly the fibre facility between sites is considered the least reliable<br />

component in the system (fibre cuts are fairly common). In ring networks, immunity against link failures can be<br />

achieved in a relatively simple way because each node is connected by at least two disjoint paths. Mesh<br />

networks are much more complex to organize for resilience. But because arbitrary topologies can be<br />

decomposed into rings, protection methods developed for rings can be a basis also for mesh networks.<br />
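A quick calculation, given below as a sketch, shows how the 99.999 % availability figure quoted above translates into the "< 6 minutes of average downtime per year" bound.

```python
# Quick check of the carrier-grade figure quoted above: 99.999 % availability
# corresponds to roughly 5 minutes of accumulated downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(availability: float) -> float:
    return (1.0 - availability) * MINUTES_PER_YEAR

if __name__ == "__main__":
    for a in (0.999, 0.9999, 0.99999):
        print(f"{a:.5%} available -> {downtime_minutes(a):6.1f} min/year of downtime")
```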

197<br />

T. Battestilli and H. Perros « optical Burst Switching for the next generation internet » IEEE Communications magazine vol.41, n°8, Aug<br />

2003<br />

198 C. Develder et al., « Benchmarking and Viability assessment of Optical Packet switching for metro networks », J. Lightwave Technol. - Special issue on Metro & Access Networks, Vol. 22, N°11, pp. 2377-2385, Nov 2004.

199<br />

L. Dittman et al, “The European IST-DAVID project: A viable approach toward optical packet switching” IEEE J. Select. Areas<br />

Commun., Vol. 21, pp.1026-1040, 2003<br />

200<br />

FP5-IST OPTIMIST consortium: EU Photonic Roadmap, Technology trends: T1-optical network architecture, T2-Optical network<br />

systems and sub-systems, T3-optical network components, 2004, on line http://www.ist-optimist.org/<br />




Protection refers to the concept of switching traffic from broken to alternative pre-planned routes and can be<br />

realized at any layer with different granularity in a layered architecture. Figure 50 shows a classification of<br />

optical layer (WDM) protection.<br />

[Figure: the original classifies optical layer (WDM) protection into protection at Optical Channel (OCh) granularity (individual light-paths), either Dedicated Path protection (DPP: 1+1 path, 1:1 path; DP-WSHR, OUPSR, OCh/DPRing) or Shared Path protection (SPP: 1:N path; SP-WSHR, OBPSR, OCh/SPRing), and protection at Optical Multiplex Section (OMS) granularity (multiplexed light-paths), either Dedicated Line protection (DLP: 1+1 line, 1:1 line; OULSR) or Shared Line protection (SLP: 1:N line; SL-WSHR, 2F-BLSR, 4F-BLSR, OMS/SPRing).]
Figure 50: Classification of optical layer (WDM) protection

Abbreviations:
WSHR: WDM Self-Healing Ring
OUPSR: Optical Unidirectional Path Switched Ring
DPRing: Dedicated Protection Ring
OBPSR: Optical Bidirectional Path Switched Ring
SPRing: Shared Protection Ring
OULSR: Optical Unidirectional Line Switched Ring
2/4F-BLSR: 2/4-Fibre Bidirectional Line Switched Ring
1+1: the source node transmits on both the working and the protection light-path/line (selection at the destination)
1:1: transmission occurs on the working light-path/line only; the protection light-path/line may carry low-priority traffic (switch-over in case of failure)
1:N: one spare light-path/line protects N working light-paths/lines
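To make the difference between the dedicated-path options in the legend above more tangible, here is a deliberately simplified toy model of the 1+1 and 1:1 switch-over logic (no real signalling or hold-off timers; purely illustrative).

```python
# Toy model of the dedicated-path protection options listed above (1+1, 1:1),
# heavily simplified: no real signalling, just the selection/switch-over logic.

def one_plus_one(working_ok: bool, protection_ok: bool) -> str:
    # 1+1: traffic is sent on both paths; the destination simply selects
    # whichever copy is healthy (working path preferred).
    if working_ok:
        return "select working copy"
    return "select protection copy" if protection_ok else "traffic lost"

def one_for_one(working_ok: bool, protection_ok: bool) -> str:
    # 1:1: traffic is sent on the working path only; on failure, source and
    # destination switch over, pre-empting any low-priority traffic that was
    # using the protection path.
    if working_ok:
        return "stay on working path (protection may carry low-priority traffic)"
    return "switch over to protection path" if protection_ok else "traffic lost"

if __name__ == "__main__":
    print("1+1 after a working-path cut:", one_plus_one(False, True))
    print("1:1 after a working-path cut:", one_for_one(False, True))
```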

Transparency<br />

Fixed or circuit switched wavelength channels are protocol agnostic by nature. This allows carriers to offer cost<br />

effective protocol-transparent wavelength services and thereby generate new revenues. For enterprise business customers, wavelength channels are very attractive alternatives to dark fibre links, which are very often not available.

Support of provisioning<br />

3rd generation WDM optics technology with dynamically (re)configurable add/drop or cross connect<br />

wavelengths – after WDM point-to-point transmission (1st generation) and WDM optics with fixed add/drop or<br />

cross connect wavelengths (2nd generation) - plays an important role regarding provisioning. Together with<br />

software intelligence (optical control layer), 3rd generation optics allows automated provisioning, which is a big<br />

step in reducing the operating expenditures (OPEX).<br />

<strong>A2.</strong>7.5.3 State of the art offerings on the equipment market<br />

Table 14 shows state-of-the-art metro networking products available on the equipment market.

Key to Table 14 – Topology: PP = Point-to-Point, LAD = Linear Add/Drop, R = Ring, M = Mesh. WDM type: C = Coarse, D = Dense, followed by the number of wavelengths. BR = bit rate per channel (Gb/s). WDM optics: 1G = point-to-point (1st generation), 2G = fixed OADM/OXC (2nd generation), 3G = dynamically reconfigurable OADM/OXC (3rd generation). Type: OMP = Optical Multi-service Platform, OP = Optical Platform with multi-service wavelength channel interfaces.

Vendor       Product             Type   Topologies      WDM type   BR (Gb/s)   WDM optics
Sorrento     GigaMux             OP     PP, LAD, R      D64        10          2G
Lightscape   XDM-100             OMP    PP, LAD, R, M   D40        10          1G
Ciena        ONLINE Metro        OMP    PP, LAD, R      D33        10          2G
PacketLight  WD-1000 PL-16000    OMP    PP, LAD, R      D32        10          3G
Siemens      TransXpress         OMP    PP, LAD, R      D32        10          2G
Nortel       OPTera Metro 5200   OMP    PP, LAD, R      D32        10          2G
Alcatel      Metro Span 1696     OP     PP, LAD, R      D32        10          3G
ADVA         FSP 3000            OP     PP, LAD, R      D32        10          2G
Marconi      PMM-E               OP     PP, LAD, R      D32        10          2G
Lightscape   XDM-200             OP     PP, LAD, R      C16        2.5         2G

Table 14: State-of-the-art metro networking products

[Figure: the original contrasts the two platform types. Optical Multi-service Platform (OMP): client interfaces (DS1/DS3, OC-x/STM-x, 10/100 Ethernet, GbE, ATM, IP/ATM, IP/MPLS over SDH, IP routing) are aggregated through an (NG) SDH cross-connect matrix for sub-wavelength multiplexing on top of 2nd/3rd generation DWDM/OADM optics. Optical Platform with multi-service wavelength channel interfaces (OP): services (10/100 Ethernet, GbE, OC-3..48/STM-1..16, ESCON/FICON SAN, cable video/CATV) are mapped directly into wavelengths over 2nd generation DWDM/OADM optics; this is a conceptually simple solution but can make inefficient use of bandwidth.]
Figure 51: Platform type and evolution

Metro Ethernet<br />

From its first proposal in 1973, 201 202 Ethernet has evolved very rapidly in the world of computer interconnection. Its low cost and ubiquity are the main arguments making the Ethernet solution attractive. However, Ethernet is clearly becoming increasingly diverse 203 204 205.

201 R. Metcalfe et al., « Ethernet: Distributed Packet Switching for Local Computer Networks », Communications of the ACM, Vol. 19, N°7, pp. 395-404, 1976.

202<br />

R. Metcalfe et al, « Multipoint data communication system with collision detection »,US patent 4,063,220 assigned to Xerox Corporation<br />

203<br />

C. F. Lam « Beyond Gigabit :Application and Development of High-Speed Ethernet Technology» in Optical Fiber Communications<br />

IVB, Academic Press San Diego, 2002



The present question in the Metro domain is whether Ethernet can be utilized as a direct transport protocol or<br />

remains supported on a SDH/SONET transport layer. This debate finds some very strong supporters of Ethernet<br />

solutions.<br />

The “Metro Ethernet forum” 206 claims “Now is a great time for metro Ethernet!”. “For a variety of business and<br />

technical reasons, metro Ethernet is clearly gaining momentum on a global basis as an alternative infrastructure<br />

to SONET/SDH.”<br />

According to market research firm Infonetics 207 worldwide metro Ethernet equipment revenue totalled $3<br />

billion in 2003, and is projected to grow 157% to $7.7 billion in 2007. Further, Infonetics predicts that close to<br />

$25 billion will be spent worldwide on Ethernet in metro networks between 2003 and 2007. In fact, every year<br />

over the next 10 years, Ethernet will account for a larger portion of metro capital expenditures (CAPEX),<br />

driving double-digit growth through 2007.<br />

In addition to cost effectiveness and inherent scalability and flexibility (Ethernet has the ability to scale from low<br />

speeds (1 Mbps) to high (10 Gb/s) and ultra-high (40 Gb/s and 100 Gb/s) speeds), the enhancement of Ethernet<br />

in metro solutions is related to technical developments which allow the delivery of carrier-class protection, guaranteed services, and TDM support.

Service convergence (voice, data, video) requires evolution of the MAN structure. According to Mitch Auster<br />

from Ciena 208 , the key alternative is Optical Ethernet, including the multiplexing of up to eight Gigabit Ethernet<br />

(GbE) clients (with copper or low-cost optical interfaces) onto a single 10Gbps DWDM wavelength completely<br />

at Layer 1. The result is eight transparent GbE channels per wavelength – each a one Gb/s circuit-like subchannel.<br />

IP-TV or triple play services are emerging as a key target application for metro Ethernet 209 .<br />

The new generation of Ethernet 10GbE is likely to lead a further penetration towards the metropolitan area<br />

network, where ATM/SDH presently dominates. Furthermore, a 100 Gigabit Metro Ethernet (100GbME) is already envisaged, while the requirements for this new application are being analyzed 210.

The recognized weakness of Ethernet services presently is its inability to provide network management<br />

capability, which needs to be introduced in order for Ethernet to spread from LANs to MANs. Ethernet<br />

operation, administration and maintenance (OAM) is not up to the job of enabling or supporting a large-scale carrier network, which would mean that end-to-end OAM facilities have to be provided by an underlying transport mechanism such as NG-SDH. Furthermore, can Ethernet actually support QoS, or is it destined to remain forever best effort? In addition to the above carrier-class features, today’s metro Ethernet also needs to support ring and mesh topologies. This demand is addressed via the development of new concepts such as resilient packet ring (RPR), which constitutes a more cost-effective solution than circuit-switched SDH for Ethernet transport, and such as MPLS (or G(eneralized) MPLS) for providing QoS and differentiated service provisioning.

Circuit, Burst, Packet switching<br />

The development of network technology and node architecture represents a major aspect for the design of the<br />

MAN. Present solutions rely mainly on opaque electronic switching elements supported on the ETDM SDH/SONET layering. Some circuit architectures are elaborated on the WDM layer by the use of OADMs and OXCs working at a basic wavelength granularity (or potentially waveband granularity). The node capacity and the network flexibility will be largely enhanced by the usage of wavelength-agile elements such as tunable lasers or receivers allowing dynamic wavelength allocation at each node. Passive or active switching modes can be envisaged, with obvious consequences on the switching elements (MEMS spatial switches presently appear as a very promising technique).

204 M. Blum, FibreSystems Europe, Nov 2004, on line http://fibers.org/<br />

205 IEEE 802.3 standard defining CSMA/CD (Ethernet), on line http://www.ieee802.org/3/

206 N. Chen, « Now is a Great Time for Metro Ethernet », Metro Ethernet Forum, on line<br />

http://www.convergedigest.com/blueprint/ttp05/z1mef1.asp ?ID=151&ctgy=8<br />

207 « Metro Ethernet Equipment rend it able to delivers » July 2004, on line http://infonetics.com/resources/purple.shtml?ms04.met.nr.shtml<br />

208 M. Auster, « Optical Ethernet drives Convergence in Triple Play Networks », Metro Ethernet Forum, on line<br />

http://www.convergedigest.com/blueprint/ttp05/z1mef1.asp ?ID=169&ctgy=8<br />

209 M. Blum, FibreSystems Europe, Nov 2004, on line http://fibers.org/<br />

210 A. Zapata et al., « Next generation 100-Gigabit Ethernet (100GbME) Using Multi-Wavelength Optical Rings », J. Lightwave Technol. - Special issue on Metro & Access Networks, Vol. 22, N°11, pp. 2420-2434, Nov 2004.




Optical burst and packet switching 211 212 213 214 215 would offer a much greater switching and bandwidth usage efficiency. Optical nodes for that type of switching require new characteristics; in particular, packet switching requires special labelling and label recognition mechanisms. The FP5-IST STOLAS project 216 217 has elaborated label inscription techniques and the FP6-IST LASAGNE project 218 is dedicated to optical label recognition procedures for efficient packet switching. The crucial barrier to true photonic packet switching is the lack of an optical equivalent to the electronic buffer or random-access memory. There are visions of using the effect of electromagnetically induced transparency for making optical buffers, but the road to the market will be long.

A very extensive comparison study and roadmap can be found in the Core network chapter of the present<br />

deliverable 219 . Switching solutions are a key element of the backbone and the metro-core network. However,<br />

the latter demand switching capacity on a finer granularity and exhibit a densely connected structure. Evolution<br />

towards more advanced switching techniques is likely to be more effective at this level. On the other hand the<br />

backbone network is much less cost sensitive and can be used more easily as a development platform.<br />

<strong>A2.</strong>7.5.4 Roadmap Summary<br />

The following figures, issued from the FP5 project 220, summarize the roadmap elements for the network architecture.

Figure 52: Optical Metro and Enterprise Access Networks 220 .<br />

211<br />

T. Battestilli and H. Perros « optical Burst Switching for the next generation internet » IEEE Communications magazine vol.41, n°8, Aug<br />

2003<br />

212<br />

T. Battestilli and H. Perros « an introduction to Optical Burst Switching » IEEE potentials Dec 2004-Jan 2005<br />

213 C. Develder et al., « Benchmarking and Viability assessment of Optical Packet switching for metro networks », J. Lightwave Technol. - Special issue on Metro & Access Networks, Vol. 22, N°11, pp. 2377-2385, Nov 2004.

214 C. Develder et al., « Architecture for Optical Packet and Burst Switches » (Invited), Proc. 29th Eur. Conf. Optical Commun. (ECOC-IOOC), Vol. 1, Rimini, Italy, pp. 100-103, Sept 2003.

215 Kataoka et al., « 40Gb/s Packet-selective Photonic Add/Drop Multiplexer Based on Optical Coded Label Header Processing », J. Lightwave Technol. - Special issue on Metro & Access Networks, Vol. 22, N°11, pp. 2377-2385, Nov 2004.

216<br />

FP5-IST STOLAS project, on http://www.ist.stolas.org/<br />

217 T. Koonen et al., « Optical Label Switched Networks: the FP5-IST STOLAS », BroadBand Europe Conference, Dec 2004, http://www.medicongress.com/broadband/

218<br />

FP6-IST LASAGNE Strep, on line : http://www.ist-lasagne.org/<br />

219<br />

refer to the present BREAD deliverable core network chapter, March 2005.<br />

220 FP5-IST OPTIMIST consortium: EU Photonic Roadmap, Key Issues for Optical Networking, January 2004, on line http://www.ist-optimist.org/



<strong>A2.</strong>7.6 Enabling technologies<br />


Figure 53: MAN network key issues 220 .<br />

In metro networks, the increasing demand for higher capacities is met by higher data rates and by increased<br />

spectral density, i.e. by tightening the channel spacing. Coarse Wavelength Division Multiplexing (CWDM) systems are considered a cheaper alternative to D(ense)WDM systems such as those deployed in the WAN. The CWDM channel spacing of 20 nm (to be compared with 0.8 nm in DWDM) takes advantage of low-cost un-cooled distributed feedback lasers 221 and of less stringent wavelength multiplexing and demultiplexing components. Implementing CWDM, the whole low-attenuation window from 1.2 µm to 1.6 µm (approx. 60 THz) can be used by cheap un-cooled lasers/transponders. CWDM systems are presently limited to un-amplified point-to-point (PtP) connections, since broadband amplification over such a wide bandwidth is not available. DWDM is used whenever amplification is necessary, for example in the C-band (1530 nm – 1565 nm).
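The channel-spacing figures above determine how many channels fit in a given band; the sketch below spells out that arithmetic, using the 16-wavelength full-spectrum CWDM set (1310–1610 nm) quoted later in this section and the C-band limits quoted above for DWDM. The counts are illustrative only, not the exact ITU grids.

```python
# Channel-count arithmetic behind the CWDM/DWDM comparison above: how many
# channels fit in a given band at 20 nm (CWDM) versus 0.8 nm (DWDM) spacing.
# Band edges are the ones quoted in the text; counts are illustrative only.

def channel_count(band_start_nm: float, band_stop_nm: float, spacing_nm: float) -> int:
    return int((band_stop_nm - band_start_nm) / spacing_nm) + 1

if __name__ == "__main__":
    # full-spectrum CWDM set quoted in the text (1310-1610 nm), 20 nm spacing
    print("CWDM, 1310-1610 nm, 20 nm grid :", channel_count(1310, 1610, 20), "channels")
    # C-band (1530-1565 nm) at a 0.8 nm DWDM spacing
    print("DWDM, 1530-1565 nm, 0.8 nm grid:", channel_count(1530, 1565, 0.8), "channels")
```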

In contrast to long-haul WAN transmission, dispersion impairments on metro links are not too critical. Dispersion compensation, which is required over long distances (hundreds of km), particularly for high bit-rate transmission (40 Gb/s), is a mastered technology. Polarization dispersion is not a critical issue.

Reconfigurable optical add/drop multiplexers or MEMS-switches will enable optical layer management. Node<br />

designs should offer pay-as-you-grow scalability. Real time optical monitoring of the wavelength paths is<br />

essential to manage and control the optical network cost effectively. This can be done by attaching an optical tag<br />

to each optical channel and by monitoring these tags across the network.<br />

For future optical networks new optical technologies will mature and will be industrialized. Some of these<br />

technologies are:<br />

• Large space switches for burst/packet switching (ns switching time)<br />

• Wide bandwidth wavelength converters<br />

• Optical buffers (necessary for packet switching in a meshed network)<br />

• Optical signal processors e.g. for label processing and 3R regeneration<br />

• Tunable optical transmitters and receivers (tuning time down to ns)<br />

• Optical Time Division Multiplexers (OTDM) for ultrahigh speed transmission (160 Gb/s and above)<br />

• Adaptive dispersion compensators.<br />

The quest for higher capacity is solved by higher data rates and increased spectral density in WDM networks.<br />

The current trend is towards more complex and dynamic networks with dynamic reconfiguration and<br />

provisioning. The key concept for flexibility and dynamics is tunability.<br />

221<br />

H. Debrégeas-Sillard, “Low-cost coolerless integrated laser-modulator for 10Gbit/s transmissions at 1,5 µm” Electronics Letters, Vol 40<br />

n° 21, 2004<br />




Distances, bit-rates, the number of nodes and the ability to support many protocols are the main characteristics that influence the choice of systems and devices in metro WDM networks. Cost considerations are a key issue and drive the effort on monolithic or hybrid integration of components such as lasers (possibly wavelength-selectable), multiplexers, receivers, transceivers... CWDM is seen by many as a first step in the introduction of WDM in the metro area. The use of widely spaced (20 nm) optical channels relaxes the demand on the wavelength accuracy of sources and filters and allows the use of un-cooled devices.

<strong>A2.</strong>7.6.1 Introduction to most recent developments in CWDM<br />

Coarse Wavelength Division Multiplexing (CWDM) 222 is becoming increasingly important for metro, access,<br />

and cable TV systems where, closer to the end-user, low first-cost economics is the primary issue. Compared to<br />

dense WDM (DWDM), lower-cost CWDM devices, such as un-cooled laser diodes and passive WDM components with relaxed performance specifications and packaging constraints, enable 35% to 40% equipment cost savings. Full-spectrum, un-repeated CWDM transmission using sixteen 2.5-Gb/s, 20-nm-spaced directly modulated lasers (DMLs) at wavelengths between 1310 nm and 1610 nm has been demonstrated over 75 km of

Lucent-AllWave fibre, a zero-water-peak fibre with low loss around 1390 nm. The penalties measured for the<br />

short-wavelength channels in the O-band (1310-1350nm) are negligible since the zero dispersion wavelength<br />

(ZDW) of Lucent-AllWave, like other G.652 fibres, is close to 1310nm and the system performance at those<br />

wavelengths is only limited by the increased fibre loss. Due to the combination of increasing fibre dispersion<br />

and the positive laser chirp, the transmission penalties increase towards longer wavelengths.<br />

In addition several approaches have been investigated to obtain longer system reach, increased bit-rates up to<br />

10Gb/s and capacity upgrades with combination of DWDM and CWDM systems. Hence, the architecture of<br />

existing transmission systems can be modified to deliver additional bandwidth in existing metro applications by<br />

the introduction of the CWDM concept.<br />

Length extension<br />

In non-amplified CWDM, fibre attenuation, fibre splice loss, connector loss and optical filter mux/demux losses<br />

limit the attainable system reach. The system reach is also dependent on the number of channels and channel bit<br />

rate, which dictates the power budget of a transmitter/receiver pair. For example, a 16 channel hubbed ring<br />

network utilizing optical add/drop filters at 4 nodes and a hub with each channel operating at 2.5 Gb/s may be<br />

limited to about 40 km in circumference. However, a significant percentage of metro edge rings exceeds a perimeter of 40 km.
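The reach limit quoted above comes from a simple power-budget argument: the transmitter/receiver budget minus filter and connector losses, divided by the fibre attenuation, bounds the ring circumference. The sketch below illustrates this reasoning; every number in it (launch power, receiver sensitivity, per-node loss, fibre attenuation) is an illustrative assumption, not a measured value from the cited demonstrations.

```python
# Illustration of the power-budget reasoning above for a non-amplified CWDM
# ring: the reach is what is left of the transmitter/receiver power budget after
# per-node filter losses and splice/connector margin, divided by the fibre
# attenuation.  Every number below is an illustrative assumption.

def max_reach_km(launch_dbm: float, sensitivity_dbm: float,
                 n_oadm_nodes: int, oadm_loss_db: float,
                 fixed_losses_db: float, fibre_atten_db_per_km: float) -> float:
    budget = launch_dbm - sensitivity_dbm
    remaining = budget - n_oadm_nodes * oadm_loss_db - fixed_losses_db
    return max(remaining, 0.0) / fibre_atten_db_per_km

if __name__ == "__main__":
    # e.g. 0 dBm launch, -28 dBm receiver sensitivity at 2.5 Gb/s, 4 add/drop
    # nodes plus a hub at 3 dB each, 3 dB splice/connector margin, 0.25 dB/km
    print(f"~{max_reach_km(0, -28, 5, 3.0, 3.0, 0.25):.0f} km ring circumference")
```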

To extend the reach of metro local or edge rings employing non-amplified CWDM technology, one approach is<br />

to increase launch powers. Another strategy is the careful equalization of the spectral loss of the fibre with<br />

appropriately designed passive components. However, a significant improvement in transmission distance<br />

requires broadband amplification over the bandwidth of all 16 channels, which is impossible with conventional<br />

EDFA technology and impractical with Raman amplification. Semiconductor optical amplifiers (SOAs) are<br />

inexpensive devices for metro applications with small form factors and can be integrated on InP substrates with<br />

other functionalities. Gain bandwidths of more than 60 nm are typical; however, without significant attention to suppressing the gain dynamics, they are unsuitable for multi-channel operation due to cross-gain modulation, resulting in cross-talk between channels. On the other hand, linear optical amplifiers (LOAs) are single-chip amplifiers designed to be immune to inter-symbol interference, inter-channel crosstalk, and gain transients. The use of a linear optical amplifier and a semiconductor optical amplifier in a 2.5-Gb/s CWDM system was demonstrated to extend the reach of four selected C-band channels from 75 to 125 km of Lucent-AllWave fibre. An additional channel at 1410 nm was separately amplified by an SOA to simulate additional extended-reach traffic. Erbium-doped waveguide amplifiers (EDWAs) are also regarded as a possible integrated device of

interest. The noise introduced by optical amplification is not considered as a major drawback since the distances<br />

remain relatively short. Power divergence between WDM channels can degrade system performance and may<br />

require active or passive equalization.<br />

Capacity upgrade with increased bit-rate<br />

Due to the wide channel spacing, a full-spectrum CWDM system operating at 2.5-Gb/s line rate is limited to 16<br />

channels with 40-Gb/s total transmission capacity. One approach to increase the capacity of such a system is the<br />

migration to 10-Gb/s un-cooled Direct-Modulated lasers (DMLs), which have recently become available. Only a<br />

few groups have reported 10-Gb/s-based CWDM transmission since the low dispersion tolerance of such<br />

devices limits the reach in dispersive fibre.<br />

222 [ITU-T G694.2] ITU standard defining CWDM, on line http://www.itu.int/<br />


Also, 10 Gb/s un-cooled DMLs are only available for a limited number of wavelengths in the 1300 nm and 1550 nm regions. However, simultaneous 10-Gb/s transmission on all 16 commercially available CWDM wavelengths was shown with DMLs rated for 2.5-Gb/s operation. Initial work was reported for a prototype non-zero dispersion fibre (NZDF) with low water peak, allowing 40-km uncompensated transmission for all channels. All lasers maintained error-free transmission up to at least 65 °C.

In subsequent measurements, this technique was extended and enabled bidirectional CWDM transmission<br />

capability of 32x10 Gb/s over 30 km (9.6 Tb/s-km) and a 16-channel capacity of 16x10 Gb/s over 40 km<br />

(6.4 Tb/s-km). This record capacity was achieved with a potentially low-cost and highly integrated electronic<br />

equalization in combination with forward error correction (FEC), using 16 un-cooled DMLs (1310 nm to 1610<br />

nm) and standard LWPF (AllWave). Electronic equalization combats chromatic dispersion for the longer<br />

wavelength channels, while FEC both establishes higher margins and enables bidirectional transmission due to<br />

increased in-band crosstalk tolerance at FEC error rates.<br />
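The capacity-distance figures quoted above follow directly from channel count, line rate and span length; the short sketch below simply reproduces that arithmetic.

    # Reproduce the capacity-distance products quoted in the text.
    def capacity_distance_tbkm(n_channels, bitrate_gbps, distance_km):
        """Aggregate capacity x distance in Tb/s-km."""
        return n_channels * bitrate_gbps * distance_km / 1000.0

    print(capacity_distance_tbkm(32, 10, 30))  # 9.6  (bidirectional 32 x 10 Gb/s over 30 km)
    print(capacity_distance_tbkm(16, 10, 40))  # 6.4  (16 x 10 Gb/s over 40 km)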

Capacity upgrade with DWDM<br />

The main increase in transmission capacity is achieved by substituting the 1550-nm CWDM channel with a DWDM sub-band. The channel spacing of the DWDM channels (e.g. 100 GHz 223) and the transmission window of the CWDM multiplexer in this experiment (e.g. 12 nm) determine the number of additional wavelengths. Hence, channel counts of up to 15 can be obtained with temperature-stabilized DMLs and integrated laser electro-absorption modulators, thanks to the narrow channel spacing. Although more CWDM-DWDM transmission capacity can be obtained when migrating these new DWDM channels to 10 Gb/s, dispersion compensation (using dispersion-compensating modules or properly engineered optical fibres) can also be introduced for the DWDM channels alone, at the respective multiplexer port, to relax the requirements for new 10 Gb/s DMLs in the additional DWDM band. Filter-concatenation-induced distortion may become a severe problem in transmission.
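The channel count quoted above can be checked with a rough frequency-domain calculation: converting the 12 nm multiplexer window around 1550 nm to frequency and dividing by the 100 GHz grid spacing gives roughly 15 slots. The sketch below is illustrative only.

    # Rough check of the DWDM overlay count: a 12 nm CWDM multiplexer window around
    # 1550 nm, expressed in frequency and divided by the 100 GHz DWDM grid spacing.
    C = 299_792_458.0  # speed of light in m/s

    def dwdm_overlay_channels(window_nm=12.0, centre_nm=1550.0, spacing_ghz=100.0):
        window_hz = C * (window_nm * 1e-9) / (centre_nm * 1e-9) ** 2
        return window_hz / (spacing_ghz * 1e9)

    print(round(dwdm_overlay_channels(), 1))  # ~15.0, consistent with the count quoted above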

In previous work, a combination was shown enabling the transmission of 100-Gb/s total capacity using a mixed-bit-rate (2.5-Gb/s and 10-Gb/s) transmission system based on 16 CWDM channels and a DWDM channel overlay. This includes the direct substitution of 2.5-Gb/s lasers with 10-Gb/s lasers, the use of a 10-Gb/s externally modulated laser (EML), and a dense WDM sub-band to upgrade individual 2.5-Gb/s channels 224.

Increasing the bit rate carried over a single wavelength channel in a WDM transmission does not necessarily lead to a capacity improvement. The latter is limited by the total available bandwidth. However, it can lead to lower cost if the reduction in the number of components and in equipment footprint is not overcompensated by the cost increase of individual components at higher frequencies. Furthermore, high bit-rate transmission is extremely sensitive to various impairments such as polarization mode dispersion and non-linear effects. So, although 40 Gb/s transmission systems have been widely demonstrated in the labs, they have not been deployed commercially. (According to a very recent commercial agreement, T-Com, the fixed-network unit of Deutsche Telekom, is to use the Marconi "Multihaul 3000" platform as a building block of a 40 Gb/s core optical network. The platform is dedicated to both metro and backbone applications, can support rates as high as 3.2 Tb/s on each fibre pair, and includes OADMs 225.)

OTDM<br />

Further increases of the single-wavelength-channel bit-rate (prior to wavelength multiplexing) can be envisaged through two different techniques. The first relies simply on a continuation of the past evolution trend of high-speed electronics. It consists of ETDM multiplexing at higher bit-rates (80 Gb/s and possibly 160 Gb/s at a later stage). The second option is Optical Time Division Multiplexing (OTDM), which relies on interleaving short optical pulses. The demand on electronic speed is transferred onto the optical technology: short-pulse lasers, synchronization, OTDM demultiplexing. On the other hand, in comparison to WDM, optical signal processing (OA&D, OXC, label intelligence) is implemented on a very different basis on an OTDM infrastructure. This may lead to an earlier introduction of OTDM in the metro network, where the distances are shorter and the connectivity higher than in the backbone network 226 227. A number of OTDM demonstrations and field trials have been carried out within the FP5 programme 228 229.

223<br />

Spectral grids for WDM applications: DWDM frequency grid, on line http://www.itu.int/.<br />

224<br />

K. Iwatsuki et al., « Access and Metro Networks Based on WDM Technologies Services », J. Lightwave Technol. - Special Issue on Metro & Access Networks, Vol. 22, No. 11, pp. 2623-2630, Nov. 2004.

225<br />

“Deutsche Telekom, Marconi make 40G a reality (January 2005) - News & Analysis », on line http://fibers.org/<br />

226<br />

FP5-IST FASHION project, on line http://www.ist-optimist.org/prdc.asp?id=26<br />

227<br />

FP5-IST TOPRATE project, on line http://www.ist.optimist.org/prdc.asp ?id=28<br />

228<br />

G. Lehmann et al., « FASHION: ultraFAst Switching in HIgh-speed OTDM Networks, a 160 Gbit/s Network Field Trial Report », BroadBand Europe Conference, Bruges, Belgium, Dec. 2004, http://www.medicongress.com/broadband/


Figure 54: IST-FASHION OADM for OTDM.<br />

Figure 55: TOPRATE field trial 230 ([6]= 231 )<br />

Transmitter (Tx) and receiver (Rx) modules (Tx/Rx)

Thanks to the standardization of 10 Gb/s data link interfaces, such as 10-Gigabit Ethernet or SONET OC-192/SDH STM-64, and to several recent Multi-Source Agreements (MSAs) between vendors to design a common Tx/Rx module packaging technology, several solutions exist to drastically reduce the cost of the 10 Gb/s transceivers/transponders needed for the evolution of the metro and access network. Among the different MSA transceiver types, the XFP (small form factor pluggable) 232 transceiver is the most promising from the point of view of downsizing, cost effectiveness and pluggability. It supports all the 10G format protocols via a serial electrical interface. With a length of 67 mm, a width of 23.5 mm and a thickness of 8.5 mm, the XFP package is the smallest of the MSA packages (XENPAK, X2 and XPAK). For comparison, the area of the XFP is 6.5 times smaller than that of a standard 300-pin MSA transponder used for conventional 10 Gb/s SONET/SDH applications.
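As a quick worked comparison using only the figures above, the XFP footprint can be computed from its stated length and width, and the 300-pin MSA area then inferred from the 6.5x ratio quoted in the text (rather than from a datasheet).

    # Footprint comparison based solely on the dimensions and ratio quoted above.
    xfp_length_mm, xfp_width_mm = 67.0, 23.5
    xfp_area_mm2 = xfp_length_mm * xfp_width_mm        # ~1575 mm^2
    msa_300pin_area_mm2 = 6.5 * xfp_area_mm2           # ~10,200 mm^2, inferred from the 6.5x ratio
    print(round(xfp_area_mm2), round(msa_300pin_area_mm2))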

229 M. Schmidt, « Enabling Nx160 Gbit/s DWDM transmission with terabit/s capacity in the core network – FP5-IST project TOPRATE results », BroadBand Europe Conference, Bruges, Belgium, Dec. 2004, http://www.medicongress.com/broadband/

230 M. Schmidt, « Enabling Nx160 Gbit/s DWDM transmission with terabit/s capacity in the core network – FP5-IST project TOPRATE results », BroadBand Europe Conference, Bruges, Belgium, Dec. 2004, http://www.medicongress.com/broadband/

231 M. Schmidt et al., proc. ECOC 2004, PD paper Th4.1.2

232 XFP multi-source agreement for small form factor pluggable optical transceiver, on line http://www.xfpmsa.org


Figure 56: XFP multisource agreement 232<br />

The optical part of XFP transceivers contains a 10 Gb/s transmitter (laser source) and receiver (photodiode) mounted on low-cost optical sub-assemblies (TOSA and ROSA, respectively). As both TOSA and ROSA have an optical interface based on the LC connector, XFP modules can easily be plugged into fibres terminated with the same connector interface. The main electronic circuits included in XFP modules are the source driver, the detector pre-amplifier, and the clock and data recovery integrated circuits (CDRs). Total electrical power consumption is limited to a few watts. The 10 Gb/s pluggable electrical interface, called XFI, permits the modules to be inserted into the equipment while the system remains in service. This "hot-pluggability" simplifies and lowers the cost of upgrade and maintenance activities.

The 10G XFP modules are presently available in a short-range version (up to 300 m over multi-mode fibre) using an 850 nm VCSEL. Longer-reach modules (up to 10 km) are also available for transmission over single-mode fibres, using a 1310 nm un-cooled directly modulated DFB laser. The present cost of 10 km/10 G modules is less than $500 at a production volume of 1000 units per year. Further reduction could be achieved by using 1.3 µm VCSELs, for which error-free high-speed modulation at data rates up to 10 Gb/s and transmission through 10 km of single-mode fibre have recently been demonstrated 233.

As 10 Gb/s Ethernet data-com networks and 10 Gb/s SONET telecom networks are increasingly converging and co-existing in metro/access applications, many manufacturers are pushing the performance of XFP transceivers towards longer reach (40, 80 km) and towards wavelength stability compliant with DWDM applications.

A 1.55 µm TOSA based on the monolithic integration of an electro-absorption modulator (EAM) and a DFB laser is the key component to meet these specifications. An optical isolator and a miniature Peltier thermo-cooler have to be added to the module to minimise feedback and to cool (or semi-cool) the chip. Several manufacturers have already demonstrated the first 40 km and 80 km versions, including PIN and APD photodetectors respectively. Electrical power dissipation is less than 3.5 W for 40 km and 4 W for 80 km. The best wavelength stability is less than 0.1 nm, which is sufficient for the ITU-T WDM grid with 100 GHz spacing 234.

Progress on un-cooled emitters is very promising for further decreasing the electrical power dissipation of the module by eliminating the Peltier cooler. In that respect, a 1.3 µm directly modulated laser diode with temperature-insensitive characteristics has been fabricated using a stack of p-doped quantum dots in the active layer 235. 10 Gb/s laser operation has been maintained from 20 °C to 70 °C without current adjustments. Cooler-less operation has also been demonstrated with an integrated laser-modulator of special design allowing 10 Gb/s transmission over 50 km for temperatures ranging from 10 to 80 °C.

233 J. Cheng et al, “Efficient CW lasing and High-speed Modulation of 1.3 µm AlGaInAs VCSELs with Good High Temperature Lasing<br />

Performance” IEEE Photonic Technol. Letters, Vol. 17, n°1, pp. 7-9, 2005.<br />

234 A. Kanda et al, “10 Gbit/s small form factor optical transceiver for 40 km WDM transmission”, Electronics Letters, , Vol. 40, n° 8, 2004<br />

235 N. Hatori et al, “20 °C-70 °C Temperature Independent 10 Gb/s Operation of a Directly Modulated Laser Diode Using P-doped Quantum Dots”, Postdeadline paper, Proceedings ECOC’2004, Stockholm, Sweden, September 2004


For length extension beyond 80 km, Mach-Zehnder (MZ) modulators have to be used to minimise the transmission penalties induced by fibre dispersion. Although InP MZ modulators have excellent transmission characteristics over long distances (600 km at 10 Gb/s has been demonstrated), the InP MZ chips are too large to be included in an XFP module. In that respect, new modulator designs are required.

Finally, in the longer term, upgrading the bit-rate to 40 Gb/s could be considered in order to increase the channel capacity. Although the feasibility of a 40 Gb/s integrated laser-modulator has been demonstrated, both at 1.3 and 1.5 µm, many problems have to be solved to integrate all the high-speed components in the XFP package. Among these, the availability of high-speed electrical connectors and low-power-consumption ICs (CDRs), which have to be based on InP technology, is very challenging.

Switching nodes<br />

In the metro domain, present switching nodes are opaque electronic nodes working at the SDH level on the basis of the ETDM granularity. Exploiting wavelength multiplexing could improve flexibility and reduce cost by increasing network transparency. It relies on a number of wavelength-selective or wavelength-tunable photonic components and subsystems. The light sources may be fixed-wavelength lasers, but there is a considerable advantage in network management and inventory cost savings in using tunable lasers. The optical cross-connect (OXC) is the key building block of the WDM network. For ring and mesh topologies it would allow direct transmission of some traffic at wavelength granularity through a given node without a costly optical-to-electronic transition. Optical add-drop multiplexers (OADMs), through the wavelength routing concept, can transform a physical ring topology into any type of logical network topology (i.e. ring, mesh, or star) 236. This would enhance resource utilization. However, the inability of OXCs to perform grooming, as opposed to digital cross-connects (DXCs), may prevent their introduction in metro networks. Additional advantages of OADMs/OXCs include their smaller size compared to opaque SONET ADMs and DXCs, as well as their lower power consumption requirements at the central offices.

An OADM may consist of arrayed waveguide grating (AWG) multiplexers, wavelength routers, wavelength converters, tunable wavelength filters, etc. 237 238. The challenge is to integrate many of these subsystems on a chip in a modular system that is upgradeable. For central nodes (hubs), optical cross-connects (OXCs) can be used to distribute traffic among connected optical fibres at the wavelength or waveband level. Simplified provisioning and management will lead to a cost reduction in network operation. In order to allow remote and dynamic provisioning of wavelength channels (reduction of OPEX), relatively large optical switches are necessary. MEMS is the most mature switching technology today, and it seems to be suitable for the realization of large switches. Commercial two-dimensional MEMS modules only provide up to 16 ports. Larger three-dimensional devices have been delayed, and there are still concerns about reliability. However, being based on silicon technology, MEMS is a sound concept which will lead to sufficiently large optical circuit switches for the metro domain. The typical < 10 ms reconfiguration time of MEMS switches is sufficient for circuit switching. SOA gates have been proposed as an integrated alternative 239. Polymer technology is expected to become a central technology for WDM optical cross-connects and their building blocks. The advantages include the low cost and diversity of polymer materials (electro-optic, thermo-optic, thermally insensitive) and relatively simple fabrication techniques. In the EU projects NAIS 240 and APPTECH 241, the aim is to create low-cost subsystems on a chip (WDM transceivers, switches, tunable AWG multiplexers, etc.) by using polymer materials and, in the case of APPTECH, embossing methods for fabrication.

Practical technologies for packet switching are still at the research stage. Although architectures based on optical nodes have been thoroughly studied in the past years, optical technology has not yet proven to be sufficiently cost-attractive for vendors to take the risk of introducing them. On a more advanced level, burst or packet switching requires specific nodes with labelling facilities (recognition and switching) throughout the network. A general description of switching nodes can be found in the optical core network chapter.

236<br />

Tomkos, I.; Vogiatzis, D.; Mas, C.; Zacharopoulos, I.; Tzanakaki, A.; Varvarigos, E.;, « Metropolitan Area Optical Networks » , Circuits<br />

and Devices Magazine, IEEE , Volume 19, Issue: 4, pp. 24-30, July 2003.<br />

237<br />

FP5-IST METEOR Project, on line http://www.ist-optimist.org/prdc.asp?id=12<br />

238<br />

K. Okamoto et al, Electron. Letters, Vol. 32, n° 16, pp 1471-1472, 1996.<br />

239<br />

D. Chiaroni, “Packet switching matrix: a key element for the backbone and the metro”, J. Selected Areas in Communications, IEEE, Vol.<br />

21, n° 7, pp. 1018 - 1025, 2003<br />

240<br />

FP5-IST NAIS Project, on line http://www.ist-optimist.org/prdc.asp ?id=32<br />

241<br />

FP5-IST APPTECH Project, on line http://www.ist.optimist.org/prdc.asp?id=38<br />



<strong>A2.</strong>7.6.2 Tunable sources<br />


The solution to the flexibility requirements of the metro network lies in an enhancement of the transparency of WDM networks. This makes extensive use of wavelength tunability. The concept primarily impacts the light sources, but also the multiplexing optical components.

The wavelength-tunable light source is a key component for multiplexing, routing, conversion, dynamic provisioning and reallocation, protection and management of the WDM network, as well as for solving inventory problems. It may simply be a hot-swappable laser array, which may have switching times in the nanosecond range. The electronically tuned multi-electrode laser, e.g. based on sampled gratings, is more cost-effective, but presently has a switching time of the order of 1 µs and a tuning range of less than 100 nm. A number of configurations are examined in the FP5-IST project NEWTON 242. The external cavity laser based on quantum dashes may have a large tuning range of about 300 nm, as explored in the FP5-IST BIGBAND project 243, but usually has a switching time of the order of milliseconds. The VCSEL with a movable mirror is studied in the FP5-IST TUNVIC project 244. Unfortunately, the European tunable-laser industry has been severely hit by the slowdown of the telecom market.

<strong>A2.</strong>7.6.3 Roadmap summary for systems and components<br />

The following figures, issued from the FP5 project 245, summarize the roadmap elements for optical systems and system components.

<strong>A2.</strong>7.7 Summary<br />

The metropolitan area network presently represents a major interest for operators and equipment manufacturers. It holds a strategic position at the junction between the long-haul network and the access nodes. Triple play, Video-on-Demand, gaming and Storage Area Networks are some of the applications that are expected to drive fast-growing demand for a resilient, packet-oriented, carrier-grade metro network. The IP/ATM/SDH legacy needs to evolve toward a reduction of the protocol stack and more efficient statistical multiplexing. Innovations include the new-generation SDH protocols and the introduction of Ethernet solutions (10GbE), originally dedicated to local networks, into the MAN (often via RPR). The long-term evolution foresees the introduction of packet switching directly in the optical domain. Presently, the cost reduction required for the MAN is obtained in the physical layer by the development of coarse WDM using un-cooled sources and standardized packaged transceivers/receivers. Capacity and reach enhancements rely on the development of low-cost, high-bit-rate transmitters (10 Gb/s DMLs or 40 Gb/s ILMs) compatible with CWDM and later DWDM. Wavelength agility represents an important longer-term requirement for the full usage of WDM networking services.

242<br />

FP5-IST NEWTON Project, on line http://www.ist-optimist.org/prdc.asp?id=31<br />

243<br />

FP5-IST BIGBAND Project, on line http://www.ist.optimist.org/prdc.asp?id=45<br />

244<br />

FP5-IST TUNVIC Project, on line http://www.ist.optimist.org/prdc.asp ?id=19<br />

245<br />

FP5-IST OPTIMIST consortium: EU Photonic Roadmap, Key Issues for Optical Networking, January 2004, on line http://www.istoptimist.org/<br />


Figure 57: Photonic Systems for MAN 246<br />

Figure 58: Photonic Systems for the metropolitan network, Key issues 246<br />

246<br />

FP5-IST OPTIMIST consortium: EU Photonic Roadmap, Key Issues for Optical Networking, January 2004, on line http://www.istoptimist.org/<br />


Figure 59: Photonic components for MAN 247<br />

Figure 60: Photonic components for MAN 247<br />

247<br />

FP5-IST OPTIMIST consortium: EU Photonic Roadmap, Key Issues for Optical Networking, January 2004, on line http://www.istoptimist.org/<br />


<strong>A2.</strong>7.8 Appendix 1: The OPTIMIST roadmapping exercise for Metro network 248<br />

Two main objectives of the OPTIMIST thematic network were:

• Setting up concertation actions with the FP5 IST projects involved in the area of optical communications

• Developing an EU Roadmap for Optical Communications and Technology Trend documents.

These two main activities were carried out simultaneously and in a coordinated fashion, and in this context the consortium organized different events where the Roadmap was presented. Before the end of the project it was decided that a different kind of event was required. The idea behind the jigsaw event arose from the need for a more interactive discussion, which was quite difficult to achieve in the context of a workshop or an informal discussion. The content of the roadmap (characteristics of the real network) would be cut into jigsaw puzzle pieces that had to be put together to create the roadmap for optical communications. In this context an event was needed where many attendees other than the OPTIMIST consortium could put this 'jigsaw' together, so the event was co-organised with the ONDM 2004 conference. The puzzle pieces would concern the three different parts of the network (Access, Metropolitan Area and Wide Area Networks) and timeframes of 5, 10 and 20 years.

Figure 61<br />

Metro Network<br />

In this part of the network the discussion focused more on control issues, where the evolution was described in much detail and with specific timeframes. The transport mode (channel switching, burst switching and packet switching) was discussed in detail, and there was much debate over the exact sequence and timeframe in which they would be deployed, finally agreeing that OBS is most likely to appear in metro areas within the next 10 years, while OPS is not likely to appear before then. As far as control is concerned, it seems that the different versions of GMPLS (0, 1, 2) are likely to be established within the next 5, 10 and 20 years respectively.

248 FP5-IST OPTIMIST consortium: EU Photonic Roadmap, Jigsaw Event_minutes.pdf, Feb. 2004, on line http://www.istoptimist/pdf/workshops/ONDM/JigsawEvent_minutes.pdf<br />


As far as transmission is concerned, things seem to start a little slowly, with ETDM+WDM being the dominant modes over the next 10 years, after which CWDM will take over until the number of wavelengths starts increasing (100-1000) and DWDM becomes the dominant mode in metro area networks within the next 20 years. The technology follows these trends of network evolution and the envisaged capacity demands. Switching times evolve with the transport mode, and amplifier bandwidth follows the number of wavelength channels. PMD and dispersion compensators will soon be deployed in metro networks (5 years), while opaque switches and OEO wavelength converters will dominate the technology evolution in the 5-10 year timeframe. All-optical wavelength converters will only be needed in 10 years, and optical buffers will be required in the next 15 years.

A2.7.9 Appendix 2: FP6-IST-NOBEL scenario – Extract from NOBEL D11, 2004 249 250

A2.7.9.1 Short-term network scenario

A2.7.9.1.1 Introduction on short-term network scenario

The short-term network scenario basically reflects the situation and implementations of most major carriers today and in the very near future, so no major departures from the existing general scope of the network architecture are part of this scenario. However, many different flavours of individual architectures exist, depending on parameters like the size of the network, the amount of traffic, the mix of services, still-existing legacy equipment and, last but not least, the history (and administrative organization) of the individual carrier.

<strong>A2.</strong>7.9.1.2 Data Plane<br />

Metro network<br />

The short-term metro scenario reflects the trend towards data awareness in the transport network. Based on the SDH infrastructure, not only will the classical leased-line and TDM-based services be provided, but also the interconnection of IP routers via classical PPP mapping, and Ethernet-based layer 2 and layer 3 services via Generic Framing Procedure (GFP) mapping. GFP will be used to exploit the existing SDH infrastructure in the most efficient manner.

First implementations of native Ethernet platforms will start to compete with the transport of layer-2-based network services over the classical SDH-based infrastructure.

The capacity of the fibre infrastructure will be enhanced by the deployment of WDM metro equipment. In real-world scenarios, the start of this deployment will depend on the amount of traffic and on the situation of the existing fibre and SDH-based transport infrastructure.

<strong>A2.</strong>7.9.2 Mid-term network scenario<br />

<strong>A2.</strong>7.9.2.1 Introduction<br />

Working from application and network service requirements for the mid-term scenario, we identify seven high-priority areas in which research and development, deployment, and support are required. These areas, and their benefits, are as follows.

<strong>A2.</strong>7.9.2.2 Data Plane<br />

Core<br />

The most important data plane evolution in this scenario is that Ethernet is being deployed in access and Metropolitan Area Networks (MANs), and its extension to metro-core and core network environments is foreseen. Once Ethernet becomes the predominant layer 2 technology in the MAN, it will start to challenge Multi-Protocol Label Switching (MPLS) as the convergence layer for core networks in combination with IP.

249 FP6-IST NOBEL Integrated Project, on line http://www.ist-nobel.org<br />

250 FP6-IST NOBEL Integrated Project, Deliverable D11, restricted communications



Metro<br />

The mid-term scenario builds on the trend, started in the short-term scenario, of incorporating Ethernet switching into metro nodes to provide a more efficient architecture for the access and transport of rapidly growing Ethernet services.

Non-Ethernet payloads such as TDM, ATM/FR and IP/MPLS are adapted into Ethernet MAC frames. Therefore any incoming non-Ethernet payload behaves as Ethernet payload from a network perspective. The reverse operation is performed at the outgoing interface of the egress network node.

The Ethernet Media Access Control (MAC) layer is, in effect, one of the key (if not the sole) layer 2 technologies that is going to remain attractive in the long run. For instance, the dominant link layer used in enterprise networks is Ethernet. Since Ethernet interfaces to network equipment have so far been significantly less expensive than, for example, TDM interfaces of similar bandwidth, enterprise customers have had an incentive to deploy Ethernet interfaces towards their network service provider. As a consequence, major carriers are exploring methods to provide Ethernet interfaces and services in addition to traditional private-line TDM interfaces and services. However, traditional TDM transport does not allow statistical multiplexing of bursty packet data. Together with pure IPv4/IPv6 packet forwarding, Ethernet (or layer 2) forms one of the two fundamental data plane building blocks of any future access, metro, and backbone network. The mid-term scenario brings the benefit, on the one hand, that Ethernet enables a scalable transition to emerging packet-based services and customer Ethernet interfaces while simplifying the transport architecture; on the other hand, Ethernet leverages more efficient and cost-effective interfaces to packet switches.

<strong>A2.</strong>7.9.2.3 Control Plane<br />

Core<br />

In the mid-term scenario, Ethernet transport (PHY) is becoming the predominant layer 2 technology in Metropolitan Area Networks (MANs), paving the way to replace Multi-Protocol Label Switching (MPLS) as the convergence layer for core networks in combination with IP. Furthermore, core networks are driven by a unified GMPLS control plane capable of performing horizontally and vertically integrated operations, as explained above.

Note that, specific to the mid-term scenario, IP/MPLS and GMPLS capabilities may co-exist in the core segment.

Metro<br />

Since Layer 2 (typically Ethernet) Label Switched Path (L2-LSP) environments are driven by substantially different architectural constraints (such as meshing instead of common broadcast access segments, and larger distances), the control of the Ethernet technology must be designed such that it can face these new challenges. To achieve this goal in the mid-term scenario, it is foreseen to extend the capabilities of the ASON/GMPLS protocol suite to Ethernet and, more generically, to any layer 2 switching technology.

One approach to broadening Ethernet capabilities so as to provide the properties of a switched technology consists of enhancing the Ethernet MAC frame so that the Virtual LAN Identifier (VLAN ID) in its header provides a label semantic. This semantic enhancement of the Ethernet frame header is provided without modifying the IEEE 802.3 frame format or its header. Using this label semantic (with link-local or eventually domain-wide scope), VLAN ID swapping can then be considered and applied on any device that can process this information field. Scalability is achieved by using VLAN tag stacking, i.e. an "nQ" approach. These concepts constitute the basic elements supporting the layer 2 label switched path (L2-LSP) approach that fits the integrated transport network vision of NOBEL.
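A minimal sketch may help illustrate the VLAN-ID-as-label idea described above: each node holds a table keyed by (ingress port, incoming VLAN ID) and rewrites the VLAN ID on the way out, much as an MPLS LSR swaps labels. The ports, VLAN IDs and table entries below are hypothetical examples, not part of the NOBEL specification.

    # Hypothetical illustration of VLAN-ID swapping along an L2-LSP (not from NOBEL):
    # each entry maps (ingress_port, incoming_vlan_id) -> (egress_port, outgoing_vlan_id).
    from typing import Dict, Tuple

    label_table: Dict[Tuple[int, int], Tuple[int, int]] = {
        (1, 100): (3, 210),   # client VLAN 100 entering port 1 follows L2-LSP "210"
        (2, 100): (3, 211),   # same client VLAN on another port gets its own label
        (3, 210): (1, 100),   # reverse direction of the first L2-LSP
    }

    def forward(ingress_port: int, vlan_id: int) -> Tuple[int, int]:
        """Return (egress_port, new_vlan_id) for a frame, emulating VLAN-ID swapping."""
        try:
            return label_table[(ingress_port, vlan_id)]
        except KeyError:
            raise ValueError(f"no L2-LSP configured for port {ingress_port}, VLAN {vlan_id}")

    print(forward(1, 100))  # (3, 210)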

<strong>A2.</strong>7.9.3 Long term network scenario<br />

<strong>A2.</strong>7.9.3.1 Introduction on long term network scenario<br />

As mentioned, the evolution of transport networks is likely to be led by a few elementary drivers, i.e. evolutionary network solutions have:

• to optimize the use of resources (reducing CAPEX);<br />

• to reduce the operating costs (reducing OPEX);<br />

• to improve quality, efficiency in providing current and new services (increasing and generating new<br />

revenues).<br />


Furthermore it is widely recognized that the traffic in next generation transport networks will be progressively<br />

dominated by data. This is due to the progressive migration of many applications and services over the Internet<br />

Protocol (IP). Given that the statistical characteristics of data traffic are rather different from those of traditional<br />

voice traffic, for which TDM networks have been designed, this will pose important technical requirements.<br />

<strong>A2.</strong>7.9.3.2 Metro<br />

In the metro area there are two segments: a first segment composed of metro hub nodes (mainly responsible for traffic aggregation) and a second segment composed of metro PoPs.

The metro hubs can be L2 (Ethernet) switches or SONET/SDH/OTH ADMs and OADMs ensuring the aggregation of the traffic coming from the access areas.

The metro-core segment is composed of metro PoPs (GMPLS-capable LSRs), some of which link the metropolitan network to the IP/optics core backbone (core PoPs). These metro PoPs, processing traffic even at L3 (IP/GMPLS), also offer access to application servers and may host higher-level network functions, such as VoIP gatekeepers.

The offering of Layer 2 Ethernet services requires scalable network solutions, and Ethernet technology does not seem capable of spanning the core network. Both a connection-oriented packet-switched layer (IP) and a connection-oriented circuit-switched layer can be adopted.

<strong>A2.</strong>7.10 Appendix 3: Standards<br />

IEEE 802 LAN/MAN Standards Committee, on line http://www.ieee802.org/

IEEE 802.3 CSMA/CD (Ethernet): IEEE standard defining Ethernet protocols, on line http://www.ieee802.org/3/

IEEE 802.3an: IEEE standard defining 10 Gigabit Ethernet (10GbE), on line http://www.ieee802.org/3/

IEEE 802.17: Resilient Packet Ring (RPR), on line http://www.ieee802.org/17/

ITU-T G.707, G.708, G.709: ITU (CCITT) standards defining SDH protocols (G.707 03/96), on line http://www.itu.int/

ITU-T G.694.1 (06/02): Spectral grids for WDM applications: DWDM frequency grid, on line http://www.itu.int/

ITU-T G.694.2: ITU standard defining CWDM, on line http://www.itu.int/

ITU-T Rec. G.7041/Y.1303: Generic Framing Procedure (GFP), on line http://www.itu.int/

ITU-T Rec. G.7042/Y.1305: Link Capacity Adjustment Scheme (LCAS), on line http://www.itu.int/

XFP multi-source agreement for small form factor pluggable optical transceiver, on line http://www.xfpmsa.org

A2.7.11 Appendix 4: Related IST projects

<strong>A2.</strong>7.11.1 FP5<br />

FP5-IST APPTECH Project, on line http://www.ist-optimist.org/prdc.asp?id=38<br />

FP5-IST BIGBAND Project, on line http://www.ist-optimist.org/prdc.asp?id=45<br />

FP5-IST DAVID project, on line http://david.com.dtu.dk/<br />

FP5-IST FASHION project, on line http://www.ist-optimist.org/prdc.asp?id=26<br />

FP5-IST METEOR Project, on line http://www.ist-optimist.org/prdc.asp?id=12<br />

FP5-IST NAIS Project, on line http://www.ist-optimist.org/prdc.asp ?id=32<br />

FP5-IST NEWTON Project, on line http://www.ist-optimist.org/prdc.asp?id=31<br />

FP5-IST OPTIMIST consortium: EU Photonic Roadmap, on line http://www.ist-optimist.org/<br />


FP5-IST STOLAS project, on http://www.ist-stolas.org/<br />

FP5-IST TOPRATE project, on line http://www.ist-optimist.org/prdc.asp ?id=28<br />

FP5-IST TUNVIC Project, on line http://www.ist-optimist.org/prdc.asp ?id=19<br />

<strong>A2.</strong>7.11.2 FP6<br />

FP6-IST e-Photon/ONe NoE, on line http://e-photon-one.org/<br />

FP6-IST LASAGNA Strep, on line : http://www.ist-lasagne.org/<br />

FP6-IST NOBEL Integrated Project, on line http://www.ist-nobel.org/<br />



<strong>A2.</strong>8 OPTICAL BACKBONE<br />

<strong>A2.</strong>8.1 Introduction<br />


The backbone, national or wide area network may extend over distances of thousands of kilometers and<br />

provides an interconnection fabric for regional and metropolitan networks. In recent years considerable capacity<br />

has been installed in this network layer, so major investment is not expected in the near future. There is a trend<br />

towards reducing the number of major network nodes and building a very high capacity backbone, essentially a<br />

fabric of very high capacity pipes, with much of the processing and routing devolved to the regional and metro<br />

layers.<br />

The deployment of Wavelength Division Multiplexing (WDM) techniques and equipment in the field has provided backbone networks with high capacity and long-reach capabilities. It is in this part of the network that, in order to maximise the use of the available fibre bandwidth, the trend has been to develop systems with more WDM channels together with higher bit rates. Currently deployed systems can transmit 160 channels, each at 10 Gbit/s. In reality, however, few links use more than a handful (~20) of these channels. Although practical 40 Gbit/s systems have been developed, the economic downturn in telecoms has delayed their deployment.

The optical links are point to point and are terminated in electronic SDH/SONET switches. The SDH/SONET<br />

layer provides management of the links. The functions it provides include:<br />

• connection set-up<br />

• connection and link performance monitoring<br />

• management data communications<br />

• protection and restoration<br />

The network providers have a large investment in SDH/SONET equipment. The downturn in the<br />

telecommunications sector from 2000 onwards has restricted investment and limited expectations to more<br />

realistic horizons than those professed in the late 1990s. The implication of this is that near term investments are<br />

likely to be based mainly on SDH/SONET technology variants. Another consequence is that any new<br />

technology deployed in an existing network will necessarily have to work alongside SDH/SONET.<br />

SDH/SONET was conceived for use in voice networks. It is not ideally suited to data networks. In particular:

• circuit provision is slow: circuits can take days to plan and provision

• network churn may cause non-contiguous islands of available bandwidth to become isolated in the network. This fragments the network and reduces efficiency, in a similar way to the fragmentation of a poorly maintained hard disk.

• the way certain protocols such as Gigabit Ethernet are handled in SDH/SONET requires large contiguous capacity (large neighbouring virtual containers), which may not be available. Mapping these services into SDH/SONET frames may be a time-consuming process and capacity reservation may be difficult.

• the largest STMn frames may have transparency issues, as the standards do not require different networks to handle them in the same way. Indeed, different networks may use different primary clocks for synchronization.

As voice and data networks converge, there is pressure to upgrade SDH/SONET to overcome these problems. Next Generation SDH/SONET is being employed to provide an evolutionary upgrade to legacy infrastructure by introducing:

• virtual concatenation (VCAT), which allows services to be mapped onto several identically-sized non-contiguous low-order circuits (e.g. VC-4) rather than a large circuit (e.g. STM-64); a sizing sketch follows this list. The separate frames may even traverse the network by different routes, as they are re-assembled at a network element at the destination. This gives better utilization of the network, as it minimizes isolated unusable capacity.

• Link capacity adjustment scheme (LCAS), which also maximizes utilization. This is a signaling protocol that complements VCAT, as it allows hitless in-service addition of circuits to the VCAT group as more resource is required, i.e. it allows the group to grow. LCAS also dynamically removes failed circuits from the group and adds other circuits to maintain the overall group connection.

• Generic framing procedure (GFP), which allows efficient mapping of any service/protocol onto the virtual containers and avoids standardization delays for new services. It is a completely transparent process, has wide industry support and is already widely deployed.


• A new control and signaling plane called Automatic Switched Transport Network (ASTN) will allow real-time provisioning and tear-down of circuits

• New signaling protocols will allow automatic discovery of network elements and will allow defragmentation of the network along with automated provisioning
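The following sketch illustrates the VCAT sizing referred to in the first bullet above: given a client rate and a container type, it returns the number of members in the virtual concatenation group. The payload capacities are the usual approximate SDH figures, and the mappings shown (e.g. Gigabit Ethernet onto VC-4-7v) are given as common examples rather than taken from this report.

    # Illustrative VCAT sizing: how many identically-sized containers are needed
    # to carry a given client rate. Payload capacities are approximate SDH values.
    import math

    PAYLOAD_MBPS = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.76}

    def vcat_members(client_rate_mbps: float, container: str) -> int:
        """Size of the virtual concatenation group (the X in VC-n-Xv)."""
        return math.ceil(client_rate_mbps / PAYLOAD_MBPS[container])

    print(vcat_members(1000, "VC-4"))   # 7  -> Gigabit Ethernet over VC-4-7v
    print(vcat_members(100, "VC-12"))   # 46 -> Fast Ethernet over VC-12-46v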

Next generation SDH (NGSDH) improves the situation by increasing the capability of the edges of the network,<br />

while keeping the core of the network essentially unchanged. It allows a higher utilization of the network<br />

resources, but the core network is still managed by a centralized control scheme. This has an impact on the scalability of the network.

One approach to improve matters borrows from data networking ideas and distributes the management. This is<br />

called Multi-Protocol Label Switching (MPLS). Here the nodes have local knowledge of their neighbourhoods<br />

and switch/route incoming data streams accordingly. Carrier class functions such as restoration are also provided<br />

and managed locally. MPLS can be thought of as providing carrier class circuit equivalents over packet based<br />

networks and thus unifying data and voice networks. This is gathering traction with some carriers implementing<br />

MPLS networks 251 252 for data transmission.<br />

These techniques will improve matters and overcome near term issues in the migration of services supported by<br />

networks. In the future a more fundamental change in network design will be required to support economic<br />

dynamic networking and fast provisioning.<br />

In the intermediate future deployment of optical cross-connects would enable routing at the wavelength or<br />

wavelength-band level. The use of photonic switched express paths to reduce switching costs is facilitated by<br />

recent improvements in optical transmission, modulation and forward error correction techniques enabling<br />

longer distances to be achieved without regeneration.<br />

The investment in WDM has led to the development of better, cheaper and more stable photonic technology that<br />

has permeated into other parts of the network (e.g. metro). All the components are in place to enable WDM<br />

optical communication systems to evolve from simple point-to-point links to complex network architectures.<br />

The wavelength-routed network solution allows the signals to remain optical. Here networking will be achieved<br />

through the use of optical add/drop multiplexers (OADM) and optical cross-connect (OXC) nodes. Node<br />

designs are envisaged which will provide provisioning capabilities as well as protection and restoration in the<br />

optical layer. Generalised multi-protocol label switching (GMPLS), a control plane offering intelligence in the<br />

optical layer, is a good candidate for routing and management of the traffic demands.<br />

However the choice between optical or electrical switching technology deployments is still an open issue.<br />

Transparent optical solutions offer attractive features associated with reducing unnecessary optoelectronic<br />

conversions (cost of many transponders), and allowing transparent (bit-rate and modulation format independent)<br />

networks with reduced capital and operational costs. There is a large investment in existing electrical switch<br />

fabrics, and the SDH/SONET technology that they support. SDH/SONET also provides proven networking<br />

features, such as fast restoration that will need to be replicated in transparent optical solutions. Development of<br />

evolutionary migration paths will be critical for deployment of these technologies in existing networks.<br />

Looking further ahead, the research community has been focusing on optical packet (and burst) switching,<br />

where packets of data are statistically multiplexed, in order to offer better bandwidth utilization. This is<br />

envisaged as the ultimate IP and WDM integration. There are still many technological issues with this approach<br />

to be solved, such as the lack of optical buffering and robust fast switch fabrics. Networking questions here<br />

relate to the support of these switching technologies, whether this is to be done at backbone or regional level and<br />

the type of control plane that is required.<br />

In this report we will try to give an overview of the current transmission and optical networking (data plane and<br />

control) technology available together with the trends in optical networking research in order to support the<br />

optical backbone network evolution. Some of the information is based on previously gained experience within the IST OPTIMIST project.

251 BT Global Services: http://www.btglobalservices.com/business/global/en/about_us/our_network/index.html<br />

252 Interoute: http://www.interoute.com/<br />



<strong>A2.</strong>8.2 Roadmap for Optical Core Networks<br />


As the Internet, and now broadband, has rapidly penetrated world markets and Internet traffic has increased rapidly in the core network, there has been a paradigm shift in the telecommunications industry from voice-optimized to IP-centric networks.

Intelligent optical networks have been seen as the only way forward that will facilitate: a) the reduction of the protocol stack, leading to the IP-over-WDM network that may be able to support well-adapted architectures and protocols; b) the delivery of large-capacity links in a flexible, dynamic and reliable way; and c) the ubiquitous availability of a network that can deliver services with cost efficiency, reliability, service differentiation, minimal downtime, fast service provisioning, and adequate delay and jitter limits.

<strong>A2.</strong>8.2.1 Current status<br />

In order to support the ever-increasing growth of capacity demand, core networks rely on different multiplexing techniques. Over the years there has been an evolution from analog to digital transmission, from the Plesiochronous Digital Hierarchy (PDH) to the Synchronous Digital Hierarchy (SDH), and recently from SDH to SDH over Wavelength Division Multiplexing (WDM). Initially, digital transmission was introduced with a capacity of 2 Mbit/s (the primary multiplexers) and a granularity of 64 kbit/s 253. A next step was to improve transmission efficiency by allowing higher bitrates and introducing cross-connection, so that currently, with SDH, the granularity is 155 Mbit/s and a line capacity of 10 Gbit/s is possible.

Advances in electronic processing could not follow the traffic growth, and the next step was the use of WDM 254. Currently deployed systems are capable of capacities of 1600 Gbit/s (160 wavelengths) or even more, with a granularity of 10 Gbit/s. While the capability to use 160 wavelengths allows for capacity growth, few systems currently deploy the full capability.

Hero experiments with WDM and OTDM 255, if plotted on a graph (Figure 62), show that technology steps are typically introduced at the moment when the required capacity increase is in the range of a factor of 30 to 60. In order to fully support former technologies, the granularity of the new technology typically coincides with the link capacity of the previous technology generation.
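The scaling rule described above can be written down as a one-line projection: take the previous generation's link capacity as the new switch granularity and multiply by a factor in the 30-60 range. The sketch below assumes a factor of 32 purely to reproduce the 320 Gbit/s to roughly 10 Tbit/s step suggested by Figure 62; it is an extrapolation, not measured data.

    # Sketch of the generational scaling rule: new granularity = previous link capacity,
    # new link capacity = granularity x multiplexing factor (assumed here to be 32,
    # within the 30-60 range quoted in the text).
    def next_generation(prev_link_capacity_gbps: float, multiplexing_factor: float = 32.0):
        granularity_gbps = prev_link_capacity_gbps
        link_capacity_gbps = granularity_gbps * multiplexing_factor
        return granularity_gbps, link_capacity_gbps

    gran, link = next_generation(320.0)  # starting from a 320 Gbit/s link generation
    print(f"granularity {gran:g} Gbit/s, link capacity {link / 1000:.1f} Tbit/s")  # ~10 Tbit/s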

In today’s networks, services are aggregated, mapped onto frames and transported by the SDH/SONET layer, which is also responsible for management functions such as link and connection set-up and monitoring, protection and restoration. The SDH/SONET frames are then transported over individual optical wavelength channels. Today’s optical core networks rely on the enormous fibre transmission bandwidth, potentially larger than 25 THz, made available by single-mode fibre deployment, with WDM as the preferred multiplexing technique.

Figure 62: Logarithmic plot of link capacity and switch granularity (in Mbit/s) for successive transport technology generations (telephony, PDH, SDH, WDM, OTN); switch granularity × multiplexing factor = link capacity. Switch granularity follows the link-capacity growth with a multiplexing factor of 30 to 60. Projecting this graph forward, after the 320 Gbit/s link-capacity level, a 10 Tbit/s link capacity will be introduced with a switch granularity of 320 Gbit/s.

253<br />

64 kbit/s still forms the basic bitrate in telephony networks.<br />

254<br />

Ken-ichi Sato: Key Enabling Technologies for Future Networks, Optics and Photonics News, p. 34, May 2004

255<br />

R. DeSalvo et al., “Advanced Components and Sub-Systems Solutions for 40 Gb/s Transmission”, J. Lightwave Technol., vol. 20, pp.<br />

2154-2181, Dec. 2002<br />



<strong>A2.</strong>8.2.2 The vision<br />


The evolution of networks from voice-centric to IP-centric has implications for their design. The continual traffic growth in the core network, the unpredictability of the destination of this traffic, the burstiness of the connection durations, and network economics will dictate the evolution towards a dynamically flexible network with requirements for ever smaller connection set-up times and fine switching granularity. This network should efficiently provide high capacity, fast and flexible provisioning of links, reliability, cost-efficiency, and intelligent control and management, in a multi-vendor multi-operator environment, in order to be able to support the stringent service requirements.

It is obvious, however, that this migration will also be dictated by relative network economics. In order to support this data growth and link-capacity evolution, researchers have been envisaging and demonstrating innovative optical networking technologies. However, investors have become more skeptical after the investment peak that led to a telecom industry crisis at the beginning of the 21st century. Unrealistic business models have led many important carriers into financial difficulties, characterized by slower growth and deteriorating revenues and profits from traditional services. The reduction in revenues has not yet been compensated by newer services, and the production costs of these newer services are unnecessarily high as they are supported by circuit-oriented legacy systems.

It has become obvious that, in order to reduce CAPEX and OPEX 256, node complexity must be reduced. Reduced complexity can be achieved by so-called "delayering", which means a reduction of the protocol stack. The traditional protocol stack for IP services consists of four layers (IP over ATM over SDH over the optical layer) 257. Of course, the complexity depends very much on the way these functions are realized and organized. Another important requirement for any solution to be cost-effective is to allow a smooth evolution from the traditional SDH/SONET-based network to the target infrastructure. The existing SDH/SONET-based infrastructure represents an immense capital investment and is still providing important services. Therefore any new solution should enable the deployment of new packet services on existing lit fibre, without putting out of action or even disrupting the current SDH/SONET infrastructure and its service traffic.

Figure 63: Total capacity versus single-channel bit rate (both in Gbit/s) achieved in major experiments presented at conferences up to September 2004 (results up to 1999 and at OFC/ECOC 2000-2002). Note that there has been no significant change in hero-experiment capacity over the last two years. Two trends are evident for achieving high total capacity: WDM-based experiments that rely on ETDM, and TDM-based experiments at higher single-channel bit rates.


Figure 64 a) Optical network evolution for different levels of granularity and flexibility, as suggested in b). (Axes: granularity – circuit / burst / packet; flexibility – static / dynamic; framing; evolution over time from IP over ATM over SDH over point-to-point WDM, through circuit-switched static and dynamic λ-networking over WDM/OTN, towards burst-switched and packet-switched IP/MPLS.)

Today, IP networks are mainly transported over existing transport infrastructure, typically consisting of an SDH and/or ATM layer. WDM point-to-point systems are used to increase the capacity. The enormous growth of IP traffic will lead to an IP network directly supported over an Optical Transport Network. This evolution, shown above, can mainly be explained by: 1. The IP network will absorb the traffic engineering features of ATM. 2. Cross-connecting functionality will be absorbed by the optical layer.

Based on the above facts, the Intelligent Optical Network seems like the only solution that can provide these high capacity links in a cost-effective way. The Intelligent Optical Network is facilitated through the use of advanced photonic networking, subsystem and component technologies. Next to providing vast bandwidth, the migration from simple point-to-point links to all-optical networks seems like the only promising way to move forward and overcome the bottlenecks and limitations arising from electronic processing and opto-electronic conversions.

A2.8.2.3 Gap Analysis

Although optical networking is the only foreseeable way to accommodate all the above requirements, the exact route from the current situation to this vision is not straightforward. Service requirements will set the network parameters: transmission capacity, flexibility, granularity and supported traffic. These parameters are not completely independent; however, their temporal evolution will be mainly dictated by network economics. It is obvious that a successful network scenario for transport network evolution must be able to accommodate the cost-effective, smooth evolution of any of these four parameters and of any combination of them.

Even if we assume that the ultimate goal is to achieve a reconfigurable, scalable network that can support transparent signal routing between the nodes on demand, this is not necessarily the most cost-efficient and preferable evolution of the network. Indeed, providing capacity on demand requires that spare capacity be available in the network for much of the time. Economic considerations require minimization of the provision of capacity that is not generating income. Charging a suitable premium to customers for capacity-on-demand availability is one possible business model. To minimize cost by minimizing spare provision, carriers must maintain good knowledge of their traffic statistics.

Evidently capacity requirements will increase over time. However, the way that the capacity will grow to follow this demand will rely on the switching granularity supported by the network. In Figure 62 it is assumed that switching granularity will follow the capacity demand. So, for example, if the capacity required is 10 Tbit/s the granularity will be 320 Gbit/s. But whether this will be handled by wavelengths in bands (32×10 Gbit/s, 16×40 Gbit/s) or by ultra-high-speed OTDM (320 Gbit/s) systems will depend on technology maturity and economics. 320 Gbit/s is a huge capacity and is unlikely to be required by single users for the foreseeable future. Attention will need to be paid to strategies and locations within the networks for aggregation of individual users' streams into these "fat" pipes.

The supported traffic will also play a role in deciding if and/or when burst/packet granularity will be required. If data-intensive short-lived connections are expected, then 320 Gbit/s packets may be the finest granularity of the system. A simple numeric sketch of how such a granularity could be composed from different channel rates is given below.
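As a rough illustration of the trade-off discussed above, the sketch below (an assumption-laden Python toy, not part of the report) shows how many channels of a given line rate are needed to carry the 10 Tbit/s example capacity, and how many 320 Gbit/s switching granules that capacity contains.

    # Illustrative sketch only (not taken from the report): composing a 10 Tbit/s core
    # capacity from channels at different line rates, and grouping it into the
    # 320 Gbit/s switching granules assumed in the text.
    TOTAL_GBPS = 10_000            # 10 Tbit/s of core capacity
    GRANULE_GBPS = 320             # assumed switching granularity

    def channel_count(line_rate_gbps, total_gbps=TOTAL_GBPS):
        """Number of wavelength channels needed at a given line rate."""
        return total_gbps / line_rate_gbps

    print(f"320 Gbit/s switching granules needed: {TOTAL_GBPS / GRANULE_GBPS:g}")
    for rate in (10, 40, 320):     # banded WDM at 10G or 40G versus one ultra-fast OTDM channel
        print(f"  {rate:>3} Gbit/s line rate -> {channel_count(rate):g} channels in total")

Whether the channels are then switched individually, in bands, or as single ultra-fast OTDM streams is exactly the technology/economics choice discussed above.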

Figure 65 Gap analysis between the current situation and the vision for a transparent reconfigurable optical network: it is the relative positioning of the four parameters with respect to time, and in conjunction with cost-efficiency, that will dictate the network evolution. (Parameters shown over time: capacity – Pb/s, 10 Pb/s, 100 Pb/s; flexibility – static to dynamic; granularity – fibre, channel, sub-channel, packet; traffic – voice, voice + data, triple play.)

In Figure 66 there are two scenarios for possible network evolution, i.e. two ways of driving the core network forward. The first relies on a low bit rate and a high channel count in order to accommodate the high capacity requirements and the shift towards more dynamic networks. This network evolves from the current network and will address the capacity demand by increasing the multiplexing factor in WDM, until the capacity is ~40 Gbit/s per channel 258. Flexibility requirements will initially be addressed by over-provisioning and OADMs; however, soon opaque OXCs will be deployed. As connection duration requirements decrease together with connection set-up time demands, optical switching will appear in the form of multi-granular optical cross-connects (MG-OXC), a direct evolution from the SONET/SDH scenario. Due to the high number of channels, MG-OXCs will accommodate sufficient switching granularity. This is dictated by Figure 66, where switching granularity has to follow the capacity growth. However, the ever-increasing demand for flexibility and capacity will impose the introduction of optical burst switching and then packet switching for some of the channels.

At the same time, if capacity requirements start growing substantially and network dynamics are to be satisfied with low provisioning times, the transport network may have to turn to high-speed OTDM for cost-efficiently achieving channel bit rates well above 40 Gbit/s. This scenario is endorsed by the research community and is more challenging in terms of physical layer implementation. There are three main advantages of this approach, other than the historical evidence that increasing channel capacity is more cost effective. RZ pulses that are used for OTDM can have transmission-efficient modulation formats that can reach large spans. RZ-related formats also facilitate optical processing. The evolution towards high-speed channels will require simple processing at each node, for example optical 3R regeneration, that would be either impossible or very expensive to do electronically. The main advantage, however, is that if transparent techniques are incorporated then capacity growth can be achieved without an increase in component count. It is possible that such a scenario will be adopted together with WDM, but efficient add/drop multiplexers together with time-slot interchange techniques for high-speed time-slot switching will need to be incorporated. Finally, other all-optical functions may enable a move towards optical packet switching. However, there is still a lot of debate on how OTDM and optical packet switching might combine.

258 ETDM and NRZ



Figure 66 Two evolution scenarios for core networks. (Both plot total capacity × flexibility against time: one path runs from point-to-point WDM through OADMs for WDM, opaque OXCs and MG-OXCs to MG-OXCs with OBS/OPS; the other runs from point-to-point OTDM through OADMs for OTDM and time-slot OXCs to time slots with optical functions.)


A2.8.2.4 Key Issues for Optical Networks


The figure below illustrates the key issues for optical networks as seen by the OPTIMIST project 259.

Figure 67 Key Issues for Optical Networks 259. (GAN/WAN networking issues, grouped into issues for state-of-the-art networks (< 5 years) and for future communication systems (> 5 years): meshed, high-capacity WAN networks; optical channel routing and the digital wrapper; GAN unregenerated distances of >10 000 km with a specialised supervisory channel; management of non-linear effects; adequate delay and jitter limits; spectral efficiency in DWDM systems; fibre, waveband and wavelength routing/switching and grooming; a small number of OADMs and switch nodes in the GAN; dispersion management; capacity × length; non-regenerated span length; active PMD compensation; distributed amplification; support of optical channel and burst networking (from the MAN); high-capacity pipes; waveband routing; high aggregation level for optimised cost (bit × length); multi-protocol transport; inter-domain management; efficient multiplexing; reliability; and wavelength/waveband granularity.)

259 IST OPTIMIST consortium: EU Photonic Roadmap, Key Issues for Optical Networking, January 2004 (www.ist-optimist.org)



A2.8.3 Transmission Technology

A2.8.3.1 Deployed Technology – 10 Gbit/s WDM


Much work was done on developing 40 Gbit/s systems during the late 1990s and early 2000s by the system houses. The technical issues involved in making the step from 10 Gbit/s to 40 Gbit/s are considerably more severe than those encountered when the jump was made from 2.5 to 10 Gbit/s. In particular, management of dispersion from all sources is demanding. The cost saving available from moving to this line rate was smaller than the 40% widely reported for the previous increase to 10 Gbit/s. This issue, coupled with the telecom downturn, has so far prevented 40 Gbit/s from penetrating the transmission networks.

The majority of currently deployed systems use 10 Gbit/s as a line rate. By using the C and L bands together, each containing 80 channels, up to approximately 160 channels can be provided, giving a total capacity of 1.6 Tbit/s. Systems are available that provide an un-regenerated reach of over 4000 km. All the major suppliers compete for this market, e.g. 260 261 262, and systems have been deployed to carriers, e.g. 263. Marconi has deployed a soliton-based system with 80 × 10 Gbit/s wavelengths and an un-regenerated reach of 3000 km. Trials have demonstrated the capability of the system with 160 wavelengths, 40 Gbit/s and 4000 km reach (not all necessarily at the same time).

This said, the typical deployment for carriers falls short of this. Current (early 2005) links have maybe 20 × 10 Gbit/s channels utilized by large carriers in the core network. Although up to 160 channels may be deployed by using the upgrade path to the L-band, it is far more likely that the carrier will deploy newer systems with other improved functionality, e.g. in network management, rather than take this step.

Current optical systems are typically point-to-point links 264. The WDM multiplex in each band is periodically amplified by optical amplifiers. The separation of these amplifiers depends on the geographical layout of facilities but is typically 100 km or less. Dispersion compensation is provided on a per-band basis, usually by dispersion compensating fibre within amplifier sites. For most systems NRZ encoding is used, but some systems use other techniques, e.g. Marconi use RZ-solitons to boost un-regenerated reach 261. The links are terminated by O-E conversions.

Current and Available Technology – State of the Art 40 Gbit/s

Transmission at speeds up to 40 Gbit/s has been demonstrated in the field, whilst total fibre capacities of more than 1 Tbit/s have been deployed in the transport network. Individual channel speeds of up to 600 Gbit/s have been demonstrated in the laboratory. However, implementing 40 Gbit/s commercially is a different matter, influenced by numerous factors, mainly the demand and flexibility that these bit rates offer. The factors include industry and market conditions, directly related to mass production, as well as many technical issues. Transmission degradations at 40 Gbit/s, such as chromatic and polarisation mode dispersion, filtering, cross-talk etc., are more significant than at lower speeds. Consequently, techniques to compensate for these effects, such as efficient amplification, compensation for Polarisation Mode Dispersion (PMD) and complex Forward Error Correction (FEC) methods, need to be deployed. At these bit rates PMD and dispersion compensating methods must be tunable, and compensation must be done on a per-wavelength-channel basis.

These factors will play a pivotal role in the case for the cost efficiency of integrated systems operating at 40 Gbit/s. The step from 2.5 Gbit/s to 10 Gbit/s increased capacity by a factor of 4 and cost by only a factor of 2.5 when introduced. The transmission degradations at 40 Gbit/s, coupled with the perceived current overprovision of capacity in the backbone network, make the step to 40 Gbit/s currently less obvious. Perversely, if 40 Gbit/s begins to be deployed in the metro network because of demand, this is likely to be a strong driver for provision of 40 Gbit/s in the backbone network.

260 Nortel Network's Optera Long Haul 1600: http://www.nortelnetworks.com/products/01/optera/long_haul/1600/collateral/56020.39-1216-02.pdf
261 Marconi's Multihaul 3000: http://www.marconi.com/Home/customer_center/Products/Access/Optical%20Multservice%20Edge/Multihaul%203000/Multihaul3000_ds.pdf
262 Alcatel's 1626 Light Manager: http://www.alcatel.com/doctypes/opgdatasheet/pdf/ds1626LM1a.pdf
263 Interoute's intercity high speed network: http://www.interoute.com/networks_i21.html
264 A moderate number of OADMs, which drop a small percentage of the traffic while allowing most of the traffic to pass through remaining optical, have been deployed in the core network. They are more common in metro networks where there is less need to regenerate.




Commercial 40 Gbit/s systems are currently available, e.g. 265. At the same time, the span of transmission is of crucial importance for planning appropriate backbone network architectures. A main issue is the unregenerated span. Most current transmission systems operating at 2.5 Gbit/s or 10 Gbit/s require regeneration of the signal after four or five amplification stages. Some systems, however, allow 10 Gbit/s transmission over 4,000 km without any regeneration, e.g. 266. With present technology, 40 Gbit/s systems could reach 1,100 km by using, for example, the CS-RZ format with 100 GHz channel spacing and 80 km long fibre spans. In order to avoid regeneration these systems would need dynamic dispersion post-compensation, active gain equalisers, Raman amplification and Polarisation Mode Dispersion control. System engineering must also take account of the existing infrastructure, which may use old, non-optimum fibre types and have variable amplifier spacing.

By using wavelength division multiplexing (WDM) it is possible to transmit all information over one fibre and amplify all channels simultaneously. This is much cheaper than amplifying (and regenerating) every channel separately. WDM allows the fibre capacity to be increased above 1 Tbit/s, without the necessity to generate bit rates of more than 10 or 40 Gbit/s electrically.

Based on the well-established synchronous digital hierarchy (SDH), 10 Gbit/s and 40 Gbit/s are natural steps in increasing the single channel bit rate. But as already mentioned, whereas 10 Gbit/s systems are already in the field in commercial networks, there are still no commercial networks using a 40 Gbit/s single channel bit rate. The step from 2.5 to 10 Gbit/s was taken because one 10 Gbit/s channel is cheaper than four 2.5 Gbit/s channels. Often a factor of 2.5 is quoted for the cost comparison of a 10 Gbit/s channel with a 2.5 Gbit/s channel 267 268. So a cost reduction of roughly 40% was achieved for the same capacity, as the short calculation below illustrates.
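The per-bit saving follows directly from the two factors quoted above; the snippet below is an illustrative calculation, not taken from the report.

    # Cost-per-bit comparison for the 2.5 -> 10 Gbit/s step, using the factors in the text:
    # one 10 Gbit/s channel carries 4x the capacity of a 2.5 Gbit/s channel at ~2.5x the cost.
    capacity_factor = 10.0 / 2.5          # 4x more capacity per channel
    cost_factor = 2.5                     # quoted cost ratio, 10G channel vs. 2.5G channel
    relative_cost_per_bit = cost_factor / capacity_factor
    saving = 1.0 - relative_cost_per_bit
    print(f"Cost per transported bit: {relative_cost_per_bit:.0%} of the 2.5 Gbit/s value")
    print(f"Saving per bit: ~{saving:.0%}")   # ~38%, commonly rounded to 40%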

The implementation of higher bit rates is expected to result in cost reductions, because of the lower number of components, lower WDM channel count, smaller footprint, and lower power consumption. Increased interface reliability will result because there are fewer failures due to the reduced parts count and number of ports. Reduced operational expenses will result from the lower power, cooling and space requirements. The lower WDM channel count will also make management of the channels easier.

On the other hand, high-speed components are usually more expensive than lower-speed components. For 40 Gbit/s systems this is particularly critical, as all components and subsystems are at the borderline of current technology. Moreover, the higher bit rate signals are more sensitive to degradations such as chromatic dispersion, polarisation mode dispersion, and fibre non-linearity. It is possible to counteract these effects, but the solutions might require much more complex and expensive systems at 40 Gbit/s. The consideration of all these aspects will lead to the choice of an optimum single channel bit rate. Whether this bit rate will be 40 Gbit/s in the near future or whether it remains 10 Gbit/s is not obvious. For the transition from 2.5 Gbit/s to 10 Gbit/s the cost advantages certainly prevailed and resulted in the quoted 40% cost reduction, although dispersion management had to be introduced at 10 Gbit/s for long-haul transmission.

Even if similar cost reductions could finally be achieved for the transition from 10 to 40 Gbit/s, one could not be sure of a fast replacement of 10 Gbit/s by 40 Gbit/s systems. There is a lack of money on the network providers' side, and due to the over-provisioning situation in most of the core networks there is no urgent need to install new capacity or replace old infrastructure. The investment in the deployed infrastructure has to be amortized. The introduction of 40 Gbit/s has to be done as an evolutionary and cost-effective upgrade. The less that has to be changed in the fibre infrastructure, the better for the deployment of 40 Gbit/s. The best option would be if additional devices had to be implemented only at the edges of the networks, at the termination points of the transmission lines. For instance, it would facilitate the introduction of 40 Gbit/s systems if they could work on the deployed fibre infrastructure and if no additional length limitations compared to 10 Gbit/s systems were introduced.

The required capacity increase is often used as an argument for 40 Gbit/s. At least in principle, the total fibre capacity should be the same for WDM systems with single channel bit rates of 10 Gbit/s or 40 Gbit/s. The essential parameters for the overall capacity are the total available optical bandwidth (e.g. the width of the C (L, S) band of optical fibre amplifiers) and the spectral efficiency. The spectral efficiency is determined by the channel spacing and is given in (bit/s)/Hz.

265 Lucent's LambdaXtreme Transport: http://www.lucent.com/products/solution/0,,CTID+2021-STID+10482-SOID+100175-LOCL+1,00.html
266 Marconi's Multihaul 3000: http://www.marconi.com/Home/customer_center/Products/Access/Optical%20Multservice%20Edge/Multihaul%203000/Multihaul3000_ds.pdf
267 R. DeSalvo et al., "Advanced Components and Sub-Systems Solutions for 40 Gb/s Transmission", J. Lightwave Technol., vol. 20, pp. 2154-2181, Dec. 2002
268 B. Mikkelsen et al., "Real-world issues for high capacity and long-haul transmission at 40 Gbit/s", Proceedings ECOC 2003, Symposium "Is it the right time for 40 Gbit/s systems"




The achievable spacing is determined by the bandwidth and stability of the optical filters, and also by the width of the optical signal spectrum, which can be made narrower by using bandwidth-efficient modulation methods. No great difference in spectral efficiency between 10 Gbit/s and 40 Gbit/s can be expected.

Retrofitting 40 Gbit/s channels into an existing 10 Gbit/s WDM transmission system can nevertheless be an economically advantageous capacity upgrade scenario, if the conditions (e.g. a sufficiently narrow optical spectrum of the 40 Gbit/s signals or a coarse enough wavelength grid spacing) are properly accounted for 269. A simple capacity estimate along these lines is sketched below.
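As a rough illustration (the band width and grid spacings are assumptions, not figures from the report), the sketch below estimates total fibre capacity from the amplifier bandwidth, the channel spacing and the line rate, which is why comparable spectral efficiency gives comparable totals at 10 and 40 Gbit/s.

    # Illustrative estimate: total capacity ~ (usable amplifier band / channel spacing) * line rate.
    C_BAND_HZ = 4.4e12                     # ~35 nm C-band expressed in Hz (assumption)

    def capacity(line_rate_gbps, spacing_ghz, band_hz=C_BAND_HZ):
        """Return (channel count, total capacity in Tbit/s) for one amplifier band."""
        channels = int(band_hz / (spacing_ghz * 1e9))
        return channels, channels * line_rate_gbps / 1000.0

    for rate, spacing in ((10, 50), (40, 100), (40, 200)):
        n, total = capacity(rate, spacing)
        print(f"{rate:>2} Gbit/s on a {spacing:>3} GHz grid: {n} channels, "
              f"~{total:.1f} Tbit/s, spectral efficiency {rate / spacing:.2f} (bit/s)/Hz")

At equal spectral efficiency (e.g. 10 Gbit/s on a 50 GHz grid versus 40 Gbit/s on a 200 GHz grid) the per-band totals come out the same; only a narrower relative spacing at 40 Gbit/s increases the fibre capacity.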

Subsystems for 40 Gbit/s

The basic building blocks operating at the serial line rate are the electrical multiplexer and the optoelectronic transmitter, the optical receiver, a clock and data recovery unit, and an electrical demultiplexer. Today, all these key modules are available from several suppliers in a mature form that allows volume manufacturing and thus the promise of low cost 270.

Generation of 40 Gbit/s signals

An important difference between 10 and 40 Gbit/s systems is that at 40 Gbit/s no direct modulation of laser sources is possible, whereas at 10 Gbit/s direct modulation can be applied for links up to 40 km. For 40 Gbit/s systems external modulation is necessary, due to the speed limits and chirp characteristics of directly modulated lasers. The external modulators in common use are either LiNbO3 Mach-Zehnder modulators (LN-MZM) or electro-absorption modulators (EAM) based on InP. LN-MZMs exploit the change of the refractive index in response to an electric field (electro-optic effect), whereas EAMs utilise the change of the absorption coefficient in the presence of an electric field.

LiNbO3 modulators are used for both 10 Gbit/s and 40 Gbit/s. Because the light source (an InP-based laser) and the modulator are made of different materials, no integrated solutions are possible. This is different for EAMs, which can be monolithically integrated with DFB lasers; this is an inherently cheaper solution, because optical coupling and packaging are easier. Due to the interaction of laser and modulator, which is inevitable in this configuration, additional chirp is introduced (the frequency of the laser is different for high or low absorption of the modulator). This limits the transmission span over dispersive fibres. Currently, 10 Gbit/s integrated laser modulators are used for 40-80 km transmission. Integrated laser modulators have also been used for 40 Gbit/s transmission, but usually stand-alone EAMs in a two-fibre package are used. This may change in the future, but in EAMs the insertion loss, chirp, and extinction ratio cannot be optimised independently; they are inherently linked together. The driving voltages of the modulators are about 6 Vp-p for LN-MZMs and only 3 Vp-p for EAMs.

So in conclusion, especially for medium range applications, 10 Gbit/s systems can be implemented using direct modulation or integrated EAMs, whereas for 40 Gbit/s LN-MZMs have to be used.

Modulator drivers

A comparison of different modulators can only be performed by including the modulator driver. Naturally, it becomes more and more difficult and expensive at higher bit rates to provide a high-power drive signal. So the design of modulator drivers that are able to generate 6 Vp-p at 40 GHz is a real challenge. The bandwidth has to range from 10 kHz to 40 GHz, and a flat frequency response and linear phase are required over many decades. It is therefore very important to have modulators with driving voltages as low as possible.

Modulation formats

The achievable transmission length also depends on the applied modulation scheme. The simplest modulation is the NRZ format: it has the lowest speed requirements and needs only simple direct detection. The best modulation scheme for 40 Gbit/s in terms of performance seems to be RZ-DPSK. This modulation is almost exclusively used for the experimental demonstrations of capacity and span-length records presented at the post-deadline sessions of the recent major conferences, OFC and ECOC. Modulation formats like RZ-DPSK are efficient for achieving very long transmission lengths, but need rather complex modulator and receiver structures. RZ-DPSK offers a 5 dB sensitivity improvement and allows a 1 dB higher launch power for the same non-linear transmission penalty 270.

269 L. Ceuppens et al., "Economically Efficient Capacity upgrades with Spectrally Efficient 40 Gb/s Modulation Formats", Proceedings ECOC 2003, Symposium "Is it the right time for 40 Gbit/s Systems"
270 B. Mikkelsen et al., "Deployment of 40 Gb/s systems: Technical and cost issues", Proceedings OFC 2004, ThE6




Receivers

The same energy per bit is needed at all bit rates for the same signal-to-noise ratio, provided all other conditions are the same. This means that 6 dB more optical power has to be provided at 40 Gbit/s than at 10 Gbit/s. Forward error correction can be used to work at lower SNR values and lower powers, in order to obtain the same span lengths. One can naturally argue that one can do the same for the 10 Gbit/s system, and work at ever lower powers or achieve a longer span length using the same power. This discussion is admittedly not quite correct, because the role of nonlinear impairments should be included.
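The 6 dB figure quoted above is just the linear scaling of average received power with bit rate; a one-line check (illustrative only):

    import math
    # Same energy per bit and same required SNR => received power scales with the bit rate.
    extra_power_db = 10 * math.log10(40 / 10)   # going from 10 Gbit/s to 40 Gbit/s
    print(f"Additional optical power required: {extra_power_db:.1f} dB")   # ~6.0 dB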

The technical challenges of receivers for 40 Gbit/s are presented in great detail in the paper by DeSalvo et al. 271. The challenges of high-speed receiver design go well beyond the necessity of obtaining faster components. For instance, the interaction between the photodetector and the following preamplifier has to be carefully managed. But these challenges seem to be solvable, and receivers are commercially available from several sources.

To achieve a high receiver sensitivity, optical pre-amplification is a good option. The sensitivity is then determined by the signal-ASE beat noise of the optical amplifier and no longer by the thermal noise of the electronic pre-amplifier. Low-noise optical amplifiers are essential in this case.

Forward error correction

As already mentioned, an important prerequisite for the implementation of 40 Gbit/s systems is that the same transmission lengths can be achieved as for the 10 Gbit/s systems which are deployed now. Optical Raman amplifiers, tuneable dispersion compensation and low-PMD fibres serve to achieve this goal. Nevertheless, these measures are not sufficient to achieve sufficiently high margins. Therefore forward error correction (FEC) is used to achieve a sufficiently low bit error rate even if the signal quality (signal-to-noise ratio) is poor. FEC adds some redundancy to the signal and is then able to detect and correct bit errors. The implementation of advanced FEC algorithms results in a coding gain in excess of 8.5 dB. This gain (the same BER at a lower signal-to-noise ratio) could also be achieved with 10 Gbit/s signals, but today 10 Gbit/s systems usually operate without FEC. The aim here is not to increase the span length, but to achieve the same span length at the higher bit rate. SDH/SONET framers and FEC (G.709) devices also provide the necessary performance monitoring and error correction to realize robust transmission. However, 40 Gbit/s framers and in particular 40 Gbit/s FEC devices are not readily available at present, forcing line card suppliers to implement the FEC functions at the 10 Gbit/s level 272. The main consequence is additional chips and hence added cost. The sketch below illustrates what a coding gain of this size buys in terms of tolerable pre-FEC signal quality.
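Under the usual Gaussian-noise approximation for direct detection, BER ≈ 0.5·erfc(Q/√2), and the coding gain states by how much the required Q (in dB) may drop for the same corrected BER. The target BER and Q values below are illustrative assumptions, not figures from the report.

    import math

    def ber_from_q(q_linear):
        """Gaussian-noise approximation for on-off keying: BER = 0.5 * erfc(Q / sqrt(2))."""
        return 0.5 * math.erfc(q_linear / math.sqrt(2))

    # Assumed target: corrected BER of ~1e-15, which without FEC needs a Q of roughly 18 dB.
    q_uncoded_db = 18.0
    coding_gain_db = 8.5                        # advanced FEC coding gain quoted in the text
    q_coded_db = q_uncoded_db - coding_gain_db  # Q that now suffices at the decision circuit

    for label, q_db in (("without FEC", q_uncoded_db), ("with 8.5 dB FEC gain", q_coded_db)):
        q_lin = 10 ** (q_db / 20)               # Q is an amplitude ratio: dB = 20*log10(Q)
        print(f"{label:>22}: Q = {q_db:.1f} dB -> raw BER ~ {ber_from_q(q_lin):.1e}")
    # The ~1e-3 raw error rate in the second line is the pre-FEC channel BER that the
    # decoder then corrects back down to the ~1e-15 target.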

Transmission impairments

Chromatic dispersion

Chromatic dispersion describes the wavelength dependence of the group velocity of a light wave propagating through the fibre. As the different wavelength components of a signal travel at different speeds, dispersion results in pulse broadening. Standard single mode fibres have a dispersion of about 17 ps/km/nm. This results in a maximum transmission length (at a 1 dB receiver sensitivity penalty) of 60 km for 10 Gbit/s signals. The maximum acceptable amount of dispersion for a certain bit error rate penalty is proportional to the square of the bit time. This means the residual dispersion has to be lower by a factor of 16 for 40 Gbit/s compared to 10 Gbit/s; the short scaling example below spells this out.
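A minimal scaling sketch, anchored to the 60 km and 17 ps/km/nm figures quoted above; the 1/B² scaling is the rule stated in the text, and the printed numbers are only indicative.

    # Dispersion-limited reach scales roughly as 1/(bit rate)^2 for a fixed penalty.
    D_SMF_PS_NM_KM = 17.0     # standard single-mode fibre dispersion (from the text)
    REF_RATE_GBPS = 10.0      # reference bit rate
    REF_REACH_KM = 60.0       # ~1 dB penalty reach at 10 Gbit/s (from the text)

    def uncompensated_reach_km(bit_rate_gbps):
        """Approximate reach without dispersion compensation, scaled from the 10 Gbit/s anchor."""
        return REF_REACH_KM * (REF_RATE_GBPS / bit_rate_gbps) ** 2

    for rate in (2.5, 10.0, 40.0):
        reach = uncompensated_reach_km(rate)
        budget = D_SMF_PS_NM_KM * reach          # tolerable accumulated dispersion in ps/nm
        print(f"{rate:>5.1f} Gbit/s: ~{reach:6.1f} km uncompensated, ~{budget:6.0f} ps/nm budget")

The output (roughly 960 km at 2.5 Gbit/s, 60 km at 10 Gbit/s and only a few km at 40 Gbit/s) is consistent with the compensation requirements discussed in the rest of this section.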

There are several techniques for dispersion compensation. The use of dispersion compensating fibre (DCF) is the technique which is widely deployed in systems today. Other techniques yet to achieve mass penetration include:

• Higher order mode (HOM) fibres, which also have the capability for broadband dispersion and dispersion slope compensation.

• Chirped fibre Bragg gratings, which are among the most popular tuneable compensators now available.

• All-pass filters and virtual imaged phased arrays (VIPA). These have a periodic spectral response and allow for broadband dispersion compensation. All-pass filters with 100 GHz free spectral range (30 GHz usable bandwidth) have been built with tuning ranges of +/- 500 ps/nm. VIPAs are bulk optic devices; a tuning range of +/- 800 ps/nm has been achieved.

Dispersion management counteracts dispersion and, to some extent, fibre nonlinearities, by providing low overall dispersion while maintaining sufficiently high local dispersion. In combination with optical amplification it serves to increase the capacity of a fibre link and the span length without the need for signal regeneration.

271 R. DeSalvo et al., "Advanced Components and Sub-Systems Solutions for 40 Gb/s Transmission", J. Lightwave Technol., vol. 20, pp. 2154-2181, Dec. 2002
272 B. Mikkelsen et al., "Deployment of 40 Gb/s systems: Technical and cost issues", Proceedings OFC 2004, ThE6




Because a perfect compensation over the whole wavelength range of a WDM system is difficult (and uneconomic), there will be residual values of dispersion which can change slowly with temperature changes and component ageing. The residual dispersion tolerances become more stringent as the bit rate increases. Dispersion maps will need to be accurate to within less than 50 ps/nm at 40 Gbit/s. Tuneable dispersion compensation at the receiver is claimed to be necessary for bit rates of 40 Gbit/s and higher 272. This will provide a fine "tweak" to the static methods described above.

For flexibility reasons, tuneable dispersion compensation could complement or even partly replace fixed dispersion compensation using dispersion compensating fibre at 10 Gbit/s, where sufficiently exact fixed compensation is possible; in that case, however, no active control would have to be implemented.

In conclusion one can state: at 2.5 Gbit/s no compensation is necessary; at 10 Gbit/s a well-designed dispersion map has to be implemented; and for 40 Gbit/s the static dispersion map has to be complemented or replaced by tuneable dispersion compensation, which has to be actively adjusted in response to environmental and other changes. An important issue is whether the active control is necessary only at the end points of the transmission span, or whether the dispersion compensators along the span, which build up the dispersion map, have to be actively adjusted. The latter could be the case because dispersion management is not only an issue of dispersion but also of fibre nonlinearity. In the literature it is assumed that it is sufficient to actively control the residual dispersion only at the end points of the transmission.

Polarisation mode dispersion

Single mode fibres in practical use are actually two-mode fibres with two degenerate polarisation modes having the same propagation constant. Effects like fibre bending or stress, or a fibre core slightly deviating from a perfectly circular geometry, generate small local differences between the propagation constants of the modes (the fibre becomes birefringent), resulting in slight group velocity differences between the two modes. This polarisation mode dispersion (PMD) is the next transmission limit after dispersion has been defeated. The natural solution is to fabricate fibres where this effect is so small that it is not a limiting effect in any practical system. There has been considerable success in achieving this goal.

PMD shows up as a broadening and deforming of the transmitted optical pulses. An optical pulse with a well-defined polarisation at the fibre input is, to first order, transformed into two pulses of orthogonal polarisation with a delay (differential group delay, DGD) between them, which defines the broadening. As the total effect adds up statistically from local contributions of the fibre, the polarisation of the pulses as well as the delay depend statistically on the distribution of birefringence (amplitude and main axis) over the fibre. Small local changes of birefringence may result in drastic changes of the accumulated effect. It should be mentioned that the simple view of two delayed pulses is only valid for rather low PMD; in general the effect is much more complicated.

Statistical models have been derived in order to calculate a statistical distribution of differential group delays from a mean birefringence of the fibre 273. The statistical distribution then determines the probability that the value of the DGD is higher than a certain value. This value can be chosen in such a way that it is the limit of acceptable performance of the system (higher values define an outage), and the probability determines (under some statistical assumptions) for which part of the time (minutes in a year) the PMD has a value high enough to induce an outage. The sketch below makes this outage calculation concrete for an assumed example.
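A minimal Monte-Carlo sketch of that outage calculation, assuming the usual first-order model in which the instantaneous DGD is Maxwellian-distributed (cf. ref. 273); the mean DGD and the tolerable DGD limit used here are illustrative assumptions, not figures from the report.

    import numpy as np

    def pmd_outage_probability(mean_dgd_ps, dgd_limit_ps, n_samples=2_000_000, seed=1):
        """Estimate P(DGD > limit) for a Maxwellian DGD with the given mean (first-order PMD model)."""
        rng = np.random.default_rng(seed)
        sigma = mean_dgd_ps * np.sqrt(np.pi / 8.0)        # Maxwellian scale parameter from the mean
        dgd_ps = np.linalg.norm(rng.normal(0.0, sigma, size=(n_samples, 3)), axis=1)
        return float(np.mean(dgd_ps > dgd_limit_ps))

    # Assumed example: 40 Gbit/s NRZ (bit period 25 ps), a link mean DGD of 2.5 ps (10% of the
    # bit period) and an assumed tolerance of 7.5 ps (30% of the bit period) before an outage.
    p_out = pmd_outage_probability(mean_dgd_ps=2.5, dgd_limit_ps=7.5)
    print(f"Outage probability ~ {p_out:.1e}  (~{p_out * 365 * 24 * 60:.0f} outage minutes per year)")

With these assumed numbers the estimate comes out around 4×10⁻⁵, i.e. a few tens of outage minutes per year, which illustrates why the mean DGD must stay well below the bit period.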

From all this it is obvious that only PMD compensators with an active control scheme can counteract PMD. This means that the actual PMD has to be measured, a control signal derived, and a suitable compensator device reconfigured accordingly. Apparently, PMD compensation is not necessary for 10 Gbit/s (with the exception of systems with old fibres with especially high PMD). The major question is whether such a PMD control scheme is necessary at 40 Gbit/s. As optical realisations seem to be rather costly and complex, it would be a rather strong argument against 40 Gbit/s systems if PMD compensation were necessary for long-haul systems. An overview of optical and electronic methods of PMD compensation can be found in 273.

Measurements of PMD have been performed at Nortel 274 for a large number of standard single mode fibres and non-zero dispersion shifted fibres. There is a clear indication that fibres deployed before 1994 show much higher PMD than fibre deployed after 1994. For fibres deployed after 1994, above 96% of non-zero dispersion shifted fibres and 30% of standard single mode fibres can support transmission of 40 Gbit/s over 1000 km with negligible PMD effects. For older fibres (here only SMF and dispersion shifted fibres were investigated) only below 20% of the fibres allow transmission of 40 Gbit/s over 1000 km.

273 H. Bülow, "System outage probability due to first and second order PMD", IEEE Photon. Technol. Lett., vol. 10, 1998, pp. 696-698
274 P. Noutsios, S. Poirier, "PMD Assessment of installed fiber plant for 40 Gbit/s transmission", Proceedings NFOEC 2002, 2002, pp. 1342-1347




For old fibre infrastructure, problems might occur even at 10 Gbit/s. As the production of better fibres has proceeded in the meantime, we assume that 40 Gbit/s transmission over 1000 km will be possible over spans consisting only of new fibres. This assumption is also supported by system experiments by Suzuki et al. 275, who claim that with new fibres and proper system optimisation PMD compensation will not be necessary below 80 Gbit/s.

If it turned out that PMD compensation were necessary, this could eventually be a killer argument against 40 Gbit/s systems. As long as tuneable dispersion compensation (in contrast to dispersion compensating fibre or adjustable compensators without dynamic control) is only needed at the end points of the transmission spans, this seems to be tolerable and not a great barrier against 40 Gbit/s systems.

A2.8.3.2 Future Technology - Research Trends

Research in this area has contracted since the bubble; however, it is still vibrant, with sessions at OFC 2005 including titles such as "40Gb/s and beyond", "Modulation techniques" and "Ultra Long Haul", along with other contributory areas. Thus the transmission technologists are still researching how to send more and more bandwidth over greater distances before regeneration.

OFC 2005 highlights include:

• Spectral efficiency improvement by demodulation of 40 Gbit/s polarization-multiplexed signals with 16 GHz spacing 276

• Spectral efficiency improvement by using a super-continuum source to generate 1022 × 2.5 Gbit/s with 6.25 GHz spacing over 116 km 277

• Transmission of 18 × 40 Gbit/s over 6250 km of conventional NZ-DSF 278

• A field demonstration of a 160 Gbit/s OTDM signal over 200 km 279

• A field demonstration of 8 × 160 Gbit/s transmission over 430 km and an investigation into PMD compensation 280

• A 320 Gbit/s OTDM transmission over 80 km of fibre 281

Interestingly, there is a comment from a 2004 OFC workshop reported in JLT 282 that at bit rates above 2.5 Gbit/s the interaction between dispersion and nonlinearities becomes crucial, and the way the dispersion is managed along a link becomes as important as the dispersion itself. Thus it is likely that a move to higher and higher bit rates will conflict with the desire for all-optical networking described below. Certainly the move to higher bit rates will limit the transparent diameter of the signal.

Other themes appear to be how to take cost out of the network, with several papers replacing fibre-based dispersion compensation systems along the transmission fibre with electronic-based methods 283 284, optical phase conjugation 285 or chirped lasers 286.

275 M. Suzuki et al., "High Speed (40-160 Gbit/s) WDM Transmission in Terrestrial Networks", OFC 2003, Vol. 2 (2003), pp 741-742
276 S. Tsukamoto et al., "Coherent Demodulation of 40-Gbit/s Polarization-Multiplexed QPSK Signals with 16-GHz Spacing after 200-km Transmission", OFC 2005, Paper PDP29, 2005
277 T. Ohara et al., "Over 1000 channel, 6.25 GHz-spaced ultra-DWDM transmission with supercontinuum multi-carrier source", OFC 2005, Paper OWA6, 2005
278 J.-X. Cai et al., "Transmission of 40Gbit/s WDM Signals over 6,250 km of Conventional NZ-DSF with >4 dB FEC Margin", OFC 2005, Paper PDP26, 2005
279 T. Miyazaki et al., "Field Demonstration of 160-Gb/s OTDM Signal Using Eight 20-Gb/s 2-bit/symbol Channels over 200 km", OFC 2005, Paper OFF1, 2005
280 R. Leppla et al., "PMD Tolerance of 8x170 Gbit/s Field Transmission Experiment over 430 km SSMF with and without PMDC", OFC 2005, Paper OFF2, 2005
281 A.I. Siahlo et al., "320 Gb/s Single-polarization OTDM Transmission over 80 km Standard Transmission Fiber", OFC 2005, Paper OFF3, 2005
282 J. Livas, "Optical Transmission Evolution: From Digital to Analog to ? Network Tradeoffs Between Optical Transparency and Reduced Regeneration Cost", J. Lightwave Technology Vol. 23, pp. 219-224, January 2005
283 A.H. Gnauck et al., "Linear Microwave-Domain Dispersion Compensation of 10-Gb/s Signals using Heterodyne Detection", OFC 2005, Paper PDP31, 2005
284 D. McGhan et al., "5120 km RZ-DPSK transmission over G652 fiber at 10 Gb/s with no optical dispersion compensation", OFC 2005, Paper PDP27, 2005
285 S.L. Jansen et al., "10,200km 22x2x10Gbit/s RZ-DQPSK Dense WDM Transmission without Inline Dispersion Compensation through Optical Phase Conjugation", OFC 2005, Paper PDP28, 2005
286 S. Chandrasekhar et al., "Flexible transport at 10-Gb/s from 0 to 675km (11,500ps/nm) using a chirp-managed laser, no DCF, and a dynamically adjustable dispersion-compensating receiver", OFC 2005, Paper PDP30, 2005




Private conversations with Prof. Takis Hadjifotiou have identified the following additional research trends:

• Electronic muxing/demuxing using III-V semiconductors to achieve approximately 80 Gbit/s

• The need for optical muxing/demuxing to attain 160 Gbit/s; the use of fibre optics for muxing generates big problems of polarisation control and coherent mixing

• Pulse generation is also problematic, with mode locking being the probable technique for pulse generation

• Optical demultiplexing at the receiver needs an optical IC with optical gates (such as EAMs) and an excellent clock extraction subsystem

• The bit rate per tributary of a 160 Gbit/s (or higher) stream can be 10, 40 or even 80 Gbit/s; 10 Gbit/s is preferable because of the low cost compared to 40 Gbit/s and the straight-through connection options

• 160 Gbit/s is difficult to network because of the small tolerances required and the granularity. There is a virtual impossibility of dropping and injecting tributaries. This is in broad agreement with paragraph 2 above

• Electronic signal processing is now possible at ~10 Gbit/s and this will represent a paradigm shift in the design of optical systems.

A2.8.4 Optical Networking Technology

A2.8.4.1 Deployed Technology – Pseudo-Agile optical networks

Although field trials of all-optical networks have been made, the economic downturn and its effect on the various companies with interests in the production and utilisation of these technologies have prevented deployment. So far operators are just beginning to deploy primitive optical networking techniques.

The truly agile optical networks described in the section below require complete reconfigurability of channel provisioning. This means that they require full-band tuneable lasers, tunable channel filters, reconfigurable OADMs, OXCs, and an optical control plane. One commentator has expressed the view that this type of network is not yet deployed because of the cost and reliability of tunable lasers and tunable filters.

A compromise between a fully agile optical network and a point-to-point fixed network is offered by several companies and has already been deployed. This is an all-optical network, but with a fixed filter structure in the add/drop tree. In other words, these networks have OADMs and OXCs, but use a fixed filter structure to provide the add/drop functionality. Therefore, when adding or removing a channel, the transmitter/receiver has to be manually placed at the right access wavelength on the filter structure. The network is a static wavelength-routing network, where the path of a channel is pre-defined according to its wavelength, as opposed to a switched wavelength-routing network, where the path of a particular wavelength can be dynamically assigned. Pseudo-agile optical networks are typically deployed in metro networks, where regeneration is less likely to be an issue.

A2.8.4.2 Current and Available Technology – State of the Art Optical Cross Connects

In the near term, a key network element in the development of flexible high-capacity backbone networks is the optical cross-connect (OXC). The main function of the OXC is to provide flexible connectivity between wavelengths on different fibre ports in a wavelength-routed network. Current technology characterises an OXC as a function of the transparency of the switching technology, i.e. opaque OXCs and transparent OXCs.

The first (opaque) is a digital cross-connect equipped with optical interfaces; they are sometimes referred to as OEO (Optical-Electrical-Optical) switches and are widely available from many vendors. Opaque OXCs are either based on electrical switching technology or on optical switch fabrics surrounded by (expensive) OEO conversions. In OXCs using electrical switching, depending on the technology and architecture, sub-wavelength switching granularities can be supported, providing edge and intermediate grooming capabilities for more efficient bandwidth utilisation. Opaque OXCs also offer regeneration, wavelength conversion and bit-level monitoring.

In transparent OXCs the incoming signals are routed transparently through an optical switch fabric without the requirement for optoelectronic conversions. The switching granularity may vary and may support switching at the fibre, the wavelength band or the wavelength channel level.




Although some years ago vendors were focusing on transparent OXCs (Diamond Wave, Lucent LambdaRouter, Calient), to date more vendors offer opaque solutions (Tellium Aurora, Alcatel 1674 Lambda Gate) 287 288 289 290. Lack of standardisation is still an issue for OXC deployment; although most of the vendors announce G-MPLS compliance, many require centralised management systems to set up and tear down connections.

Today's telecommunications networks deploy wavelength division multiplexing (WDM) to interconnect discrete points of the network topology and offer high-capacity and long-reach transmission capabilities. The information is transmitted optically but transferred across the network through links that are terminated by SDH/SONET equipment forming ring and mesh network topologies 291 292 293 294 295 296. Such a network requires many costly optoelectronic (OEO) conversions and complicated network management, resulting in low scalability and slow service turn-up with high installation, operation and maintenance costs 292 293 294. Recent technology evolution gives the WDM transport layer the possibility of migrating from simple transmission links into an elaborate network providing switching, with higher manageability, lower complexity and lower cost 297 296. In such network scenarios, optical routes form connections between discrete network locations through optical add/drop and cross-connect nodes 295 296 297 and provide traffic allocation, routing and management of the optical bandwidth. They also facilitate network expansion, traffic growth, churn and network resilience. Optical cross-connects are located at nodes, cross-connecting a number of fibre pairs, and also support add and drop of local traffic, providing the interface with the service layer. To support flexible path provisioning and network resilience, OXCs will normally utilise a switch fabric to enable routing of any incoming channel to the appropriate output port and access to the local client traffic. Several designs have been proposed for achieving robust OXCs based on different switching technologies 295 298 299 300 301 302 303.

Meanwhile, rapid advances in WDM technology have brought about a tremendous growth in the size of the OXC, together with its cost, complexity and control 304 305. As DWDM technology has matured, it has brought about an expansion of the number of ports of cross-connects (optical and electronic). The number of network wavelengths has increased in order to meet capacity demands. In the future, WDM networks of up to 1000 wavelengths may be deployed to fully exploit the available fibre spectrum 297. In such a scenario an optical cross-connect for bands of wavelengths, i.e. a waveband cross-connect (WBXC), has been proposed 306 307 308.

287 Ciena's CoreDirector: http://www.ciena.com/products/coredirector/coredirector.htm
288 Cisco ONS 15454: http://www.cisco.com/en/US/products/hw/optical/index.html
289 Alcatel's Lambda Gate: http://www.alcatel.fr/products/productsummary.jhtml?relativePath=/x/opgproduct/1674.jhtml
290 Diamond Wave: http://www.calient.net/products or http://www.calient.net/solutions
291 R. Ramaswami and K. N. Sivarajan, Optical Networks, A Practical Perspective. San Francisco, CA: Morgan Kaufmann, 1998.
292 R. Ramaswami, "Using All-Optical Crossconnects in the Transport Network", (invited), WZ1-I, OFC 2001.
293 A. Tzanakaki, I. Wright and S. S. Sian, "Wavelength Routed Networks: Benefits and Design Limitations", SCI 2002, Orlando, Florida, July 2002
294 A. Banerjee, J. Drake, P. Lang and B. Turner, "Generalized Multiprotocol Label Switching: An Overview of Routing and Management Enhancements", IEEE Communications Magazine, pp 144-150, January 2001.
295 J. Lacey, "Optical cross-connect and add/drop multiplexers: technologies and applications", (Tutorial), WT1, OFC 2002.
296 A. Tzanakaki, I. Zacharopoulos and I. Tomkos, "Optical Add/Drop Multiplexers and Optical Cross-Connects for Wavelength Routed Networks", ICTON 2002
297 IST OPTIMIST consortium: Technology Trend Documents, January 2004 (www.ist-optimist.org)
298 A. Tzanakaki, I. Zacharopoulos and I. Tomkos, "Near and longer term architectural designs for OXCs/OADMs/Network topologies", Invited paper, Photonics in Switching conference, Paris, October 2003
299 G. Wilfong, B. Mikkelsen, C. Doerr, and M. Zirngibl, "WDM Cross-Connect Architectures with Reduced Complexity", J. Lightwave Technology Vol. 17, pp. 1732-1741, October 1999
300 E. Iannone and R. Sabella, "Optical Path Technologies: A Comparison Among Different Cross-Connect Architectures", J. Lightwave Technology Vol. 14, pp. 2184-2196, October 1996
301 P. B. Chu, S.-S. Lee, and S. Park, "MEMS: The path to large optical cross-connects", IEEE Commun. Mag., pp. 80-87, Mar. 2002.
302 D. J. Bishop, C. R. Giles, and G. P. Austin, "The Lucent LambdaRouter: MEMS technology of the future here today", IEEE Commun. Mag., pp. 75-79, Mar. 2002.
303 P. De Dobbelaere, K. Falta, L. Fan, S. Gloeckner, and S. Patra, "Digital MEMS for optical switching", IEEE Commun. Mag., pp. 88-95, Mar. 2002.
304 X. Cao, V. Anand, C. Qiao, "Waveband Switching in Optical Networks", IEEE Communications Magazine, April 2003, pp. 105-112
305 L. Noirie, M. Vigoureux, and E. Dotaro, "Impact of intermediate grouping on the dimensioning of multi-granularity optical networks", in Proc. OFC, 2001, p. TuG3-3.
306 K. Harada, K. Shimizu, T. Kudou, and T. Ozeki, "Hierarchical optical path cross-connect systems for large scale WDM networks", in Proc. OFC, 1999, p. WM55-3.
307 O. Gerstel, R. Ramaswami, and W. Wang, "Making use of a two stage multiplexing scheme in a WDM network", in Proc. OFC, 2000, p. ThD1-3.
308 M. Lee, J. Yu, Y. Kim, C. Kang, and J. Park, "Design of hierarchical crossconnect WDM networks employing a two-stage multiplexing scheme of waveband and wavelength", IEEE J. Select. Areas Commun., vol. 20, pp. 166-171, Jan. 2002.




In 309 310 311 a multi-granularity OXC is designed on the basis that only a fraction of the input traffic needs to be switched in any particular node.

Ideally, the features of an OXC, other than low cost, which apply to each of its stages, should include 312 313:

• Switching capability of any channel to any unused channel.

• Variable switching and add/drop percentage up to 100%.

• Dynamic reconfiguration supporting fast switching speed (~ms).

• Transparency.

• Scalable architecture in a modular fashion.

• Minimum performance degradation in terms of noise, crosstalk, filtering etc. for add, drop and switched paths, and ideally uniform for all the channels.

• Strictly non-blocking connectivity between input and output ports.

• Span and ring protection as well as mesh restoration capabilities.

• Minimum operational expenditure (minimised footprint, number of active components etc.).

• Simplified control (e.g. minimum number of active components).

Figure 68(a)-(c) show block diagrams of the three OXC variants listed in the caption below (λ-demux/λ-mux stages, arrays of receivers and transmitters, an NM×NM switch matrix, optional wavelength converters, and a management/signalling O(D)XC controller); the diagrams themselves are not reproducible from the extracted text.

Comparison of OXC technologies (Figure 68d). Columns: (1) OXC with optical switch matrix, no wavelength conversion; (2) OXC with optical switch matrix and wavelength conversion; (3) OXC with electric switch matrix.
Data rate/format transparent: (1) yes; (2) no (depends on converter); (3) no
Transparent switch matrix: (1) yes; (2) yes; (3) no
Max. switched bit rate: (1) no practical limit; (2) limited by converter (40 Gbit/s); (3) limited by switch matrix (2.5-10 Gbit/s)
Switching time: (1) 10 ms; (2) 10 ms; (3) µs
Signal regeneration: (1) no; (2) yes (depends on converter); (3) yes
Signal quality monitoring: (1) difficult; (2) yes (depends on converter); (3) yes
Drop and continue/multicast: (1) broadcast & select; (2) broadcast & select; (3) yes

Figure 68 OXC technologies: (a) Opaque OXC using an electrical switch fabric; (b) Opaque OXC using an optical switch fabric; (c) Transparent OXC (with wavelength converters); (d) Table with comparison characteristics 312

309<br />

X. Cao, Y. Xiong, V. Anand, and C. Qiao, “Wavelength band switching in multi-granular all-optical networks,” in SPIE Proc.<br />

OptiComm’02, vol. 4874, Boston, MA, 2002, pp. 198–210.<br />

310<br />

R. Izmailov, S. Ganguly, Y. Suemura, I. Nishioka, Y. Maeno, and S. Araki, “Waveband routing in optical networks,” presented at the<br />

IEEE Int. Conf. on Communications (ICC’02), New York, 2002.<br />

311<br />

X. Cao, V. Anand, C. Qiao, “A Waveband Switching Architecture and Algorithm for Dynamic Traffic”, IEEE Communications Letters, Vol. 7, No. 8, August 2003, p. 397

312<br />

IST OPTIMIST consortium: Technology Trend Documents, January 2004 (www.ist-optimist.org)<br />

313<br />

A. Tzanakaki, I. Zacharopoulos and I. Tomkos: Optical Add/Drop Multiplexers and Optical Cross-Connects for Wavelength Routed<br />

Networks, ICTON 2002<br />




Depending on the switching technology and the architecture used, OXC designs are commonly divided into opaque and transparent 312 313 314. Opaque OXCs incorporate either an electrical switch fabric (Figure 68a) or an optical one with OEO conversions, and thus costly optoelectronic interfaces (Figure 68b) 312 313. Figure 68 shows a comparison of the different architectures:

• In the first, sub-wavelength switching granularities can be supported, hence efficient bandwidth utilisation. Such nodes also offer inherent regeneration, wavelength conversion and bit-level monitoring. Multicasting is possible if required 315 316. Switching time, however, is limited to ~µs.

• If an optical switch fabric is used in an opaque OXC, signal monitoring and regeneration can still be implemented, but the solution adds complexity, bit-rate limitations and cost. The switch fabric can be optimised for wavelengths different from the transmission wavelengths 315 316. Many vendors are designing opaque OXCs 317 318 319.

• On the other hand transparent OXCs (Figure 68c) deploy an optical switch fabric without requirements for<br />

OEO conversions. All-optical solutions offer transparency to a variety of bit-rates and modulation formats.<br />

The switching granularity may vary and support switching at the fibre, the wavelength band or the<br />

wavelength channel level. Monitoring is complicated here and in-band management information is hard to<br />

add. Functions like add/drop and multicasting require special configurations. Many vendors invested in all-optical OXCs in the early 2000s, but to date only a few still offer these products 320 321.

Furthermore, transparent OXCs do not necessarily offer regeneration capabilities 315 316. This may significantly impact the scalability of the solution, as present WDM networks introduce penalties on optical signals due to amplifier noise and wavelength-dependent gain spectrum, dispersion, nonlinear effects, polarisation mode dispersion (PMD) etc. Transparent OXCs introduce impairments on the signals such as noise, crosstalk, filtering effects etc 315. To overcome these effects, partially regenerating architectures have been proposed in the literature 322. In addition, full and partial wavelength conversion 323 324 325 can also be applied to reduce wavelength blocking and facilitate wavelength re-use on a link-by-link basis 315. Wavelength conversion may take place at the input or output of the node as shown in Figure 68c. The degree of transparency, and thus the maximum bit rate, is dictated by the converter bandwidth.

A variety of optical switch fabric technologies have been proposed and developed to date 316 326 327, exhibiting different performance with respect to features like polarisation dependent loss, extinction ratio etc. Table 15 (Switching fabrics commercially available or under development until September 2004) outlines the basic characteristics of the main switch technology options 316 326. Note that 2D MEMS are digital (on-off) switches, 3D MEMS are analogue in their positioning, and waveguide switches (thermo-optical or electro-optical) have to employ configurations like interferometric structures, directional couplers or optical digital switches to build a 2×2 switch 315. The switching time of the fabric is a very significant feature and governs the switching time of the node. If tunable components, e.g. filters, are deployed, then the OXC switching time may be dictated by their tuning times 328.

314<br />

R. Ramaswami, “Using All-Optical Crossconnects in the Transport Network”, (invited), WZ1-I, OFC2001.<br />

315<br />

IST OPTIMIST consortium: Technology Trend Documents, January 2004 (www.ist-optimist.org)<br />

316<br />

A. Tzanakaki, I. Zacharopoulos and I. Tomkos: Optical Add/Drop Multiplexers and Optical Cross-Connects for Wavelength Routed<br />

Networks, ICTON 2002<br />

317<br />

Ciena’s CoreDirector http://www.ciena.com/products/coredirector/coredirector.htm<br />

318<br />

Cisco ONS 15454: http://www.cisco.com/en/US/products/hw/optical/index.html<br />

319<br />

Alcatels Lambda Gate: http://www.alcatel.fr/products/productsummary.jhtml?relativePath=/x/opgproduct/1674.jhtml<br />

320<br />

Diamond Wave: http://www.calient.net/products or http://www.calient.net/solutions<br />

321<br />

Glimmerglass [Online]. Available: http://www.glimmerglass.com<br />

322<br />

These provide the ability to selectively regenerate individual wavelength channels of degraded signal quality when forming paths that<br />

exceed the transparency distance i.e. the length an optical channel can traverse without the need for optoelectronic conversion. To<br />

achieve this, a set of regenerators that can be selectively accessed by any of the incoming wavelength channels is used<br />

323<br />

P. Öhlén, “Noise and Crosstalk Limitations in Optical Cross-Connects with Reshaping Wavelength Converters”, Journal of Lightwave<br />

Technology, vol. 17, no. 8, pp. 1294-1301, August 1999.<br />

324<br />

A. Tzanakaki, K. M. Guild, D. Simeonidou, M. J. O'Mahony, “Error-Free Transmission through 30 Cascaded Optical Cross-Connects<br />

Suitable for Dynamically Routed WDM Networks”, Electronics Letters, vol. 35, no. 20, pp. 1755-1756, Sept. 1999.<br />

325<br />

A. Tzanakaki, D. Simeonidou, K. M. Guild and M.J . O'Mahony, “Suppression of Non-linear Impairments due to Wavelength<br />

Conversion in All-Optical Transport Networks”, ECOC'99, Nice, vol. I, pp. I-408-I-409, Sept. 1999.<br />

326<br />

G. I. Papadimitriou, C. Papazoglou, C. A. S. Pomportsi,: Optical switching: switch fabrics, techniques, and architectures Lightwave<br />

Technology, Journal of , Volume: 21 , Issue: 2 , Feb. 2003 Pages:384 – 405<br />

327<br />

Mouftah, J. M. H. Elmirghani, “Photonic Switching Technology”, IEEE press, USA, 1999<br />

328<br />

A. Iocco, H. G. Limberger, R. P. Salathe, L. A. Everall,K. E. Chisholm, J. A. R. Williams, and I. Bennion, “Bragg Grating Fast Tunable<br />

Filter for Wavelength Division Multiplexing”, J. Lightwave Technology Vol. 17/5 pp. 1217 - 1221, July 1999<br />



Table 15: Switching fabrics commercially available or under development until September 2004 (only a fragment of the table survived extraction). Columns: Technology, Loss (dB), Crosstalk (dB), PDL (dB), Switching time 329, Dimension. Recovered row: Acousto-optic 330: loss 6 dB, crosstalk 35 dB, low PDL 332.



A number of OXC architectures have been reported in the literature 340 341 , depending on the number of stages that<br />

are required. The most straightforward solution is based on a central switch fabric, which can potentially support<br />

a high port count, corresponding to the single stage crossbar switch. A scaleable optical switching fabric for a<br />

very high port count OXC would be based on 3-D MEMS 342 343 344. However, most other, more mature fabric technologies are limited to significantly smaller sizes, typically 32×32. Therefore, various multistage optical switch structures

have been suggested (e.g. 345 ) and among those an interesting one is the wavelength selective (WS) switch<br />

architecture 346 347 . The principle of operation is shown in Figure 69. Another architecture is the broadcast and<br />

select 348 . The B&S architecture can offer low loss OXC solutions for nodes that support a limited number of<br />

fibers, as the loss is significantly affected by the splitting ratio of passive splitters.<br />

<strong>A2.</strong>8.4.3 Future Technology - Research Trends: Optical Packet and Burst Switching

WDM is deployed to support the ever-increasing capacity demand. Even if circuit switched optical networks are<br />

introduced, access to the optical bandwidth will still be provided with waveband/wavelength granularity. In<br />

particular, future optical networks should be able to serve a client layer that includes packet based networks with<br />

highly dynamic connection patterns such as the internet 349 350 . Data-related traffic is already taking over from<br />

voice-related traffic in the network, and due to demand burstiness a transport network based on circuit switching<br />

may not be able to offer the required flexibility 351 352 .<br />

Also, the range of future services will be very diverse in terms of required channel capacity, channel occupancy, duration, set-up time and frequency 353 354 355 356 357 358 359 360 361 362 363.

Optical packet (and burst) switching, like its electronic counterpart, has been suggested as a switching paradigm that will efficiently utilise the available fibre bandwidth by statistically multiplexing information from different sources on the same channel 351.

340<br />

A. Tzanakaki, I. Zacharopoulos and I. Tomkos: Optical Add/Drop Multiplexers and Optical Cross-Connects for Wavelength Routed<br />

Networks, ICTON 2002<br />

341<br />

G. I. Papadimitriou, C. Papazoglou, C. A. S. Pomportsi,: Optical switching: switch fabrics, techniques, and architectures Lightwave<br />

Technology, Journal of , Volume: 21 , Issue: 2 , Feb. 2003 Pages:384 – 405<br />

342<br />

P. De Dobbelaere, K. Falta, L. Fan, S. Gloeckner, and S. Patra, “Digital MEMS for optical switching,” IEEE Commun. Mag., pp. 88–95,<br />

Mar. 2002.<br />

343<br />

T.-W. Yeow, K. L. E. Law, and A. Goldenberg, “MEMS optical switches,” IEEE Commun. Mag., vol. 39, pp. 158–162, Nov. 2001.<br />

344<br />

Xuezhe Zheng et al: Three-Dimensional MEMS Photonic Cross-Connect Switch Design and Performance, IEEE Journal of Sel. Topics<br />

in Q. Electronics, VOL. 9, NO. 2, MARCH/APRIL 2003, p/ 571<br />

345<br />

C. Clos, The Bell System Technical Journal, pp 406-424, March 1953.<br />

346 Diamond Wave: http://www.calient.net/products or http://www.calient.net/solutions<br />

347 L. Gillner, C. P. Larsen and M. Gustavson, “Scalability of Optical Multiwavelength Switching Networks: Crosstalk Analysis”, Journal<br />

of Lightwave Technology, vol. 17, no 1, pp. 58-67, Jan. 1999.<br />

348 M. Sharma, M. Soulliere, A. Boskovic and L. Nederlof, “Value of Agile Transparent Optical Networks”, (invited paper), TuV1, LEOS<br />

annual meeting, 2002<br />

349 S. Dixit ed. : IP over WDM, Building the Next Generation Optical Internet, J. Willey and Sons, 2003<br />

350 Moises R. N. Ribeiro: Traffic Prioritisation in Photonic Packet Switching, PhD Thesis, University of Essex, June 2002<br />

351 IST OPTIMIST consortium: EU Photonic Roadmap, Key Issues for Optical Networking, January 2004 (www.ist-optimist.org)<br />

352 R. Inkret, A. Kuchar, B. Mikac: Advanced Infrastructure for Photonic Networks: Extended Final Report of Cost Action 266, 2003<br />

353 Blumenthal D. J., P. R. Pructnal, and J. R. Sauer, “Photonic packet switches: architecture and experimental implementations”,<br />

Proceedings of IEEE, Vol 82, No.11, Nov. 1994.<br />

354<br />

D J Blumenthal et al, “All-Optical Label Swapping Networks and Technologies”, JLT Vol 18, No 12, Dec 2000<br />

355<br />

Tucker, RS et al, “Photonic Packet Switching: An Overview”, IECE Trans Commun, Vol E82-B, Feb 1999<br />

356<br />

S Yao et alia, “Advances in Photonic Packet Switching”, IEEE Comms Magazine, Feb 2000<br />

357<br />

MJ.OMahony, D.Simeonidou, D Hunter, A Tzanakaki, "The Application of Optical Packet Switching in Future Communication<br />

Networks", IEEE Comms Magazine, pp128-135, March 2001<br />

358<br />

T. S. El-Bawab, S. Jong-Dug: Optical packet switching in core networks: between vision and reality<br />

Communications Magazine, IEEE, Volume: 40, Issue: 9, Sep 2002, Pages: 60 – 65<br />

359<br />

X. Lisong H. G. Perros, G. Rouskas,: Techniques for optical packet switching and optical burst switching Communications Magazine,<br />

IEEE , Volume: 39 , Issue: 1 , Jan. 2001, Pages:136 – 142<br />

360<br />

D. Hunter, I. Andonovic: Approaches to optical Internet packet switching Communications Magazine, IEEE, Volume: 38, Issue: 9, Sept.<br />

2000 Pages: 116 – 122<br />

361 S. Yao, B. Mukherjee, S. Dixit: Advances in photonic packet switching: an overview, Communications Magazine, IEEE , Volume: 38<br />

, Issue: 2 , Feb. 2000 Pages:84 – 94<br />

362 D. Blumenthal, P. Prucnal, J. Sauer, “Photonic packet switches-architectures and experiment implementations”, IEEE Proceedings,<br />

82, 1650-1667, November 1994<br />

363 M. Renaud eta al : Network and system concepts for optical packet switching<br />

Communications Magazine, IEEE, Volume: 35, Issue: 4, April 1997, Pages: 96 – 102<br />




As packet technology offers high granularity, it facilitates the convergence of electronic and optical technologies in IP-centric networks. Improved network economics can be achieved via efficient bandwidth utilisation but also through simplification of management and control 364.

These arguments for the use of photonic packet switching are from the network point of view 365 366. There are also important roles to be played by optics in electronic nodes that may point in the direction of photonic switching nodes. Until recently the port speed of electronic nodes had to follow the increase in transmission bit rate. After the advent of WDM, node capacity could be increased not only by upgrading the bit rate per port but also the number of wavelengths. Unfortunately this solution does not solve every problem of terabit router design, as the node dimension will possibly reach a three-figure number, which in turn brings in the problems of multi-stage interconnection (power consumption, footprint, electromagnetic interference 365). Consequently, optical parallel interconnection is already used in the majority of terabit routers as a means to interconnect the multiple stages and to reduce the number of ports 365. Once interconnections are made optically, it is reasonable to imagine an optically transparent central stage without OEO conversions and ultimately transparent photonic switching from inlets to outlets. Some router vendors have already introduced optical switching for capacity expansion 367. It is evident that this evolution scenario depends, at least, on the availability of robust, fast (i.e. ns) and preferably completely transparent optical switches and on the feasibility of contention resolution techniques 368.

The above roadmap towards optical packet switching has been discussed since the early 1990s, with demonstrations of simple architectures, e.g. 2×2 switches 369. More elaborate designs 370 followed, using fibre delays for buffering. Several European projects have carried out the most comprehensive investigations into photonic packet switching. The

ATMOS project first demonstrated feasibility of a 4×4, 2.5Gbps optical (ATM) switch with wavelength<br />

conversion 371 , 372 . The follow-up project KEOPS demonstrated optical packet switching with functions including<br />

synchronisation and regeneration 373 374 . Recently DAVID investigated the interconnection of optical packet<br />

switched MANs and WANs 373 374 375 376 377 378 . The IST project STOLAS is coming to an end with a demonstrator<br />

of an optical burst switched network 379 . Presently IST NOBEL is looking into optical packet and burst switching<br />

to identify the evolution of core and metro optical networks 380 while IST LASAGNE investigates all-optical<br />

functions for packet switched networks 381 . The Network of Excellence e-Photon ONE has dedicated<br />

workpackages in optical packet and burst switch architectures 382 . In the meantime in Europe nationally-funded<br />

projects are involved with demonstrating optical packet and burst switched networks.<br />

364<br />

M. O’Mahony: Optical Packet Switching, Short Course, ECOC 2004<br />

365<br />

Moises R. N. Ribeiro: Traffic Prioritisation in Photonic Packet Switching, PhD Thesis, University of Essex, June 2002<br />

366<br />

Q. Yang, K. Bergman: ‘Performances of the Data Vortex switch architecture under nonuniform and bursty traffic Lightwave<br />

Technology, Journal of , Volume: 20 , Issue: 8 , Aug. 2002 Pages:1242 – 1247<br />

367<br />

Chiaro [Online]. Available: www.chiaro.com<br />

368<br />

Steinar Bjornstad: Packet Switching in optical Networks, Doctoral Thesis at NTNU, 2004:101<br />

369<br />

Blumenthal D. J., K. Y. Chen, J. Ma, F. R. J., and J. R. Sauer, “Demonstration of a deflection routing 2x2 photonic switch for computer<br />

interconnects," IEEE Photonics Technology Letters, Vol. 4, No .2, pp. 169-173, February 1992<br />

370<br />

Haas Z., “The staggering switch: an electronically controlled optical packet switch”, IEEE/OSA Journal of Lightwave Technology, vol.<br />

11, No. 5/6, pp. 925 36, May/June 1993<br />

371<br />

M. Renaud eta al : Network and system concepts for optical packet switching, Communications Magazine, IEEE, Volume: 35, Issue: 4,<br />

April 1997, Pages: 96 – 102<br />

372 D. Boettle et al: ATMOS ATM Optical Switching – System Perspective, Fiber and Integrated Optics, 15, pages: 267-279, 1996<br />

373 Gambini P. et al, "Transparent optical packet switching: network architecture and demonstrator in the KEOPS project", J. on Selected<br />

Areas in Comm., vol. 16, pp.1245-1259, Sep. 1998<br />

374 Chiaroti D. et al, "Physical and logical validation of a network based on all-optical packet switching system", IEEE, Journal of<br />

Lightwave Tech. Vol. 16, pp 2117-2132, December 1998<br />

375 Jourdan A., et al “The perspective of optical packet switching in IP-dominat Backbone and Metropolitan Networks”, IEEE Comm.<br />

Magazine, pp. 136-141, March 2001<br />

376 D. Chiaroni et al: First demonstration of an asynchronous optical packet switching matrix prototype for Multi-Terabit-class<br />

routers/switches, postdeadline paper in ECOC 2001 proceedings , Oct. 2001<br />

377 Lars Dittman, Dominique Chiaroni, "DAVID-an approach towards MPLS based optical packet switching with QoS support",<br />

Photonics in Switching 2001, Paper ThD1<br />

378 L. Dittmann et al: The European IST project DAVID: a viable approach toward optical packet switching Selected Areas in<br />

Communications, IEEE Journal on , Volume: 21 , Issue: 7 , Sept. 2003 Pages:1026 – 1040]<br />

379 T Koonen et al, “Optical packet routing in IP over WDM networks deploying two level optical labelling”, ECOC 2001, Paper Th 1.2.1<br />

380 IST Nobel Project: Next Generation Optical Networks for Broadband Europe Leadership [Online]. Available: www.ist-nobel.org<br />

381 IST Lasagne Project [Online]. Available: www.ist-lasagne.org<br />

382 IST e-Photon ONE Network of Excellence. [Online]. Available: http://www.e-photon-one.org/ephotonone/servlet/photon.Generar<br />




The French ROM project studied the feasibility of an optical packet switch architecture 383, with emphasis on Quality of Service through aggregation at the edge and wavelength conversion at the core switch. The Italian RINGO and now WONDER projects are looking into packet switching in a ring MAN 384.

The German TRANSINET and MULTITERANET projects are studying the paradigm shift from circuit switched to optical burst switched networks 385. In the UK, among others, WASPNET 386 and OPSNET/OPORON 387 388 389 are two of the main projects that have demonstrated synchronous and asynchronous packet switching, respectively. Also, in 390 an OPS demonstration based on OCDM, and in 391 392 electro-optic packet switching, have been successfully achieved. Plenty of work is also focused on the different burst switching technologies 393. In Japan pioneering work has been done with FRONTIERNET 394, and more recently with OPS and OBS demonstrations, including the impressive work on code-based label switching from CRL/NICT and Osaka University 395 396 397 398 399.

Work has also been reported on photonic packet switching with electronic switching after fast serial-to-parallel conversion 400 401 402. In the USA, HORNET at Stanford 403 as well as CORD 404 405, OPERA 406 and other more recent demonstrators and

383 P. Gravey et al: Multiservice optical network: main concepts and first achievements of the ROM program, Lightwave Technology,<br />

Journal of, Volume: 19, Issue: 1, Jan 2001, Pages: 23 – 31<br />

384 R. Gaudino: RINGO: demonstration of a WDM packet network architecture for metro applications Transparent Optical Networks, 2002.<br />

Proceedings of the 2002 4th International Conference on , Volume: 1 , 21-25 April 2002 Pages:77 - 80 vol.1<br />

385 C. M. Gauger: Optimized combination of converter pools and FDL buffers for contention resolution in optical burst switching. accepted<br />

for publication in Optical Networks Magazine, 2003<br />

386<br />

Hunter D. et al., “WASPNET: a wavelength switched packet network”, IEEE Com. Magazine, pp. 120-129, Mar. 1999<br />

387<br />

D. Klonidis, C. Politi, M. O'Mahony, D. Simeonidou: Fast and widely tunable optical packet switching scheme based on tunable laser<br />

and dual-pump four-wave mixing Photonics Technology Letters, IEEE , Volume: 16 , Issue: 5 , May 2004, Pages:1412 – 1414<br />

388<br />

R. Nejabati et al: Demonstration of Ingress Edge IP-packet packet router in wavelength routed optical packet switched networks, Th.<br />

1.6.3, ECOC 2004<br />

389<br />

D. Klonidis et al: ‘Demonstration of a Fully Functional and Controlled Asynchronous Optical Packet Switch at 40Gb/s, Post Deadline<br />

Paper, ECOC 2004, Th. 4.5.5<br />

390 P.C.Teh, B.C.Thomsen, M.Ibsen, D.J.Richardson : Multi-wavelength (40 WDM X 10 Gbit/s) optical packet router based on<br />

superstructure fibre Bragg gratings, IEICE Transaction: Special Issue on Recent Progress in Optoelectronics and Communication 2003<br />

Vol.E86B(5) pp.1487-92 (Invited)<br />

391 S Yu, M Owen, R Varrazza, RV Penty and IH White, 'Demonstration of high-speed optical packet routing using vertical coupler<br />

crosspoint space switch array', Electron. Lett., Vol.36, No.6, pp.556-558, 2000<br />

392 R. Varrazza, I. B. Djordjevic, Y. Siyuan: Active vertical-coupler-based optical crosspoint switch matrix for optical packet-switching<br />

applications, Lightwave Technology, Journal of , Volume: 22 , Issue: 9 , Sept. 2004, Pages:2034 - 2042<br />

393 M. Duser,P. Bayvel: Analysis of a dynamically wavelength-routed optical burst switched network architecture, Lightwave Technology,<br />

Journal of , Volume: 20 , Issue: 4 , April 2002, Pages:574 – 585<br />

394 Yamada Y., et al., “Optical output buffered ATM switch prototype based on FRONTIERNET architecture”, IEEE Selected Areas in<br />

Communications, Vol. 16, No. 7, pp. 1298-1308, Sept. 1998<br />

395 N.Wada, H.Harai, W.Chujo, and F.Kubota, "80G bit/s variable rate photonic packet routing based on multi-wavelength label switch,"<br />

27th European Conference on Optical Communication(ECOC2001)(Amsterdam, The Netherlands), vol. 3, no. We-B-2-3, pp. 308-309,<br />

October 2001.<br />

396 N.Wada, W.Chujo, and K.Kitayama. "1.28 Tbit/s (160 Gbit/s x 8 wavelengths) throughput variable length packet switching using optical<br />

code based label switch," ECOC2001, (Amsterdam, The Netherlands), vol.6, no. PD-A-1-9, pp.62-63, October 2001<br />

397 N.Wada, H.Harai, W.Chujo, and F.Kubota, "Multi-hop, 40Gbit/s variable length photonic packet routing based on multi-wavelength<br />

label switching, waveband routing, and label swapping," Optical Fiber Communications Conference and Exhibit 2002 (OFC2002), vol.<br />

OFC2002 Technical Degest, no. WG3, pp. 216-217, March 2002<br />

398 N. Wada, R. Takemori, N. Kataoka, F. Kubota and K. Kitayama, "200G-Chip/s, 128-Chip Hierarchical Optical BPSK Labels Processing<br />

and its Networking Application,"29th European Conference on Optical Communication (ECOC 2003), vol. 2, Tu4.4.2, pp. 304-305,<br />

September 2003.<br />

399 N. Wada, H. Harai and F. Kubota, "40Gbit/s, multi-hop optical packet routing using optical code label processing based packet switch<br />

prototype," Optical Fiber Communication Conference 2004 (OFC 2004), FO7, FO-62 - FO-64, February 2004.<br />

400 T. Nakahara, R. Takahashi,H. Takenouchi, H. Suzuki: Optical single-clock-pulse generator using a photoconductive sample-and-hold<br />

circuit for processing ultrafast asynchronous optical packets, Photonics Technology Letters, IEEE , Volume: 14 , Issue: 11 , Nov. 2002<br />

Pages:1623 – 1625<br />




testbeds at UC Santa Barbara with CISCO 407 408 409 and UC Davis 410 411 412 have provided important results on the implementation of optical packet switched networks.

The remainder of this section summarises the design considerations for a core optical packet switched network and the technology approaches related to packet format, switch architectures and contention resolution mechanisms, drawn from the literature and from the efforts of the projects mentioned above.

Design Considerations for a Core Packet Switch Network:<br />

Packet Generator: This function is related to many issues like the packet format, synchronous or asynchronous<br />

network and aggregation or fragmentation. Many aggregation methods have been proposed, e.g. 413 414. In 415 IP packets are fragmented to fill packet slots. In Figure 70 an abstract model shows a simple design of the packet

generator 413 . Different packet formats are discussed below along with the complexity of the header generator,<br />

e.g. SCM headers add complexity to the generator 416 417 .<br />
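As a rough illustration of the chain in Figure 70, the sketch below models the aggregation unit, wavelength table, packet construction and header assignment in plain Python; the class names, the 4000-byte aggregation threshold and the wavelength table entries are illustrative assumptions, not taken from the cited demonstrations.

```python
# Minimal sketch of the edge packet generator of Figure 70 (illustrative only):
# client IP packets are aggregated per destination, a header is assigned from
# a wavelength table, and the optical packet is handed to a transmitter.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OpticalPacket:
    destination: str
    wavelength: str
    header: Dict[str, object]
    payload: bytes

@dataclass
class PacketGenerator:
    wavelength_table: Dict[str, str]            # destination -> wavelength (assumed static)
    max_payload: int = 4000                     # aggregation threshold in bytes (assumed)
    buffers: Dict[str, List[bytes]] = field(default_factory=dict)

    def aggregate(self, destination: str, ip_packet: bytes):
        """Buffer client packets per destination; emit a packet when the threshold is reached."""
        queue = self.buffers.setdefault(destination, [])
        queue.append(ip_packet)
        if sum(len(p) for p in queue) >= self.max_payload:
            return self._construct(destination)
        return None

    def _construct(self, destination: str) -> OpticalPacket:
        payload = b"".join(self.buffers.pop(destination))
        header = {"dest": destination, "length": len(payload), "priority": 0}
        return OpticalPacket(destination, self.wavelength_table[destination], header, payload)

if __name__ == "__main__":
    gen = PacketGenerator(wavelength_table={"nodeB": "lambda_3"})
    for _ in range(4):
        pkt = gen.aggregate("nodeB", b"x" * 1200)   # 1200-byte IP packets (assumed)
    print(pkt.wavelength, pkt.header)
```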

401<br />

K. Takahata, T. Nakahara, H. Takenouchi, R. Takahashi, H. Suzuki: Electrical parallel-to-serial converter using MSM-PDs and<br />

application to bypass/drop self-routing, Electronics Letters, Volume: 39, Issue: 1, 9 Jan. 2003, Pages:105 – 107<br />

402<br />

R. Takahashi, T. Yasui, N. Kondo, H. Suzuki: 40 Gbit/s 16-bit label recognition using planar-lightwave-circuit-based all-optical serialto-parallel<br />

converter, Electronics Letters, Volume: 39, Issue: 15, 24 July 2003, Pages: 1135 – 1136<br />

403<br />

Wonglumsom et alia, “Experimental Demonstration of an Access Point for HORNET-A Packet Over WDM Multiple Access MAN”,<br />

JLT Vol 18, No 12, Dec 2000<br />

404<br />

I. Chlamtac et al, “Cord: contention resolution by delay lines”, IEEE Journal of Selected Areas in Communications, 14, 1014-1029, June<br />

1996<br />

405 R. Cruz, J. T Tsai, “Cord: alternative architectures for high speed packet switching”, IEEE/ACM Transactions on Networking, 4, 11-21,<br />

February 1996<br />

406 A.Carena, M.D.Vaughn, R.Gaudino, M.Shell, D.J.Blumenthal, "OPERA: An Optical Packet Experimental Routing Architecture with<br />

Label Swapping Capability" , JLT, Vol16,No12, Dec1998<br />

407<br />

H. N. Poulsen et al: Demonstration of a true layer-3 IP to optical label switching adaptation layer with rapid ingress wave-length tuning<br />

and variable length packets, Photonics in Switching 2003, PS.Mo.B4, Versailles 2003<br />

408<br />

Wei Wang, L. Rau and D. J. Blumenthal: All-Optical Label Switching/Swapping of 160 Gbps Variable Length Packets and 10 Gbps<br />

Labels using a WDM Raman Enhanced-XPM Fiber Wavelength Converter with Unicast/Multicast Operation, Post Deadline, PDP8,<br />

OFC 2004.<br />

409<br />

S. Rangarajan, et al: ‘All-optical contention resolution with wavelength conversion for asynchronous variable-length 40 Gb/s optical<br />

packets, Photonics Technology Letters, Volume: 16/ 2, Feb. 2004 , Pages:689 – 691<br />

410<br />

Min Yong Jeon et al: Demonstration of All-Optical Packet Switching Routers With Optical Label Swapping and 2R Regeneration for<br />

Scalable Optical Label Switching Network Applications, J. Lightwave Tech. Vol. 21, No. 11, Nov. 2003 p.2723<br />

411<br />

L. Tancevski,A. Bononi, A.L. A. Rusch,: Output power and SNR swings in cascades of EDFAs for circuit- and packet-switched<br />

optical networks, Lightwave Technology, Journal of , Volume: 17 , Issue: 5 , May 1999 Pages:733 – 742<br />

412<br />

R. T. Hofmeuster et al: CORD: Optical Packet Switched Network TestBed, Fiber and Integrated Optics, 16, p:199-219, 1997<br />

413<br />

R. Nejabati et al: Demonstration of Ingress Edge IP-packet packet router in wavelength routed optical packet switched networks, Th.<br />

1.6.3, ECOC 2004<br />

414<br />

H. N. Poulsen et al: Demonstration of a true layer-3 IP to optical label switching adaptation layer with rapid ingress wave-length tuning<br />

and variable length packets, Photonics in Switching 2003, PS.Mo.B4, Versailles 2003<br />

415<br />

Lars Dittman, Dominique Chiaroni, "DAVID-an approach towards MPLS based optical packet switching with QoS support",<br />

Photonics in Switching 2001, Paper ThD1<br />

416<br />

I. Chlamtac et al, “Cord: contention resolution by delay lines”, IEEE Journal of Selected Areas in Communications, 14, 1014-1029, June<br />

1996<br />

417 S. J. Ben Yoo: High-Performance Optical-Label Switching Packet Routers and Smart Edge Routers for the Next-Generation Internet<br />

Journal of Sel Areas in Communications, VOL. 21,/ 7, September 2003 p. 1041<br />



Figure 70 Optical packet generator (block diagram: IP packets enter an aggregation unit; packet construction uses a wavelength table and header assignment; the transmitter launches the resulting optical packet on λi).

Packet Transmission: Optical fibre transmission is an analogue process. The signal quality degrades due to<br />

nonlinearities, chromatic dispersion, polarization mode dispersion and EDFA noise accumulation 418 . When the<br />

transmission of data packets is performed, issues like power and phase fluctuations on a packet - by - packet<br />

basis arise (Figure 71). Non-linearities (FWM, XPM) are affected by the peak power, so their effects depend on<br />

the variable number of simultaneous packets. Packets arriving from different sources are also degraded to different degrees. Additionally, amplifiers suffer from dynamic effects. The burstiness of the traffic impacts both the signal power and the noise properties, which change on a packet level 419. Finally, depending on the packet format, crosstalk effects may arise 420 421. Thus it is clear that unless proper measures are taken (e.g. periodic regeneration), effects at the packet level may affect the quality of the data within the packet.

418<br />

IST OPTIMIST consortium: EU Photonic Roadmap, Key Issues for Optical Networking, January 2004 (www.ist-optimist.org)<br />

419<br />

L. Tancevski,A. Bononi, A.L. A. Rusch,: Output power and SNR swings in cascades of EDFAs for circuit- and packet-switched<br />

optical networks, Lightwave Technology, Journal of , Volume: 17 , Issue: 5 , May 1999 Pages:733 – 742<br />

420<br />

I. Chlamtac et al, “Cord: contention resolution by delay lines”, IEEE Journal of Selected Areas in Communications, 14, 1014-1029, June<br />

1996<br />

421<br />

N.Wada, H.Harai, W.Chujo, and F.Kubota, "80G bit/s variable rate photonic packet routing based on multi-wavelength label switch,"<br />

27th European Conference on Optical Communication(ECOC2001)(Amsterdam, The Netherlands), vol. 3, no. We-B-2-3, pp. 308-309,<br />

October 2001.



Figure 71 Packet transmission (illustration of packet-by-packet power fluctuations, per-packet clock recovery and phase jumps between successive packets).

Packet forwarding and switching: In Figure 72 the main functions of an optical packet switch node 422 423 are illustrated. Packets from different fibres are demultiplexed at the input and arrive at an input-processing unit. The main functions that should be performed in the input-processing unit of a node are: header extraction 424 425 (and deletion), and, if required, synchronisation 426 427 428 429 430 and power equalisation 431 432, or possibly 2R or 3R regeneration. The header, according to the information it carries and after it is compared with the routing table information, sets the control, and the switch fabric is configured so that the input packet is directed to the requested output port 422 433. Packets then go through the output-processing unit, where functions like header reinsertion 433 and possibly re-synchronisation, wavelength conversion, 2R or 3R regeneration 434 435 or power equalisation take place on a packet-by-packet basis. Contention resolution should be performed at the same time 436.
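The per-packet forwarding step described above can be summarised in a few lines of Python; the routing-table entries and field names are assumptions for illustration, and a real node performs the lookup and fabric configuration in hardware within the packet guard time.

```python
# Sketch of the per-packet control flow of Figure 72 (illustrative only):
# extract the header, look it up in the routing table, decide the output port
# the switch fabric should be configured for, then reinsert an updated header.

ROUTING_TABLE = {                     # destination label -> output port (assumed entries)
    "nodeB": 0,
    "nodeC": 1,
}

def switch_packet(optical_packet: dict) -> dict:
    header = optical_packet["header"]            # header extraction (and deletion)
    payload = optical_packet["payload"]
    out_port = ROUTING_TABLE.get(header["dest"]) # forwarding decision
    if out_port is None:
        raise ValueError("no route: the packet would be dropped or deflected")
    # The electronic controller would now configure the fabric so that the input
    # port is connected to out_port for the packet duration; here we only record it.
    new_header = dict(header, hops=header.get("hops", 0) + 1)  # header update/reinsertion
    return {"port": out_port, "header": new_header, "payload": payload}

if __name__ == "__main__":
    pkt = {"header": {"dest": "nodeC", "hops": 0}, "payload": b"\x00" * 500}
    print(switch_packet(pkt))
```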

422 IST OPTIMIST consortium: EU Photonic Roadmap, Key Issues for Optical Networking, January 2004 (www.ist-optimist.org)<br />

423 Gambini P. et al, "Transparent optical packet switching: network architecture and demonstrator in the KEOPS project", J. on Selected<br />

Areas in Comm., vol. 16, pp.1245-1259, Sep. 1998.<br />

424<br />

Hunter D. et al., “WASPNET: a wavelength switched packet network”, IEEE Com. Magazine, pp. 120-129, Mar. 1999<br />

425<br />

Chiaroti D. et al, "Physical and logical validation of a network based on all-optical packet switching system", IEEE, Journal of<br />

Lightwave Tech. Vol. 16, pp 2117-2132, December 1998<br />

426<br />

Gambini P. et al, "Transparent optical packet switching: network architecture and demonstrator in the KEOPS project", J. on Selected<br />

Areas in Comm., vol. 16, pp.1245-1259, Sep. 1998.<br />

427<br />

C. Qiao, M. Yoo, “Optical burst switching (OBS)-A new paradigm for an optical Internet”, Journal of High Speed Networks, 8(1), 69-<br />

84, January 1999<br />

428<br />

R. T. Hofmeuster et al: CORD: Optical Packet Switched Network TestBed, Fiber and Integrated Optics, 16, p:199-219, 1997<br />

429<br />

D. Boettle et al: ATMOS ATM Optical Switching – System Perspective, Fiber and Integrated Optics, 15, pages: 267-279, 1996<br />

430<br />

A.Carena, M.D.Vaughn, R.Gaudino, M.Shell, D.J.Blumenthal, "OPERA: An Optical Packet Experimental Routing Architecture with<br />

Label Swapping Capability" , JLT, Vol16,No12, Dec1998<br />

431<br />

M. O’Mahony: Optical Packet Switching, Short Course, ECOC 2004<br />

432<br />

I. Chlamtac et al, “Cord: contention resolution by delay lines”, IEEE Journal of Selected Areas in Communications, 14, 1014-1029, June<br />

1996<br />

433 R. Inkret, A. Kuchar, B. Mikac: Advanced Infrastructure for Photonic Networks: Extended Final Report of Cost Action 266, 2003<br />

434 L. Dittmann et al: The European IST project DAVID: a viable approach toward optical packet switching Selected Areas in<br />

Communications, IEEE Journal on , Volume: 21 , Issue: 7 , Sept. 2003 Pages:1026 – 1040]<br />

435 Zhong Pan et al: Packet-by-Packet Wavelength, Time, Space-Domain Contention Resolution in an Optical-Label Switching Router With<br />

2R Regeneration, Photon. Techn. Letters, VOL. 15, NO. 9, Sept. 2003 p. 1312<br />

436 Hunter D. et al., “WASPNET: a wavelength switched packet network”, IEEE Com. Magazine, pp. 120-129, Mar. 1999<br />



Figure 72 Optical packet switch functions (block diagram: N input wavelength channels; input processing with header extraction; electronic control with routing table; contention resolution; switching; output processing; N output wavelength channels).

Figure 73 Optical Packet Receiver (block diagram: packet detection; power equalisation/threshold acquisition; data and clock recovery unit; payload segregation unit delivering IP packets).

Packet Receiver 437 438 439: Many burst-mode receivers have been proposed for asynchronous and synchronous networks 440 441 442. Figure 73 shows one possible design; major functionalities such as power equalisation and/or threshold acquisition and clock extraction can be performed either electronically or optically.

437<br />

T. Nakahara, R. Takahashi,H. Takenouchi, H. Suzuki: Optical single-clock-pulse generator using a photoconductive sample-and-hold<br />

circuit for processing ultrafast asynchronous optical packets, Photonics Technology Letters, IEEE , Volume: 14 , Issue: 11 , Nov. 2002<br />

Pages:1623 – 1625<br />

438<br />

H. Nishizawa et al.; NTT Network Innovation Laboratories; “Packet-by-packet power-fluctuation and packet-arrival timing-jitter<br />

tolerance of a 10-Gbit/s burst-mode optical packet receiver”, ECOC’2000, Proc. Vol. 4; pp 75-77<br />

439<br />

D. Chiaroni et al. First demonstration of an asynchronous optical packet switching matrix prototype for Multi Terabit-class<br />

routers/switches, ECOC’2001, Proc. Vol. 6 Post-Deadline Papers, pp 60-61<br />

440<br />

A.Carena, M.D.Vaughn, R.Gaudino, M.Shell, D.J.Blumenthal, "OPERA: An Optical Packet Experimental Routing Architecture with<br />

Label Swapping Capability" , JLT, Vol16,No12, Dec1998<br />

441<br />

I. M. White et al: ‘Demonstration and system analysis of the HORNET architecture, Lightwave Technology, , Volume: 21 /11 pp. 2489 –<br />

2498, 2003<br />

442<br />

I. M. White et al: A summary of the HORNET project: a next-generation metropolitan area network’ Selected Areas in Communications,<br />

IEEE Journal on , Volume: 21 , Issue: 9 , Nov. 2003 Pages:1478 – 1494<br />




Header and payload separation and payload segregation are functions required at an edge node, where IP packets are aggregated or disaggregated.

Optical packet and optical burst switching: OPS and OBS are two switching modes that are based on the idea of separating the functions of forwarding and switching in a network node 443. In OBS an out-of-band burst control packet makes the forwarding-related decisions, while in OPS the attached optical header plays this role 444 445. Many papers have focused on analysing the differences between packet- and burst-switched networks (e.g. 446 447). One of the main differences is related to the packet duration and format. Most OPS analyses assume 0.1-1 µs packet duration 448. The header is usually in band with the payload and always temporally attached to it. In OBS studies, IP packets are aggregated into optical payloads of tens of ms. Out-of-band control and large offset times between control and payload provide an OBS network with flexibility in terms of QoS etc 444. OPS is more stringent in terms of switching times, contention resolution requirements and header processing functions, as it requires an optical receiver per wavelength at each node.
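The quoted packet durations follow directly from packet size and line rate; the small sketch below (assumed line rates and the 40-1500 byte packet sizes mentioned in the footnote) makes the arithmetic explicit.

```python
# Back-of-the-envelope check of the packet durations quoted above: the time a
# packet occupies the line is 8 * bytes / bitrate.

def duration_us(num_bytes: int, bitrate_gbps: float) -> float:
    return 8 * num_bytes / (bitrate_gbps * 1e9) * 1e6   # result in microseconds

if __name__ == "__main__":
    for rate in (2.5, 10, 40):                          # assumed line rates in Gbit/s
        for size in (40, 1500):                         # typical IP packet sizes (see footnote 448)
            print(f"{size:5d} bytes at {rate:4.1f} Gbit/s -> {duration_us(size, rate):7.3f} us")
```

At 10 Gbit/s a 1500-byte packet lasts 1.2 µs and a 40-byte packet only 32 ns, which is consistent with the 0.1-1 µs range assumed in most OPS analyses.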

Definition of optical burst switching (OBS)<br />

During the past years, the definitions of burst and packet switching networks in the optical domain, and their differences, have become less clear, due to the large number of new proposals claiming either name.

Burst and packet switching (BS and PS) architectures both provide sub-wavelength granularity by employing<br />

asynchronous time division multiplexing. In the case where switching is performed all-optically and data stays<br />

in the optical domain until the destination edge node, the concepts are referred to as optical burst switching

(OBS) and optical packet switching (OPS).<br />

The following characteristics, either individually or in combination, can be regarded as defining burst switching 449 in contrast to packet switching:

• (i) Client layer data is aggregated and assembled into larger variable-length optical bursts in edge nodes.

• (ii) Control information is signalled out-of-band, processed electronically in all core nodes and used to set up the switch matrix before the data bursts arrive.

While the first characteristic is motivated by the implementation complexity of burst processing in the optical domain, the second can be motivated by the advantage that only the signalling channel has to be terminated in all-optical core nodes.

Regarding (i), aggregating larger data volumes makes the preamble overhead required for burst/packet detection at the receiver in burst-mode transmission insignificant, and thus possible bit-wise processing unattractive. Aggregation of very large data volumes could even motivate setting up end-to-end paths for individual bursts, cf. wavelength routed OBS below. The advantage indicated in (ii), of terminating only one or a small subset of wavelength channels for signalling, could reduce the cost of otherwise optical core nodes significantly.

In burst switching, signalling of burst control information can be performed as one-pass reservation or by end-to-end setup. In one-pass reservation, burst transmission is not delayed until an acknowledgment of successful end-to-end path setup is received but is initiated shortly after the burst was assembled and the control packet was sent out 450 451 449. In contrast, wavelength routed OBS 452 employs full end-to-end setup. While the former can reduce the setup delay in networks with large round-trip times, the latter can avoid loss in the core network, as data waits in edge nodes until successful path setup. In burst switching with one-pass reservation, communication can be either connectionless (each burst is a datagram) or connection-oriented using a virtual circuit. With end-to-end setup, communication is connection-oriented circuit-switched.
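One common way to realise one-pass reservation (for instance in JET-style schemes, which the text above does not describe in detail) is to launch the burst a fixed offset after its control packet, so that every core node has time to process the control information before the data arrives; the sketch below is an assumed timing model, with hop counts and per-hop processing times chosen purely for illustration.

```python
# Sketch of one-pass reservation timing (assumed model, not from the cited papers):
# the burst follows its control packet after an offset that covers the electronic
# processing of the control packet at every core node on the path.

def base_offset_us(hops: int, per_hop_processing_us: float) -> float:
    """Minimum offset so each switch is configured before the burst arrives."""
    return hops * per_hop_processing_us

def qos_offset_us(hops: int, per_hop_processing_us: float,
                  extra_offset_us: float = 0.0) -> float:
    """A larger extra offset can give a burst class a better chance of reserving
    resources ahead of lower-priority traffic (QoS differentiation)."""
    return base_offset_us(hops, per_hop_processing_us) + extra_offset_us

if __name__ == "__main__":
    print(base_offset_us(hops=5, per_hop_processing_us=10))                      # 50 us
    print(qos_offset_us(hops=5, per_hop_processing_us=10, extra_offset_us=100))  # 150 us
```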

443 In IP routers the functions of routing is related to the determining the path from each source destination pair, forwarding the process of<br />

sending a packet from an input to an output, scheduling the selection of which outgoing packets are going to be send to the next router<br />

etc.<br />

444 S. Dixit ed. : IP over WDM, Building the Next Generation Optical Internet, J. Willey and Sons, 2003<br />

445 R. Inkret, A. Kuchar, B. Mikac: Advanced Infrastructure for Photonic Networks: Extended Final Report of Cost Action 266, 2003<br />

446 J. S. Turner, “Terabit burst switching”, Journal of High Speed Networks, 8(1), 3-16, January 1999<br />

447 C. Qiao, M. Yoo, “Choices, features, and issues in optical burst switching”, Optical Networks, 1(2), 36-44, April 2000<br />

448 Internet traffic today can assumed 40 to 1500 bytes length 445<br />

449<br />

Dolzer, K.; Gauger, C.M.; Spaeth, J.; Bodamer, S.: Evaluation of reservation mechanisms for optical burst switching, AEÜ International<br />

Journal of Electronics and Communications, Vol. 55, No. 1, January 2001<br />

450<br />

J. S. Turner, “Terabit burst switching”, Journal of High Speed Networks, 8(1), 3-16, January 1999<br />

451<br />

C. Qiao, M. Yoo, “Optical burst switching (OBS)-A new paradigm for an optical Internet”, Journal of High Speed Networks, 8(1), 69-<br />

84, January 1999<br />

452<br />

M. Duser,P. Bayvel: Analysis of a dynamically wavelength-routed optical burst switched network architecture, Lightwave Technology,<br />

Journal of , Volume: 20 , Issue: 4 , April 2002, Pages:574 – 585<br />




In packet switching, several architectures assume synchronous operation and often the store-and-forward<br />

principle is copied by applying fine-grain optical buffers, e.g. large fiber delay line buffers.<br />

In order to avoid or recover from contention situations in the core nodes of optical burst and packet switching networks, one or a combination of the following schemes is often proposed: wavelength conversion, fibre delay line buffers, and deflection routing.
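A node combining these schemes typically tries them in some order of preference before dropping the contending packet or burst; the sketch below is an illustrative decision cascade with an assumed ordering, not a design taken from the cited projects.

```python
# Illustrative contention-resolution cascade for a core node (order and resource
# names are assumptions; real designs differ): try another wavelength, then a
# fibre delay line, then deflection to another output, otherwise drop the data.

def resolve_contention(free_wavelengths, free_delay_lines, alternate_ports):
    if free_wavelengths:
        return ("wavelength_conversion", free_wavelengths[0])
    if free_delay_lines:
        return ("fdl_buffer", free_delay_lines[0])
    if alternate_ports:
        return ("deflection_routing", alternate_ports[0])
    return ("drop", None)

if __name__ == "__main__":
    print(resolve_contention([], ["FDL_2us"], ["port_3"]))   # -> ('fdl_buffer', 'FDL_2us')
    print(resolve_contention([], [], []))                    # -> ('drop', None)
```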

Figure 74 depicts the impact of characteristic (i), one-pass reservation vs. end-to-end setup, technology and typical network sizes on burst or packet granularity (in terms of time duration). The following relationships are illustrated:

• Granularity (blue) determines switching technology (red) and vice versa as switching time has to be<br />

significantly shorter than the typical granularity switched.<br />

• Granularity (blue) and network size (round-trip-time in green) determine whether one-pass reservation or<br />

end-to-end setup is advantageous. The end-to-end signalling time, defined by the network size, should be less than the typical granularity transported; otherwise, one-pass reservation should be preferred.

• Granularity is also determined by the access rate as this has an impact on assembly delay. Typically, access<br />

rate is lower than core rate and thus assembly delay at the edge is always larger than the transmission delay.<br />

From those arguments and characteristic (i), it can be derived that packets typically have a duration of less than 1 microsecond, bursts for one-pass reservation one between a few and a few hundred microseconds, and bursts for end-to-end setup one in the millisecond range.
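The trade-off summarised in Figure 74 can be reproduced with simple arithmetic; the sketch below uses assumed burst sizes, rates (core rate ten times the access rate, as in the figure) and round-trip times to decide between one-pass reservation and end-to-end setup.

```python
# Sketch of the trade-off illustrated in Figure 74 (all numbers are assumptions):
# estimate the burst duration at the core rate and the assembly delay at the
# access rate, then prefer end-to-end setup only if the round-trip signalling
# time is small compared with the burst duration.

def burst_duration_ms(burst_bytes: int, core_rate_gbps: float) -> float:
    return 8 * burst_bytes / (core_rate_gbps * 1e9) * 1e3

def assembly_delay_ms(burst_bytes: int, access_rate_gbps: float) -> float:
    # Data trickles in at the access rate, so filling the burst takes this long.
    return 8 * burst_bytes / (access_rate_gbps * 1e9) * 1e3

def preferred_signalling(burst_bytes: int, core_rate_gbps: float, rtt_ms: float) -> str:
    duration = burst_duration_ms(burst_bytes, core_rate_gbps)
    return "end-to-end setup" if rtt_ms < duration else "one-pass reservation"

if __name__ == "__main__":
    burst = 1_000_000                       # 1 MB burst (assumed)
    core, access = 10.0, 1.0                # core rate 10x access rate, as in Figure 74
    print("burst duration:", burst_duration_ms(burst, core), "ms")
    print("assembly delay:", assembly_delay_ms(burst, access), "ms")
    for rtt in (0.1, 5.0, 100.0):           # campus, national, intercontinental RTTs (assumed)
        print(f"RTT {rtt:6.1f} ms ->", preferred_signalling(burst, core, rtt))
```

For this example the 0.8 ms burst only justifies end-to-end setup at campus-scale round-trip times, and the 8 ms assembly delay at the edge indeed dominates the transmission delay, as stated in the bullet above.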

(Figure 74 content: horizontal time axis from nanoseconds to seconds (1, 10, 100 per decade); rows for granularity (packet, burst, dynamic circuit), switching technology (SOAs, TWCs, MEMS) and end-to-end signalling reach (campus, metro, nation, world); burst assembly edge delay assumption: core rate approx. 10 times the access rate.)

Figure 74 Overview of the key parameters determining burst/packet granularity and signalling scheme 453

Packet/Burst handling schemes: Synchronous vs. Asynchronous (slotted or unslotted) 454<br />

455 456<br />

There are two options regarding how data packets can be handled through the network. Synchronous (as in 457 458 459 460) or asynchronous (as in 461 462 463) handling refers to the mode in which the core switch operates.

453 IST Nobel Project, Deliverable 4, WP 3, “Requirements for burst/packet networks in core and metro supporting high quality broadband<br />

services over IP”<br />

454<br />

R. Inkret, A. Kuchar, B. Mikac: Advanced Infrastructure for Photonic Networks: Extended Final Report of Cost Action 266, 2003<br />

455<br />

Chiaroti D. et al, "Physical and logical validation of a network based on all-optical packet switching system", IEEE, Journal of<br />

Lightwave Tech. Vol. 16, pp 2117-2132, December 1998<br />

456<br />

Lars Dittman, Dominique Chiaroni, "DAVID-an approach towards MPLS based optical packet switching with QoS support",<br />

Photonics in Switching 2001, Paper ThD1<br />

457<br />

Hunter D. et al., “WASPNET: a wavelength switched packet network”, IEEE Com. Magazine, pp. 120-129, Mar. 1999<br />

458<br />

I. Chlamtac et al, “Cord: contention resolution by delay lines”, IEEE Journal of Selected Areas in Communications, 14, 1014-1029, June<br />

1996<br />

459 A.Carena, M.D.Vaughn, R.Gaudino, M.Shell, D.J.Blumenthal, "OPERA: An Optical Packet Experimental Routing Architecture with<br />

Label Swapping Capability" , JLT, Vol16,No12, Dec1998<br />

460 I. M. White et al: A summary of the HORNET project: a next-generation metropolitan area network’ Selected Areas in Communications,<br />

IEEE Journal on , Volume: 21 , Issue: 9 , Nov. 2003 Pages:1478 – 1494<br />

461 D. Klonidis et al: ‘Demonstration of a Fully Functional and Controlled Asynchronous Optical Packet Switch at 40Gb/s, Post Deadline<br />

Paper, ECOC 2004, Th. 4.5.5



This means that the switch can reconfigure incrementally (setting up a connection each time a packet arrives) or jointly for all the input packets 464. As the name implies, clock extraction is not required for the asynchronous case, as even for the header processing the node is locked to a distributed clock 465. Asynchronous operation is similar to the IP implementation 461. Its implementation seems more challenging, but the related node design is simplified, as there is no need for synchronisation units. A synchronous node requires synchronisation units placed at the inputs and outputs (not shown in Figure 75).

The handling scheme is usually referenced together with the packet duration. There are three different options as far as the packet duration is concerned: variable 461, fixed length 466 460, or multiples of a specific unit 467 460. Usually for slotted networks fixed-length packets are assumed 468 469, while for unslotted networks variable lengths 470 or multiples of a specific slot 471 are used. The choice between synchronous fixed-length packets and asynchronous variable-length packets is an ongoing debate 472.

Intuitively one can argue that fixed-length packets will lead to fragmentation of packets longer than the unit and to half-full packets in the network, hence higher overhead. Contention resolution and switch fabric configuration are the most important issues. Only switch fabrics that allow independent switching between input and output ports can be used for asynchronous variable-length packets. Additionally, a node that handles variable-length packets is more likely to block packets 472, while buffer designs are more complicated 461, as all the different lengths must be accommodated by the buffer.
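The overhead argument can be quantified with a toy traffic mix; the sketch below (assumed packet-size mix and slot sizes) computes the padding overhead incurred when variable-length IP packets are mapped onto fixed-length slots.

```python
# Illustrative comparison (assumed numbers) of the padding overhead incurred when
# variable-length IP packets are carried in fixed-length optical packets, versus
# asynchronous variable-length packets that match the client packet size exactly.
import math

def fixed_slot_overhead(ip_sizes_bytes, slot_payload_bytes):
    """Fraction of the transmitted payload capacity wasted as padding."""
    used = sum(ip_sizes_bytes)
    carried = sum(math.ceil(s / slot_payload_bytes) * slot_payload_bytes
                  for s in ip_sizes_bytes)
    return 1 - used / carried

if __name__ == "__main__":
    # A toy mix of short and long IP packets (assumed, loosely following the
    # 40-1500 byte range mentioned earlier in the text).
    mix = [40] * 50 + [576] * 30 + [1500] * 20
    for slot in (500, 1500):
        print(f"{slot}-byte slots: padding overhead {fixed_slot_overhead(mix, slot):.1%}")
```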

462<br />

N. Wada, H. Harai and F. Kubota, "40Gbit/s, multi-hop optical packet routing using optical code label processing based packet switch<br />

prototype," Optical Fiber Communication Conference 2004 (OFC 2004), FO7, FO-62 - FO-64, February 2004<br />

463<br />

Wei Wang, L. Rau and D. J. Blumenthal: All-Optical Label Switching/Swapping of 160 Gbps Variable Length Packets and 10 Gbps<br />

Labels using a WDM Raman Enhanced-XPM Fiber Wavelength Converter with Unicast/Multicast Operation, Post Deadline, PDP8,<br />

OFC 2004<br />

464<br />

S. Dixit ed. : IP over WDM, Building the Next Generation Optical Internet, J. Willey and Sons, 2003<br />

465<br />

Gambini P. et al, "Transparent optical packet switching: network architecture and demonstrator in the KEOPS project", J. on Selected<br />

Areas in Comm., vol. 16, pp.1245-1259, Sep. 1998.<br />

466<br />

M. Renaud eta al : Network and system concepts for optical packet switching<br />

Communications Magazine, IEEE, Volume: 35, Issue: 4, April 1997, Pages: 96 – 102<br />

467<br />

L. Dittmann et al: The European IST project DAVID: a viable approach toward optical packet switching Selected Areas in<br />

Communications, IEEE Journal on , Volume: 21 , Issue: 7 , Sept. 2003 Pages:1026 – 1040]<br />

468<br />

Lars Dittman, Dominique Chiaroni, "DAVID-an approach towards MPLS based optical packet switching with QoS support",<br />

Photonics in Switching 2001, Paper ThD1<br />

469<br />

Hunter D. et al., “WASPNET: a wavelength switched packet network”, IEEE Com. Magazine, pp. 120-129, Mar. 1999<br />

470<br />

D. Klonidis et al: ‘Demonstration of a Fully Functional and Controlled Asynchronous Optical Packet Switch at 40Gb/s, Post Deadline<br />

Paper, ECOC 2004, Th. 4.5.5<br />

471<br />

L. Dittmann et al: The European IST project DAVID: a viable approach toward optical packet switching Selected Areas in<br />

Communications, IEEE Journal on , Volume: 21 , Issue: 7 , Sept. 2003 Pages:1026 – 1040]<br />

472<br />

S. Dixit ed. : IP over WDM, Building the Next Generation Optical Internet, J. Willey and Sons, 2003<br />




Figure 75 Example of an optical synchroniser: (a) input (coarse); (b) output (fine). The former aligns packets with the node time reference in order for them to be switched. This compensates for phase changes and fluctuations imposed by the physical layer connecting the nodes, and can therefore provide coarse time shifts with a resolution around a fraction of the guard time. The latter, however, must account for phase adjustments requiring a resolution finer than the bit duration. The approach used in the KEOPS project to perform the tasks described above is shown in the figure, where header and payload are demarcated with keywords.
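Coarse and fine alignment stages of this kind are often built from cascades of switchable fibre delay lines with binary-weighted lengths; the sketch below is an assumed model of such a stage (not the exact KEOPS implementation), selecting which delays to switch in so that the residual misalignment stays below the smallest delay.

```python
# Sketch of a switchable fibre-delay-line synchroniser stage (assumed model): a
# cascade of binary-weighted delays approximates the required alignment delay to
# within the resolution of the smallest stage.

def configure_stages(required_delay_ns: float, stage_delays_ns):
    """Return which stages to switch in (greedy, largest first) and the residual error."""
    remaining = required_delay_ns
    enabled = []
    for delay in sorted(stage_delays_ns, reverse=True):
        if delay <= remaining:
            enabled.append(delay)
            remaining -= delay
    return enabled, remaining

if __name__ == "__main__":
    # Coarse stage with 0.8 ns resolution (a fraction of an assumed few-ns guard time).
    coarse = [102.4, 51.2, 25.6, 12.8, 6.4, 3.2, 1.6, 0.8]
    stages, error = configure_stages(required_delay_ns=37.7, stage_delays_ns=coarse)
    print("enabled delays:", stages, "residual:", round(error, 2), "ns")
```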

Packet Formats<br />

The packet header usually starts with a delineation word and includes control information about the payload, like the source-destination pair, priority information, packet duration etc 473 474 475 476. There are numerous implementation choices (see Table 15: Switching fabrics commercially available or under development until September 2004), with the main considerations being payload/header bit rate, header/payload positioning, header format etc. In Figure 76 a packet with an in-band serial header is shown.

//<br />

tp tG1 tH tG2<br />

<strong>Annex</strong> 2 - Page 184 of 282<br />

Overhead= (tG1+ tH +tG2)/ <br />

Figure 76 Definition of packet header and payload duration, duration of guard bands 1 & 2, and overhead.<br />

One issue related to the packet format is the overhead. To minimise the overhead (tG1+ tH +tG2) should be<br />

minimised. This is related to the many parameters like the header positioning (tH=0 is header is parallel with<br />

payload other wise it should balance the speed of control electronics and the overhead of the packet), switching<br />

time (tG2), jitter (tG2), header extraction method (tG1), header processing time etc.<br />
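To make this trade-off concrete, the following minimal Python sketch compares the overhead of a serial in-band header with that of a parallel (tH = 0) header. It is illustrative only: the numbers are invented, and it assumes the overhead is normalised to the total packet duration (payload plus guard bands plus header), since the exact normalisation used in the figure is not reproduced here.

```python
def packet_overhead(t_payload, t_guard1, t_header, t_guard2):
    """Fraction of the packet duration not carrying payload.

    Assumption: overhead is normalised to the total packet duration
    (payload + guard bands + header); other normalisations are possible.
    """
    total = t_payload + t_guard1 + t_header + t_guard2
    return (t_guard1 + t_header + t_guard2) / total

# Illustrative numbers only (nanoseconds): 500 ns payload, 10 ns guard bands.
payload, g1, g2 = 500.0, 10.0, 10.0

serial_header = 40.0    # header transmitted in-band, in front of the payload
parallel_header = 0.0   # header carried in parallel (e.g. on a subcarrier), tH = 0

print("serial header overhead:   %.1f %%" % (100 * packet_overhead(payload, g1, serial_header, g2)))
print("parallel header overhead: %.1f %%" % (100 * packet_overhead(payload, g1, parallel_header, g2)))
```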

Other than the overhead, effects like crosstalk in the parallel case may impose additional considerations 477 478. In addition, delays will be greater in the case of the parallel header, but the bit rate can be lower and thus cheaper electronics can be used. Synchronisation for the header re-insertion procedure is more relaxed in the parallel case than in the serial one. Table 1 reviews implemented techniques and their header deletion and reinsertion methods. Low bit rate subcarrier multiplexing is a favourable technique 477 479 480, while OCDM and DPSK are very promising for faster networks 481 482.

473 P. Gambini et al., "Transparent optical packet switching: network architecture and demonstrator in the KEOPS project", IEEE Journal on Selected Areas in Communications, vol. 16, pp. 1245-1259, Sep. 1998
474 D. J. Blumenthal et al., "All-Optical Label Swapping Networks and Technologies", Journal of Lightwave Technology, vol. 18, no. 12, Dec. 2000
475 A. Carena, M. D. Vaughn, R. Gaudino, M. Shell, D. J. Blumenthal, "OPERA: An Optical Packet Experimental Routing Architecture with Label Swapping Capability", Journal of Lightwave Technology, vol. 16, no. 12, Dec. 1998
476 I. Tafur Monroy et al., "Techniques for Labeling of Optical Signals in Burst Switched Networks", The First International Workshop on Optical Burst Switching (WOBS 2003), p. 103
477 A. Carena, M. D. Vaughn, R. Gaudino, M. Shell, D. J. Blumenthal, "OPERA: An Optical Packet Experimental Routing Architecture with Label Swapping Capability", Journal of Lightwave Technology, vol. 16, no. 12, Dec. 1998
478 S. J. Ben Yoo, "High-Performance Optical-Label Switching Packet Routers and Smart Edge Routers for the Next-Generation Internet", IEEE Journal on Selected Areas in Communications, vol. 21, no. 7, September 2003, p. 1041
479 D. Hunter et al., "WASPNET: a wavelength switched packet network", IEEE Communications Magazine, pp. 120-129, Mar. 1999
480 I. Chlamtac et al., "CORD: contention resolution by delay lines", IEEE Journal on Selected Areas in Communications, vol. 14, pp. 1014-1029, June 1996


The formats discussed here rely on electronic control. More futuristic approaches include optical recognition of the header and optical label swapping 483 484.
Switch Fabrics

Keeping the overhead low dictates switching times in the order of nanoseconds. The table outlines robust technologies that can switch in ~ns; a tunable wavelength converter combined with a wavelength-selective device should be added to these. Few of them are scalable, hence multistage, wavelength-switching and broadcast-and-select architectures have been suggested. The latter intrinsically allows for packet broadcasting, but the splitting loss limits the size of the nodes. In the literature the most commonly deployed switching fabrics are SOA gates 485 486 487 488 489, LiNbO3 gates 490, electro-optic gates, and the combination of a tunable wavelength converter with an AWG 477 479 481 491 492 493.
All-optical switching of optical packets has been the focus of research for many years. There are two kinds of approaches: self-routed packets, where the header carries the information required for routing through what is usually a 1×2 optical gate 494 495 496, and the case where an optical control signal is used to switch the packet through an optical gate 497 498. All-optical processing is a very interesting technology, although there are still open issues such as scaling and the elimination of electronic control. However, it seems to be the only fast switching technique for high bit rate packets.

481 K. Vlachos et al., "STOLAS: Switching Technologies for Optically Labeled Signals", IEEE Communications Magazine, Nov. 2003, p. S9
482 D. Klonidis et al., "Demonstration of a Fully Functional and Controlled Asynchronous Optical Packet Switch at 40 Gb/s", Post Deadline Paper, ECOC 2004, Th.4.5.5
483 I. Tafur Monroy et al., "Techniques for Labeling of Optical Signals in Burst Switched Networks", The First International Workshop on Optical Burst Switching (WOBS 2003), p. 103
484 T. Fjelde, D. Wolfson, A. Kloch, B. Dagens, A. Coquelin, I. Guillemot, F. Gaborit, F. Poingt, M. Renaud, "Demonstration of 20 Gbit/s all-optical logic XOR in integrated SOA-based interferometric wavelength converter", Electronics Letters, vol. 36, no. 22, 26 Oct. 2000
485 D. Chiaroni, "Semiconductor Optical Amplifier: a key technology to control the packet power variations", ECOC 2001, Amsterdam, Paper We.B.2.6
486 F. Masetti et al., "High speed, high capacity ATM optical switches for future telecommunication transport networks", IEEE Journal on Selected Areas in Communications, vol. 14, no. 5, June 1996, pp. 979-998
487 N. Wada, W. Chujo, and K. Kitayama, "1.28 Tbit/s (160 Gbit/s x 8 wavelengths) throughput variable length packet switching using optical code based label switch", ECOC 2001, Amsterdam, The Netherlands, vol. 6, no. PD-A-1-9, pp. 62-63, October 2001
488 D. Boettle et al., "ATMOS ATM Optical Switching - System Perspective", Fiber and Integrated Optics, vol. 15, pp. 267-279, 1996
489 L. Dittmann et al., "The European IST project DAVID: a viable approach toward optical packet switching", IEEE Journal on Selected Areas in Communications, vol. 21, no. 7, Sept. 2003, pp. 1026-1040
490 I. Chlamtac et al., "CORD: contention resolution by delay lines", IEEE Journal on Selected Areas in Communications, vol. 14, pp. 1014-1029, June 1996
491 D. Klonidis, C. Politi, M. O'Mahony, D. Simeonidou, "Fast and widely tunable optical packet switching scheme based on tunable laser and dual-pump four-wave mixing", IEEE Photonics Technology Letters, vol. 16, no. 5, May 2004, pp. 1412-1414
492 Y. Yamada et al., "Optical output buffered ATM switch prototype based on FRONTIERNET architecture", IEEE Journal on Selected Areas in Communications, vol. 16, no. 7, pp. 1298-1308, Sept. 1998
493 Wei Wang, L. Rau and D. J. Blumenthal, "All-Optical Label Switching/Swapping of 160 Gbps Variable Length Packets and 10 Gbps Labels using a WDM Raman Enhanced-XPM Fiber Wavelength Converter with Unicast/Multicast Operation", Post Deadline Paper PDP8, OFC 2004
494 Toliver et al., "Routing of 100 Gb/s Words in a Packet Switched Optical Networking Demonstration (POND) Node", Journal of Lightwave Technology, vol. 16, no. 12, Dec. 1998
495 Nakahara et al., "100 Gb/s optical packet self routing by self serial-to-parallel conversion", OFC 2002, paper WM5
496 D. Cotter, J. K. Lucek, M. Shabeer, K. Smith, D. C. Rogers, D. Nesset, P. Gunning, "Self routing of 100 Gb/s packets using 6 bit 'keyword' address recognition", Electronics Letters, vol. 31, no. 25, pp. 2201-2201, Dec.
497 K. Vlachos et al., "Ultrafast time-domain technology and its application in all-optical signal processing", Journal of Lightwave Technology, vol. 21, no. 9, Sept. 2003, pp. 1857-1868
498 H. Dorren, M. T. Hill, Y. Liu, N. Calabretta, A. Srivatsa, F. M. Huijskens, H. de Waardt, G. D. Khoe, "Optical packet switching and buffering by using all-optical signal processing methods", IEEE Journal of Lightwave Technology, vol. 21, no. 1, Jan. 2003, pp. 2-12



Contention Resolution in OPS
Figure 77 Methods of contention resolution. (The figure spans the three resource dimensions time, wavelength and space, covering wavelength conversion in bufferless nodes, optical buffering, space deflection routing, and their combinations: optical buffering and wavelength conversion, optical buffering and deflection routing, and deflection routing and wavelength conversion.)

In OPS networks three different resources can be used to avoid discarding packets which simultaneously request the same output port: time (buffering), wavelength (conversion) and space (deflection routing).
Wavelength conversion can be used to resolve contention in an optical packet switch when two optical packets request the same output 499 500 501. One of the packets is converted to another carrier wavelength so that both packets can exit the same output port of the switch simultaneously, provided there is an empty slot at that wavelength. However, networks with a high load cannot rely on this method alone, as a large number of wavelengths would have to be available.
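As a rough illustration of this mechanism (not an architecture taken from the deliverable; the function name and the first-fit policy are assumptions made for the example), the following Python sketch assigns contending packets that target the same output port to free wavelengths and drops a packet only when no converter output is available.

```python
def resolve_by_wavelength_conversion(packets, num_wavelengths):
    """Toy model of contention resolution by wavelength conversion.

    `packets` is a list of (packet_id, output_port) pairs arriving in the
    same time slot.  Each output fibre carries `num_wavelengths` channels,
    full wavelength conversion is assumed, and packets are served first-fit.
    Returns (forwarded, dropped): forwarded maps packet_id -> wavelength index.
    """
    free = {}                      # output_port -> list of unused wavelengths
    forwarded, dropped = {}, []
    for pkt_id, port in packets:
        channels = free.setdefault(port, list(range(num_wavelengths)))
        if channels:
            forwarded[pkt_id] = channels.pop(0)   # convert onto a free channel
        else:
            dropped.append(pkt_id)                # bufferless node: packet lost
    return forwarded, dropped

# Four packets contend for output port 0 of a node with 3 wavelengths per fibre.
fwd, drop = resolve_by_wavelength_conversion(
    [("p1", 0), ("p2", 0), ("p3", 0), ("p4", 0)], num_wavelengths=3)
print(fwd)   # {'p1': 0, 'p2': 1, 'p3': 2}
print(drop)  # ['p4']
```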

Optical buffering uses the time domain, delaying contending packets until empty time slots are available 502. Unlike their electronic counterparts, optical packets have to be processed on the fly, so buffering is currently realised with fibre delay lines (FDLs). Optical buffers are categorised according to the number of stages and according to the routing process the packet goes through, i.e. whether they route the packet ‘forward’ or send it through ‘feedback’ loops.
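Because an FDL can only provide a fixed, pre-wired delay, a contending packet has to be mapped onto one of a small set of discrete delays. The sketch below is an illustration under assumed names, not a buffer design from the text: for a required waiting time it picks the shortest available delay line whose delay is at least that long, with delays quantised as multiples of a basic delay unit.

```python
import math

def choose_fdl(wait_time, delay_unit, num_lines):
    """Pick a fibre delay line for a packet that must wait `wait_time`.

    The buffer is assumed to offer `num_lines` feed-forward delay lines with
    delays delay_unit, 2*delay_unit, ..., num_lines*delay_unit.  Returns the
    chosen delay, or None if the required wait exceeds the longest line (the
    packet would then have to be dropped or deflected instead).
    """
    if wait_time <= 0:
        return 0.0                                # no contention: cut through
    slots = math.ceil(wait_time / delay_unit)     # round up to a whole line
    if slots > num_lines:
        return None
    return slots * delay_unit

# A packet that must wait 2.3 slot times, in a buffer with 4 lines of 1 slot each.
print(choose_fdl(wait_time=2.3, delay_unit=1.0, num_lines=4))   # 3.0
print(choose_fdl(wait_time=6.0, delay_unit=1.0, num_lines=4))   # None -> drop/deflect
```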

In principle, buffers can be placed in different parts of a switching node, as shown in 502: for example architectures with buffers at the inputs 503, re-circulating topologies 504 and, finally, output-buffered designs.
Currently a lot of attention is being focused on ways of ‘slowing’ light, for example in quantum dots by electromagnetically induced transparency 505. The buffering effect is achieved by slowing down the optical signal, using an external control light source to vary the dispersion characteristic of the medium via the electromagnetically induced transparency effect. This could radically change the prospects of optical packet switching.

499 S. L. Danielsen, C. Joergensen, B. Mikkelsen, and K. E. Stubkjaer, "Analysis of a WDM packet switch with improved performance under bursty traffic conditions due to tuneable wavelength converters", IEEE Journal of Lightwave Technology, vol. 16, pp. 729-735, May 1998
500 S. L. Danielsen, B. Mikkelsen, C. Joergensen, T. Durhuus, "WDM Packet Switch Architectures and Analysis of the Influence of Tunable Wavelength Converters on the Performance", IEEE Journal of Lightwave Technology, vol. 15, pp. 219-226, February
501 S. L. Danielsen, P. B. Hansen, K. E. Stubkjaer, "Wavelength conversion in optical packet switching", Journal of Lightwave Technology, vol. 16, no. 12, Dec. 1998, pp. 2095-2108
502 D. Hunter, M. C. Chia, I. Andonovic, "Buffering in optical packet switches", Journal of Lightwave Technology, vol. 16, no. 12, Dec. 1998, pp. 2081-2094
503 W. D. Zhong, R. S. Tucker, "Wavelength routing-based photonic packet buffers and their application in photonic packet switching systems", IEEE/OSA Journal of Lightwave Technology, vol. 16, no. 10, pp. 1737-1745, Oct. 1998
504 M. C. Chia et al., "Packet loss and delay performance of feedback and feed-forward arrayed-waveguide gratings-based optical packet switches with WDM inputs-outputs", IEEE/OSA Journal of Lightwave Technology, vol. 19, no. 9, pp. 1241-1254, Sep. 2001
505 C. J. Chang-Hasnain et al., "Variable optical buffer using slow light in semiconductor nanostructures", Proceedings of the IEEE, vol. 91, no. 11, pp. 1884-1897, 2003



Figure 78 Possible buffer locations in a packet-switched network node 506 507.

Deflection Routing
In deflection routing, contending packets are forwarded to alternative links. In IP networks packets are routed according to their priority and the shortest possible route; in the case of contention an alternative route can be chosen through the network, thus using the network itself as a buffer. Deflection routing must be carefully designed to avoid packets travelling endlessly in the network. Some propose the deflection of whole packet flows instead.
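One common safeguard against packets circulating endlessly is to cap the number of deflections (or hops) a packet may experience. The Python sketch below illustrates that idea under assumed names and a deliberately simplified, bufferless node model; it is not a scheme specified in the text.

```python
def forward_or_deflect(packet, preferred_port, free_ports, max_deflections=3):
    """Toy deflection decision for a bufferless node.

    `packet` is a dict carrying a 'deflections' counter; `free_ports` is the
    set of output ports still free in this time slot.  The packet takes its
    preferred (shortest-path) port if free, otherwise it is deflected to any
    other free port until the deflection budget is exhausted, after which it
    is dropped.  All names and the budget value are illustrative.
    """
    if preferred_port in free_ports:
        return ("forward", preferred_port)
    alternatives = free_ports - {preferred_port}
    if alternatives and packet["deflections"] < max_deflections:
        packet["deflections"] += 1
        return ("deflect", min(alternatives))
    return ("drop", None)

pkt = {"id": "p1", "deflections": 0}
print(forward_or_deflect(pkt, preferred_port=2, free_ports={0, 1}))  # ('deflect', 0)
print(forward_or_deflect(pkt, preferred_port=2, free_ports=set()))   # ('drop', None)
```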

Most of the demonstrated networks have used a combination of the above techniques 508 509. Figure 77 plots the resulting three-dimensional design space. It must be noted that the choice of combination will vary with the characteristics of the node design: for compact nodes, delay lines should be avoided; full transparency may dictate avoiding specific wavelength conversion techniques; and deflection routing implies complexity at the control level.

A2.8.4.4 Key Issues
Optical burst and packet switching are currently seen as the main routes towards flexible dynamic networking with sub-wavelength granularity. There are a number of key issues which must be addressed before their implementation:
• The absence of optical RAM complicates the design of the buffers needed for both optical burst and packet switching, although in principle delay lines can be used.
• The absence of reasonably sized space switches (100x100) which can reconfigure in ns (OPS) or µs (OBS). Although the space-switch functionality can be achieved through the use of wavelength conversion followed by wavelength selection, the availability of a fast switch would be very helpful.
• Studies need to be undertaken to understand the economic advantages of OPS/OBS, as the switching nodes are complex and likely to be expensive.
• Control strategies for OPS/OBS have yet to be decided.
• Standardisation needs to be started; for example there is no agreement on optical packet size.
Despite all of the above, the integration of electronic and optical technologies together with high granularity and flexibility makes OPS/OBS very attractive for future networking.

506 M. R. N. Ribeiro, Traffic Prioritisation in Photonic Packet Switching, PhD Thesis, University of Essex, June 2002
507 D. J. Blumenthal, P. R. Prucnal, and J. R. Sauer, "Photonic packet switches: architecture and experimental implementations", Proceedings of the IEEE, vol. 82, no. 11, Nov. 1994
508 L. Dittmann, D. Chiaroni, "DAVID - an approach towards MPLS based optical packet switching with QoS support", Photonics in Switching 2001, Paper ThD1
509 S. J. Ben Yoo, "High-Performance Optical-Label Switching Packet Routers and Smart Edge Routers for the Next-Generation Internet", IEEE Journal on Selected Areas in Communications, vol. 21, no. 7, September 2003, p. 1041



A2.8.5 Control Plane

A2.8.5.1 Deployed Technology – SDH/SONET and Next Generation SDH/SONET
Although a few networks are being deployed which use MPLS for data transport 510 511, the vast majority of deployed networks currently use SDH/SONET to transport both data and voice. Optical links are typically point-to-point and are terminated with SDH/SONET switching equipment. SDH/SONET manages the physical equipment and links within the network. Incoming data is assembled into frames, which also carry associated management data for transmission across the network. SDH/SONET sets up the connections for the frames, maintains knowledge of the network status and provides protection and restoration during failures.
The installed capital base of SDH/SONET ensures that the near-term evolution of the network will be based on, or work alongside, SDH/SONET variants. Currently Next Generation (NG) SDH is gaining traction. This is a variation of SDH/SONET in which the edge nodes are upgraded first; migration of the core elements of the network to NG SDH/SONET can occur later. It has been designed to solve some issues of standard SDH/SONET: slow circuit provisioning, the difficulty of providing the large contiguous capacity required by some connections, and the inefficient fragmentation of the network, as some capacity can get stranded.
The major evolutions in NG SDH/SONET are the introduction of:
• Virtual concatenation (VCAT). This allows services to be mapped onto several identically sized, non-contiguous lower-order circuits (e.g. VC-4) rather than one large circuit (e.g. STM-64). The separate frames may even traverse the network by different routes, as they are re-assembled at a network element at the destination. This gives better utilisation of the network, as it minimises isolated, unusable capacity.
• Link capacity adjustment scheme (LCAS). This is a signalling protocol that complements VCAT: it allows hitless, in-service addition of circuits to a VCAT group as more resource is required, i.e. it allows the group to grow. LCAS also dynamically removes failed circuits from the group and adds other circuits to maintain the overall group connection, which makes it particularly attractive to carriers for protection and restoration. Loss of one VCAT member can be recovered from using the LCAS management: if the channel is transmitting at less than its peak rate, the surviving members may be able to maintain the connection, and if the capacity of the surviving group is not sufficient more members may be added (a minimal sketch of this group behaviour is given after this list).
• The generic framing procedure (GFP), which allows efficient mapping of any service/protocol onto the virtual containers and avoids standardisation delays for new services.
• A new control and signalling plane, the Automatic Switched Transport Network (ASTN), which allows real-time provisioning and tear-down of circuits.
• New signalling protocols that will allow automatic discovery of network elements and defragmentation of the network, along with automated provisioning.
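The following Python sketch is only a conceptual illustration of the VCAT/LCAS behaviour described above (member circuits added to and removed from a group while the surviving capacity keeps carrying traffic); the class and member names are invented for the example and do not correspond to any protocol message set.

```python
class VcatGroup:
    """Toy model of a virtually concatenated group managed LCAS-style.

    Each member represents one lower-order circuit of equal capacity; the
    usable group capacity is simply members * member_capacity.
    """

    def __init__(self, member_capacity_mbps):
        self.member_capacity = member_capacity_mbps
        self.members = set()                     # ids of healthy member circuits

    def add_member(self, member_id):
        """Hitless, in-service growth of the group."""
        self.members.add(member_id)

    def fail_member(self, member_id):
        """A network fault removes one circuit; the rest keep carrying traffic."""
        self.members.discard(member_id)

    def capacity(self):
        return len(self.members) * self.member_capacity

group = VcatGroup(member_capacity_mbps=150)      # VC-4-like member size (illustrative)
for mid in ("m1", "m2", "m3", "m4"):
    group.add_member(mid)
print(group.capacity())                          # 600 Mbit/s with 4 members

group.fail_member("m2")                          # failure of one member
print(group.capacity())                          # 450 Mbit/s still available

group.add_member("m5")                           # LCAS-style restoration of capacity
print(group.capacity())                          # back to 600 Mbit/s
```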

Current and Available Technology – State of the Art G-MPLS and IP
The current push in the control plane is to automate dynamic provisioning in the network 512. From both operator and customer viewpoints there are clear benefits, as this allows fast provisioning of circuits, which brings new services and revenue streams on stream. Furthermore, traffic engineering can allow more efficient use of the network and allow protection to be allocated efficiently. Automating the control function also removes the potential for human error. Other desired features include automatic network discovery and standardised interfaces between networks for inter-operability.
There are two solutions being developed via standards bodies. These solutions use different approaches to form the control plane, in line with their origins. The two are not mutually exclusive, although much work will need to be done on the details to allow them to work together 513.

The Automatically Switched Optical Network (ASON) originates in the ITU. It uses a top-down approach to define the architecture of the control plane: the top-level requirements are defined, and these then cascade down to the requirements for the individual components.

510 BT Global Services: http://www.btglobalservices.com/business/global/en/about_us/our_network/index.html
511 Interoute: http://www.interoute.com/
512 A. Jajszczyk, "Automatically switched optical networks: benefits and requirements", IEEE Optical Communications, pp. S10-S15, Feb. 2005
513 N. Larkin, "ASON and GMPLS – The Battle of the Optical Control Plane", White Paper, http://www.dataconnection.com/products/whitepapers.htm


The definitions cover the control plane architecture, signalling, routing and network discovery. Network discovery covers not only the finding of physical links but also resource and service discovery in the higher layers of the network. The standard does not define protocols (although the ITU is working on compliant protocols); protocols can be compared against the requirements to see whether they are “ASON compliant”.

The IETF has developed Generalised Multi-Protocol Label Switching (G-MPLS). This has its origin in MPLS, which is a forwarding scheme whereby a packet has a label associated with it. When the packet enters a switch, its output port and output label are found from a look-up table according to its input port and input label. To originate a connection, the relevant switches have to have suitable entries placed in their look-up tables, and when the connection is torn down these entries are removed. The resulting path is called a Label Switched Path (LSP). This scheme has a benefit when compared to schemes based on the destination IP address, as the look-up tables are smaller and therefore the latency in the switch is reduced. G-MPLS extends these techniques in recognition that, as well as data labels, properties such as the wavelength of the light or the slot position in a TDM signal can define the route to be taken, and these are included in the technique. The set of protocols in G-MPLS includes ones for signalling, routing, and link management and discovery.
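To illustrate the label-forwarding principle described above, a node's forwarding behaviour can be modelled as a simple dictionary keyed on the incoming port and label. This is a generic sketch, not the IETF protocol machinery itself; the table layout and names are assumptions made for the example.

```python
class LabelSwitch:
    """Minimal model of MPLS-style label forwarding at one node."""

    def __init__(self):
        # (in_port, in_label) -> (out_port, out_label)
        self.table = {}

    def install_lsp_entry(self, in_port, in_label, out_port, out_label):
        """Set up one hop of a Label Switched Path (done by signalling in practice)."""
        self.table[(in_port, in_label)] = (out_port, out_label)

    def remove_lsp_entry(self, in_port, in_label):
        """Tear down: the entry is simply removed."""
        self.table.pop((in_port, in_label), None)

    def forward(self, in_port, in_label):
        """Swap the label and return (out_port, out_label), or None if no LSP exists."""
        return self.table.get((in_port, in_label))

node = LabelSwitch()
node.install_lsp_entry(in_port=1, in_label=17, out_port=3, out_label=42)
print(node.forward(1, 17))   # (3, 42): label 17 on port 1 is swapped to 42 on port 3
print(node.forward(2, 17))   # None: no entry, the packet does not belong to an LSP
```

In G-MPLS the "label" can equally be a wavelength or a time slot, so the same table abstraction applies conceptually to transparent optical nodes.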

Optical transport is an inherently analogue process, and care will need to be taken that an optical signal in the network does not degrade beyond recovery before it is regenerated. Neither technique specifically addresses this, although it has begun to be addressed by some researchers 514.
Neither technique need be limited to optical communication.

A2.8.5.2 Future Technology – Research Trends
The size of the future core network dictates that upgrades are done gradually. Hence future networks will be heterogeneous, which is the main challenge for the development of core-network control platforms. Thus, although the technical solutions for control are progressing well, the major challenges are related to the interfaces between different domains. The challenges can be summarised as:
• Standards for interoperability
• Mapping application needs (QoS) to network requirements
• Resiliency mechanisms
• Control plane processing requirements
The two proposals for optical control platforms, ASON and GMPLS, are likely to coexist in the network, and hence a scheme for interworking is required in order to dynamically deploy end-to-end connections. The figure illustrates one obvious scenario in which different operators / administrations use different control platforms.

Figure 79 Interworking requirements between different operators’ domains (workstation – Operator 1: GMPLS – Operator 2: ASON – Operator 3: GMPLS – workstation)

As well as the interworking of the different platforms, related scenarios include:
• Interconnection of different vendors’ implementations of the same platform
• Interconnection of administrative domains using the same control platform
• Interconnection of domains using the same platform but based on different transmission technologies (e.g., different versions of NG SDH/SONET)
This requires the specification and standardisation of suitable inter-platform interfaces. The interconnection might require special gateways acting as control plane proxies.

514 B. Peeters et al., “Optimal routing in hybrid networks by decoupling the route calculation from the assessment of optical route viability”, NOC 2004, Eindhoven, The Netherlands, June 29 – July 1, 2004



QoS requirements will become more important and remain an open issue. This includes mapping packet-switched IP/MPLS traffic flows onto circuit-switched ASON/GMPLS connections and ensuring that the application requirements are met for the end-to-end connection.
Protection is also an issue. Dynamic protection schemes are widely used in ring network topologies but are mainly based on manually provisioned back-up capacity. Automating the setup of backup paths and enabling it to work on meshed network topologies will enable a much more efficient use of network transmission and switching capacity and eliminate human errors in back-up path provisioning. However, this will require the development and verification of new control plane algorithms; verification is perhaps the hardest issue here, to ensure that critical connections are restored under all possible conditions.
Finally, dynamic provisioning of connections and dynamic protection mechanisms substantially increase the processing requirements on the control plane. Thus, processing power is a potential bottleneck in future heterogeneous optical core networks. Perhaps distributed techniques which parcel out the problem could be used; certainly the networks are likely to be administered as separate, independent domains.

A2.8.6 Trends and issues to be developed in the course of BREAD
• The overall optimisation of the core and metro networks required in order to achieve the necessary end-to-end capacity and quality of service
• The structures of the main nodes
• The technology required
• The evolution to optical burst and packet switching, and the potential impact of worldwide demonstrators in this area
• The issues surrounding the deployment of 40 Gbit/s
• The development of control plane strategies

A2.8.7 Related technical initiatives
A2.8.7.1.1 International organisations
Optical Internetworking Forum (OIF)
The OIF is an open industry organisation of equipment manufacturers, telecom service providers and end users dedicated to promoting the global development of optical internetworking products and fostering the development and deployment of interoperable products and services for data switching and routing using optical networking technologies. The Technical Committee is divided into six working groups, each focusing on a specific area. Currently the working groups are: Architecture, Carrier, Interoperability, OAM&P, Physical and Link Layer, and Signalling.

Project LION (IST)
Project LION, which has recently ended, studied the implementation of an Intelligent Optical Network based on an Automatically Switched Optical Network (ASON) using Generalised MPLS as a control plane. The goal was to design and test a resilient and managed transport network realised by an OTN carrying different clients (e.g. SDH, ATM, IP-based), with inter-working and interconnection between layer transport networks and domains. The identified requirements were validated in a testbed in which IP routers and SDH equipment were integrated over an OTN infrastructure.
The test-bed showed the interconnection of three optical network domains (constructed by the partners Siemens, Tellium and TILAB) together with Network Management Systems (by T-Nova and Cisco). The demonstrator used OXCs and OADMs, and showed the operation of signalling interfaces, demonstrating restoration as well as path set-up and tear-down. The project successfully studied:
• Traffic/demand models for ASON/ASTN networks
• The definition of guidelines on the optimisation of the transport network evolution (flexible connection provisioning, control plane, management plane, resilience, …)
• ASON for survivability/resilience using the ASON flexibility
• ASON dimensioning and dynamic traffic conditions, resulting in a joint optimisation planning scheme

A2.8.7.1.2 European Projects

FP5 IST STOLAS
Burst switching represents a networking technology of growing interest across Europe, as it would appear to offer good network dynamics and granularity whilst not stretching the technology too greatly. It also seems a good contender for Grid networking, as it offers the ability to schedule time slots and associate them with particular applications. OBS projects are represented in the last EU research Framework, for example by the STOLAS project, and in a number of national projects. The STOLAS project, currently running, looks at label-switched networks and how best to implement the functions of routing, monitoring and control. Key components and subsystems being developed are label-controlled cross-connects/routers, edge routers (where the bursts/packets are assembled) and OADMs. The STOLAS project aims to improve the throughput of packet-switched networks by novel optical routing techniques based on stacked optical labels.
STOLAS’ specific objectives are:
• To assess the networking possibilities of optical label switching theoretically
• To demonstrate high-speed modulation and fast wavelength switching of widely tuneable lasers
• To develop multi-channel 2R regenerators in a hybrid technology
• To develop optical-label-controlled cross-connect and add/drop nodes
• To study monitoring and control aspects of optical label-routed networks
• To build a limited network testbed and to validate the key system functionality
• To contribute to standardisation processes

FP5 IST FASHION
A number of projects (funded by the EU and national governments) continue to focus on very high speed transmission. The general objective of the FASHION project is to assess the techno-economic potential of optical time-domain multiplexing (OTDM) in high-speed flexible optical networks. OTDM point-to-point transmission and time-domain routing will be investigated for single-channel data rates of 160 Gbit/s and higher. The transmission reach is planned to be extended to 500-1000 km, allowing wide all-optical networks. Supported by the analysis of physical system limitations, network concepts including economic considerations will be developed for mixed wavelength-division multiplexed (WDM) and OTDM multi-terabit systems. Time-domain add-drop multiplexers will be developed, including an assessment of their impact on the transmission performance. System operation and capabilities will be evaluated in both laboratory and field experiments. For realistic networking applications, particular emphasis will be put on compact and reliable modules by exploiting and enforcing advances in component technology.
Project FASHION (led by Siemens) has recently demonstrated a very successful trial at 160 Gbit/s over installed fibre (in the BT UK network). FASHION implemented an OTDM network comprising 16 x 10 Gbit/s channels, with multiplexers, demultiplexers and OADMs. A possible application of this technology might be in the metro layer, where the huge growth in traffic foreseen from the rapid deployment of broadband access makes the technology interesting to consider and evaluate.

FP5 IST DAVID
The main objective of the project was to propose a packet-over-WDM network solution, including traffic engineering capabilities and network management, and covering the entire area from MAN to WAN. The project utilised optics as well as electronics in order to find the optimum mix of technologies for future very-high-capacity networks. On the metro side the project focused on a MAC protocol for optical MANs. The WAN is a multi-layered architecture employing packet-switched domains, containing electrical and optical packet switches, as well as wavelength-routed domains. The network control system is derived from the concepts underlying MPLS and ensures a unified control structure covering both MAN and WAN. The project is now completed.

FP6 NOBEL
Next generation Optical network for Broadband European Leadership (NOBEL) is an Integrated Project (IP) in the 6th Framework Programme. The NOBEL project runs for two years, starting in January 2004. The NOBEL consortium consists of 32 industrial and academic partners and is led by Telecom Italia.
The main goal of the IST Integrated Project NOBEL is to find and to validate experimentally innovative network solutions and technologies for intelligent and flexible optical networks, thereby enabling broadband services for all. Specifically, the main objectives of NOBEL are:
• to define network architectures, evolutionary guidelines and a roadmap for core and metro optical transport networks towards intelligent data-centric solutions (based on optical and electrical switching, e.g. ASON/GMPLS);
• to identify the main drivers for the evolution of core and metro optical networks supporting end-to-end broadband services, and to derive technical requirements accordingly;
• to study efficient traffic/network engineering and resilience strategies in multi-layer/domain/service networks and interworking issues;
• to assess and describe social and techno-economic aspects regarding the deployment of network solutions and technologies for intelligent and flexible optical networks;
• to evaluate solutions for providing end-to-end Quality of Service;
• to identify network architectures, concepts and solutions for advanced packet/burst switching;
• to propose simplified strategies for the end-to-end management and control of intra/inter-domain connections in multi-layer networks (e.g. IP over Optics);
• to find enhanced solutions and technologies for physical transmission in transparent optical networks;
• to identify the key functional requirements from the architectural, management, control and transmission viewpoints and translate them into specifications, feasibility studies and prototype realisations for multi-service/multi-layer nodes with flexible client and adaptable transport interfaces;
• to assess existing technologies, components and subsystems in terms of efficiency and cost-effectiveness, deriving requirements and specifications for next generation components and subsystems with respect to the network solutions identified;
• to integrate the prototype solutions for multi-service/multi-layer nodes into existing test beds for experiments on advanced functionality.

FP6 LASAGNE
All-optical LAbel-SwApping employing optical logic Gates in NEtwork nodes (LASAGNE) is a Specific Targeted Research Project (STREP) in the 6th Framework Programme. The LASAGNE project runs for three years, starting in January 2004. The LASAGNE consortium consists of 10 academic and industrial partners and is led by Universidad Politécnica de Valencia.
The LASAGNE project aims at studying, developing and testing All-Optical Label Swapping (AOLS) techniques. As in the FP5 projects STOLAS, DAVID and LABELS, the subject of the LASAGNE project is Optical Packet Switching (OPS)/Optical Burst Switching (OBS) networks. However, the main difference is that in those FP5 projects the header is still processed electronically – while the payload is switched all-optically – whereas in the LASAGNE project the header will also be processed all-optically. The main objectives of the LASAGNE project are:
• Designing and realising new optical logic gate architectures to implement all-optical label-swapping and packet routing functionality in network nodes. To implement the necessary intelligence, the LASAGNE project intends to design and realise optical gates integrating commercially available subsystems such as Mach-Zehnder interferometers (MZIs) incorporating semiconductor optical amplifiers (SOAs).
• Demonstrating an all-optical routing node capable of managing high-speed labelled packets. The basic functionalities of this node to be demonstrated are: label extraction, label erasure, label comparison, wavelength conversion, label generation and insertion.
• Studying optical networking aspects of AOLS and disaster recovery strategies, in accordance with the definition of the architecture of the AOLS node. Not only technical but also economic implications of the proposed AOLS node structure will be investigated in these network studies.

FP6 e-Photon/ONe
This 6th Framework Programme Network of Excellence (NoE) aims at integrating and focusing the rich know-how available in Europe on optical communications and networks, both in universities and in the research centres of major telecom manufacturers and operators. The available expertise ranges from optical technologies, to networking devices, to network architectures and protocols, to the new services fostered by photonic technologies. The NoE will contribute to the Strategic Objective “Broadband for All”, with a particular focus on “low cost access equipment”, on “new concepts for network management, control and protocols”, and on “increasing bandwidth capacity in the access network as well as in the underlying optical core/metro network (including in particular optical burst and packet switching)”.
The main technical focus of the NoE is to show the potential advantages of optical technologies in telecom networks with respect to electronic technologies. Strong integration among the participants in the NoE will favour a consensus on the engineering choices towards the deployment of cost-effective optical technologies in networking that will support the future Internet, hopefully providing inputs to the standardisation bodies and guidelines to the operators, as well as competitive advantages to European telecom equipment manufacturers.

A2.8.7.1.3 National Projects
OPSNET Project
This is an EPSRC (Engineering and Physical Sciences Research Council, UK) funded project in collaboration with the University of Cambridge and the University of Strathclyde. The OPSnet project researches optical packet switching, in particular building upon the results obtained from the earlier EPSRC project WASPNET, which demonstrated a prototype switch within a network environment. As many of the networking issues facing the development of an optical backbone layer have come into focus during the past year, a clearer idea of the functions and performance needed from optical packet switching is starting to emerge. For example, new technologies and technical approaches are required to enable operation at 40 Gbit/s, with scalability to >100 Gbit/s. The networking issues associated with the integration of the optical backbone layer and the IP layer require effective solutions. Fundamental choices between synchronous and asynchronous packet operation have to be made, which have major impacts on the hardware solutions. Expected outcomes are concerned with understanding:
• the relative merits of asynchronous and synchronous packet operation, and the impact of asynchronous operation on switch and network performance
• the impact of data traffic statistics on switch design
• the design of an asynchronous optical packet switch for 40 Gbit/s operation
• a network demonstrator supporting an end-to-end connection across the electronic and optical domains, which will support these activities
A number of industrial partners are associated with this project, namely BT Laboratories, Fujitsu Telecommunications and Marconi Communications.

MultiTeraNet
The MultiTeraNet programme is the major national research initiative on optical communication technologies in Germany. It was launched in June 2002 and will finish in September 2006. 14 companies, 3 Fraunhofer institutes and 9 universities are participating in the research programme. The total volume is about 50 million euro, with public funding of about 60%. Part of the research activities is devoted to the wide area core network.
The 39 projects of the MultiTeraNet programme are divided into four main research areas:
• Flexible optical networks
• Usage of fibre capacity
• Access network technologies
• Key components
In the “flexible optical networks” project cluster the main objectives are the modelling and design of optical and optoelectronic transport networks, as well as laboratory experiments and field trials. Projects in the fibre usage cluster will work on, for example, increasing spectral efficiency, adaptive impairment compensation techniques, and new modulation techniques. These two thematic areas and the “key components” project cluster will develop advanced solutions for the core and metro networks.

A2.8.7.1.4 Most important standards for Optical Metro / CWDM networking
ITU-T
• G.695, Optical interfaces for coarse wavelength division multiplexing applications. This Recommendation provides optical parameter values for physical layer interfaces of coarse wavelength division multiplexing (CWDM) applications with up to 16 channels and up to 2.5 Gbit/s. Applications are defined using two different methods, one using multichannel interface parameters and the other using single-channel interface parameters. Both unidirectional and bidirectional applications are specified.
• G.694.2, Spectral grids for WDM applications: CWDM wavelength grid. This Recommendation provides the wavelength grid for coarse wavelength division multiplexing (CWDM) applications. This wavelength grid supports a channel spacing of 20 nm. The wavelength grid in this version of the Recommendation has been moved by 1 nm to align it with current industry practice while maintaining symmetrical nominal central wavelength deviations.
• G.7042 / Y.1305, Link capacity adjustment scheme (LCAS) for virtual concatenated signals. This Recommendation specifies a methodology for dynamically and hitlessly changing (i.e. increasing and decreasing) the capacity of a container that is transported in a generic transport network (e.g. over an SDH or OTN network using virtual concatenation). In addition, the methodology also provides survivability capabilities, automatically decreasing the capacity if a member experiences a failure in the network, and increasing the capacity when the network fault is repaired.
• G.7041 / Y.1303, Generic Framing Procedure (GFP). This Recommendation specifies interface mapping and equipment functions for carrying packet-oriented payloads, including IP/PPP, Ethernet, Fibre Channel and ESCON (Enterprise Systems Connection) payloads, over optical and other transport networks. This Recommendation, together with ITU-T Recommendation G.709 on interfaces for optical transport networks, provides the full set of mappings necessary to carry IP traffic over DWDM systems.
• G.707 (2000), Virtual Concatenation (VCAT). VCAT enables several individual standard SPEs to be combined to form a single higher-capacity link, allowing more dynamic bandwidth provisioning in SDH/SONET networks.
• G.872 (1999), Architecture of optical transport networks
• G.8080, Architecture for the Automatically Switched Optical Network (ASON)
• Draft Rec., Ethernet over Transport Network Architecture (ETNA)
• G.871, Framework of Optical Transport Network Recommendations
• SG13, “Multi-protocol and IP-based networks and their interworking”
• G.807, Requirements for the Automatic Switched Transport Network (ASTN)
• G.709, Interfaces for the optical transport network (OTN)

IEEE RPR WG
Standard IEEE 802.17, Resilient Packet Ring:
• Support for a dual counter-rotating ring topology
• Full compatibility with the IEEE 802 architecture as well as 802.1D, 802.1Q and 802.1f
• Protection mechanism with sub-50 ms fail-over
• Destination stripping of packets
• QoS differentiation (MAC protocol, different priority queues)
• Adoption of existing physical layer media (SONET/SDH)

A2.9 GRID NETWORKS
A2.9.1 Introduction

A recent technique which has the potential to drive large amounts of data transfer over broadband networks is grid computing. This is a technique in which a computing problem is divided into many small pieces, each of which can then be calculated using spare processor power on many disparate platforms spread globally. The concept is that computing power is a resource that can be turned on when needed, in an analogous way to the supply of electrical power to a consumer.
Typically, up to now the applications have been non-commercial, such as the SETI@home project 515, where the processing power has been donated for free by willing individuals. However, as grid development moves forward it is likely that the applications will become more commercial, such as the investigation of new molecules for pharmaceutical purposes, and processing power will be brokered in a market.
The main requirement on applications is that they must be able to be split into parallel elements which can be individually calculated before being aggregated to give the overall solution. One bonus of the “on tap” nature of the processing power is that processes that previously took long periods to compute can now be calculated in virtually real time, without access to prohibitively expensive supercomputers.
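A minimal Python sketch of this split-and-aggregate pattern is given below. It is purely illustrative: the function names are invented, the "workers" are local processes rather than globally scattered grid resources, and no middleware, brokering or security layer is modelled.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """The independent piece of work: here, summing squares of one chunk."""
    return sum(x * x for x in chunk)

def split(data, num_chunks):
    """Divide the problem into roughly equal, independent pieces."""
    size = (len(data) + num_chunks - 1) // num_chunks
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = split(data, num_chunks=8)
    # Each chunk could be shipped to a different grid resource; here a local
    # process pool stands in for the remote platforms.
    with Pool(processes=4) as pool:
        partial_results = pool.map(partial_sum, chunks)
    total = sum(partial_results)               # aggregation of the partial answers
    print(total == sum(x * x for x in data))   # True
```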

A good example of this is the analysis of astronomical data. Astronomers have access to telescopes for limited periods of time. Historically, they would make their measurements and then analyse them over the following months. This was inefficient, as they had to measure blind. Grid networks give the potential for the analysis to be completed while measurements are still occurring, and thus for the telescopes to be directed to interesting features during the user's time allocation.

A2.9.2 Enablers and drivers for grids
Since the 1960s we have progressed from a time when only a few organisations could afford computers, and these had limited power, to the point where most Western individuals have more personal computing power at their disposal than the Apollo programme had. The processing power of computers still doubles every 18 months, following the exponential curve described by Moore's Law; a corollary of this is that the cost of processing approximately halves every 18 months.
The past, when calculations had to use log tables and slide rules, is truly a foreign land. It is not a case of doing things differently there, but of not being able to do at all things we now take for granted.
One driver for progress is never being satisfied. The evolution we have made to solve previously intractable problems whets our appetite to solve yet more complex challenges. Indeed, the challenges are in some ways provided by the increased volumes of data produced by advanced technology, which need to be analysed.

Foster states that processing power doubles (and its cost halves) every 18 months 516. In contrast, storage capacity doubles (and its cost halves) every 12 months. Certain areas generate a lot of data: particle accelerators for the investigation of nuclear physics and telescopes for the investigation of the universe are obvious examples, and petabyte data stores are already planned. This data has to be analysed, and the relative growth rates of the two sectors mean that this is becoming infeasible at a single site. Finally, he states that the capacity of the communications network doubles (and its cost halves) every 9 months. If this continues, data communication will become essentially free, at which point it becomes economic to analyse such data remotely (or to do equally computationally intensive activities).
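The consequence of these different doubling periods can be made concrete with a little arithmetic. The short sketch below simply compounds the quoted doubling periods over a ten-year horizon; the periods are the ones attributed to Foster above, and the rest is illustration.

```python
def growth_factor(years, doubling_period_months):
    """How many times a quantity grows if it doubles every `doubling_period_months`."""
    return 2 ** (12 * years / doubling_period_months)

years = 10
for resource, months in [("processing", 18), ("storage", 12), ("network", 9)]:
    print("%-10s grows ~%8.0f-fold in %d years" % (resource, growth_factor(years, months), years))

# Relative to processing, network capacity per unit cost pulls ahead by roughly
# 2**(120/9) / 2**(120/18) ~= 100x over the decade, which is the basis of the
# argument that moving data to remote computation becomes essentially free.
print(growth_factor(years, 9) / growth_factor(years, 18))
```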

Thus the prime drivers for grid networks are the complexity of the computation required and the minimal cost of communications. Finally, the model requires providers to make spare processing capability that they are not using available to users. Until now this has been provided charitably; this may change as commercial users begin to use the technology.
An interesting consequence of the donation or sale of spare processing cycles is that the processing resources used are working more efficiently. This in turn hints that grid computing is “green”, in that fewer physical resources must be used worldwide for the same result. This argument leads to a drive for distributed computing; there are several techniques, outlined below.

515 http://setiathome.ssl.berkeley.edu/
516 I. Foster, “The Grid: A New Infrastructure for 21st Century Science”, Physics Today, February 2002


A2.9.3 Distinction between peer-peer, cluster and grid networks
It is worth spending some time understanding what is meant by a grid and comparing it with peer-peer and cluster networks. In the literature the consensus on the boundaries between the techniques seems tenuous; however, Foster again provides some useful insights 517. He provides a checklist of three requirements that define a grid. It must:
• Co-ordinate resources that are not centrally controlled
• Use standard, open, general-purpose protocols and interfaces
• Deliver non-trivial qualities of service
Cluster networks consist of a set of computing resources (e.g. terminals) typically owned by a single organisation. Although these provide a greater potential for computation than a single resource, they do not qualify as grids because they are centrally controlled; the distributed resources in a grid may be globally scattered, with localised ownership and management. Since it is reasonable to assume that all the devices in a cluster network are running the same, or at least similar, software, there is no need for open, standardised interfaces and protocols. True grids, where the resources used may be globally distributed and run different software sets on widely varying platforms, drive the need for standardised interfaces and protocols. Since the communication infrastructure used is typically the Internet, it also makes sense to use the standard protocols available (such as FTP and TCP/IP) wherever possible, with the grid communication being encapsulated within them.
Most authors, e.g. 518, make a distinction between peer-peer and grid networks. Broadly speaking, peer-peer network development has grown out of the file-sharing community, whereas grid networks have originated in the research community concerned with solving computationally intense problems. Furthermore, this development has given the impression that peer-peer is concerned with individuals interacting with other individuals in small groups, requiring limited total resources, while grids are concerned with organisations interacting with large numbers of individuals/organisations and requiring large aggregate capacity.
While the two groups have different origins, their goals are rapidly converging; indeed, the middle ground is the area where grid technology may move from being a niche of interest only to a limited number of researchers to a ubiquitous technology with a large consumer base.
During this formative phase there is an opportunity for the two groups to co-operate to define interoperable standards. It would be a shame if this possibility were lost due to cultural differences. Ledlie et al. 519 are keen to foster this attitude and comment that many of the problems the two communities face are common; indeed, the two groups may be able to learn from each other's approach.
There are, however, some distinctions between peer-peer and grid networks. Peer-peer networks are typically open distributed networks with no centralised control, whose members typically donate capacity to the other members. Grid networks operate more as a hub, with the originator controlling the distribution. Although historically the research topics have tended to be non-commercial, such as the search for extra-terrestrial intelligence in the SETI@home project 520, and individuals have donated their resources for free, it is likely that some applications will be industrial and individuals will be less charitable and will expect some reward for the use of their resources. There will also be security and confidentiality implications which grid developers will need to address. Requirements are discussed in more detail below.

A2.9.4 Grid Network Required Elements
Baker 521 lists the elements that comprise a grid:
• Grid fabric – this comprises all the physical resources available over the Internet worldwide, and could include individual PCs, Unix workstations, sensors such as telescopes providing data, storage devices and on-line databases. Although Baker does not specifically state it, the physical communications equipment used to link the grid would also be included in this element. The widely varying nature of these components gives a strong drive for the grid middleware and applications not to be platform specific. In broad terms this element can be thought of as the commodity being traded.

517 I. Foster, “What is the Grid? A Three Point Checklist”, GRIDToday, July 20, 2002
518 http://www.buyya.com/talks/P2PPanel.ppt
519 http://iptps03.cs.berkeley.edu/final-papers/scooped.pdf
520 http://setiathome.ssl.berkeley.edu/
521 M. Baker et al., “Grids and Grid technologies for wide-area distributed computing”, Software – Practice and Experience, John Wiley and Sons Ltd, 2002


• Core grid middleware – this element is concerned with the brokering of resources, distribution of<br />

requirements and quality assurance. Factors that must be dealt with here include resource discovery and<br />

management, access negotiation and trading, security, allocation, & quality of service. This is the merchant<br />

doing the trading and providing value by maintaining a good service through the supply chain.<br />

• User level grid middleware – These are the software tools that the developer can use to adapt their problem<br />

for a grid based solution. The components include languages, libraries and broker clients that allow the user<br />

to define their requirements from the transaction. This is the agent that helps a customer to specify his<br />

requirements, in a way that is understandable by the merchant, and then goes into the market to find a cost-efficient solution.<br />

• Grid applications and portals – the application is simply the problem that the user wants to solve such as a<br />

simulation or data analysis. Portals are web based services that users can submit jobs to and collect jobs<br />

from. In other words the portal acts as an interface to the grid that the user would otherwise have to provide.<br />

This is the user and his requirement<br />

It is informative to look at these sections in a little more detail.<br />

<strong>A2.</strong>9.4.1 Grid Fabric<br />

Perhaps the most important thing to note about this section is that the user and the user’s application have to<br />

work with what is provided on the grid. The computing power provided may sit on many different platforms<br />

with different operating systems and software. While it may be possible for the application to use a subset of<br />

desirable resources with the desired profile, this is not attractive. Similarly the user would require as much flexibility as possible in forming his solution, so the grid should not specify languages or similar constraints to be used.<br />

The grid requires contributors to provide resources, so it is paramount that their experience is positive. Factors to<br />

consider here are<br />

• that their participation should not impact their own use of their resources or communication i.e. they have<br />

first call on the resources. The ideal situation is that the participation does not impinge on the owner’s<br />

experience<br />

• their security is not compromised in any way. Use of the grid is likely to involve the pushing of software out<br />

to the resources. Middleware will need to guarantee protection by application validation<br />

• they should be free to join or leave at any time. Note that this implies a need for the application to be stable<br />

under unpredictable and sudden changes in the grid environment<br />

• they should only have to install simple software on their platforms to participate. They certainly should not<br />

have to change operating system or install new languages<br />

• they should be able to refuse access for some user classes, while allowing access to others. For example<br />

while a user might cheerfully donate processing resource for an academic astronomical research application,<br />

they may feel uncomfortable doing the same for a commercial weapons development. 522 A minimal sketch of such an owner-side policy follows this list.<br />
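A minimal sketch (in Python, not from the report) of such an owner-side policy, in the spirit of the Class of Customer / Class of Application idea in footnote 522. The class names, prices and the evaluate_request helper are illustrative assumptions only.<br />

# Hypothetical owner-side policy: accept or reject grid work requests based on
# a declared application class; names and prices are made up for illustration.
OWNER_POLICY = {
    "academic-research": {"accept": True,  "price_per_cpu_hour": 0.0},
    "commercial":        {"accept": True,  "price_per_cpu_hour": 0.05},
    "weapons-research":  {"accept": False, "price_per_cpu_hour": None},
}

def evaluate_request(application_class, cpu_hours):
    """Return (accepted, quoted_price) for a work request of the given class."""
    policy = OWNER_POLICY.get(application_class)
    if policy is None or not policy["accept"]:
        return False, None
    return True, policy["price_per_cpu_hour"] * cpu_hours

print(evaluate_request("commercial", 100))       # (True, 5.0)
print(evaluate_request("weapons-research", 10))  # (False, None)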

Most of the necessary work for communication has already been done. Grid designers have no desire to reinvent<br />

the wheel and thus will transmit the data using existing widely dispersed standard protocols such as<br />

TCP/IP. The main issue here is the connection speed to the contributor. Again it would be unwise for the grid to<br />

try to hog the connection so that the contributor experiences a bottleneck in the connection for his own use.<br />

Examples of methods to overcome this include monitoring the connection and avoiding its use while the contributor is using it, and designing applications to work autonomously once they start, communicating only results or errors so as to minimise unnecessary messaging. Standard techniques can be used for encrypting the<br />

communications to keep them secure from external snooping.<br />

The desirable properties described here have been evolved using a top down approach. There is perhaps scope<br />

for a social research project investigating what each group interacting with the grid desires and expects from the<br />

experience.<br />

522 While Class of Service is a well known term to describe the experience of the application owner, I would propose a new term Class of<br />

Customer or Class of Application to describe the type of application owner / application content so that the resource owner can pre-set<br />

policies for the software to decide automatically whether to accept a request, or even how to price an allocation if a cost brokering method is used, as described in the Core Grid Middleware section.<br />



<strong>A2.</strong>9.4.2 Core grid middleware<br />


This section is responsible for dissemination of the available resources to the higher layers and brokering<br />

between the applications and the resources. Thus it must provide some of the desirable functions described in<br />

the section above such as application screening. However it could potentially contain a lot more functionality.<br />

The most basic functions that this software must provide are:<br />

• Resource discovery and database. While this is called discovery, in reality it is far more likely that the client<br />

software on the resource will be the initiator and advertise its willingness to be included in the environment.<br />

However the middleware will have to maintain a network database and will need to constantly monitor the<br />

state and connectedness of its resources.<br />

• Security of communications with resource and verification of non-malicious nature of application<br />

• Allocation of resources to application<br />

• Gateway for all communications. While it is possible to envisage models where the middleware broker<br />

simply acts as an introduction agency, and the partners then interact directly, Buyya suggests that all<br />

interaction should occur through the broker 523<br />

It is likely that as applications move from the non-profit research areas into commercial research areas and<br />

beyond that resource providers will want a share of the wealth generated. At this point the broker becomes far<br />

more powerful negotiating price and contracts between resource and application owners (see footnote 522 for<br />

suggestion of class of customer to complement class of service). The dynamic for these interactions may change from that of the current altruistic uses, with application owners requiring an improved class of service for their dollars.<br />

Further the resource provider’s characteristics could change as well. One could envisage resource farms which<br />

exist solely to sell resources in an open market to the highest bidder.<br />
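The brokering role described in this section can be made concrete with a small sketch. The following Python fragment is illustrative only: the Resource and Broker shapes, the price fields and the cheapest-offer selection rule are assumptions, not features of any existing middleware such as Gridbus or Globus.<br />

# Toy broker: keeps a registry of advertised resources and allocates the
# cheapest one that meets a request. Shapes and pricing are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    owner: str
    cpus: int
    price_per_cpu_hour: float
    online: bool = True   # maintained by the monitoring / discovery function

class Broker:
    def __init__(self):
        self.registry: list[Resource] = []

    def advertise(self, resource: Resource) -> None:
        # In practice the resource-side client initiates this advertisement.
        self.registry.append(resource)

    def allocate(self, cpus_needed: int, max_price: float) -> Optional[Resource]:
        candidates = [r for r in self.registry
                      if r.online and r.cpus >= cpus_needed
                      and r.price_per_cpu_hour <= max_price]
        # Pick the cheapest acceptable offer, or None if nothing matches.
        return min(candidates, key=lambda r: r.price_per_cpu_hour, default=None)

broker = Broker()
broker.advertise(Resource("farm-A", cpus=64, price_per_cpu_hour=0.04))
broker.advertise(Resource("pc-B", cpus=4, price_per_cpu_hour=0.0))
print(broker.allocate(cpus_needed=8, max_price=0.05))  # farm-A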

<strong>A2.</strong>9.4.3 User level grid middleware<br />

This is perhaps the most important part of the grid. Specialist users have the resources to translate their problems into formats suitable for parallel processing (e.g. SETI can send processes and data, areas of sky, to individual computers for analysis), but if high penetration is to be achieved, software must make it easy for the user to translate their problem into a format suitable for parallel processing. This may not be trivial.<br />

Consideration of security and confidentiality may be necessary. This could be achieved by the simple fact of<br />

breaking the problem down into small pieces so only the originator has a view of the whole, or by encoding the<br />

data even during processing so that the host cannot understand the results his machine generates.<br />

If customer service classes are implemented this software must provide a certificate vouching for the customer<br />

and the application.<br />
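A minimal sketch of the splitting step discussed above, assuming a trivially parallel problem; the unit size and the process_unit stand-in are illustrative assumptions, not any toolkit's API.<br />

# Split a data set into independent work units (as SETI@home does with areas
# of sky), process them separately, and merge the partial results.
def split_into_units(data, unit_size):
    return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]

def process_unit(unit):
    # Stand-in for the real computation performed on a contributor's machine.
    return sum(unit)

def run_on_grid(data, unit_size=4):
    units = split_into_units(data, unit_size)
    # Only the originator sees the whole problem; each host sees one unit,
    # which is one simple way to limit what any single contributor learns.
    partial_results = [process_unit(u) for u in units]
    return sum(partial_results)

print(run_on_grid(list(range(100))))  # 4950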

<strong>A2.</strong>9.4.4 Grid applications and portals<br />

The opportunities here are as broad as the users’ imaginations. It is notoriously difficult to forecast how new<br />

technologies such as this will be used. Who would have forecast that 70% + of the internet’s traffic would be<br />

peer-peer 10 years ago?<br />

One likely benefit is the availability of resources to those in developing countries. Massive computing power<br />

will now be available to anyone with a terminal connected to the internet, rather than just those with the capital<br />

resources to invest in supercomputers.<br />

<strong>A2.</strong>9.5 Grid Research Trends<br />

There are several groups working on grid development. Some of the more notable ones are:<br />

• Grid Computing and Distributed Systems (GRIDS) Laboratory at the University of Melbourne developing<br />

the Gridbus project. A detailed list of achievements is described in 524; in summary, all the elements required<br />

as described above are at some release status, and they have also developed a tool for modelling global grid<br />

operation<br />

523 http://www.buyya.com/talks/P2PPanel.ppt<br />

524 http://www.gridbus.org/papers/gridbus2004.pdf<br />


• Globus Alliance, a community of organisations and individuals who have released a toolkit for building grid<br />

systems and applications 525. The collaborators are primarily at European and American organisations. The<br />

team state, “The Globus toolkit includes software for security, information infrastructure, resource<br />

management, data management, communication, fault detection, and portability. The packaged suite of<br />

components can be used either independently or together to develop applications.” They claim that there are over 22,000 downloads per month from their web-site.<br />

• Coregrid is a network of excellence funded by the EC under the 6th Framework program 526. It started in<br />

Sept 2004 and is due to last 4 years. The program of activities is based around 6 complementary research<br />

areas:<br />

- knowledge & data management;<br />

- programming models;<br />

- system architecture;<br />

- Grid information and monitoring services;<br />

- resource management and scheduling;<br />

- problem solving environments, tools and GRID systems.<br />

Similarly there are several groups trying to organise large grid networks. These include:<br />

• The Global Data-Intensive Grid Collaboration 527 organised by the GRIDs laboratory. This is intended to<br />

demonstrate achievement of the two originating HPC challenges “Most Data-Intensive and Geographically<br />

Distributed Applications”.<br />

• Teragrid 528, launched by the National Science Foundation in the US, with multiple supercomputing sites<br />

connected 529. The components are linked by a 40Gbit/s network and can provide 20 Teraflops of<br />

processing power and over 1 Petabyte of storage. Stated applications for the Teragrid include:<br />

- the study of drug interactions with cancer cells, to thereby develop better cancer drugs<br />

- the study of the human genome and how the brain works,<br />

- the analysis of weather data so quickly that they will be able to create real-time weather forecasts that can<br />

predict down to the kilometer where a tornado or other severe storm is likely to hit.<br />

- the design of better aircraft by allowing realistic simulations of new designs<br />

- the understanding of the properties of our universe and how it formed.<br />

• The Large Hadron Collider Computing grid (LCG) 530. This is being built to deal with the anticipated<br />

computing needs of the Large Hadron Collider under construction at CERN. It includes more than 100 sites<br />

in 31 countries which contribute >10,000 CPUs and nearly 10,000,000 Gbytes of storage. The group claim<br />

this is the largest international scientific grid and they are achieving record breaking results for high speed<br />

data transfer. However the current processing capacity of this Grid is estimated to be just 5% of the long-term<br />

needs of the Large Hadron Collider. Therefore, the LCG will continue to grow rapidly.<br />

Of these grids the first is the closest approximation of a flexible grid as described in the introduction. The other<br />

two examples are primarily under administrative control of one body for defined applications.<br />

From analysis of the groups above, it is possible to identify two trends of development within the community:<br />

• The first is concerned with solving big scientific problems such as the data provided by the LHC. Teams<br />

working on this generally are building large grid networks (either with clusters of resources provided by well<br />

defined collaborators, or with those provided from a looser collection of donors). The grids are based<br />

around single (or a small group of) applications and may not necessarily be easily transferable to other<br />

projects with the software possibly being bespoke. The aim here is to provide a big resource to a limited<br />

number of users<br />

• The second is more egalitarian with the thrust being to generate a devolved resource available to everyone.<br />

They propose a market system for allocation of resources and posit intelligent middleware allowing the<br />

penetration of the grid to be massive and available to those who are not computer science experts<br />

Both approaches are valid and the development of the grid will probably proceed along both paths in parallel.<br />

525<br />

http://www.globus.org/<br />

526<br />

http://www.coregrid.net/mambo/component/option,com_frontpage/Itemid,1/<br />

527<br />

http://gridbus.cs.mu.oz.au/sc2003/<br />

528<br />

Strictly speaking the defined nature of the resources and their ownership by a collaborating group means that Teragrid should be<br />

considered a cluster network, rather than a grid network using definitions available in the literature.<br />

529<br />

http://www.teragrid.org/about/index.html<br />

530 http://lcg.web.cern.ch/LCG/<br />



<strong>A2.</strong>10 SECURITY<br />

<strong>A2.</strong>10.1 Introduction<br />


The selected security criteria for broadband networks that need to be addressed are:<br />

• The main functions of security are to protect the user data. These functions are well understood and are<br />

defined.<br />

- Data Authentication, Data export and import out of and into the system<br />

- Stored data integrity providing integrity for the user data stored in the system<br />

- User data confidentiality transfer protection providing confidentiality for user data in transit.<br />

Confidentiality is the ability to ensure that the content and meaning of communications between two<br />

parties does not become known to a third party.<br />

- User data integrity transfer protection providing integrity for user data in transit. Integrity is the ability<br />

to ensure that messages received are genuine and have not been tampered with or otherwise<br />

compromised.<br />

• identification and authentication to ensure that users are associated with the proper security attributes.<br />

Authentication is the ability to validate that a party involved in a transaction is who s/he claims to be, or a<br />

legitimate representation of that party.<br />

• non-repudiation includes non-repudiation of origin and non-repudiation of receipt and is the ability to<br />

ensure that once a party has voluntarily committed to an action it is not possible to subsequently deny that<br />

the commitment was given by that party. This guarantees that neither party to a transaction can falsely claim that they did not participate in a specific transaction.<br />

• security management intends to specify the management of several aspects of the System Security<br />

Functions: security attributes, security function data and parameters.<br />

• privacy requirements which provide a user protection against discovery and misuse of identity by other<br />

users, which includes:<br />

- Anonymity to ensure that a user may use a resource or service without disclosing the user’s identity.<br />

- Pseudonymity to ensure that a user may use a resource or service without disclosing its user identity, but<br />

still being accountable for that use.<br />

- Unlinkability to ensure that a user may make multiple uses of resources or services without others being<br />

able to link these uses together.<br />

- Unobservability to ensure that a user may use a resource or service without others, especially third<br />

parties, being able to observe that the resource or service is being used.<br />

• access to the system requirements to ensure that the system can only be accessed by authorised users<br />

<strong>A2.</strong>10.1.1 Security functions overview<br />

<strong>A2.</strong>10.1.1.1 Public Key Infrastructures<br />

Public Key Infrastructures 531 are defined to provide Public Key Certificate (PKC) management to the group of<br />

security protocols designed to protect the Internet. These protocols, for instance IPsec, SSL, TLS or S/MIME,<br />

use public key cryptography to provide services such as confidentiality, data integrity, data origin authentication<br />

and non-repudiation. Users of public key based systems must have trust in a PKC. A PKC is a data structure<br />

which binds a public key to the user identity and other information, such as a validity period, serial number,<br />

issuer identity or extensions. This binding is achieved by making use of a trusted third party, known as a<br />

Certification Authority (CA) that verifies the subject identity and digitally signs each digital certificate.<br />

A PKI is defined as all of the supporting infrastructure behind a public key cryptography system. It includes<br />

software, policies, hardware, etc, allowing the creation, management, storage, distribution and revocation of<br />

public key certificates. There are four main components of a generic PKI. Certification Authorities (CAs) issue,<br />

validate, renew and revoke PKCs. Registration Authorities (RAs) authenticate off-line users and add certificate<br />

properties. The PKI’s end users or systems can encrypt, sign and validate digital documents from a known<br />

531 The PKIX IETF Working Group. <br />


public key of a trusted CA. And, finally, public repositories make available certificates and certificate revocation lists (CRLs).<br />

Basic services are offered in most PKIs to manage a certificate’s life cycle: certification requests, validation,<br />

publication, revocation and renewal requests. Certification requests are sent by end entities to obtain a PKC.<br />

Requests are usually generated as PKCS10 or CRMF 532 objects. These contain the user’s public key and<br />

personal data, and are signed with the user’s private key to provide proof of possession. Several Key<br />

Management Protocols (KMP) are defined to manage requests between entities: CMP (Certificate Management<br />

Protocol) 533 , CMC (Certificate Management over CMS) SCEP (Simple Certificate Enrolment Protocol) 534 , etc.<br />

Usually, users make their request by going to an RA with personal documents to prove their identity and,<br />

optionally, with a previously generated request. Requests can be done on-line using the above protocols or offline<br />

using the RA. These requests are given to the CA, to be signed and published.<br />

When the user wants to retrieve his or her own certificate or another user’s certificate to establish a trust<br />

connection, certificates should be retrieved from a public repository (i.e. LDAP server). If the user’s certificate<br />

has expired, or is about to expire, the user can renew it, keeping the same public and private keys associated with the user’s identity. If the user’s certificate is stolen, lost or becomes invalid, it should be revoked and published<br />

in a Certificate Revocation List (CRL).<br />

Before using a certificate, the user must validate it. Validating a certificate consists of checking the certificate’s<br />

validity period, whether it is stored in a CRL, whether it has been issued by a trusted CA and if it is compliant<br />

with the PKI’s policies. If the certificate validation fails, then the certificate is not trusted.<br />
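The validation steps listed above can be sketched as follows. This is an illustration using plain dictionaries rather than a real X.509 library; the field names, the TRUSTED_CAS and CRL sets and the is_policy_compliant placeholder are assumptions.<br />

# Sketch of PKC validation: validity period, CRL membership, trusted issuer,
# policy compliance. Not a substitute for a real certificate library.
from datetime import datetime, timezone

TRUSTED_CAS = {"Example Root CA"}   # assumed trust anchors
CRL = {"serial-0042"}               # assumed revoked serial numbers

def is_policy_compliant(cert):
    # Placeholder for checks against the PKI's certificate policies.
    return True

def validate_certificate(cert, now=None):
    now = now or datetime.now(timezone.utc)
    if not (cert["not_before"] <= now <= cert["not_after"]):
        return False                 # outside validity period
    if cert["serial"] in CRL:
        return False                 # revoked
    if cert["issuer"] not in TRUSTED_CAS:
        return False                 # issuer not trusted
    return is_policy_compliant(cert) # otherwise policy check decides

example_cert = {
    "serial": "serial-0001",
    "issuer": "Example Root CA",
    "not_before": datetime(2005, 1, 1, tzinfo=timezone.utc),
    "not_after": datetime(2007, 1, 1, tzinfo=timezone.utc),
}
print(validate_certificate(example_cert,
                           now=datetime(2006, 1, 9, tzinfo=timezone.utc)))  # True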

CAs can establish trust relationships between themselves using cross-certification. The term cross-certification is<br />

applied when two or more CAs establish a trust relationship to allow easy and scalable trust establishment<br />

between their certified entities. The relationship is unidirectional and defined by one cross-certificate shared<br />

between the involved CAs. That is, Certification Authority A defines a trust relationship with B by signing a<br />

cross-certificate for B. And B defines a trust relationship with A by signing a cross-certificate for A.<br />

Relationships between A and B may be defined by one or two different cross-certificates. Two main models are<br />

defined: peer-to-peer or inter-domain cross-certification and hierarchical or intra-domain cross-certification.<br />

Hierarchical cross-certification defines trust relationships between CAs inside the same organisation or<br />

administrative domain. In this model the Root CA, the self-signed CA, is the top of the hierarchy. Peer-to-peer<br />

cross-certification defines a trust relationship between two autonomous CAs, which can be stand alone or<br />

hierarchical CAs. In this model every autonomous CA has a Root CA. An additional cross-certification method<br />

is the use of a Bridge CA (BCA). A Bridge CA is a trustworthy, independent node which can be used to<br />

establish a trust relationship between two unrelated CAs. Each CA shares a cross-certificate with the BCA, and a<br />

trust relationship is therefore established between the CAs.<br />

<strong>A2.</strong>10.1.1.2 Access control mechanisms<br />

Firewalls<br />

A Firewall is an access control mechanism deployed to protect a network or a device from potential external<br />

attacks. A Firewall is a bottleneck between two areas, a trusted one and an untrusted one, where incoming and<br />

outgoing data flows are monitored according to pre-existing rules. The security decisions can be recorded to provide effective audit files and analysed by Intrusion Detection Systems.<br />

Network firewalls are composed of a router (or multiple routers), coupled or not with host systems, running<br />

filtering software and hardware services.<br />

Personal firewalls are composed of software scanning data flows to and from the network.<br />

There are two types of firewalls:<br />

• Packet filtering:<br />

Such a firewall is implemented on a router or dual homed gateway. The data flow is scanned at the packet<br />

level and blocked where necessary according to a list of security rules. This type of firewall usually checks the validity of header parameters such as the source IP address, the destination IP address, the protocol type (TCP/UDP…), the source port and the destination port. A minimal sketch of such rule matching is given after this list.<br />

532 M. Myers, C. Adams, D. Solo, D. Kemp, “RFC 2511 Internet X.509 Certificate Request Message Format”. March 1999<br />

533 C. Adams & S. Farrell, “RFC2510: Certificate Management Protocols”, March 1999<br />

534 CISCO Simple Certificate Enrolment Protocol <br />


Figure 80: Firewall Concept<br />

• Application gateways/proxies:<br />

Such a firewall is implemented on a host system configured with two network interfaces. The data flow is<br />

scanned at the application level. This type of firewall can control the content of the data flow and the user<br />

activity.<br />
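A minimal sketch of the rule matching performed by a packet-filtering firewall, as referenced in the first bullet above. The rule format and the first-match-wins behaviour are assumptions for illustration, not taken from any particular product.<br />

# Toy packet filter: walk an ordered rule list and return the first match.
RULES = [
    # (src, dst, protocol, dst_port, action) -- "*" is a wildcard
    ("*", "192.0.2.10", "TCP", 80,  "allow"),   # web server reachable
    ("*", "*",          "TCP", 23,  "deny"),    # block telnet
    ("*", "*",          "*",   "*", "deny"),    # default deny
]

def matches(field, rule_field):
    return rule_field == "*" or field == rule_field

def filter_packet(src, dst, protocol, dst_port):
    for r_src, r_dst, r_proto, r_port, action in RULES:
        if (matches(src, r_src) and matches(dst, r_dst)
                and matches(protocol, r_proto) and matches(dst_port, r_port)):
            return action
    return "deny"

print(filter_packet("198.51.100.7", "192.0.2.10", "TCP", 80))  # allow
print(filter_packet("198.51.100.7", "192.0.2.10", "TCP", 23))  # deny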

Functions<br />

Firewalls provide a central position for security management.<br />

Firewalls enable security services such as:<br />

• Security Policy Enforcement. Security decisions are applied according to pre-existing rules defined in the<br />

security policy.<br />

• User authentication. The security domain administrator can control access to services and resources by<br />

specific users.<br />

• Auditing. Data flow information is collected and stored for security analysis, by the security domain<br />

administrator or the IDS.<br />

Limits<br />

The key issue with firewalls is that users get the feeling of living in a network where security is guaranteed. In our pervasive computing world, where devices will be interconnected, firewalls seem to provide a safe area where users would be protected from any potential attack.<br />

The following security issues are not solved by a firewall:<br />

• Inside attacks. Any user or application located in the perimeter controlled by the firewall can access any services or resources in the so-called protected area and potentially perform prohibited actions.<br />

• Network backdoors. The typical example is the user who installs an 802.11 card on his/her personal computer connected to the LAN to gain access to the Internet via an external hot spot without any restriction. This user turns his/her computer into a Trojan horse for external attackers.<br />

• Viruses or malicious code. A packet filtering firewall operates at the network layer (layer 3) or the transport<br />

layer (layer 4) and can hardly stop attacks at the application level (layer 7). A proxy firewall blocks only known attacks, and the network is vulnerable to new attacks until the security rules are updated.<br />

• Erroneous security policies. Firewalls are the point of security policy enforcement and have no error recovery functions against wrong decisions.<br />

Virtual private networking<br />

A Virtual Private Network is a way to simulate a private network over a public network such as the Internet. The Virtual Private Network (VPN) creates temporary secure connections or “tunnels” between two machines, a<br />

machine and a network or two networks.<br />


Figure 81: The nomadic worker<br />

A Virtual Private Network (VPN) is the extension of a private network that encompasses links across shared or<br />

public networks. With a VPN, you can send data between two computers across a shared or public network in a<br />

manner that emulates a point-to-point private link.<br />

To emulate a point-to-point link, data are encapsulated, or wrapped, with a header that provides routing<br />

information, which allows the data to traverse the shared or public network to reach its endpoint.<br />

To emulate a private link, cryptographic protocols can be used to provide the necessary confidentiality<br />

(preventing snooping), sender authentication (preventing identity spoofing), and message integrity (preventing<br />

message alteration).<br />

The link in which the private data are encapsulated and encrypted is a VPN connection.<br />
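A conceptual sketch of the encapsulation just described: the inner packet is protected for confidentiality and integrity and wrapped with an outer header carrying routing information. The header layout, the toy encrypt/decrypt placeholders and the HMAC-based integrity check are assumptions; a real VPN would use a standard protocol such as IPsec.<br />

# Conceptual VPN encapsulation sketch; NOT a real tunnelling protocol.
import hmac, hashlib, json

SHARED_KEY = b"pre-shared key between the two VPN gateways"

def encrypt(plaintext):
    # Placeholder only, NOT real encryption; a real VPN would use a strong cipher.
    return plaintext[::-1]

def decrypt(ciphertext):
    return ciphertext[::-1]

def encapsulate(inner_packet, outer_src, outer_dst):
    payload = encrypt(inner_packet)                                  # confidentiality
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()  # integrity
    outer = {"src": outer_src, "dst": outer_dst, "tag": tag,
             "payload": payload.hex()}                               # routing info
    return json.dumps(outer).encode()

def decapsulate(outer_packet):
    outer = json.loads(outer_packet)
    payload = bytes.fromhex(outer["payload"])
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, outer["tag"]):
        raise ValueError("integrity check failed")                   # altered in transit
    return decrypt(payload)

wrapped = encapsulate(b"inner IP packet", "203.0.113.1", "203.0.113.2")
print(decapsulate(wrapped))  # b'inner IP packet'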

VPN technologies may also be used to enhance security as a 'security overlay' within dedicated networking<br />

infrastructures.<br />

Now, many companies are creating their own VPNs to accommodate the needs of remote employees and distant<br />

offices.<br />

VPN types<br />

There are two common VPN types: remote access and site-to-site.<br />

• In remote access, the communications are encrypted between a remote computer (the VPN client) and the<br />

remote access VPN gateway (the VPN server) to which it connects. This is a User-to-LAN connection used<br />

by a company that has employees who need to connect to the private network from various remote locations.<br />

• In site-to-site (also known as router-to-router), the communications are encrypted between two routers (VPN<br />

gateways) that link two sites. Site-to-Site VPNs can be either:<br />

- Intranet-based: If a company has one or more remote locations that they wish to join in a single private<br />

network, they can create an intranet VPN to connect LAN to LAN.<br />

- Extranet-based: When a company has a close relationship with another company (for example, a partner,<br />

supplier or customer), they can build an extranet VPN that connects LAN to LAN, and that allows all of<br />

the various companies to work in a shared environment.<br />

Protocols used<br />

• Layer Two Tunnelling Protocol (L2TP) over IPsec using ESP transport mode (L2TP/IPsec).<br />

L2TP, defined in RFC 2661, is an IETF Proposed Standard, and the integration of L2TP with IPsec is<br />

defined in RFC 3193. There are implementations from Microsoft, Cisco, and Nortel Networks that have been<br />

demonstrated to interoperate.<br />


L2TP/IPsec tunnels traffic, preserving the full end-to-end semantics of communications conducted inside the<br />

tunnel. It supports password authentication using PAP, CHAP, MS-CHAP, and MS-CHAP v2, and it<br />

supports strong authentication using EAP.<br />

• IPsec Tunnel Mode<br />

The use of IPsec tunnel mode for VPNs, described in the Internet draft draft-ietf-IPsec-dhcp-13.txt, has been<br />

approved as an IETF Proposed Standard. It tunnels traffic, preserving the end-to-end semantics of the<br />

communications it is carrying. The trust model is defined as part of the IETF standard using either standard<br />

X.509 certificates or pre-shared keys. The standard supports both dynamic and static addressing. IPsec<br />

tunnel mode is appropriate for site-to-site VPN connections and has been demonstrated to be interoperable<br />

by Microsoft, Cisco, Nortel, and others. There is no standard for legacy user authentication within IPsec<br />

tunnel mode, which makes it unsuitable for use in remote access. It is implemented by most VPN gateways.<br />

There have been many public interoperability demonstrations and customer deployments using real products.<br />

• Point-to-Point Tunnelling Protocol (PPTP)<br />

PPTP (RFC 2637) provides a good level of security that is suitable for most companies, and it has benefits<br />

compared to L2TP/IPsec and other IPsec-based VPN solutions because of the security model it uses. While<br />

IPsec has powerful security, the deployments are usually more costly and have limitations.<br />

One of the benefits of PPTP is that it does not require a certificate infrastructure, which many organisations<br />

are not yet ready to deploy. Rather, it relies on a user's logon credentials to establish trust to connect the<br />

tunnel and to create the encryption keys for the session. Additionally, the management of user names and<br />

passwords is well known.<br />

If stronger security than user passwords is wanted, PPTP can be used with EAP so that smart cards or token<br />

cards can be used for authentication. This increases the strength of the encryption key generation and reduces<br />

the risk of dictionary attacks. In addition, PPTP can be used through most Network Address Translators<br />

(NATs) today with no modifications required for either the client or server. IPsec traffic, on the other hand,<br />

cannot traverse a NAT unless both the client and server support IPsec NAT traversal (IPsec NAT-T ).<br />

Until certificate infrastructure becomes ubiquitous and IPsec product implementations are updated to support<br />

IPsec NAT-T, PPTP will remain an important protocol choice.<br />

• MPLS VPN<br />

MPLS VPNs (RFC 2547) provide unencrypted/unauthenticated virtual circuits, which are architecturally<br />

equivalent to a Layer 2 VPN (ATM/FrameRelay).<br />

Security is provided by hiding the MPLS core structure/cloud from customers by using filtering and<br />

separation of data streams. IP-Packets are confined to their respective VPNs and the VPN partitioning is<br />

done at the routing layer – there exist separate Virtual Routing and Forwarding instances (VRF) and routing<br />

tables for each VPN at the MPLS-router. Strict filtering (RFC3031) must be implemented at the Provider<br />

Edge (PE) router to prevent label spoofing (the insertion of wrong labels into the MPLS network). Address<br />

space separation between different VPN users and the core network is achieved through the addition of a 64-bit route distinguisher (RD) to each IPv4 route, making addresses that are unique within a VPN also unique in the MPLS core.<br />

IPsec needs to be deployed if authentication and confidentiality is required. There exists an Internet draft for<br />

the deployment of IPsec for un-trusted MPLS cores (draft-guichard-ce-ce-ipsec-00.txt)<br />

The core MPLS network needs to be trusted, as there is no protection against attacks and misconfiguration<br />

from/within the core. A sniffer on a core-router can see all traffic of the different VPNs. Providers are<br />

currently linking their MPLS networks together so that it might be difficult for a customer to judge the<br />

trustworthiness of the traffic path.<br />

Different MPLS scenarios have different security implications (draft-behringer-mpls-security-06.txt):<br />

- Carriers’ Carrier:<br />

- Label maps controlled by the PE must be used to prevent label spoofing<br />
- The interconnection of CE and PE using a point-to-point link, as a shared Layer 2 medium, introduces potential security risks<br />

Figure 82: ISP interconnection<br />

- Inter Provider Backbone<br />

- Inter-Provider Backbones (RFC 2547bis) form a single zone of trust<br />
- A Service Provider can insert traffic into the wrong VPN<br />
- The interconnection of networks using a point-to-point link, as a shared Layer 2 medium, introduces potential security risks<br />

Figure 83: Inter Provider Backbone<br />

AAA, Radius, TACACS+ and Diameter<br />

AAA<br />

Sometimes referred to as "triple-A" or just AAA, authentication, authorisation, and accounting is a framework<br />

providing support for Service Providers to manage and control users.<br />

Authentication provides a vehicle to identify a client that requires access to some system and logically precedes<br />

authorisation. The mechanism for authentication is typically undertaken through the exchange of logical keys or<br />

certificates between the client and the server.<br />

Authorisation follows authentication and entails the process of determining whether the client is allowed to<br />

perform and/or request certain tasks or operations. Authorisation is therefore at the heart of policy<br />

administration.<br />

Accounting is the process of measuring resource consumption, allowing monitoring and reporting of events and<br />

usage for various purposes including billing, analysis, and ongoing policy management.<br />

Radius<br />

The Remote Authentication Dial In User Service (RADIUS) is an Authentication Authorisation and Accounting (AAA)<br />

protocol based on client-server architecture, as illustrated below. The client is a Network Access Server (NAS)<br />

which desires to authenticate and authorise its links.



On the other hand, the server is an entity which has access to a database containing the ID of all the registered<br />

users together with authentication, authorisation and accounting information for each one of them. A RADIUS<br />

server can act as a proxy client to other RADIUS servers or even to other kinds of authentication servers.<br />

Figure 84: Generic RADIUS Architecture<br />

Protocol Description<br />

The Network Access Server (NAS) is connected to many users. When any of these users needs to access a<br />

service it will present its authentication information to the NAS. This might be done through a customisable<br />

login prompt, where the user enters its username and password, or the authentication packets will carry this<br />

information, if the user is connecting to the NAS through a framing protocol, such as PPP (Figure 84 - 1).<br />

The NAS needs to authenticate the information received by its user; in order to do so, acting as a client, it<br />

creates an Access-Request message containing several attributes, such as the user’s name and password, the<br />

client’s ID and the port ID which is being used by the user (Figure 84 - 2). The Access-Request is sent to the<br />

RADIUS server through the network (Figure 84 - 3). The password is hidden using a method based on the RSA<br />

Message Digest Algorithm MD5.<br />
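A sketch of that hiding method as specified in RFC 2865: the password is padded to a multiple of 16 octets and XORed, block by block, with an MD5 keystream derived from the shared secret and the Request Authenticator. The values below are made up for illustration.<br />

# RFC 2865-style User-Password hiding (illustrative values only).
import hashlib, os

def hide_password(password, secret, authenticator):
    padded = password + b"\x00" * (-len(password) % 16)   # pad to 16-octet blocks
    result, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        block = padded[i:i + 16]
        key = hashlib.md5(secret + prev).digest()          # per-block keystream
        cipher = bytes(b ^ k for b, k in zip(block, key))
        result += cipher
        prev = cipher            # chaining: next block keyed on previous output
    return result

secret = b"shared-secret"        # known to both NAS and RADIUS server
authenticator = os.urandom(16)   # Request Authenticator from the packet
print(hide_password(b"user-password", secret, authenticator).hex())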

Once the RADIUS server has received the information, it first validates the sending client (NAS). In the event<br />

that the client and the RADIUS server do not have a shared secret, the request is silently discarded. Next, the<br />

RADIUS Server consults a database of users to find the user’s name matching the request (Figure 84 - 4). A<br />

general entry in this database would consist of a list of requirements which must be met to grant access to the<br />

specific user. This always includes password verification, but the client and the port utilised by the user could<br />

also be checked against database information. The RADIUS server may make requests directly to other<br />

RADIUS servers to satisfy a demand; in this case the original server will act as proxy client, generating an<br />

Access-Request (Figure 84 - 5). Generally, a common use for proxy RADIUS is roaming. Roaming permits<br />

two, or more, administrative entities to allow each other's users to dial in to either entity's network for service.<br />

The NAS client sends its RADIUS access-request to the Proxy, "forwarding server", which forwards it to the<br />

RADIUS server of a different domain, "remote server". The remote server sends a response back to the<br />

forwarding server, which in turn sends it back to the NAS.<br />

If any condition is not met, the RADIUS server sends an “Access-Reject” response indicating that the user’s<br />

request is invalid. If all conditions are met but the RADIUS server wants to issue a challenge, then it will<br />

respond by sending an “Access-Challenge” message.<br />


This kind of message may include a text that will be shown to the user, by the client, prompting for a response to<br />

a challenge. It may also include a State attribute. After the user has provided its response, the NAS client submits<br />

its original Access-request again but with a new request ID, where the User-Password Attribute is replaced by<br />

the user response to the challenge. The RADIUS server can respond to this message with an Access-Accept or<br />

an Access-Reject message, or ask for another Access-Challenge. The challenge/response authentication<br />

mechanism consists of the user being given an unpredictable number and challenged to encrypt it. Every<br />

authorised user will always be equipped with special devices such as smart cards or software that make possible<br />

the computing of the correct response to the challenge.<br />

If all conditions are met then all the configuration parameters and values necessary for the user session are<br />

placed into the Access-Accept message. Mainly these parameters refer to the type of service, e.g. PPP, Login<br />

User, SLIP, and the required values to establish and deliver that service. For SLIP and PPP, this may include<br />

values such as the IP address, subnet mask, MTU, desired compression, and desired packet filter identifiers.<br />

Next the NAS sends an Accounting-Request message; this signals the RADIUS server that the accounting<br />

mechanism has started. The RADIUS server acknowledges the request by replying with an Accounting-<br />

Response message, after this information has been stored. When the user completes its session and logs out, the<br />

NAS sends another Accounting-Request notifying the RADIUS server that the session has stopped. This<br />

message contains information such as: total input and output octets, total input and output packets for the session, session time, reason why the user has disconnected from the network, etc. The RADIUS server replies with an Accounting-Response after all the accounting information about the user is stored. The same Accounting-<br />

Request message is sent by the NAS to start and stop the accounting process. Information about what action<br />

(start or stop) is to be performed is contained in the message attributes, specifically in the Acct-Status-Type<br />

field.<br />

Interoperation with PAP and CHAP<br />

A Network Access Client can interoperate with both PAP and CHAP. For PAP, the NAS takes the PAP ID and<br />

password and uses these to replace the User-Name and User-Password in an Access-Request packet to the<br />

RADIUS server.<br />

For CHAP the NAS itself presents a random challenge (preferably of 16 octets) to the user. Next the user returns<br />

a CHAP response along with its CHAP ID and CHAP username. The NAS sends an Access-Request packet to<br />

the RADIUS server substituting the User-Name with the CHAP username, the CHAP-Password with the CHAP<br />

ID and CHAP response. The random challenge presented to the user, is generally placed in the Request<br />

Authenticator field of the Access-Request packet. The RADIUS server looks up the password based on the<br />

User-Name, encrypts the challenge using MD5 on the CHAP ID octet, the password, and the CHAP challenge;<br />

then it compares the result to the CHAP-Password. If the RADIUS server is unable to perform the requested<br />

authentication it should return an Access-Reject. If the CHAP challenge value is longer than 16 octets, a specific<br />

attribute field is created instead of placing the value in the header Request Authenticator field.<br />
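The server-side CHAP check described above can be sketched as follows; the values are illustrative only.<br />

# CHAP verification: MD5 over the CHAP ID octet, the password and the challenge,
# compared with the response carried in the CHAP-Password attribute.
import hashlib, hmac, os

def chap_response(chap_id, password, challenge):
    return hashlib.md5(bytes([chap_id]) + password + challenge).digest()

def server_verifies(chap_id, stored_password, challenge, received_response):
    expected = chap_response(chap_id, stored_password, challenge)
    return hmac.compare_digest(expected, received_response)

challenge = os.urandom(16)                                   # issued by the NAS
response = chap_response(1, b"user-password", challenge)     # computed by the peer
print(server_verifies(1, b"user-password", challenge, response))  # True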

DIAMETER<br />

The DIAMETER basic protocol is designed to provide a framework for services requiring AAA support, at the<br />

access technology level. The protocol is intended to be flexible enough to allow services to add building blocks<br />

(or extensions) to the base DIAMETER protocol to meet their requirements. Unlike other AAA protocols for<br />

access technologies - such as PPP dial-in, Mobile IP and others -, DIAMETER uses a peer to peer architecture<br />

rather than a more classic client/server scheme. DIAMETER is recognised as a peer to peer protocol since any<br />

node is free to initiate a request at any time. Messages initiated by a server towards a client are usually requests<br />

to abort a service to a specific user.<br />

DIAMETER is also meant to operate both with local and with roaming situations. Since DIAMETER is not a<br />

complete protocol by itself, but it needs application-specific extensions from the technology, or architecture,<br />

used to access the network, it is not possible to describe or compare the protocol’s details regarding security and<br />

other aspects. Thus, the following discussion will deal mainly with the elements that are provided by the basic<br />

common DIAMETER framework: message format, message transport, error reporting, accounting and security<br />

considerations. DIAMETER is still a draft from the Authentication Authorisation and Accounting IETF group.<br />

Protocol Overview<br />

DIAMETER peers communicate exchanging a number of messages in order to provide the following facilities:<br />

• Delivery of Attribute Value Pairs (AVPes)<br />


• Capabilities Negotiation<br />

• Error Notification<br />

• Extensibility, through addition of new commands and AVPes<br />

• Basic services necessary for applications, such as handling of user sessions or accounting<br />

AVP is the most important object within the DIAMETER protocol; it is used to deliver all data. Certain AVPes<br />

are needed by DIAMETER itself to operate, while others deliver data associated with the applications exploiting<br />

DIAMETER. AVPes containing application specific information may be arbitrarily added to DIAMETER<br />

messages, as long as the required AVPes are present and the ones that are to be added are not explicitly<br />

forbidden by the protocol rules. AVPes needed by DIAMETER to support itself, in providing the required<br />

features, are used for:<br />

• Transporting of user authentication information, for the purposes of enabling DIAMETER servers to<br />

authenticate users.<br />

• Transporting of service specific authorisation information, between client and servers, allowing the peers to<br />

decide whether a user's access request should be granted or not.<br />

• Exchanging resource usage information, which may be used for accounting purposes, capacity planning, etc.<br />

• Relaying, proxying and redirecting of DIAMETER messages through a server hierarchy.<br />

Given these AVPes, DIAMETER is capable of providing the minimum requirements needed to implement a<br />

solid AAA architecture.<br />
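To make the AVP concept concrete, the sketch below encodes a single AVP following the commonly documented layout (32-bit AVP Code, 8-bit flags, 24-bit length, data padded to a 32-bit boundary); the optional Vendor-ID field is omitted and the example code, flags and value are assumptions to be checked against the specification.<br />

# Encode one DIAMETER AVP (no Vendor-ID); fields here are illustrative.
import struct

def encode_avp(code, flags, data):
    length = 8 + len(data)                       # 8-octet header + data
    header = struct.pack("!IB", code, flags) + length.to_bytes(3, "big")
    padding = b"\x00" * (-len(data) % 4)         # pad data to a 32-bit boundary
    return header + data + padding

# Example: a hypothetical UTF-8 string AVP carrying a user name.
avp = encode_avp(code=1, flags=0x40, data=b"alice@example.org")
print(avp.hex())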

Entities Description<br />

A DIAMETER Client is a device at the edge of the network that performs access control, usually referred to as<br />

Network Access Server (NAS) or Foreign Agent (FA). DIAMETER clients usually generate DIAMETER<br />

messages requesting user authentication, authorisation and accounting services. A DIAMETER Agent is any<br />

node that is not authorising or authenticating users locally; examples of these Agents are DIAMETER proxies or<br />

relays.<br />

A DIAMETER Server is the entity that performs the actual authentication or authorization of remote users based<br />

on profiles. It is important to highlight that a DIAMETER node could be acting as a Server for certain requests<br />

while acting as an Agent for others.<br />

DIAMETER agents are introduced to bring flexibility into the architecture; the main advantages are listed next:<br />

• They can distribute administration of systems to a configurable grouping, including the maintenance of<br />

security associations.<br />

• They can be used for concentration of requests from a number of co-located or distributed NAS equipment<br />

sets to a set of like user groups.<br />

• They can do value-added processing to the requests or responses.<br />

• They can be used for load balancing. A complex network will have multiple authentication sources; agents<br />

can sort requests and forward these towards the correct target.<br />

DIAMETER requires that Agents maintain transaction state, meaning that upon forwarding a request, the<br />

original hop-by-hop identifier of the message is saved for failover purposes. This field is then replaced, by the<br />

agent, with a locally unique identifier which is restored to its original when the corresponding answer is<br />

received.<br />

The first kind of DIAMETER agent analysed is the Relay agent. Relay agents deliver requests and route<br />

messages to other DIAMETER nodes based only on the information found in the message header, like the<br />

Destination-Realm. Relays are used to aggregate requests from multiple NASes within a common geographical<br />

area, such as POPs. Relays bring an advantage since they eliminate the need for NASes to be configured with the<br />

necessary information they would otherwise require to communicate with DIAMETER servers in other realms.<br />

Moreover, they reduce the configuration load on DIAMETER servers that would otherwise be necessary when<br />

NASes are added, changed or deleted. Using transaction state, Relay agents are capable of routing a message<br />

reply exactly to the DIAMETER node that issued the request to the Relay. Relays manipulate messages only by<br />

inserting, and removing, routing information; they do not modify any other portion of a message.<br />

Similarly to Relays, Proxy agents route DIAMETER messages using their DIAMETER Routing Table.<br />

However, these are different since they substantially modify messages in order to implement policy<br />


enforcement. It is important to note that although proxies may provide a value-add function for NASes, they<br />

prevent access devices from using end-to-end security, since manipulating messages breaks authentication.<br />

Proxies may be used in call control centres or access ISPs that provide outsourced connections; they can monitor<br />

the number and types of ports in use, and make allocation and admission decisions according to their<br />

configuration.<br />

Next Redirect agents are examined. These are used to provide Server address resolution and User to Server<br />

resolution within a given Realm. The Redirect Agents make use of special DIAMETER routing tables, or of a<br />

user table, to determine where a specific request should be forwarded. These agents do not deal with request<br />

messages and answers directly; they only provide the information, to the originating node, as to where requests<br />

should be forwarded. An example is a redirect agent that provides services to all members of a consortium, but<br />

does not wish to be burdened with relaying all messages between realms. This scenario is advantageous since it<br />

does not require that the consortium provide routing updates to its members when changes are made to the<br />

member's infrastructure. Redirect agents are not required to keep transaction nor session state.<br />

Translation Agents are used to provide translation between two different protocols such as RADIUS and<br />

DIAMETER or TACACS+ and DIAMETER. Translation agents are designed to be used as aggregation servers<br />

to communicate with a DIAMETER infrastructure, while allowing for embedded systems to be migrated at a<br />

slower pace.<br />

AAA Functions<br />

The basic DIAMETER protocol does not include any specific authorisation request messages. In fact, these are<br />

largely application-specific and are defined in the DIAMETER application documents. At present only<br />

NASREQ and Mobile IP have been defined to work with DIAMETER extending the basic protocol to a fully<br />

functional AAA architecture. However, the base protocol defines a set of messages that is used to handle user<br />

sessions. These are intended to allow servers to maintain state information in order to free resources.<br />

When a user requests access to the network, the receiving NAS issues a request to the correct local DIAMETER<br />

server. As mentioned before, the details of the messages depend on the specific DIAMETER application.<br />

Nevertheless, messages must contain a Session-ID AVP, which will be present in all subsequent messages<br />

relating to the same user’s session. This AVP is a way for clients and servers to correlate a given message with<br />

the corresponding user session. The DIAMETER protocol places a lot of stress on the fact that Session-ID<br />

AVPes must be globally and eternally unique; this is required since Session-ID AVPes are meant to uniquely identify a user session without reference to any other information. The protocol implementation suggests, in order to guarantee eternal uniqueness, that the DIAMETER node identity and the current time should be encoded in the Session-ID number.<br />
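A sketch of such a Session-ID construction, combining the node identity, the current time and a counter; the exact textual format below is an assumption for illustration, not a normative encoding.<br />

# Generate session identifiers that stay unique across reboots by combining
# the node identity, the current time and a monotonically increasing counter.
import itertools, time

_counter = itertools.count()

def new_session_id(node_identity):
    high = int(time.time())       # seconds since the epoch
    low = next(_counter)          # monotonically increasing counter
    return f"{node_identity};{high};{low}"

print(new_session_id("aaa-server.example.net"))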

A DIAMETER node that receives a successful authentication and/or authorisation message from its Server must<br />

start collecting accounting information for the session. Accounting-Request messages are used to transmit the<br />

information to the Server, which must acknowledge them with Accounting-Answer messages. A Server that<br />

authorises a user for a given period of time must include the Lifetime AVP in the authorisation reply. This AVP<br />

defines the maximum number of seconds a user may utilise the resources before another authorisation request is<br />

expected by the server.<br />

The accounting protocol in DIAMETER is based on a server directed model with capabilities for real-time<br />

delivery of accounting information. This model implies that the node generating the accounting data gets<br />

information from either the authorisation server or an accounting server regarding the way accounting data shall<br />

be forwarded. DIAMETER client should be equipped with non-volatile memory for the safe storage of<br />

accounting records over reboots, extended network failures, and server failures. If the service that needs to be<br />

accounted is a one-time event, meaning that the start and stop of the event are simultaneous, the Accounting-<br />

Record-Type AVP must be set to Event_Record. On the contrary, if the event is of a certain measurable length,<br />

then the Accounting-Record-Type must use the values Start_Record, Stop_Record and possibly Interim_Record.<br />

To clearly identify a single record within an accounting session, the Accounting-Record-Number AVP is used.<br />

This AVP contains a globally unique 32-bit unsigned number; the combination of Session-ID and Accounting-<br />

Record-Number AVPes can be used in matching accounting records confirmation. If an accounting mechanism<br />

needs multiple accounting sub-sessions, DIAMETER allows such applications to send accounting messages<br />

with constant Session-ID AVP but different Accounting-Sub-Session-ID AVPes.<br />

Security<br />

DIAMETER protocol is not meant to be used without security mechanisms; nodes must support IP Security<br />

(IPsec) and TLS. The DIAMETER draft suggests that IPsec should be used primarily at the edges and for intra-<br />


domain traffic, using pre-shared keys, while TLS should be the privileged choice for inter-domain traffic<br />

exchanges.<br />

Thus, all DIAMETER implementations are to support IPsec ESP in transport mode with non-null encryption and<br />

authentication algorithms to provide per-packet authentication, integrity protection and confidentiality, and must<br />

also support the replay protection mechanisms of IPsec. DIAMETER implementations must also support<br />

Internet Key Exchange (IKE) for peer authentication, negotiation of security associations, and key management,<br />

using the IPsec Domain of Interpretation (DOI).<br />

Peer authentication must be done using pre-shared keys; certificate-based peer authentication using digital signatures may also be a possible solution. Moreover, configuring a peer-to-peer environment is not easy, and the trust model within a DIAMETER peer is essential to security. A possible solution is the use of<br />

certificates. In this case, it is necessary to configure the root Certificate Authorities (CA) trusted by the<br />

DIAMETER peers. These root CAs should be unique to DIAMETER usage and distinct from the root CAs that<br />

might be trusted for other purposes such as Web browsing. In general, it is expected that those root CAs will be<br />

configured so as to reflect the business relationships between the organisation hosting the DIAMETER peer and<br />

other organisations. As a result, a DIAMETER peer will typically not be configured to allow connectivity with<br />

any arbitrary peer. When DIAMETER peers using certificate authentication are not known beforehand, peer discovery may be required.<br />

End-to-end security can be provided by an end-to-end security extension, which is not defined in the base<br />

protocol specification. In cases where no Proxy Agents are involved, the use of TLS or IPsec between<br />

DIAMETER peers may already be sufficient.<br />

<strong>A2.</strong>10.2 Security protocols and mechanisms<br />

<strong>A2.</strong>10.2.1 IKE<br />

The IKE (Internet Key Exchange) protocol, defined by the IETF, supports the management of<br />

security associations. IKE creates an authenticated secure tunnel between two entities and then negotiates (on<br />

this secure tunnel) the security association for IPsec.<br />

To be secure, the IKE process requires the two entities to authenticate each other. IKE can<br />

support several authentication methods. The two entities must agree on a common authentication protocol<br />

during the negotiation process.<br />

There are two different negotiation phases within IKE:<br />

• Phase 1: IKE tunnel creation and negotiation of the IKE SA. For this phase, two modes are available: three two-way exchanges (Main Mode, six messages in total) for the exchange of IKE parameters, creation of shared session key material and mutual authentication of the parties; or a quicker method without protection for identities (called Aggressive Mode).<br />

• Phase 2: IPsec tunnel creation and IPsec parameters negotiation. For this phase, one mode is available (called<br />

Quick Mode), composed of three messages.<br />

IKE provides a well known and widely deployed protocol to establish security associations between end hosts<br />

and / or gateways. It makes use of a variety of authentication methods, e.g. preshared keys, RSA public / private<br />

keys, or X.509 certificates. Furthermore, enhancements exist to also provide user authentication during the<br />

negotiation phase.<br />

<strong>A2.</strong>10.2.2 IPsec<br />

Description<br />

IPsec is a protocol family standardised by the IETF in several documents (e.g. rfc2401, rfc2411). IPsec is<br />

designed to protect the communication between two hosts, two networks (via their gateways) or a host and a<br />

gateway. IPsec can provide protection at the network layer (i.e. to IP packets) by implementing some or all of<br />

the following security services:<br />

• Data confidentiality: Ensures that data cannot be eavesdropped. To provide this service, encryption<br />

algorithms like DES, 3DES or AES are used.<br />

• Data origin authentication: Ensures that the sender is the system / person it claims to be. This service is<br />

provided by using algorithms like HMAC-MD5 or HMAC-SHA1.<br />


• Data integrity: Ensures that data received at the receiver is identical to that sent by the sender. This service is<br />

provided by using algorithms like HMAC-MD5 or HMAC-SHA1.<br />

• Protection against replay: Ensures that attempts to replay previous IP packets can be detected. This service is<br />

provided by utilizing sequence numbers.<br />

• Limited traffic flow confidentiality. Ensures that information cannot be inferred simply by monitoring<br />

network traffic (traffic analysis attacks).<br />

Most of these services are based on cryptographic mechanisms and are independent of the used Internet<br />

Protocol, i.e. IPv4 or IPv6. To provide these security services, some parameters have to be exchanged between<br />

the two hosts or gateways which want to establish a secure communication. The parameters could be, for<br />

example, the algorithms used for integrity protection (MD5, SHA…) or the algorithms used for encryption<br />

(3DES, AES….). The Security Associations (SAs) are the entities that contain these security parameters, and<br />

therefore define a secure relationship between peers. Note that in IPsec SAs are unidirectional. Therefore, a<br />

secure connection with IPsec usually consists of two SAs, one for each direction of data flow.<br />
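The following sketch illustrates the idea of unidirectional SAs and the kind of parameters they carry; the field names are descriptive placeholders rather than the exact SADB attribute names.<br />

```python
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    """Illustrative subset of the parameters an IPsec SA contains."""
    spi: int                  # Security Parameter Index identifying the SA
    protocol: str             # "ESP" or "AH"
    mode: str                 # "transport" or "tunnel"
    encryption_alg: str       # e.g. "3DES" or "AES"
    integrity_alg: str        # e.g. "HMAC-SHA1"
    src: str                  # sender of the protected traffic
    dst: str                  # receiver of the protected traffic

# Because SAs are unidirectional, a bidirectional connection needs one SA per direction.
outbound = SecurityAssociation(0x1001, "ESP", "transport", "AES", "HMAC-SHA1", "10.0.0.1", "10.0.0.2")
inbound = SecurityAssociation(0x2002, "ESP", "transport", "AES", "HMAC-SHA1", "10.0.0.2", "10.0.0.1")
```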

IPsec provides several options in terms of protocols, modes of operation and type of implementation. These<br />

options are discussed below.<br />

Security protocols<br />

There are 2 security protocols available in IPsec. They differ in the types of security service that they implement,<br />

and are described below.<br />

• AH: This is a protocol that provides data integrity and data origin authentication, protection against replay of<br />

the IP packets but not confidentiality. AH is specified in rfc2402.<br />

• ESP: This is a protocol that provides confidentiality, data origin authentication, connectionless integrity and<br />

protection against replay. Limited traffic flow confidentiality can also be provided. ESP is not limited to<br />

specific algorithms. DES, MD5 and SHA-1 must be available in an IPsec implementation, but other<br />

algorithms may also be used. ESP is specified in rfc2406.<br />

Modes of operation<br />

There are two modes of operation available in IPsec:<br />

• Transport mode. With transport mode, each IP packet's payload is encrypted but the headers are left intact.<br />

This mode ensures privacy of content but does not protect against traffic analysis attacks.<br />

• Tunnel mode. With tunnel mode, the entire original IP packet is encrypted and becomes the payload in a new<br />

IP packet. This mode adds extra overhead in terms of the new header but can provide limited traffic flow<br />

confidentiality.<br />

These two modes are available with both the AH and ESP protocols.<br />
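As a purely conceptual sketch (packets modelled as plain dictionaries and encryption replaced by a placeholder function), the difference between the two modes can be expressed as follows.<br />

```python
def protect(payload):
    return ("ENCRYPTED", payload)          # placeholder for ESP encryption

def transport_mode(packet: dict) -> dict:
    """Keep the original IP header; protect only the payload."""
    return {"ip_header": packet["ip_header"], "payload": protect(packet["payload"])}

def tunnel_mode(packet: dict, gw_src: str, gw_dst: str) -> dict:
    """Encrypt the whole original packet and carry it inside a new outer packet."""
    return {"ip_header": {"src": gw_src, "dst": gw_dst}, "payload": protect(packet)}

original = {"ip_header": {"src": "10.0.0.1", "dst": "10.0.1.7"}, "payload": b"hello"}
print(transport_mode(original))
print(tunnel_mode(original, "192.0.2.1", "198.51.100.1"))
```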

Implementation options<br />

There are 3 possible types of implementation of IPsec:<br />

• Native implementation. This involves integration of IPsec into the native IP implementation. It requires<br />

access to the IP source code and is applicable to both hosts and security gateways.<br />

• Bump in the stack (BITS) implementation. Here IPsec is implemented "underneath" an existing<br />

implementation of an IP protocol stack, between the native IP and the local network drivers. Source code<br />

access for the IP stack is not required in this context, making this implementation approach appropriate for<br />

use with legacy systems. This approach, when it is adopted, is usually employed in hosts.<br />

• Bump in the wire (BITW) implementation. This is the use of an outboard crypto processor and is a common<br />

feature in military and some commercial systems. Such implementations may be designed to serve either a<br />

host or a gateway (or both). Usually the BITW device is IP addressable. When supporting a single host, it<br />

may be quite analogous to a BITS implementation, but in supporting a router or firewall, it must operate like<br />

a security gateway.<br />

Analysis<br />

IPsec is designed to provide interoperable, high quality, cryptographically-based security for IPv4 and IPv6. The<br />

set of security services offered includes access control, connectionless integrity, data origin authentication,<br />

protection against replays (a form of partial sequence integrity), confidentiality (encryption) and limited traffic<br />

flow confidentiality (rfc2401).<br />


These objectives are met through the use of two traffic security protocols, the Authentication Header (AH) and<br />

the Encapsulating Security Payload (ESP), and through the use of cryptographic key management procedures<br />

and protocols. The set of IPsec protocols employed in any context, and the ways in which they are employed,<br />

will be determined by the security and system requirements of users, applications, and/or sites/organisations.<br />

The mechanisms of IPsec are designed to be algorithm-independent. This modularity permits selection of<br />

different sets of algorithms without affecting the other parts of the implementation. For example, different user<br />

communities may select different sets of algorithms (creating cliques) if required.<br />

As a means to provide end-to-end or gateway-to-gateway security (or any variant thereof) IPsec can be a<br />

valuable tool. However, the security given by IPsec is dependent on the operating environment in which it is<br />

deployed. If this environment is breached or keys are exposed, the security provided by IPsec can be severely<br />

degraded.<br />

<strong>A2.</strong>10.2.3 SSL/TLS<br />

Description<br />

This part gives a description of SSL (SSL 3.0) and TLS (TLS 1.0).<br />

TLS 535 is similar to SSL: it includes all the general concepts of SSL because TLS is based on SSL. TLS is clearer and more generic than SSL (encapsulation), and the design of the protocol is independent of its use. Furthermore, TLS does not impose specific encoding methods.<br />

SSL was developed by Netscape for transmitting private documents via the Internet, using encryption to protect the transferred data.<br />

Netscape Navigator and Internet Explorer support SSL, and many web sites use the protocol to obtain confidential user information. By convention, URLs that require an SSL connection start with https:// instead of http://.<br />

Another protocol for transmitting data securely over the World Wide Web is Secure HTTP (S-HTTP). 536<br />

Whereas SSL creates a secure connection between a client and a server, over which any amount of data can be<br />

sent securely, S-HTTP is designed to transmit individual messages securely. SSL and S-HTTP, therefore, can be<br />

seen as complementary rather than competing technologies. Both protocols have been approved by the Internet<br />

Engineering Task Force (IETF) as a standard, but S-HTTP is rarely used, whereas SSL is commonplace.<br />

SSL uses TCP/IP on behalf of the higher-level protocols, and in the process allows an SSL-enabled server to<br />

authenticate itself to an SSL-enabled client, allows the client to authenticate itself to the server, and allows both<br />

machines to establish an encrypted connection.<br />

These capabilities address fundamental concerns about communication over the Internet and other TCP/IP<br />

networks:<br />

• SSL server authentication allows a user to confirm a server's identity. SSL-enabled client software can use<br />

standard techniques of public-key cryptography to check that a server's certificate and public ID are valid<br />

and have been issued by a certificate authority (CA) listed in the client's list of trusted CAs.<br />

• SSL client authentication allows a server to confirm a user's identity, using the same techniques as those used for server authentication.<br />

• An encrypted SSL connection requires all information sent between a client and a server to be encrypted by<br />

the sending software and decrypted by the receiving software, thus providing a high degree of<br />

confidentiality. In addition, all data sent over an encrypted SSL connection is protected with a mechanism<br />

for detecting tampering--that is, for automatically determining whether the data has been altered in transit.<br />

The SSL protocol supports different cryptographic algorithms, or ciphers. The SSL handshake protocol determines<br />

how the server and client negotiate which cipher suites they will use to authenticate each other, to transmit<br />

certificates, and to establish session keys.<br />

The cipher suite descriptions that follow refer to these algorithms:<br />

• DES. Data Encryption Standard, an encryption algorithm used by the U.S. Government.<br />

• DSA. Digital Signature Algorithm, part of the digital authentication standard used by the U.S. Government.<br />

• KEA. Key Exchange Algorithm, an algorithm used for key exchange by the U.S. Government.<br />

535 TLS, http://www.ietf.org/rfc/rfc2246.txt<br />

536 SHTTP, http://www.ietf.org/rfc/rfc2660.txt<br />


• MD5. Message Digest algorithm developed by Rivest.<br />

• RC2 and RC4. Rivest encryption ciphers developed for RSA Data Security.<br />

• RSA. A public-key algorithm for both encryption and authentication. Developed by Rivest, Shamir, and<br />

Adleman.<br />

• RSA key exchange. A key-exchange algorithm for SSL based on the RSA algorithm.<br />

• SHA-1. Secure Hash Algorithm, a hash function used by the U.S. Government.<br />

• SKIPJACK. A classified symmetric-key algorithm implemented in FORTEZZA-compliant hardware used<br />

by the U.S. Government.<br />

• Triple-DES. DES applied three times.<br />

Key-exchange algorithms like KEA and RSA key exchange govern the way in which the server and client<br />

determine the symmetric keys they will both use during an SSL session. The most commonly used SSL cipher<br />

suites use RSA key exchange.<br />

The SSL protocol supports overlapping sets of cipher suites. Administrators can enable or disable any of the<br />

supported cipher suites for both clients and servers. When a particular client and server exchange information<br />

during the SSL handshake, they identify the strongest enabled cipher suites they have in common and use those<br />

for the SSL session.<br />
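A small sketch of the negotiation rule just described; the suite names are illustrative and a server-side preference order (strongest first) is assumed.<br />

```python
# Hypothetical preference-ordered cipher-suite lists, strongest first.
SERVER_SUITES = ["RSA_WITH_3DES_EDE_CBC_SHA", "RSA_WITH_RC4_128_MD5", "RSA_EXPORT_WITH_RC4_40_MD5"]
CLIENT_SUITES = ["RSA_WITH_RC4_128_MD5", "RSA_EXPORT_WITH_RC4_40_MD5"]

def negotiate(server_suites, client_suites):
    """Pick the strongest suite enabled on both sides (server preference order)."""
    for suite in server_suites:
        if suite in client_suites:
            return suite
    raise ValueError("handshake failure: no common cipher suite")

print(negotiate(SERVER_SUITES, CLIENT_SUITES))   # -> RSA_WITH_RC4_128_MD5
```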

Which cipher suites a particular organisation decides to enable depends on trade-offs among the<br />

sensitivity of the data involved, the speed of the cipher, and the applicability of export rules.<br />

Strengths of the SSL protocol<br />

Dictionary attack<br />

This attack works in situations where some plaintext is known to make up part of the original message. In the<br />

case of a protocol, the very fact that a rigid structure of data exists is often enough information. The same applies if the application data being transferred contains a few often-used commands, for instance the "get" command in the case of HTTP. The attack works by taking the known plaintext and precomputing the encrypted form of the plaintext<br />

using every possible key. Given an encrypted message, a search is performed looking for occurrences of the<br />

precomputed ciphertexts in the message. When one is found it is then known that the key which was used to<br />

give that ciphertext is the key which was used to encrypt the whole message.<br />

SSL is protected from this attack by having very large key spaces for all its ciphers. Even the export ciphers<br />

support 128 bit keys (albeit 88 bit transmitted in the clear) and this makes the dictionary prohibitively large. For<br />

the export ciphers a dictionary could be produced of size 2^40 entries, but this would have to be done for each<br />

session and this in effect makes it identical to a brute force attack.<br />

Replay attack<br />

A replay attack is one where a third party records an exchange of messages between a client and server and<br />

attempts to rerun the client messages at the server at a later date. SSL foils this possibility by introducing a nonce. This takes the form of the connection-id, which the server randomly generates and sends to the client. Since the nonce differs for each connection, no two connections are likely to have the same nonce, and thus the old set of client messages does not satisfy the server. SSL nonces are 128 bits in length for added security.<br />

Man-in-the-middle attack<br />

The Man-In-The-Middle attack occurs when an adversary is able to intercept client messages, read them and pass<br />

them on to the server and vice versa. This attack is prevented by SSL forcing the server to use its private key to<br />

decrypt the master key if it is to continue to handshake with the client. To be able to do this requires that it has a<br />

valid certificate which the client can verify. An adversary would have to fake a certificate and break the<br />

certification authority's key in order to compromise this chain; a task which could be more difficult than doing a<br />

brute force attack on the cipher itself.<br />

Brute force attack against strong ciphers<br />

A brute force attack against the ciphers with 128 bits or more is completely impractical in the foreseeable future.<br />

The only non-export cipher which would be susceptible to this attack is the standard DES 56-bit cipher; its use should be avoided wherever possible.<br />

Weaknesses of the SSL protocol<br />

Brute Force Attack Against Weak Ciphers<br />


The most obvious weakness of the protocol is the susceptibility of the ciphers which use small keys to brute<br />

force attack, in particular RC4-40, RC2-40 and to some extent DES-56.<br />

Renegotiation of Session Keys<br />

In the present situation, once a connection is established, the same session key (master key) is used throughout.<br />

If SSL were layered underneath a long-running connection, such as a Telnet application, then this failure to alter<br />

session keys becomes a potential security hole.<br />

One of the best methods of increasing the security of a system such as this is to force renegotiation of the session<br />

key at regular intervals, thus multiplying the difficulty and cost of a brute force attack by the number of times the<br />

session key is changed. This could actually be used to increase the security of the export ciphers. If an adversary<br />

were faced with the task of having to decrypt 100,000 records all encrypted using a different, albeit weak, key<br />

rather than the same key, then the task becomes 100,000 times more difficult.<br />

<strong>A2.</strong>10.2.4 Kerberos<br />

Description<br />

Figure 85: Kerberos architecture<br />

Kerberos is a system designed primarily to provide single sign-on authentication within a domain. In other<br />

words, users only have to authenticate once to be able to access multiple application servers.<br />

The basic architecture is shown in Figure 85. A user initially authenticates to the AS and obtains a Ticket<br />

Granting Ticket (TGT) in return (steps 1 and 2 in Figure 85). This is stored by the user's client software. Each<br />

time the user wishes to access a particular application server, their client software first of all presents the TGT to<br />

the Ticket Granting Service (TGS) (step 3 in the figure). The TGS checks the TGT, and if it is valid it returns a<br />

ticket (the authentication assertion) for that particular application to the user's client (step 4 in the figure). The<br />

client can then present this ticket to the application to authenticate the user (step 5 in the figure). Note that the<br />

only time users have to authenticate themselves is to the AS as the rest of the ticket handling process is handled<br />

transparently by their client software.<br />
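The sequence can be summarised in a short sketch; the client object and its method names are hypothetical placeholders for a Kerberized client library, not an actual API.<br />

```python
def single_sign_on(client, username, password, app_server):
    """Sketch of the ticket flow in Figure 85 (method names are illustrative)."""
    tgt = client.authenticate_to_as(username, password)        # steps 1-2: obtain the TGT from the AS
    ticket = client.request_service_ticket(tgt, app_server)    # steps 3-4: present the TGT to the TGS
    return client.present_ticket(app_server, ticket)           # step 5: access the application server
```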

Authentication in Kerberos is based on the use of secret-key cryptography. Another property is that the user's client has to contact the TGS for every application the user wishes to access. Revocation of tokens is not possible, but each token<br />

is timestamped to limit its lifetime. Finally, client software and all applications have to be modified (they have to<br />

be "Kerberized") in order to use the system.<br />

Analysis<br />


Two major problems are raised by the use of Kerberos:<br />

• In order for the timestamps to be meaningful, there is a need for synchronised clocks on the network.<br />

Although not necessarily a problem, there is a potential for this to limit its applicability.<br />

• In order to authenticate users, the AS has to store all user passwords. Concentrating all user passwords in one<br />

location is a potential security issue, as well as an availability one.<br />

<strong>A2.</strong>10.3 Emerging Technologies<br />

<strong>A2.</strong>10.3.1 Emerging protocols and mechanisms<br />

<strong>A2.</strong>10.3.1.1 SIP Security<br />

SIP (Session Initiation Protocol) 537 based VoIP (Voice over IP) solutions and platforms are becoming widely<br />

adopted as a lower cost replacement for more traditional PSTN based voice networks. PSTN based networks are<br />

by design tightly secured networks for a number of reasons:<br />

• Most PSTN traffic traverses well-established and trusted networks. Most PSTN based connections can be<br />

controlled to take place over a single operator's network or else over equally trusted and secured peer<br />

networks.<br />

• Limitations on what traffic can be carried over the network (i.e. voice and voice signalling) and on how that<br />

traffic is generated and transmitted. The PSTN is a single service network capable of switching limited<br />

traffic types.<br />

• Voice traffic can only be transmitted across the network once an end-to-end connection is established. An<br />

end-to-end connection can only take place between established end points. Also it is impossible to re-direct<br />

traffic maliciously to a third party.<br />

• Sophisticated monitoring of the whole network and high levels of resilience and redundancy are built in.<br />

Malicious use of the network is most often traceable as end-points are uniquely identifiable by their<br />

connection point.<br />

Within a multi-service, globally accessible network such as the Internet, access is much more difficult to control. Therefore providing an equivalent voice network to the PSTN on the Internet provides much more<br />

scope for unsolicited access and abuse. Unlike a PSTN network there is no scope within the Internet for<br />

restricting packet origin, its flow through the Internet and any unsolicited access (however, firewalls on the edge<br />

of Intranets can restrict packet flow into and out of a private network). Ultimately, then, many new security hurdles exist for a publicly based VoIP service.<br />

<strong>A2.</strong>10.3.1.2 Eavesdropping<br />

SIP messages are in the form of ASCII text. Any interception of a SIP message in transit will easily reveal the<br />

details of the calling and called party. In addition, once the details of the call are known it should be possible<br />

then to easily intercept and ‘listen-in’ to the media stream.<br />

End-to-end encryption of the signalling and media would ensure that eavesdropping could not occur. In<br />

particular, Secure RTP 538 should be used to prevent listening in on media streams. However, encryption schemes require a supporting public key infrastructure (PKI) to be in place. Traversing NATs and firewalls may be<br />

problematic unless these were included within the security infrastructure and had knowledge of the keys being<br />

used by the end hosts.<br />

It should be noted that the proposed security measures in this area for SIP – encryption – go far beyond what is<br />

performed on the PSTN i.e. no encryption whatsoever. This however does not mean that PSTN traffic is less<br />

secure as physical access to the network, and to the correct circuits, is likely to prove more difficult to achieve.<br />

<strong>A2.</strong>10.3.1.3 Authentication<br />

Authentication refers to the need to ensure that we are connected to the person we intend. To guarantee this, we need to make certain that SIP Register messages are secured and that call set-up traffic is not<br />

537 SIP, http://www.ietf.org/rfc/rfc2543.txt<br />

538 SRTP, http://www.ietf.org/rfc/rfc3711.txt<br />


intercepted. Register messages will usually involve the exchange of a password that is specific to the SIP customer. Protection of the password is usually via a hashing scheme, e.g. HTTP Digest. This is probably<br />

sufficient to ensure that the end user is genuine.<br />
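For illustration, the basic HTTP Digest computation (the RFC 2617 form without the optional qop parameters) looks as follows; the parameter values are made up.<br />

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, realm, password, method, uri, nonce):
    """HTTP Digest response as used for SIP REGISTER challenges (no qop)."""
    ha1 = md5_hex(f"{username}:{realm}:{password}")   # the secret itself is never sent
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")

# The registrar issues a nonce in its 401 challenge; the client answers with this hash.
print(digest_response("alice", "example.com", "s3cret", "REGISTER", "sip:example.com", "abc123"))
```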

SIP also contains headers for authentication purposes. The authorisation header contains a signature used to<br />

verify communication between SIP proxy servers. In order to identify itself to a proxy, a SIP user agent uses the<br />

proxy-authorisation header.<br />

<strong>A2.</strong>10.3.1.4 Integrity<br />

How do we ensure that legitimate traffic is not tampered with whilst in transit? Obviously encryption of the data will<br />

go a long way to ensure this cannot occur. Creating a secure association, again by a method such as HTTP<br />

Digest, between the two end hosts would provide additional protection for the media stream. A certificate-based<br />

scheme would, as mentioned previously, require a PKI for support.<br />

Replaying, or retransmitting, a genuine message can be used by an attacker to tie up the receiver as it will be<br />

bound by the need to process what it thinks is a genuine message. SIP messages can overcome this problem via<br />

the ‘Cseq’ and ‘Call-ID’ headers which can be used to ensure that messages are received in order and only<br />

processed once.<br />
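A minimal sketch of such duplicate suppression keyed on Call-ID and CSeq; a real SIP stack applies considerably more elaborate transaction matching, so this is only an illustration of the idea.<br />

```python
# Drop retransmitted or out-of-order requests using Call-ID + CSeq.
highest_cseq_seen = {}          # Call-ID -> highest CSeq number processed so far

def accept_request(call_id: str, cseq: int) -> bool:
    if cseq <= highest_cseq_seen.get(call_id, 0):
        return False            # replayed or out-of-order: ignore
    highest_cseq_seen[call_id] = cseq
    return True
```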

<strong>A2.</strong>10.3.1.5 Availability<br />

The final concern centres on the availability of the SIP elements, in particular the SIP servers. Any denial of<br />

service attacks on the servers would render new call set-up and existing call clear down impossible. Protection<br />

against this type of attack will be essential for any public VoIP infrastructure, as loss of service inevitably leads to loss of revenue and a decrease in customer satisfaction. Careful configuration of equipment is necessary<br />

in order to prevent such attacks.<br />

The SIP protocol itself can be used to force SIP messages to be diverted from a series of unsuspecting SIP<br />

proxies to a single server under attack. This is achieved by modifying the ‘Via’ header in order to give the<br />

appearance that many messages have come by way of a particular server. This type of DDoS attack can be protected<br />

against by integrity checking each SIP message but obviously requires the extra processing of each response<br />

message at each server.<br />

<strong>A2.</strong>10.3.2 DNSSec<br />

Description<br />

DNS Security 539 technology consists of a set of extensions to the Domain Name System (DNS) protocol that provide data integrity and data origin authentication to security-aware resolvers and applications, mainly through<br />

the use of public-key cryptography. Confidentiality is not required as the information stored in the DNS<br />

database is supposedly public.<br />

Analysis<br />

The general idea is that each node in the DNS tree is associated with a public key. Each message from DNS<br />

servers is signed under the corresponding private key associated with the public key of the domain. It is<br />

important to underline that each public key is associated with a domain (a node in the DNS tree), not with a<br />

specific DNS server.<br />

It is assumed that one or more authenticated DNS root public keys are publicly known. These keys are used to<br />

generate a digital signature that binds the identity information of each top-level domain to the corresponding<br />

public key. The top level domains sign the keys of their subdomains and so on in a process where each parent<br />

signs the public keys of all its children in the DNS tree.<br />
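The resulting chain of trust can be sketched as follows; the verify_signature callback stands in for the actual public-key signature check, and the tuple format is purely illustrative.<br />

```python
# Each zone's key is signed by its parent, so a resolver that trusts the root key
# can validate keys downwards along the DNS tree.
def verify_chain(trusted_root_key, chain, verify_signature):
    """chain is a list of (zone_name, zone_public_key, signature_by_parent) tuples,
    ordered from the top-level domain down to the queried name."""
    parent_key = trusted_root_key
    for zone, key, sig in chain:
        if not verify_signature(parent_key, data=(zone, key), signature=sig):
            raise ValueError(f"broken chain of trust at {zone}")
        parent_key = key            # this zone's key now vouches for its children
    return parent_key               # key that signs the final answer's SIG records
```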

A brief description of the DNSSEC resource records follows:<br />

• SIG. The signature (SIG) resource record is defined to store signatures in the DNS. If a server supports<br />

DNSSEC and is thus security aware, it will attempt to return the relevant Resource Records and the<br />

corresponding SIG records in an answer to a query.<br />

• KEY. The KEY record is used to store a public key. The key is associated with a DNS name. A resource<br />

record with the same name and same type can be associated with several KEY records. The KEY RR is<br />

authenticated by a SIG RR like other DNS resource records. It is possible to bind the key for use with TLS,<br />

539 Domain Name System Security Extensions. D. Eastlake. RFC 2535. March 1999.<br />


e-mail, DNSSEC and IPsec using the protocol field. A range of values are reserved for new protocols to be<br />

added in the future.<br />

• NXT. The purpose of the NXT resource record is to be able to provide data origin authentication of a nonexistent<br />

name or the non-existence of a certain resource record type for a DNS name. The NXT resource<br />

record contains the name of the next name in the zone, thus stating that there can be no resource records<br />

between the owner name and the next name.<br />

The last NXT record in a zone will contain the zone name, treating the name space as circular. As with KEY<br />

records, the NXT records are authenticated by a SIG record.<br />

• CERT. The CERT resource record can be used to store certificates in the DNS. The types of certificates<br />

currently defined are X.509, SPKI and PGP certificates. As the CERT record can contain a certificate, it is<br />

possible to use DNS for storage of public keys. It is intended that personal public keys should be stored in<br />

the DNS using the CERT record, and not by using the KEY record.<br />

Two different transaction security mechanisms are defined: transaction signatures (TSIGs) based on symmetric<br />

techniques, and public-key signatures which are abbreviated by SIG(0). Both of these signature types can be<br />

added to the end of an update packet, authenticating the complete packet. Transaction signatures (TSIG) are<br />

created using symmetric encryption methods, meaning that the parties involved in the communication need to<br />

have a shared secret. It is convenient to use TSIG to secure dynamic updates or zone transfers between master<br />

and slave servers. SIG(0) is similar to TSIG but employs public-key signatures. SIG(0) may not be practical to<br />

use on a large scale but it is useful in case integrity protection and authentication of the message as a whole are<br />

desired. SIG(0) could be used to authenticate requests when it is necessary to check whether the requester has<br />

some required privilege.<br />

<strong>A2.</strong>10.3.3 IKEv2<br />

Description<br />

IKEv2 is currently work in progress and was published as a WG draft in January 2004. The current version of IKE (as defined in RFCs 2407, 2408, and 2409) has several deficiencies, which have been discussed by<br />

Niels Ferguson & Bruce Schneier, Tero Kivinen, Kaufman & Perlman for example. The criticisms can be<br />

summarised as:<br />

• The IKE protocol documentation is too complex as it is divided into at least three documents<br />

• The IKE protocol allows eight different initial exchanges<br />

• The IKE protocol introduces too high a latency when setting up SAs. For example, with Main Mode it requires the exchange of nine messages (six in Phase 1 plus three in Quick Mode)<br />

• The IKE protocol uses a different cryptographic syntax for protecting its own communication<br />

• The IKE protocol has, because of its complexity, a high number of possible error states<br />

• The IKE protocol forces a responder to spend considerable processing power and to create state information<br />

before it is able to cryptographically authenticate the initiator<br />

• The IKE protocol uses hashes which do not cover all fields of the IKE message. This can lead to security<br />

problems<br />

• The IKE protocol does not offer flexible means to select the traffic which should be protected by the SA,<br />

other than overloading ID payloads<br />

• The IKE protocol has no standardised means to work in environments where, for example, NAT, extended<br />

authentication or remote address acquisition are required<br />

IKEv2 is currently being developed by the IETF to overcome the deficiencies of IKEv1. The current IKEv2<br />

document tries to address the criticisms made of IKEv1 in several ways. For example, the information given for<br />

IKEv1 in the RFCs 2407, 2408, and 2409 is now included in the IKEv2 document. The number of messages<br />

necessary to establish an IKE SA is reduced from six messages, as in IKEv1’s Main Mode, to only four<br />

messages, i.e. two exchanges. The flow of messages in IKEv2 always consists of request / response message<br />

pairs, where one such pair forms an “exchange”.<br />

In the first exchange of an IKE session, called IKE_SA_INIT, security parameters for the IKE SA, nonces and<br />

Diffie-Hellman values are transmitted in request / response fashion.<br />


The IKE_AUTH exchange, which is the second request / response, exchanges proof of the knowledge of the<br />

secrets which are bound to the identities of the two systems, and creates an SA for the first CHILD_SA, which<br />

can be AH and/or ESP.<br />

Several subsequent exchanges such as CREATE_CHILD_SA (creates a CHILD_SA) and INFORMATIONAL<br />

(deletes an SA, reports error conditions, or does other housekeeping operations) are defined. Every sent request<br />

requires a received response. Furthermore, with INFORMATIONAL requests without any additional payload, it<br />

is possible to check if the other system is still alive and responding. All the subsequent exchanges<br />

(CREATE_CHILD_SA, INFORMATIONAL) can only be used if the initial request / responses (IKE_SA_INIT,<br />

IKE_AUTH) have been successfully completed.<br />
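A compact sketch of the exchange types and the ordering rule described above; the enum is illustrative and deliberately simplified (it does not reproduce the actual exchange type codes or the IKE_SA_INIT/IKE_AUTH ordering in full detail).<br />

```python
from enum import Enum, auto

class Exchange(Enum):
    IKE_SA_INIT = auto()        # negotiate IKE SA parameters, nonces, Diffie-Hellman values
    IKE_AUTH = auto()           # authenticate the peers and create the first CHILD_SA
    CREATE_CHILD_SA = auto()    # create additional CHILD_SAs (AH and/or ESP)
    INFORMATIONAL = auto()      # delete SAs, report errors, liveness checks

def allowed(next_exchange: Exchange, initial_done: bool) -> bool:
    """Subsequent exchanges may only be used once the two initial ones completed."""
    if next_exchange in (Exchange.IKE_SA_INIT, Exchange.IKE_AUTH):
        return not initial_done
    return initial_done
```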

The IKEv2 protocol can be used to establish Security Associations in several scenarios. The most common use<br />

is expected in the following scenarios, each of which has its own special requirements:<br />

Security Gateway to Security Gateway Tunnel<br />

Figure 86: Security Gateway to Security Gateway Tunnel (diagram: Protected Subnet, Tunnel Endpoint, IPsec Tunnel, Tunnel Endpoint, Protected Subnet)<br />

In this scenario, the IPsec functionality is implemented by network nodes (Tunnel Endpoints, i.e. Security Gateways) between the IP connection endpoints (located in the Protected Subnet). This means that the traffic is<br />

not protected end-to-end but only on a part of its way through the network, that is the path between the Tunnel<br />

Endpoints. The protection of the traffic is transparent for the IP connection endpoints and depends on the<br />

ordinary routing. This means, only packets from one IP connection endpoint to the other which are routed<br />

through the Tunnel Endpoints will use the protection provided by the IPsec Tunnel. All packets which do not<br />

take this route will travel unprotected. The Tunnel Endpoints announce to each other which ranges of IP addresses are located "behind" them and can thus be reached through the IPsec Tunnel. The packets originating in the<br />

Protected Subnets contain the IP address of the actual sender. The packets travelling over the IPsec tunnel have<br />

the IP address of the Tunnel Endpoints as source and destination IP address of the outer IP header. The original<br />

source and destination IP address is contained in the inner IP header.<br />

Endpoint to Endpoint Transport<br />

Figure 87: Endpoint to Endpoint Transport (diagram: Protected Endpoint, IPsec Tunnel, Protected Endpoint)<br />

In this scenario, the IPsec functionality is implemented in both IP connection endpoints. The endpoints can<br />

decide whether they use Transport mode or Tunnel mode to protect their communication. If Tunnel mode is<br />

used, the inner IP header will have the same IP source and destination address as the outer IP header. The two<br />

IKEv2 instances will negotiate only a single pair of addresses to be protected by this Security Association. It is



possible for both endpoints to implement application layer access controls based on the authenticated identities<br />

of the participants.<br />

Furthermore, it is possible in this scenario that one or both endpoints are located behind NAT nodes if they have<br />

to use IPv4. In this case, the tunnelled packets have to be encapsulated in UDP packets, where the port numbers included in the UDP header can be used by the NAT nodes to identify individual endpoints<br />

behind them and to correctly forward the packets.<br />

Endpoint to Security Gateway Transport<br />

Figure 88: Endpoint to Security Gateway Transport (diagram: Protected Endpoint, IPsec Tunnel, Tunnel Endpoint, Protected Subnet and / or Internet)<br />

In this scenario, an endpoint which implements IPsec (protected endpoint) connects to a network through an<br />

IPsec protected tunnel. The endpoint could be a portable roaming computer which connects to its company<br />

network via the Tunnel Endpoint (i.e. Security Gateway) to access resources available there or to access the<br />

global internet, using the protection of its corporate infrastructure, e.g. firewalls. In both cases, the protected<br />

endpoint needs an IP address associated with the Tunnel Endpoint so that packets addressed to it will be routed<br />

to the Tunnel Endpoint and then tunnelled to it protected by the IPsec tunnel. The IP address assigned to it by<br />

the Tunnel Endpoint can either be static or dynamic. The second case is also supported by IKEv2 as it is<br />

possible for the initiator (i.e. the protected endpoint) to request an IP address which belongs to the Tunnel<br />

Endpoint for use during the lifetime of its Security Association.<br />

The packets in this scenario will be sent from the protected endpoint to the tunnel endpoint using Tunnel mode.<br />

Every packet from the protected endpoint will contain the IP address associated with its current location in the<br />

source address field of the outer IP header, while the IP address assigned by the Tunnel Endpoint will be used as<br />

source IP address in the inner IP header. The destination address of the outer IP header will be the IP address of<br />

the Tunnel Endpoint, while the IP address of the inner IP header will be the IP address of the final destination of<br />

the transmitted packet.<br />

Furthermore, it is possible in this scenario that the protected endpoint is located behind a NAT node when using<br />

IPv4. In this case it is necessary to encapsulate the packets into UDP packets as the IP address as seen by the<br />

Tunnel Endpoint will not be the same as the IP address from which the protected endpoint sends.<br />

Other scenarios are possible, for example nested combinations of the above described scenarios. One notable<br />

example combines aspects of “Security Gateway to Security Gateway Tunnel” and “Endpoint to Security<br />

Gateway Transport”. A subnet may make all external accesses through a remote security gateway using an IPsec<br />

tunnel. This means that the addresses on the subnet need to be routed to the security gateway by the rest of the<br />

Internet. A possible example would be a home network being virtually on the Internet with static IP addresses<br />

even though connectivity is provided by an ISP that assigns a single dynamically assigned IP address to the<br />

user's security gateway, while the static IP addresses and an IPsec relay are provided by a third party located<br />

elsewhere.<br />

Analysis<br />

IKEv2 is a component of IPsec providing security services. It performs mutual authentication and establishes<br />

and maintains security associations (SAs) between entities.<br />

The IKEv2 protocol consists of two phases:<br />

• An authentication and key exchange protocol, which establishes an IKE-SA,



• Messages and payloads, which allow negotiation of parameters in order to establish IPsec SAs.<br />

<strong>A2.</strong>10.4 Mobility and network access control<br />

<strong>A2.</strong>10.4.1 Solutions to the security issues raised in Mobile IP<br />

Description<br />

In contrast to Mobile IPv4, Mobile IPv6 has built-in support for route optimisation, that is, for direct communication between the mobile node (MN) and the correspondent node (CN) without involvement of the home agent (HA), as part of the basic protocol specification. While this gives a significant improvement in<br />

protocol efficiency, the security concerns of route optimisation have been discussed for a long time in the MIP<br />

WG.<br />

The matter of concern has been the Binding Update exchange between the MN and the CN, which informs the CN about the MN's current point of attachment to the network. If an attacker sends malicious Binding Updates to an arbitrary CN, containing its own address as care-of address but someone else's address as home address, it can easily redirect traffic destined for this home address from the CN towards itself. It is therefore absolutely necessary to authenticate Binding Updates, to be sure that they are really sent by the MN owning the home address contained in the Binding Update.<br />

The authentication of Binding Updates sent from the MN to the HA can be done using the IPsec protocol. As<br />

HA and MN belong to the same subnet and the HA acts as a proxy for the MN, it can be assumed that they already have some kind of trust relationship. That is, in this case it is possible to exchange keying material offline, to be used later for the IPsec-based authentication of Binding Updates.<br />

Between the MN and an arbitrary CN no such trust relationship can be assumed. That is, there is usually no way<br />

to have any keying material exchanged in advance. On the other hand, exchanging keying material online on request offers attackers the opportunity for man-in-the-middle attacks. The only solution would therefore be the use of a common PKI. As MN and CN could be arbitrary nodes, this PKI would in principle need to cover<br />

the whole global Internet. Such a PKI does not exist today, and probably will not exist in the coming years.<br />

To get out of this chicken-and-egg problem the MIP WG specified an alternative solution for the mutual<br />

authentication of the Binding Updates exchange between MN and CN. This solution is called Return<br />

Routability.<br />

Within the Return Routability procedure as illustrated in<br />

Figure 89, the MN as initiator of this mechanism sends two test packets to the CN, one directly (Care-of Test<br />

Init (CoTI)), the other one via the HA (Home Test Init (HoTI)), both containing a cookie generated by the MN.<br />

The test packet sent via the HA is protected by IPsec on its way between the MN and the HA. The CN replies to<br />

both of these packets, again on the direct way to the MN (Care-of Test (CoT)) and via the HA (Home Test<br />

(HoT)). In these replies it returns the cookies generated by the MN and includes additionally cookies generated<br />

by itself. From the returned cookies the MN generates the keying material used to authenticate the Binding<br />

Updates sent to the CN.<br />



Figure 89: Overview of Return Routability (the MN sends a Home Test Init (HoTI) message via the HA and a Care-of Test Init (CoTI) message directly to the CN; the CN replies with a Home Test (HoT) via the HA and a Care-of Test (CoT) directly, each carrying the returned cookie together with a cookie and nonce index generated by the CN)<br />
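As a sketch of the final step, the MN hashes the two tokens returned in the HoT and CoT messages to obtain the key used to authenticate its Binding Updates; Mobile IPv6 uses SHA-1 for this derivation, and the token values below are placeholders.<br />

```python
import hashlib

def binding_management_key(home_token: bytes, care_of_token: bytes) -> bytes:
    """Derive the key for authenticating Binding Updates from the two returned tokens."""
    return hashlib.sha1(home_token + care_of_token).digest()

kbm = binding_management_key(b"home-keygen-token", b"care-of-keygen-token")
print(kbm.hex())
```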

Analysis<br />

The mechanism of Return Routability can be used between the MN and any CN, increases the time for the<br />

Binding registration by roughly the round-trip time between MN and CN, and protects against the majority of attack scenarios. It allows mutual authentication to be performed between the MN and the CN without a<br />

common PKI or a previously established trust relationship allowing for offline exchange of keying material.<br />

However, there is still a minor possibility for an attacker to calculate for itself the same keying material as used by<br />

the MN, use it for authenticating its own Binding Updates sent to the CN, and therefore again redirect<br />

communication traffic sent from the CN to the MN. For this purpose the attacker must eavesdrop on all<br />

messages exchanged within the Return Routability process, HoTI, CoTI, HoT, and CoT.<br />

One place to achieve this would be to perform the attack directly on the MN’s access network. However, as<br />

HoTI and HoT will be sent via the HA and are therefore encrypted on the path between MN and HA, an attacker<br />

could not read these messages even if it were able to receive them. The only remaining place to receive and read<br />

all four Return Routability messages is directly at the access network of the CN. As it is difficult to predict when<br />

a CN will communicate using MIPv6 with an MN, and as the Return Routability process is initiated by the MN,<br />

an attack is in principle possible, but only with limited frequency.<br />

One solution that would also address this remaining threat is the use of Cryptographically Generated Addresses<br />

(CGAs) as discussed in chapter <strong>A2.</strong>10.4.3. However, this is currently not part of the MIPv6 protocol<br />

specification.<br />

<strong>A2.</strong>10.4.2 IPv6 Privacy Extensions for Address Autoconfiguration<br />

Description<br />

The IPv6 Privacy Extensions for Address Autoconfiguration as defined in RFC3041 are a mechanism to<br />

mitigate privacy concerns which might arise from the use of static IPv6 addresses derived from IEEE identifiers<br />

such as MAC addresses. Any time the same IP address is used in multiple contexts, it becomes possible to<br />

correlate seemingly unrelated activities. A network sniffer placed on a link across which all traffic to/from a<br />

particular node crosses could determine which destinations a node communicated with and at what times. The<br />

use of a constant identifier within an address is of special concern because addresses are a fundamental<br />

requirement of communication and cannot easily be hidden. Even when higher layers encrypt their payloads,<br />

addresses in packet headers appear in the clear. Consequently, if a mobile host (e.g. laptop) accessed the<br />

network from several different locations, an eavesdropper might be able to track the movement of that mobile<br />

host from place to place.<br />

The standard proposes to generate additional, randomised interface identifiers which might be used for outgoing<br />

connections in order to hide activities of the node behind these addresses. Incoming connections might still be<br />

using the permanent IP-address of the node.<br />


Based on these randomised interface identifiers a temporary IPv6 address can be calculated. This is done in the following steps (a short sketch in code follows the list):<br />

• Take the history value from the previous iteration of this algorithm (or a random value if there is no previous<br />

value) and append to it the interface identifier<br />

• Compute the MD5 message digest over the quantity created in the previous step.<br />

• Take the left-most 64-bits of the MD5 digest and set bit 6 (the left-most bit is numbered 0) to zero. This<br />

creates an interface identifier with the universal/local bit indicating local significance only. Save the<br />

generated identifier as the associated randomised interface identifier.<br />

• Take the rightmost 64-bits of the MD5 digest computed in step 2) and save them in stable storage as the<br />

history value to be used in the next iteration of the algorithm.<br />
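A minimal sketch of these steps, assuming an 8-byte EUI-64-style interface identifier; the example input value is arbitrary.<br />

```python
import hashlib
import os
from typing import Optional

def next_temporary_iid(interface_id: bytes, history: Optional[bytes] = None):
    """Return (randomised interface identifier, new history value) per the steps above."""
    if history is None:
        history = os.urandom(8)                             # no previous value: random seed
    digest = hashlib.md5(history + interface_id).digest()   # 128-bit MD5 digest
    iid = bytearray(digest[:8])                             # left-most 64 bits
    iid[0] &= 0b11111101                                    # clear bit 6: universal/local bit -> local
    return bytes(iid), digest[8:]                           # right-most 64 bits become the next history

iid, history = next_temporary_iid(bytes.fromhex("021122fffe334455"))
print(iid.hex())
```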

When an interface connects to a new link, a new randomized interface identifier should be generated<br />

immediately together with a new set of temporary addresses, making it more difficult to correlate addresses from<br />

the two different links as coming from the same node.<br />

Analysis<br />

The more frequently an address changes, the less feasible collecting or coordinating information keyed on this<br />

address becomes. If a large enough number of clients implement privacy extensions, privacy concerns can be<br />

met.<br />

It must be noted that implementing these privacy extensions does not provide perfect privacy, as the network<br />

prefix portion of an address also serves as a constant identifier.<br />

The network prefix of a home network would identify the topological location of all nodes in the network and, if the network is small, still give away sensitive information. This issue is difficult to address, because the<br />

routing prefix part of an address contains topology information and cannot contain arbitrary values.<br />

Additionally, a node's DNS name serves as a constant identifier. In order to meet server challenges, nodes could register their temporary addresses in the DNS under random names using Dynamic DNS.<br />

The determination as to whether to use public vs. temporary addresses can, in some cases, only be made by an<br />

application. For example, some applications may always want to use temporary addresses, while others may<br />

want to use them only in some circumstances or not at all. Enabling privacy extensions can have the undesired<br />

side-effect that it will make it more difficult to track down and isolate operational problems in the network.<br />

The current consensus is that the IPv6 privacy extensions only partly solve the privacy problem. The current<br />

application of the standard is thought to be in the area where nodes switch providers and points of attachment to the network and do not want to be traceable by doing so. There is little protection for the ordinary home user, and DHCP-assigned addresses from an address pool might actually provide much better privacy.<br />

Although there are implementations available it is not clear to what extent they are currently used and how<br />

privacy extensions interact with other IPv6 mechanisms such as Mobile IP and IPsec. Implementation of privacy<br />

extensions can, however, enhance the even distribution of hosts in the 64-bit subnet space of IPv6, thus making<br />

it harder for an attacker to guess the IP-address of a potential victim.<br />

<strong>A2.</strong>10.4.3 Cryptographically Generated Addresses<br />

Description<br />

Cryptographically Generated Addresses (CGAs) are IPv6 addresses that allow a secure association of an IPv6 address, the CGA, with a public key. While this kind of association is otherwise mainly done using certificates, and therefore requires the deployment of Public Key Infrastructures (PKIs), the CGA approach does not require any infrastructure at all.<br />

In principle, CGAs are generated like all IPv6 addresses by concatenating a 64 bit long subnet prefix with a 64<br />

bit long identifier. However, in CGAs the identifier additionally reflects the public key belonging to the CGA.<br />

More precisely, the identifier is a hash value formed from a CGA parameter set, including among others the<br />

public key. Knowing these CGA parameters, any receiver of IPv6 packets with a CGA as source address can recalculate<br />

the hash value, and verify if it matches the one contained in the 64 bit identifier of the packet’s source<br />

address. Figure 90 provides an overview of the structure of CGAs.<br />


Figure 90: Structure of CGAs (a 64-bit subnet prefix followed by the 64-bit CGA-specific interface identifier; bits 0-2 of the identifier carry the security parameter and bits 6 and 7 are the "u" and "g" bits)<br />



The CGA parameters mentioned above, which are used for calculating the CGA, consist of the following:<br />

• a 16 octet long modifier, which can be chosen arbitrarily,<br />

• an 8 octet long subnet prefix, which is equal to the subnet prefix of the CGA itself,<br />

• a 1 octet long collision count, as well as<br />

• the public key itself, which can have a variable length.<br />

Based on these CGA parameters the identifier of the CGA can be calculated. This is done in the following steps (a simplified sketch in code follows the list):<br />

• Generate a public / private key pair.<br />

• Choose an arbitrary value for the 16-octet modifier.<br />
• Choose an appropriate value between 0 and 7 as the security parameter Sec. This value will determine how<br />

difficult it will be to break a generated CGA by means of brute-force attacks. Breaking a CGA means here<br />

finding another public / private key pair, which results in the same interface identifier. Besides that, the<br />

selection of Sec will also determine the duration for generating a new CGA. The higher the value selected<br />

for Sec, the more difficult it will be to break a generated CGA address with brute-force attacks, but also the<br />

longer it will take to generate the CGA itself.<br />

• Concatenate the selected modifier, the subnet prefix and the collision count (both set to zero), and the public<br />

key value, and calculate from this concatenation a 160 bit hash value using the SHA-1 algorithm. The hash<br />

value Hash2 will be the first 112 bits.<br />

• Compare the (16 * Sec) first bits of Hash2 with zero. If they don’t match, increase the modifier by 1 and<br />

calculate the next hash value. This will be repeated until the (16 * Sec) first bits of Hash2 are all zero.<br />

• Concatenate the final value for the modifier, the real subnet prefix, the collision count set to zero, and the<br />

public key, and calculate from this concatenation a 160 bit hash value using the SHA-1 algorithm. The hash<br />

value Hash1 will be the first 64 bits.<br />

• The CGA specific interface identifier will be Hash1, with the first 3 bits replaced by the Sec parameter, and<br />

bits 6 and 7, the “u” and “g” bits, set to zero.<br />

• Optionally, one could now perform a collision detection in order to check if someone else on the subnet is<br />

using the same IPv6 address. If so, the collision counter should be increased by 1, and a new Hash1 value<br />

should be generated with this modified CGA parameter. In order to protect against denial of service attacks,<br />

this process is stopped after three collisions.<br />
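A simplified sketch of the generation loop described above; collision detection is omitted, the public key is passed as raw bytes, and the whole function is an illustration of the steps rather than a complete implementation of the specification.<br />

```python
import hashlib
import os

def generate_cga_identifier(public_key: bytes, subnet_prefix: bytes, sec: int):
    """Return the 64-bit CGA-specific interface identifier and the final modifier.
    subnet_prefix must be 8 bytes long; sec must be between 0 and 7."""
    modifier = os.urandom(16)
    collision_count = b"\x00"
    while True:
        # Hash2: modifier + zeroed prefix and collision count + public key, first 112 bits
        hash2 = hashlib.sha1(modifier + b"\x00" * 9 + public_key).digest()[:14]
        if hash2[:2 * sec] == b"\x00" * (2 * sec):          # (16 * Sec) leading zero bits
            break
        modifier = ((int.from_bytes(modifier, "big") + 1) % (1 << 128)).to_bytes(16, "big")
    # Hash1: modifier + real subnet prefix + collision count + public key, first 64 bits
    hash1 = bytearray(hashlib.sha1(modifier + subnet_prefix + collision_count + public_key).digest()[:8])
    hash1[0] = (hash1[0] & 0b00011100) | (sec << 5)         # first 3 bits = Sec, "u" and "g" bits cleared
    return bytes(hash1), modifier

iid, _ = generate_cga_identifier(public_key=b"example-rsa-public-key-bytes",
                                 subnet_prefix=bytes(8), sec=1)
print(iid.hex())
```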

In order to allow the receiver to verify a CGA, it needs to have the CGA parameters as well as the CGA itself.<br />

The latter is implicitly provided to the receiver in case it is used as a source address in IPv6 packets. In principle,<br />

there can be many ways for exchanging CGA parameters. The IETF SEND WG for example specifies one<br />

alternative used for securing the neighbour discovery process.<br />

After a successful verification, the receiver can securely associate a certain public key with an IPv6 address, a binding which is usually provided by certificates. With this information the owner of a CGA can use its

private key in order to sign messages, knowing that the receiver will be able to associate the appropriate public<br />

key and use this for the verification of the message signature.<br />

Analysis<br />

If a node which is using a CGA as its source IPv6 address has privacy concerns, two aspects have to be considered. Privacy could be broken by the usage of a fixed CGA identifier in IPv6 source addresses, as well as by the transmission of the public key within the CGA parameter set.

The issue with the fixed CGA identifier can be addressed by computing several CGAs for the same public key,<br />

and switching between their usage according to the IPv6 privacy extensions specified in RFC 3041. Generating<br />

more CGAs for the same public key can be achieved by varying the modifier part of the CGA parameters. As<br />

the computation of CGAs could become computationally expensive, especially for higher values of Sec, a<br />

solution could be a pre-calculation of CGAs, or to involve a high-performance node for offline computation.<br />

The issue with the fixed public key transmitted within the CGA parameter set is more difficult. As the public<br />

key will be visible along the whole path CGA parameters are exchanged on, the node owning the CGA will be<br />

traceable in this area. The only way to avoid this would be to change the public key itself. However, if

CGAs are used only in local environments, such as for securing neighbour discovery, one may not be concerned<br />

with privacy as tracing a node could here be possible also by other means, such as following link layer<br />

information. Also, using certificates instead of CGAs would cause the same privacy concerns.<br />



<strong>A2.</strong>10.4.4 EAP<br />


Description<br />

EAP is an authentication framework which supports multiple authentication methods. EAP typically runs<br />

directly over data link layers such as PPP or IEEE 802, without requiring IP. EAP is used to select a specific<br />

authentication mechanism and permits the use of a backend authentication server, which may implement some<br />

or all authentication methods. EAP was designed for use in network access authentication, where IP layer<br />

connectivity may not be available. Use of EAP for other purposes, such as bulk data transport, is not<br />

recommended.<br />

EAP Key Management Framework (EAP-KMF) provides support for the Extensible Authentication Protocol<br />

(EAP) that was designed to enable extensions to authentication for network access. The Framework provides<br />

support and an explanation for the generation, transport and usage of keying material generated by EAP<br />

authentication algorithms, known as methods. Keying material generated by EAP methods is usually transported<br />

by AAA protocols to different entities that need this material transformed into session keys. These session keys<br />

could be used for security association protocols, such as IKE (IPsec) or for the 4-way handshake defined in the<br />

IEEE 802.11i specification.<br />

Analysis<br />

The involved entities in the EAP scenario are:<br />

• EAP peer: end of the link that responds to the authenticator.<br />

• Authenticator: end of the link that initiates EAP authentication.

• Backend Authentication server: entity that provides an authentication service.<br />

• AAA: Authentication, Authorisation and Accounting server with EAP support, such as RADIUS or Diameter EAP.

Steps in an EAP exchange (a schematic sketch follows this list):

• Authenticator sends a request to authenticate the peer.<br />

• Peer sends a response message in reply to a valid request.<br />

• Authenticator sends an additional request packet and the peer replies with a response. This sequence<br />

continues as long as needed. After a suitable number of retransmissions, the Authenticator SHOULD end the<br />

EAP conversation. The Authenticator must not send a Success or Failure packet when retransmitting or<br />

when it fails to get a response from the peer.<br />

• The conversation continues until the Authenticator cannot authenticate the peer (unacceptable Responses to<br />

one or more Requests), in which case the Authenticator implementation must transmit an EAP Failure.<br />

Alternatively, the Authentication conversation can continue until the Authenticator determines that<br />

successful authentication has occurred, in which case the Authenticator must transmit an EAP Success.<br />
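As a rough illustration of this request/response pattern, the following sketch models the exchange as function calls. The message names, the verification logic and the retransmission limit are simplified assumptions and bear no relation to the actual EAP packet encoding or methods.

MAX_RETRANSMISSIONS = 3

def run_eap(requests, peer_respond, verify):
    """Schematic authenticator loop: send Requests, collect Responses, then decide."""
    for request in requests:                       # Identity request, then method-specific requests
        response = None
        for _ in range(1 + MAX_RETRANSMISSIONS):
            response = peer_respond(request)       # the peer answers a valid Request
            if response is not None:
                break
        if response is None:
            return "conversation ended (no Success or Failure sent)"
        if not verify(request, response):
            return "EAP-Failure"
    return "EAP-Success"

answers = {"Identity": "peer@example.org", "Challenge": "response-digest"}
print(run_eap(["Identity", "Challenge"],
              lambda req: answers.get(req),
              lambda req, resp: resp is not None))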

<strong>A2.</strong>10.4.5 EAP-TLS<br />

Figure 91 shows how EAP-TLS works:<br />

Figure 91: Diagram showing EAP-TLS. The EAP peer, the Authenticator and the Authentication server interact in the following phases: (0) discovery; (1a) EAP authentication, with optional AAA pass-through; (1b) AAA-Key transport (optional); (2) secure association (optional).


There are several entities in this diagram. The EAP peer is the entity that needs to be authenticated and responds<br />

to the authenticator. In the example, the EAP peer can be a mobile node trying to obtain access to network<br />

resources. The Authenticator is the entity that requests EAP peer authentication. The Authenticator can

implement different EAP methods in order to verify credentials sent by the EAP peer through the EAP protocol.<br />

In some circumstances, the Authenticator can act as a pass-through to a backend-authentication server (AAA<br />

server), in which case the EAP exchange is between the EAP-peer and a AAA server. A AAA protocol is<br />

needed to carry EAP packets between the Authenticator and the AAA server (i.e. by using Diameter EAP<br />

Application). In practice, this situation arises when the Authenticator cannot handle the authentication locally.

After a successful authentication, keys can be derived by the EAP peer and the AAA server. There are EAP<br />

methods (i.e. EAP-TLS, EAP-TTLS) that allow the derivation of keys and provide mutual authentication (the<br />

recommendation is to use these kinds of EAP methods). If keys are derived by a AAA server, it will take care of

sending them to the Authenticator. This allows the Authenticator to run a security association protocol (i.e. IKE)<br />

to establish a security association between the Authenticator and the EAP peer (phase 2). Furthermore, the AAA<br />

server can distribute other derived keys to other entities (considered like Authenticators) so the EAP peer can<br />

run a security association protocol with these entities. In this case, the AAA server acts like a Key Distribution<br />

Center (KDC).<br />

Note that the EAP Key Management Framework is a good reference framework for PANA, due to the fact that<br />

PANA transports EAP packets. A PANA client (PaC) would be an EAP-peer and a PANA Agent (PAA) would<br />

be an Authenticator.<br />

Protocol for carrying Authentication for Network Access (PANA)<br />

While the Extensible Authentication Protocol (EAP) is being used in more and more different networks for<br />

authentication between clients and the network, its realisation and implementation largely depend on the underlying subnetwork type. Therefore an EAP implementation for link type A can often not be reused on link type B. The goal of PANA is to provide a link layer agnostic transport mechanism for carrying EAP based

network authentication information. This has been achieved by running PANA on top of UDP / IP.<br />

Within the PANA concept three main components can be identified:<br />

• PANA Client (PaC): The PANA client is basically the end system looking for access to a certain network.<br />

• PANA Authentication Agent (PAA): The PANA Authentication Agent belongs to the network itself, and is<br />

responsible for authenticating the PaC, as well as for deciding whether to grant it network access. Therefore,<br />

the PAA can be seen as the counterpart to the PaC concerning the PANA protocol.<br />

• Enforcement Point (EP): The Enforcement Point controls the access to the network. That is, it either allows<br />

or disallows packets sent by PaCs to access the network.<br />

In principle, the PAA and EP are two different logical entities, however, in reality they can be integrated in a<br />

single physical device. Figure 92 illustrates an example PANA architecture for a hot spot scenario, in which<br />

clients connect to a network via WLAN access points (APs). In this case the EP functionality is integrated into<br />

the WLAN APs while the PAA functionality is located behind the WLAN APs and shared between them.<br />

The PANA protocol functionality itself happens in the following three phases:<br />

• PAA Discovery phase: As the name already indicates, the main task of this phase is the discovery of the<br />

PAA. During this phase the PaC, either itself or by sending traffic via the EP, initiates a PANA-Discover<br />

message to be sent to the PAA. The PAA in turn starts a handshake with the PaC, in which more details<br />

about the intended network access could be exchanged. At the end of this phase a PANA session is<br />

established between the PaC and PAA.<br />

• Authentication phase: The main task of this phase is the exchange of EAP authentication information<br />

between the PaC and PAA. The authentication achieved by this exchange fully depends on the mode of EAP<br />

selected. At the end of this phase a Security Association (SA) is established between the PaC and PAA. This<br />

phase covers the initial authentication process, but also any re-authentication if required.<br />

• Termination phase: In this phase the PANA session will be terminated. In turn, any existing PANA SA<br />

belonging to the terminated PANA session will be deleted. The termination process can be initiated by both<br />

the PaC and the PAA.<br />



Figure 92: PANA architecture. The PANA Client (PaC) reaches the PANA Authentication Agent (PAA) through the Enforcement Point (EP); the exchange comprises PAA discovery, PANA authentication and authorisation.

A2.10.4.6 Security in wireless networks

Not long after its development, the cryptographic weaknesses of WEP (Wired Equivalent Privacy) began to be

exposed. A series of independent studies from various academic and commercial institutions found that even<br />

with WEP enabled, third parties could breach WLAN security. A hacker with the proper equipment and tools<br />

can collect and analyse enough data to recover the shared encryption key. Although such a security breach might take days on a home or small business WLAN where traffic is light, it can be accomplished in a matter of

hours on a busy corporate network.<br />

Despite its flaws, WEP provides some margin of security compared with no security at all and remains useful<br />

for the casual home user for purposes of deflecting would-be eavesdroppers. For large enterprise users, WEP<br />

native security can be strengthened by deploying it in conjunction with other security technologies such as<br />

Virtual Private Networks or 802.1x authentication with dynamic WEP keys. Nevertheless, Wi-Fi users<br />

demanded a strong, interoperable, and immediate security enhancement native to Wi-Fi. The result of this<br />

demand is Wi-Fi Protected Access (WPA), constructed to provide improved data encryption, which was weak in<br />

WEP, and to provide user authentication, which was largely missing in WEP.<br />

<strong>A2.</strong>10.4.6.1 WPA<br />

WPA is a security technology for wireless networks that improves on the authentication and encryption features<br />

of WEP.<br />

Enhanced Data Encryption through TKIP<br />

To improve data encryption, WPA utilises the Temporal Key Integrity Protocol (TKIP). TKIP provides<br />

important data encryption enhancements including a per-packet key mixing function, a message integrity check<br />

(MIC) named Michael, an extended initialisation vector (IV) with sequencing rules, and a re-keying mechanism.<br />

Through these enhancements, TKIP addresses all WEP’s known vulnerabilities.<br />

Enterprise-level User Authentication via 802.1x and EAP<br />

To strengthen user authentication, WPA implements 802.1x and the Extensible Authentication Protocol (EAP).<br />

Together, these implementations provide a framework for strong user authentication.<br />


This framework utilises a central authentication server, such as RADIUS, to authenticate each user on the<br />

network before they join it, and also employs “mutual authentication” so that the wireless user doesn’t<br />

accidentally join a rogue network that might steal its network credentials. With this feature, WPA provides<br />

roughly comparable security to VPN tunnelling with WEP, with the benefit of easier administration and use.<br />

In a home or Small Office/Home Office (SOHO) environment, where there are no central authentication servers<br />

or EAP framework, WPA runs in a special home mode. This mode, also called Pre-Shared Key (PSK), allows<br />

the use of manually-entered keys or passwords and is designed to be easy to set up for the home user. All the<br />

home user needs to do is enter a password (also called a master key) in their access point or home wireless<br />

gateway and each PC that is on the Wi-Fi wireless network. WPA takes over automatically from that point.<br />

First, the password allows only devices with a matching password to join the network, which keeps out<br />

eavesdroppers and other unauthorised users. Second, using the TKIP encryption process, WPA-PSK<br />

automatically changes the keys at a preset time interval, making it much more difficult for hackers to find and<br />

exploit them.<br />
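In the PSK mode described above, the passphrase and the network name are expanded into a 256-bit pre-shared key. The following is a minimal sketch, assuming the standard PBKDF2-HMAC-SHA1 construction with the SSID as salt and 4096 iterations; the passphrase and SSID values are illustrative.

import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    # 4096 iterations of PBKDF2-HMAC-SHA1, producing a 32-byte (256-bit) key.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32)

print(wpa_psk("correct horse battery staple", "HomeNetwork").hex())

The resulting key serves as the master key from which the per-session TKIP keys are then derived during the handshake, so the strength of the whole scheme in PSK mode rests on the quality of the passphrase.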

<strong>A2.</strong>10.4.6.2 802.11i<br />

802.11i 540 is the name of the IEEE Task Group dedicated to standardising WLAN security. 802.11i security<br />

consists of a framework based on RSN (Robust Security Network).

RSN consists of two parts:<br />

• the Security Association Management.<br />

Security Association Management is addressed by RSN Negotiation Procedures, IEEE 802.1x<br />

Authentication and IEEE 802.1x Key management.<br />

• the Data Privacy Mechanism<br />

The Data Privacy Mechanism supports two proposed schemes: TKIP and AES. TKIP (Temporal Key Integrity Protocol) is a short-term solution that defines software patches to WEP to provide a minimally adequate

level of data privacy. AES or AES-OCB (Advanced Encryption Standard and Offset Codebook) is a robust<br />

data privacy scheme and is a longer-term solution.<br />

WPA and 802.11i authentication and key management are performed by both 802.1X and EAP. This means that the authentication and the exchange of encryption keys are secured by the EAP method. Therefore, the selection of

the EAP method is very important and will impact the entire 802.11 security.<br />

WPA will be forward-compatible with the IEEE 802.11i, as WPA is a subset of the current 802.11i<br />

specifications, taking certain 802.11i pieces that are ready to bring to market today, such as its implementation<br />

of 802.1x and TKIP. These features can also be enabled on most existing Wi-Fi certified products as a software<br />

upgrade. The main pieces of the 802.11i specifications that are not included in Wi-Fi Protected Access are<br />

secure IBSS, secure fast handoff, secure deauthentication and disassociation, as well as enhanced encryption<br />

protocols such as AES-CCMP. These features are either not yet ready for market or will require hardware upgrades to implement.

<strong>A2.</strong>10.4.7 PKI revocation issues<br />

<strong>A2.</strong>10.4.7.1 Description<br />

As described above, the standard way to signal revocation of a certificate to relying parties is the use of a CRL<br />

(Certificate Revocation List). A CRL is simply a "blacklist" of all currently revoked certificates for a particular<br />

CA. Typically, a CRL would be produced by the CA at regular intervals, such as once every day or every

week, and published in a publicly accessible directory. It is then the responsibility of the relying parties to<br />

download the current CRL (if they don't already have it) before accepting any certificate issued by that CA as<br />

valid.<br />

While the CRL mechanism works, there are several reasons why it may not be suitable in all circumstances.<br />

Firstly, due to the fact that CAs only periodically issue CRLs, there is an inevitable time delay between<br />

revocation occurring and it being signalled to relying parties. This could lead to a relying party accepting a<br />

certificate as valid even though it has been revoked. Secondly, constantly downloading CRLs can use up a lot of<br />

bandwidth, particularly as CRLs can become quite large. This is a major concern for constrained environments,<br />

particularly wireless ones.<br />

540 802.11i specifications, http://grouper.ieee.org/groups/802/11/private/Draft_Standards/11i/P802.11i-D10.0.pdf<br />


Having to store CRLs can also use up resources on the relying parties' devices, which could be a problem for<br />

very limited devices. Note that the previous two issues are related. To increase security, it could be argued that a<br />

CA should issue CRLs more frequently. However, this would require relying parties to download them more<br />

frequently, thus increasing bandwidth usage. Finally, the CRL mechanism only provides relying parties with the<br />

status of a certificate (revoked or not). Validating a certificate involves first of all constructing a path from the<br />

certificate to a trusted root certificate. This can involve searching for and downloading any relevant missing<br />

certificates from the path. Once the path has been constructed, and status information has been obtained on each<br />

of the certificates in the path, the relying party must also verify the certificate path. This can involve, amongst<br />

other operations, multiple, computationally intensive, digital signature verification operations. Constructing a<br />

certificate path and verifying it can again involve the use of significant bandwidth, as well as processing and<br />

memory resources on the relying parties' device. The software required to perform these actions can also be<br />

quite complex and undesirable to have in many environments.<br />

Several existing and emerging mechanisms have been proposed to get around some or all of these issues with<br />

the use of CRLs. A selection of these is described in the following section.<br />

<strong>A2.</strong>10.4.7.2 Existing and emerging protocols and systems<br />

Previous solutions (Delta CRLs and OCSP)<br />

One of the first, and perhaps one of the simplest, ways proposed to get around some of the problems with CRLs<br />

is the use of Delta CRLs 541 . Delta CRLs contain only the changes to a specified full CRL since it was issued.

With Delta CRLs, a relying party only needs to download a full CRL once. This can then be updated locally by<br />

downloading the most recent Delta CRLs. Clearly, this can reduce the amount of bandwidth used up when<br />

downloading the CRLs. However, the problems of storing CRLs and of processing them remain. In addition,<br />

periodically the full CRLs still have to be downloaded otherwise the Delta CRLs would become as large as the<br />

full CRLs.<br />
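How a relying party keeps its local revocation view current from a base CRL plus deltas can be shown with a minimal sketch. The serial numbers and the separate "removed" set are simplifications (a real delta CRL signals removals with the removeFromCRL reason code).

def apply_delta(revoked: set, newly_revoked: set, removed: set) -> set:
    # Add serials revoked since the base CRL; drop entries that no longer need to be listed.
    return (revoked | newly_revoked) - removed

base_crl = {1001, 1002, 1003}                  # serials revoked in the full CRL
current = apply_delta(base_crl, newly_revoked={1004}, removed={1002})
print(sorted(current))                          # [1001, 1003, 1004]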

An alternative solution to get around both the downloading and timeliness problems is OCSP (Online Certificate<br />

Status Protocol) 542 . OCSP allows a relying party to query a server for the status (revoked or not) of a certificate.

This removes the need for the relying party to download CRLs, however, they are still responsible for finding<br />

certificates to construct a certification path to a trusted root, and for validating the certificate path. Therefore,<br />

OCSP and Delta CRLs can both be viewed as only partial solutions to the problems in many circumstances.<br />

SCVP (Simple Certificate Validation Protocol)<br />

SCVP 543 is currently being developed by the IETF PKIX working group. In contrast to the previous solutions, it<br />

aims to provide a flexible solution to all of the previously mentioned problems with the use of CRLs. In a<br />

similar way to OCSP, SCVP defines an SCVP server, to which certificate processing can be offloaded, and a<br />

request/response protocol which relying parties can use to access it.<br />

The SCVP server and protocols can be used in two main ways. Untrusted SCVP servers (from the point of view<br />

of relying parties) can be used to collect together certification paths and revocation information (in the form of<br />

CRLs or OCSP responses from trusted servers for example) and return them to clients. This is valuable for<br />

clients that do not want to have to support software to perform the complicated protocols and to bear the<br />

increased communication costs that such searches would otherwise require. If such SCVP servers also cache the<br />

information collected, then this reduces the overall resources required to make certificate validation decisions<br />

and makes the process more efficient. This first kind of service is known as a DPD (Delegated Path Discovery)<br />

service in IETF terminology. Trusted SCVP servers can be used to perform the entire validation process for<br />

clients. This removes the need for relying parties to incur the overhead of including path validation software or<br />

performing certificate or revocation searches themselves. They can simply provide the relevant certificate to the<br />

SCVP server and obtain whether or not it is valid in response. This second kind of service is known as a DPV<br />

(Delegated Path Validation) service in IETF terminology. Note that an SCVP server can also be used by<br />

organisations to centralise the certificate path validation, thus enabling the enforcement of a uniform policy at a<br />

centralised location.<br />

541 "Certificate and Certificate Revocation List (CRL) Profile", RFC 3280, IETF Network Working Group, April 2002<br />

542 "Online Certificate Status Protocol - OCSP", RFC 2560, IETF Network Working Group, June 1999.<br />

543 "Simple Certificate Validation Protocol (SCVP)", draft-ietf-pkix-scvp-13.txt, IETF Network Working Group, October 2003.<br />


DPD and DPV over OCSP<br />

An alternative to SCVP is being developed at the same time by the IETF PKIX working group. This alternative<br />

is an extension to the existing OCSP protocol 544 that allows it to support DPD and DPV services in a limited<br />

way. The aim is to be able to provide support for DPD and DPV with minimal changes to the OCSP syntax, thus<br />

providing "closed, self-contained environments" with a way of "optimising their investment in PKI". It provides,<br />

at a high-level, similar functionality to SCVP, although the provided options for relying parties are not as<br />

extensive or flexible as with SCVP. Fundamentally, if a certificate validation system needs to be set up from<br />

scratch, then SCVP may be the best option. If on the other hand an OCSP service already exists, then extending it to support DPD and DPV may be the best option.

DVCS (Data Validation and Certification Server)<br />

DVCS 545 has been published as an experimental RFC by the IETF. A DVCS is a server that responds to several<br />

types of queries from clients with a DVC (Data Validation Certificate). This DVC contains the results of the<br />

query, and is intended to be long-lived and usable by other parties. DVCS has been designed to provide long-term evidence, in the form of these assertions, which could be used to support a non-repudiation service, for example.

In terms of the issues with the use of CRLs, a DVCS provides two types of service that could be of use. These<br />

are the "validation of digitally signed document" and "validation of public key certificates" services, known as<br />

the VSD and VPKC services respectively. The VSD service can be used by a relying party to offload the<br />

validation of a signature on a document to the DVCS. In a similar way, the VPKC service can be used by a<br />

relying party to offload validation of a certificate to the DVCS. However, DVCS is only of limited use in this<br />

respect as a replacement for CRLs due to its limited scalability. Its main benefits will occur when it is used in<br />

certain specialised circumstances. An example is its use as a central server in an enterprise. This can be used to<br />

centralise all validation decisions at one place so that a common policy can easily be enforced, and evidence<br />

conveniently collected and stored. A DVCS could also be used by signers of documents to obtain evidence, in<br />

the form of a DVC, as to the validity of the signature at a particular point in time. When provided to relying<br />

parties, the relying parties need only validate the DVC rather than having to validate the original signer's<br />

certificate to validate the signature on the document. This is only of help if the DVCS's certificate is more widely

available and well-known than the signer's certificate, and is less likely to be revoked than signers' certificates in<br />

general.<br />

XKMS (XML Key Management System)<br />

XKMS is a specification currently in progress by the World Wide Web Consortium (W3C). The comments in<br />

this section are based on the most recently available draft version 546 .

XKMS defines protocols for registering and distributing public keys, and has been specifically designed to be<br />

used in conjunction with the XML signature 547 and XML encryption 548 standards. Of interest to this discussion<br />

is the contained XML Key Information Service Specifications (XKISS), and in particular its Validate Service.<br />

This service allows clients to offload certificate validation to a third-party server, and can provide similar<br />

functionality to SCVP with trusted servers. The main difference between the two protocols is the use of the<br />

XML format for messages in XKMS, as opposed to DER encoded ASN.1 for SCVP. This means that XKMS<br />

messages are significantly larger than the corresponding SCVP messages, due to the relative inefficiency of the<br />

XML encoding format.<br />

<strong>A2.</strong>10.4.8 PKI architecture issues<br />

<strong>A2.</strong>10.4.8.1 Description<br />

Implementing a PKI to support digital certificates can be a complex task, particularly where support across<br />

multiple administrative domains is required. This can lead to the need for multiple cross certifications and<br />

complicated CA hierarchies, with correspondingly increased management costs and difficulties to users in<br />

obtaining certificate paths and revocation status information. In the general case, where a global PKI would be<br />

544 "DPV and a DPD over OCSP", draft-ietf-pkix-ocsp-dpvdpd-00.txt, IETF Network Working Group, January 2003<br />

545 "Data Validation and Certification Server Protocols", RFC 3029 (Experimental), IETF Network Working Group, February 2001<br />

546 "XML Key Management Specification", Version 2.0, W3C Candidate Recommendation 5 April 2004, http://www.w3c.org<br />

547 "XML-Signature Syntax and Processing", W3C Recommendation, 12 February 2002. http://www.w3.org/TR/xmldsig-core/<br />

548 "XML Encryption Syntax and Processing", W3C Recommendation, 10 December 2002, http://www.w3.org/TR/xmlenc-core/<br />


required, this can be an impossible task. For this reason, alternatives to the use of certificates in public key<br />

systems are of interest. Some example alternative architectures are described in this section.<br />



<strong>A2.</strong>10.4.8.2 Existing protocols and systems<br />


Identity Based Encryption (IBE)<br />

IBE is an alternative architecture to the PKI based certificates described above. With the standard PKI schemes,<br />

a user, say Alice, chooses a private key and then generates the public key from that. This public key must then<br />

be certified by a CA. If Bob wants to encrypt a message to Alice, he must first obtain Alice's certificate and<br />

validate it. However, discovering Alice's certificate and validating it can be difficult to achieve and manage (see<br />

the revocation checking issues above for example).<br />

With IBE, if Bob wants to encrypt a message to Alice then Bob generates the public key from Alice's ID, which<br />

could be her name, e-mail address, etc. This generation makes use of publicly known system parameters from a

third-party Private Key Generator (PKG) that Bob trusts, which is the equivalent of a CA in this architecture.<br />

Note that these are global parameters that are not specific to particular senders or receivers, and hence can be<br />

distributed once, offline. The encrypted message is then sent to Alice together with the public key used. Alice<br />

must then contact the PKG, which verifies Alice's identity before returning the decryption key. This is illustrated<br />

in Figure 93.<br />

Figure 93: IBE architecture example<br />

From Bob's point of view, IBE significantly simplifies his task as he no longer needs to obtain a certificate for<br />

Alice or to validate it. From Alice's point of view, registration of a certificate at a CA is simply replaced by<br />

obtaining a private key from the PKG. Once the private key is obtained from the PKG, it can be reused multiple<br />

times by Alice to decrypt messages without needing to contact the PKG again. Therefore, it can be seen that in<br />

theory IBE could significantly simplify public key management.<br />

The concept of IBE has been around for roughly 20 years, however, until very recently no efficient and secure<br />

realisations of the concept were available. This situation has recently changed and now practical schemes are<br />

available (see for example 549 ). In addition, alternative uses for IBE have been proposed (also see 549 ). For example,

Bob could construct a public key based on Alice's identity and the current date to create a so-called ephemeral<br />

public key. This would mean that Alice would have to obtain a new decryption key every day, thus helping to<br />

deal with the problem of revocation of Alice's private key (by limiting the consequences of a compromise of<br />

Alice's private key).<br />
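The ephemeral-key idea amounts to folding the date into the identity string from which the public key is derived. The sketch below only shows that identity construction; the pairing-based encryption and the PKG's key extraction are deliberately omitted, and the identity format is an illustrative assumption.

from datetime import date

def ephemeral_identity(user_id: str, on_day: date) -> str:
    # An IBE scheme would derive the actual public key from this string.
    return f"{user_id}|{on_day.isoformat()}"

print(ephemeral_identity("alice@example.org", date(2005, 8, 1)))   # alice@example.org|2005-08-01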

Some potential disadvantages remain. The security of IBE schemes relies on a master key held by the PKG. If<br />

this is compromised, then all messages, both past and future, encrypted using the PKG public parameters can<br />

now be decrypted. Note that this is more serious than the compromise of a CA private key, which only<br />

potentially compromises the security of future message exchanges. There is also the issue of PKGs having to<br />

generate decryption keys for users online and on demand. This can be a significant scalability issue, particularly<br />

if ephemeral keys are being generated.<br />

549 "Identity-Based Encryption: a Survey", RSA Laboratories Cryptobytes, Volume 6, No. 1, Spring 2003<br />


Mediated IBE<br />

A slight variation of IBE schemes is possible where the PKG has to be involved in every decryption. Such a<br />

mediated IBE scheme is called IB-mRSA. With this scheme the PKG, renamed a Security Mediator (SEM),<br />

generates and distributes public parameters and Bob generates a public key in the same way as for conventional<br />

IBE. However, when Alice comes to decrypt the message, she must send the encrypted message to the SEM.<br />

The SEM performs a partial decryption using its partial decryption key on the message and returns the result to<br />

Alice. Alice then uses her partial decryption key to complete the decryption of the message.<br />

Since Alice never has the full decryption key, she must contact the SEM for every decryption. This has the<br />

advantage of removing the revocation checking issue completely, as if and when Alice is revoked, the SEM will<br />

refuse to perform further (partial) decryptions. Another potential advantage is that this scheme makes use of<br />

conventional algorithms, namely RSA, albeit in a novel way. It can therefore make use of existing RSA<br />

implementations and coexist with a conventional PKI. A final advantage is that the compromise of the SEM<br />

does not reveal any user private keys, and therefore does not enable the attacker to decrypt messages. Having<br />

said that, the compromise of an SEM together with just one user would allow the generation of all other users<br />

private keys, and therefore this is likely to be only a minor advantage. An obvious disadvantage is that the SEM<br />

has to perform a very high number of decryptions online and on demand, which does not scale well.<br />
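The additive split of the private exponent that lets the SEM mediate every decryption can be shown with a toy example. The tiny textbook parameters below are for illustration only, and this is not the actual IB-mRSA key generation (which derives keys from identities); it only demonstrates the exponent-splitting principle.

import random

p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                        # full private exponent (never held by any single party in IB-mRSA)

d_user = random.randrange(1, phi)          # user's share
d_sem = (d - d_user) % phi                 # SEM's share; d_user + d_sem = d (mod phi)

m = 65
c = pow(m, e, n)                           # encryption with the public key (n, e)
partial = pow(c, d_sem, n)                 # SEM's partial decryption
recovered = (partial * pow(c, d_user, n)) % n
print(recovered == m)                      # True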

<strong>A2.</strong>10.4.9 Multicast security<br />

Description<br />

IP multicast is designed for the situation where the same data needs to be sent to a large number of recipients.<br />

Doing this using unicast can lead to multiple copies of the same data being sent on the same links. IP multicast<br />

minimises bandwidth used by only sending one copy on each link that is on the path to more than one end-user.<br />

It can also significantly reduce the complexity of discovery and set up for group communications. Sending data<br />

to or receiving data from the group is simply a matter of sending or listening to a particular multicast IP address.<br />

To achieve the same with unicast it is necessary for each participant to obtain the IP addresses of all the other<br />

participants, which of course requires them to know who the other participants are.<br />

However, there are many outstanding security issues for the use of IP multicast, which are mainly due to the<br />

lack of security and reliability in the current IP specifications for multicasting. These issues are summarised<br />

below.<br />

• IP multicast does not provide any confidentiality protection. There is no protection against eavesdropping for<br />

multicast data itself, and no controls on who can join multicast groups. Data sent to an IP multicast group is<br />

sent in clear and can be read by anyone sniffing packets on the network. It can also be easier, compared to<br />

unicast traffic, for non-group members on the same LAN as group members to eavesdrop. For unicast traffic,<br />

infrastructure measures, such as the use of switched Ethernets for example, can be used to limit vulnerability<br />

to packet sniffing. However, in many LANs multicast traffic will be broadcast to all nodes allowing<br />

everyone to read it. IGMP aware switches are available which will only deliver multicast packets to those<br />

who have joined the relevant group. However, this technique is clearly only useful in wired environments. In<br />

any case, the underlying network infrastructure cannot protect against unauthorised users joining multicast<br />

groups and reading data in this way, as IGMP/MLD does not provide the facility to deny membership. This<br />

can also be a threat to availability. An unauthorised user could join a large number of multicast groups in a<br />

DOS attack. This could lead to excessive and unnecessary use of bandwidth, or may even lead to starvation<br />

of resources on servers and routers. Note that this threat could occur through accidental usage of the network<br />

by users.<br />

• Data origin authentication and integrity protection are not provided by IP multicast packets. This means that<br />

packets could be deliberately modified en route, replayed, inserted or deleted without the knowledge of the<br />

group members. Such problems are made more serious by the fact that anyone can send data to a multicast<br />

group without having to join it.<br />

• No control over multicast address assignment. There is therefore the possibility that two independent<br />

multicast groups will choose the same address, with the result that data from each group will get mixed and<br />

confused. This can lead to integrity and confidentiality problems as well as a waste of resources (data being<br />

sent to members who don't want it). Source Specific Multicast (SSM) can solve the problem if it is available,<br />

but of course is only applicable in single sender groups. Note that multicast address assignment is more of an<br />

IPv4 issue than an IPv6 issue, as the much larger address space and the ability to scope multicast addresses<br />

in IPv6 more or less solves this issue.<br />


• IP multicast is based on UDP, and hence is inherently unreliable. IP multicast applications therefore have to<br />

handle packet reordering, jitter and latency issues as well as the possible loss of packets.<br />

Trying to provide solutions to these underlying issues can also be difficult.<br />

• Confidentiality protection would typically be provided by using encryption together with suitable key<br />

management arrangements to ensure that only authorised members have encryption/decryption keys. The<br />

encryption itself is fairly straightforward, however, the key management needs to be dynamic to cope with<br />

members entering and leaving a group during its lifetime. For unicast communications, any security can be<br />

set up at the start of the session and torn down at the end, with little if any management required between.<br />

However, for multicast groups, members could enter and leave at any time, which may require continuous<br />

management of the group. In particular, new members may need to be prevented from reading old data<br />

(backward secrecy) and members who have left may need to be prevented from reading future data (forward<br />

secrecy). For large and/or highly dynamic groups, this can be particularly difficult to achieve (a naive rekeying sketch follows this list).

• In general, some kind of authentication of multicast data will be required, but the provision of authentication<br />

is complicated by the fact that it has to work for a group. Group authentication refers to being able to tell<br />

when packets have come from a member of the group, but not necessarily which member. Providing this<br />

kind of authentication in an efficient and scalable way is, in general, relatively easy with existing techniques,<br />

such as MACs. Although group authentication may be sufficient for many applications, some other<br />

applications may require the more strict source authentication where the precise member of the group who<br />

sent a particular packet can be identified and authenticated. Providing source authentication in an efficient<br />

way, particularly for real-time data flows, can be difficult to achieve and is a significant problem. This is<br />

because the usual way of solving this problem, by using digital signatures, is not very efficient and does not<br />

scale to high-volume data flows, particularly for users with limited performance terminals.<br />

• To cope with some of the reliability problems of UDP, protocols have been designed to sit above it and<br />

provide at least some form of flow control, such as RTP/RTCP in particular. RTP/RTCP can cope with<br />

packet reordering, jitter and latency issues, however, it can't cope with the loss of packets. Supplementing IP<br />

multicast to provide guaranteed delivery of packets is in general a difficult problem.<br />
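The key-management difficulty mentioned in the first bullet above can be made concrete with a naive rekeying sketch: every membership change forces a fresh group key to be delivered individually to each remaining member, i.e. O(n) messages per join or leave. The member names are illustrative and the key wrapping is only simulated.

import os

member_keys = {"alice": os.urandom(16), "bob": os.urandom(16), "carol": os.urandom(16)}

def rekey(members):
    group_key = os.urandom(16)
    # In a real system the new group key would be encrypted under each member's individual key.
    return {name: group_key for name in members}      # one unicast per remaining member

member_keys.pop("carol")                               # a member leaves; forward secrecy requires rekeying
messages = rekey(member_keys)
print(len(messages), "rekey messages needed")

Tree-based schemes such as the Logical Key Hierarchy (LKH) reduce the per-change cost to a logarithmic number of messages, but group key management remains hard for large, highly dynamic groups.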

<strong>A2.</strong>10.5 Dependability in Broadband networks<br />

Dependability refers to fault-tolerant, secure systems. Even when a system is secured, it must continue to function in a fault-tolerant manner in order to be trustworthy and dependable. Resilient systems are therefore an important component of reliable networks.

<strong>A2.</strong>10.5.1 Intrusion detection and prevention systems<br />

<strong>A2.</strong>10.5.1.1 Description<br />

It is probably impossible to achieve an ultimate and unbreakable security architecture, and there will always be

weak points, attacks, incidents and failures. So a trusted and dependable security framework must provide<br />

mechanisms to detect this abnormal behaviour and give the end user the possibility to react as soon as possible.<br />

Intrusion detection is one of the major pillars of any computer and network security policy. It deals with the<br />

problem of unwanted trespass into systems by users or, increasingly, automated software agents. Usually used<br />

together with conventional security devices such as firewalls and antivirus appliances, such systems are able to detect unusual

behaviour that can indicate a breakdown of confidentiality, integrity and/or reliability of the corporate<br />

information system.<br />

The potential for damage to systems from intruders is immense, and includes loss or corruption of data, release<br />

of sensitive information, theft of scarce or valuable resources and loss of availability of systems through crashes<br />

or congestion of resources. The risks faced, and the difficulties in defending systems with any degree of<br />

certainty, mean that intrusion detection must complement front-line defences.<br />

There are a number of intrusion detection and prevention systems. The functionality of each system differs:

Intrusion Detection System (IDS): Detects an intrusion using various algorithms, logs it and triggers an alarm based on

the defined policy.<br />

Intrusion Prevention System (IPS): These solutions go one step further. An IT administrator can define the actions to be taken by the IPS (for example, adding a new rule to a firewall) when the attack severity reaches a predetermined threshold.


Anomaly Prevention System (APS): A fusion of monitoring and IPS. Every event within the information system is collected and can be used to detect an intrusion, an unavailable service, a performance problem, and so on. Much of the information needed for monitoring and for intrusion detection is the same.

Using several different solutions (an agent for monitoring the network, another for intrusion detection, another to collect performance data, etc.) increases the risks and the complexity. It produces duplicate data and increases the complexity of correlation (need for a common log format, time synchronisation, etc.).

The philosophy of an APS is to collect all this information only once and to use powerful correlation techniques and various detection algorithms (protocol analysis, behaviour analysis, etc.) to process the data. Then, using a role management system (based on LDAP technologies), it can produce the appropriate data, alarm or action for each specific role (security team, performance team, web administrator, etc.).

<strong>A2.</strong>10.5.1.2 IDS architecture<br />

The literature on intrusion detection contains several alternative architectural approaches, most of which are<br />

quite similar to each other and based on the following architecture (Figure 94).<br />

<strong>A2.</strong>10.5.1.3 Data collection<br />

The goal of data collection is to gather, in an efficient manner, all the data needed for intrusion detection. Historically, the collection procedure defines the IDS type (Figure 95):

• NIDS: network intrusion detection system<br />

• HIDS: host intrusion detection system<br />

• NNIDS: network node intrusion detection system<br />

• DIDS: distributed intrusion detection system.<br />

Figure 94: IDS Architecture. Data collection feeds a data-processing stage for detection, which applies various algorithms (pattern matching, behaviour analysis, ...) supported by signature databases, user profiles, neural networks, etc., and produces alarms, logs and actions.


Figure 95: IDS Data Collection. A DIDS aggregates distributed sensors; a NIDS monitors network traffic (e.g. towards the Internet), a NNIDS monitors traffic at an individual network node, and a HIDS observes the host itself across its network interface, operating system, application and business-logic layers.

<strong>A2.</strong>10.5.1.4 Data processing for detection<br />

This is the "heart" of an APS. Using all the collected data, the APS processes it with several algorithms in order to detect an "anomaly".

Two major alternative approaches are taken in well known commercial products and open-source solutions:<br />

Misuse Detection (or scenario based analysis)<br />

Misuse detection is the oldest method for spotting intrusions. This procedure uses a pattern matching approach.<br />

The system compares the collected data with attack signatures from a database. If the comparison results in a<br />

positive match, the system recognises an anomaly and reacts accordingly. Misuse detection remains the most<br />

commonly used procedure in the commercial and non-commercial sectors. The procedure is easy to implement<br />

and use and it is not very prone to false alarms (false positives). Misuse detection, however, has a major<br />

drawback. This method recognises known attacks only. Consequently, new attack patterns that have not yet been<br />

added to the attack signature database do not trigger an alarm (false negative) and thus go unnoticed.
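A minimal sketch of the pattern-matching principle: each collected event is compared against a database of known attack signatures. The patterns and the example event below are illustrative assumptions, not a real rule syntax.

import re

SIGNATURES = {
    "sql-injection": re.compile(r"union\s+select", re.IGNORECASE),
    "path-traversal": re.compile(r"\.\./"),
}

def match_event(payload: str):
    # Return the names of all signatures that the event matches.
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

print(match_event("GET /index.php?id=1 UNION SELECT password FROM users"))   # ['sql-injection']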

Anomaly Detection (or Behaviour Analysis)<br />

Anomaly detection is based on the premise that anything outside the realm of "normal" behaviour is, by

definition "abnormal" (i.e., it is an anomaly), and therefore constitutes an attack. Compared to misuse detection,<br />

this method's advantage is the ability to recognise new attacks, since they are defined as abnormal behaviour. In<br />

addition, there is no need to implement and maintain a database of attack patterns. Nonetheless, anomaly<br />

detection comes with its own set of problems that significantly impede its use in the commercial sector. Anomaly<br />

detection procedures must first acquire knowledge of what constitutes "normal" behaviour for a network or<br />

computer system by creating user and system profiles. This phase alone is an obstacle and could be exploited by<br />

an opponent who could teach the IDS to classify attacks as normal behaviour. Thus, in the future, the IDS might no longer recognise that type of attack as an unauthorised intrusion. Another drawback is the high

rate of false positives triggered by disruptions of normal system activities that are not actually attacks.<br />

Moreover, compared to misuse detection, the implementation of anomaly detection is more difficult, since the<br />

latter method involves more complex procedures.<br />

Anomaly detection, though adaptive, is problematic in highly dynamic environments, as it takes time for systems<br />

to build up a profile for normal use. As the lifecycle of technologies and services shortens, and diversity and<br />

complexity grow, this gets more and more difficult.<br />
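A minimal sketch of the profiling principle: a "normal" profile is learned from a training window and later observations are flagged when they deviate too far from it. The metric (requests per minute) and the three-sigma threshold are illustrative assumptions.

from statistics import mean, stdev

def build_profile(training):
    # Learn what "normal" looks like from a training window.
    return mean(training), stdev(training)

def is_anomalous(value, profile, k=3.0):
    mu, sigma = profile
    return abs(value - mu) > k * sigma

profile = build_profile([42, 38, 45, 40, 44, 39, 41, 43])      # e.g. requests per minute
print(is_anomalous(41, profile), is_anomalous(400, profile))   # False True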


<strong>A2.</strong>10.5.1.5 Alarms, logs, actions<br />

Once an anomaly is detected, the APS can execute several actions (send an alarm, write information to a database, trigger a rule on a firewall, etc.).

While all the reactive measures executed can have many different forms, they generally fit into one of two<br />

categories:<br />

Passive:<br />

• Send E-Mail<br />

• Send message to a pager or cell phone<br />

• Play audio data<br />

• Log the intrusion in a database<br />

• Increase the volume of audit data to record<br />

Active:<br />

• Turn off critical systems and services<br />

• Customise the packet filter rules for firewalls<br />

• Get information about the system from which the attacks originate, such as, for example, logged-in users<br />

(fingerd, rusersd or identd), services offered (port scanning) and the operating system being used (OS<br />

fingerprinting) - the latter two pieces of information can be used for additional attacks<br />

• Shut down the attacking computer system with denial-of-service (DoS) attacks

Active measures (which justify the name prevention system) are controversial and should be deployed carefully.<br />

Unfortunately, the attacker can also use the active actions initiated by the APS and turn them against the<br />

network that is being protected...<br />

If the attacker disguises his attacks to make them appear as if they originated from a business partner's IP<br />

address, and if the APS then changes the firewall configuration to block packets from the partner's network,<br />

collaboration with the business partner is no longer possible. In addition, if the APS sends too many alarm<br />

messages by e-mail or pager, the excessive traffic can overload the mail or modem servers (denial of service<br />

attacks).<br />

<strong>A2.</strong>10.5.1.6 Known problem and issues<br />

APSs are not perfect and there is still a lot of research in this area.

The major known drawbacks are:<br />

• False positives
• False negatives
• Performance (e.g. missing some packets on gigabit networks)
• Too many alarms and too much data (human correlation becomes practically impossible)
• Increasing cost (multiple sensor deployment, huge databases and complex software for analysis and correlation)
• Complex management issues
• Evasion techniques:
- Fragmented packets
- IDS flooding
- Teaching wrong behaviour
- Encapsulation techniques (attacks already exist using IPv4/IPv6 transition mechanisms)
- Non-standard protocols (such as the backdoor using protocol 11 discovered by the Honeynet team in February 2002 550 )

Main issue: Going beyond the centralised paradigm of current architectures<br />

550 The reverse challenge, http://project.honeynet.org/reverse/results/project/<br />


It is very difficult to apply traditional IDS techniques to networks such as peer-to-peer and ad-hoc networks due to the following differences from wired networks:

• No fixed infrastructure, so no traffic concentration point for data collection.<br />

• Slower links, limited bandwidth and power constraints change the communication pattern in ad-hoc networks. For example, disconnected operation is very common. All of this suggests that the anomaly models in these environments will be different.

• There may not be a clear separation between normalcy and anomaly in wireless ad-hoc networks. A node<br />

that sends out false routing information could be the one that has been compromised, or merely the one that<br />

is temporarily out of sync due to volatile physical movement.<br />

To address these specific issues, the architecture needed is distributed and collaborative. Anomaly detection<br />

must be carried out locally at each network node, in cooperation with the other nodes.

<strong>A2.</strong>10.6 Digital privacy protection<br />

Technologies for the protection of digital privacy are currently a wide open field for research and most solutions<br />

are far from providing a dependable level of privacy for the ordinary user. This seems odd at first, since<br />

computers and networks have been around for such a long time. However, it is only recently that the general<br />

public is using them for other means than simply sending electronic mail or browsing the Web for information<br />

content – applications that either do not release much information about the user or can draw from a one-to-one<br />

trust relationship between sender and receiver of a message.<br />

The expression of data protection in various declarations and laws varies. All require that personal information<br />

must be:<br />

• obtained fairly and lawfully;<br />

• used only for the original specified purpose;<br />

• adequate, relevant and not excessive to purpose;<br />

• accurate and up to date;<br />

• accessible to the subject;<br />

• kept secure; and<br />

• destroyed after its purpose is completed.<br />

Nearly thirty countries have so far signed the convention and several others are planning to do so shortly. The<br />

OECD guidelines have also been widely used in national legislation, even outside the OECD member countries.<br />

A detailed examination of these legislations and their implementation is outside the scope of this document.<br />

Designing and implementing suitable solutions for the protection of privacy is not easy since there are different

requirements from different users and application domains. Privacy is usually not regarded as a hard security<br />

target as it can interfere with other security objectives of corporations and governmental bodies (such as national<br />

security).<br />

However, it is clear that the protection of digital privacy becomes more important as the Internet becomes<br />

ubiquitous and business, ordinary people and governments are dependent on the availability of secure electronic<br />

communication processes and data. Without adequate protection of privacy such systems become open to<br />

exploits such as impersonation, theft of digital identity, industrial espionage and others.<br />

Use of computer security models to protect private data<br />

Privacy in computerised environments has been historically managed through the use of access-rights and<br />

ownership to files and processes. This works quite well as long as only a restricted circle of persons has access<br />

to the data and the data itself is localised - so that it does not leave the administrative domain, which itself is<br />

protected against unauthorised access from the outside.<br />

As soon as data leaves this protective domain it is usually void of any control or protection. The owner of the<br />

data has to trust the receiving party to maintain the confidentiality of the received information.<br />

As such trust cannot always be assumed, a number of guidelines have been introduced to avert the danger. The

Common Criteria for Information Technology Security Evaluation 551 define four primary privacy objectives,<br />

namely:<br />

551 Common Criteria for Information Technology Security Evaluation<br />



• Anonymity
• Pseudonymity
• Unlinkability
• Unobservability

A2.10.6.1 Privacy in trusted environments


In trusted environments, the question of privacy protection can be reduced to protecting the trusted environment and taking precautions against security breaches. As long as the private data never leaves this environment and is adequately protected while in transit, it can be considered safe.

Access control, authentication and encryption are the main technologies used to guard private data. Communication partners need to trust each other's procedures and measures against security breaches. If a security breach does occur, or if one partner withdraws its commitment to the protection of the private data, there is usually very little concrete action the other partners can take to safeguard their privacy.

Partners need to be clear about their terms of business in order to enforce any legal action in such cases. Independent audits that testify to the safety of data-handling procedures can help to establish trust between the communication partners.

As such trusted domains are essentially small, they do not provide an adequate privacy model.

A2.10.6.2 Privacy in untrusted environments

In normal Internet communication, the level of trust between communication partners is usually very low. A private user has to rely on brand names and general terms of conduct when entrusting confidential personal data to other users, businesses or governmental bodies. As soon as any data has been released, it is effectively beyond the control of the releasing party. The sender must trust the receiver to comply with all regulations and to handle and protect the sensitive information adequately. Even so, this does not guarantee privacy: accidents can happen, and information that was meant to be protected can be released without any chance of regaining control.

Businesses, too, have problems guarding their private digital assets, as the media industry and software companies illustrate.

There are currently two directions of research that try to establish an acceptable level of privacy in untrusted environments.

A2.10.6.2.1 Minimising the dissemination and distribution of private data

Most current research into enhancing privacy is directed towards minimising the amount of private data released, as well as providing anonymity to the user.

Many networking tools and applications reveal more information about the user than is strictly necessary. It is therefore important to minimise the amount of information released during networking activities (IP address, referring web pages, operating-system details, etc.).

The following technologies are used to achieve this:
• Anonymisation of data (e.g. anonymous web proxies delete personal references)
• Pseudonymisation (e.g. trusted third parties can testify to the validity of a pseudonym)
• Encryption of data (e.g. Pretty Good Privacy (PGP) for e-mail)

Examples are the use of the IPv6 privacy extensions (RFC 3041) and identity federation systems that allow the user to be represented to external service providers under a pseudonym; a minimal sketch of a pseudonymisation step is given below.
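The following Python sketch illustrates one common pseudonymisation technique: deriving a stable pseudonym from a real identifier with a keyed hash, so that only the party holding the key (e.g. a trusted third party) can link pseudonyms back to users. The function and key names are illustrative, not taken from any standard.

```python
import hmac
import hashlib

# Secret key held only by the pseudonymisation service (trusted third party).
# In practice this would live in a protected key store, not in source code.
LINKING_KEY = b"replace-with-a-strong-random-key"

def pseudonym(real_identity: str) -> str:
    """Derive a stable pseudonym; without LINKING_KEY it cannot be
    linked back to the real identity."""
    digest = hmac.new(LINKING_KEY, real_identity.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return "user-" + digest[:16]

# The same user always maps to the same pseudonym, so a service provider
# can recognise a returning user without learning who the user is.
print(pseudonym("alice@example.org"))
print(pseudonym("alice@example.org"))  # identical output
```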

A2.10.6.2.2 Retaining control over released private data

While minimising the amount of released data is of great value for the protection of privacy, it is still not sufficient to guarantee an acceptable level of privacy.

There are certain situations in which people are forced to reveal their true identity, or at least to reveal enough information about themselves to enable the link from pseudonyms to the real person behind them. This information can then be stored and processed further without the victim's consent or knowledge. There are also many deployed systems that gather data (location data, transaction data) unbeknownst to the user, such as closed-circuit TV, RFID and others.




It would be most beneficial to give the sender control over the subsequent release of private data that he or she has entrusted to the receiver. Generating a binding between the information content, its intended purpose and a limiting timeframe could greatly enhance information privacy; a minimal sketch of such a binding is given below.
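The following Python sketch shows, under assumed names (StickyPolicy, may_use), what a content/purpose/timeframe binding could look like. It is a conceptual illustration only, not a description of any existing DRM or privacy system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class StickyPolicy:
    """Usage constraints that travel with a piece of private data."""
    allowed_purpose: str      # e.g. "order-fulfilment"
    expires_at: datetime      # after this moment the data must be discarded

@dataclass(frozen=True)
class ProtectedItem:
    content: str
    policy: StickyPolicy

def may_use(item: ProtectedItem, purpose: str, now: datetime) -> bool:
    # The receiver is expected to evaluate the attached policy before every
    # use; actual enforcement still relies on a trusted runtime (cf. DRM).
    return purpose == item.policy.allowed_purpose and now <= item.policy.expires_at

item = ProtectedItem(
    content="home address of the customer",
    policy=StickyPolicy("order-fulfilment", datetime.utcnow() + timedelta(days=30)),
)
print(may_use(item, "order-fulfilment", datetime.utcnow()))  # True
print(may_use(item, "marketing", datetime.utcnow()))         # False
```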

One example can be found in the current development of Digital Rights Management (DRM) systems. The content industry has greatly benefited from the digitisation of content, which allows for innovative content creation and distribution. At the same time, this has made valuable content vulnerable to simple copying and distribution outside the official distribution channels. Digital rights management systems make it possible to retain ownership over this content and to grant only limited access to it 552.

The Trusted Computing Platform, currently being developed jointly by the Trusted Computing Group 553, aims to provide the necessary level of trust in the computing environment. This becomes necessary because private and trusted communication needs to be secured along the whole path the information travels, and users and organisations need to be confident that the operating environment meets their trust criteria.

Further research is needed to estimate to what extent new technologies infringe on the privacy rights of the ordinary user, and how they could be used as a general means to enhance digital privacy on the Internet. Since the end-user in most cases has the least technical know-how and negotiating power with respect to the other players in the market and society, it would be beneficial to start developing technologies that are privacy-friendly by default and to stop treating privacy-enhancing technologies as opt-in technologies.

552 Internet Digital Rights Management
553 Trusted Computing Group



A2.11 OVERALL MANAGEMENT AND CONTROL

A2.11.1 Introduction


Network functionality is usually split into three parts, called planes, each of them responsible for certain functions:
• Management plane: longer-term functionality, such as static bandwidth provisioning, device configuration and monitoring, fault management and restoration.
• Control plane: short-term functionality, such as connection set-up and tear-down, protection and signalling.
• User plane (also called the data transport plane): raw data transport, buffering, shaping, etc.
The overall function is to provide a certain degree of Quality of Service (QoS) to the user.

A2.11.1.1 Control plane

Figure 96: A network subdivided into control, management and transport planes (the control plane comprises discovery, signalling, routing and resource management functions)

The control plane takes care of signalling information in the network and provides functionalities such as routing and resource reservation. With the introduction of a control plane, conventional optical networks can become much more dynamic and responsive, so that dynamically changing customer bandwidth demands can be met and better utilisation of the network resources can be achieved. It also supports fast protection.

Hence, control is related to:
• routing;
• bandwidth reservation / resource management;
• traffic engineering capabilities.



A2.11.1.2 Management plane


The management plane performs overall network functions such as bandwidth reservation/provisioning, monitoring and restoration. Network management is usually performed by a separate network, i.e. the network carrying management information is different from the one carrying user data; in that way, network monitoring can be maintained even in the case of failures. Network management information is carried by specialised protocols such as SNMP. Each network element or sub-element contains an element manager, which gathers information and communicates with a centralised network management system located at the control centre of the network provider.

Usually, network management is sub-divided into a number of sub-functions, namely:
• Configuration management
• Performance management
• Fault management
• Security management
• Accounting management

A2.11.1.3 Future network architectures

New network architectures are emerging, and this imposes new requirements on control, signalling and management. Hence, a brief overview of new network architectures is provided here.

A single-technology architecture and network has the advantage of easier maintenance. However, a single technology probably cannot provide the optimal solution in all circumstances. In any case, it is a given fact that most of today's telecommunications networks are multi-technology networks. There are three main reasons for this:
• These networks are themselves interconnected networks of different operators. Each operator has full control over its own domain (and can adopt a single technology if it wishes), but has no control over the technology its neighbour is using.
• Many of these networks are very large. Each technology has its advantages, so different technologies can be optimal in different circumstances/environments, and an operator can thus decide to use the optimal technology in each area.
• Large networks are upgraded gradually, and during this phase multiple technologies are present in the network.

A consequence of migrating from an existing network architecture to a new one is that new equipment must be introduced. The sheer size of networks often dictates a step-by-step migration strategy, which implies that at all times the network will consist of a mixture of equipment, ranging, for example, from electrical routers to all-optical packet and wavelength switches. It is important to find a suitable architecture in which new technology (e.g. optical switches) can be introduced gradually, enabling a seamless migration. It should also be emphasised that this mixed network, which could for instance be organised in a hierarchical fashion, is in fact advantageous for many networks, because it lets the operator leverage a number of different technologies while seamlessly migrating to the newest ones.

Figure 97: A mixed-technology network. New technologies are popping up as “islands” within the network.




The Infranet architecture

The Infranet architecture is a multi-provider network with QoS guarantees 554. It is an initiative started by more than two dozen vendors and service providers. An Infranet is a carrier-class overlay to the Internet architecture. It is suitable for enterprise-wide VoIP solutions requiring handovers from one carrier's network to another, multi-provider VPNs, utility computing and inter-company peer-to-peer collaboration.

PEI – policy-to-element interface
SNI – signalling-to-network interface

The basic components of an Infranet are MPLS and web services: web services are a tool for integrating computer intelligence into computer networks, while MPLS is a tool for controlling networks.

The Infranet architecture is subdivided into three strata:
• Packet handling stratum
• Network policy and control stratum
• Service signalling stratum

A2.11.2 Management

Network management is the long-term administration of network resources and the monitoring of device/sub-system performance, including taking appropriate action in case of malfunction or unsatisfactory performance. The management capability of a system thus refers to the ability of the network to control and monitor the available network resources.

A2.11.2.1 Reference architectures

This section contains background information only and is more or less directly quoted from 555.

Within this standardisation framework, the ITU continues to enlarge the scope of ASON by developing the various accompanying components required for Automatic Switched Transport Networks (ASTN). Even though it was not designed with the express purpose of being managed by a policy-based system, the ASON framework provides fundamental guidelines, in a protocol- and implementation-neutral way, for understanding the operational relation between network control and management.

A2.11.2.1.1 TMN and SNMP

A significant achievement of the TMN effort is its Logical Layered Architecture, developed to better handle the inherent complexity and scalability issues of network management. Management layers are defined by grouping functionality and information according to levels of abstraction, or into clusters whose scope appears natural. Specifically, TMN has identified the following management layers (in addition to the network element layer):
• Element Management Layer (EML): manages individual network elements, possibly on a group basis. As an objective, this layer provides a vendor-independent view of the network elements to the layers above.
• Network Management Layer (NML): manages and controls the (abstract) resources associated with the network view of the network elements within a domain. It must provide an appropriate view of the network-level resources or services to the service management layer.
• Service Management Layer (SML): responsible for the contractual aspects of services provided to customers, such as service order handling, co-ordination of services, complaint handling, QoS data and invoicing.
• Business Management Layer (BML): has responsibility for the entire enterprise and may relate to all the other layers. It should support the decision-making process for optimal investments and the use of new resources. Management information in this layer is not subject to standardisation.

554 http://www.infranet.org/
555 DAVID deliverable D132, “Specification of atomic functions and study of control plane issues including management inter-working”, IST-DAVID




Both the TMN (Telecommunications Management Network) architecture 556 and the SNMP 557 (Simple Network Management Protocol) architecture 558 use the notions of manager and agent, as well as the notion of a MIB. The TMN architecture is the more abstract of the two, establishing separate functional, informational, physical and logical layered architectures. In fact, the SNMP protocol has only recently become an allowed option that may be used in a TMN solution; the first management protocol used with TMN was the Open Systems Interconnection (OSI) Common Management Information Service/Protocol (CMIS/CMIP) solution 559 560. Thus, when people speak of TMN solutions and their strengths and weaknesses, they very often actually mean CMIS/CMIP-based solutions. However, TMN is intended to be independent of the management protocol.

The OSI-based approach was developed for vendor-independent management of complex OSI data networks (OSI systems) and telecommunications facilities. Reliable management communication was emphasised, hence the choice of a connection-oriented solution. To handle the complexity of the resources to be managed, and to keep the management traffic low, both the management protocol and the associated specification language (GDMO 561) are powerful and expressive. However, this results in complex functionality in the agent, increasing the cost and processing load of the managed system.

The SNMP architecture, on the other hand, was designed to provide a low-complexity, cost-efficient solution for vendor-independent management of IP-based networks and related equipment such as routers, servers, workstations and other network resources. The SNMP MIB specification language (Structure of Management Information, SMI) is significantly simpler than GDMO. It provides neither the notion of class nor inheritance, so it is not possible to specialise one class from a generic class. Correspondingly, the SNMP protocol is simpler and “smaller”. An SNMP (managed) object is a simple data value or row entry whose number of elements is fixed and whose elements are of simple types. As for GDMO, ASN.1 (Abstract Syntax Notation One 562) is also used for specifying SNMP MIBs; however, only the basic (non-complex) ASN.1 types are used. Although tables whose length can vary dynamically can be specified as part of the MIB, SNMP does not provide any notion of a composite object of several attributes manageable as a whole; as such, a table is not a named, manageable object. SNMP objects are singly instantiated; multiply instantiated “objects” can only be achieved through tables and multiple table entries.

Although SNMPv2 and SNMPv3 provide some improvements regarding “bulk” operations, the SNMP approach is considered a simple, low-level approach, whereas the OSI-based solution provides means for complex, higher-level management operations, for instance through various sophisticated selection mechanisms (scoping and filtering). While SNMP is the preferred solution in IP-based networks, GDMO/CMIP-based solutions have often been preferred for the management of transport networks such as SDH.

It should also be mentioned that the developments in distributed computing during the 1990s have influenced management solutions. Whereas the management interfaces towards the agents associated with the NEs are most often still based on either SNMP or CMIP, higher-level interfaces among management systems (OSSs) are increasingly based on middleware such as CORBA and J2EE. Thus, mapping techniques have been developed for translating MIB specifications from one kind to another, as well as interworking methods capable of translating one protocol into another 563.

Policy-based network management

Policy-based management (PBM) has been introduced to overcome many of the shortcomings of the traditional management solutions mentioned above. The overall goal of PBM is increased operational efficiency and fewer mistakes during operations, administration, maintenance and provisioning; ultimately, the goal is lower operational expenses.

Policy-based management involves the use of administratively prescribed rules that specify actions in response to defined criteria or conditions. Policy rules may be general and abstract or very specific.

556 ITU-T Rec. M.3010, “Principles for a telecommunications management network”, February 2000
557 Note that the notion of SNMP is used in two different ways: to denote the overall SNMP-based management architecture, and to denote the specific SNMP protocol as such.
558 J. Case et al., “Introduction to Version 3 of the Internet-standard Network Management Framework”, RFC 2570, April 1999
559 ITU-T Rec. X.710, “Information technology - Open Systems Interconnection - Common Management Information Service”, October 1997
560 ITU-T Rec. X.711, “Information technology - Open Systems Interconnection - Common management information protocol: Specification”, October 1997
561 Structure of management information: Guidelines for the definition of managed objects [X.722]; Management information model [X.720]
562 ITU-T Rec. X.680, “Information technology - Abstract Syntax Notation One (ASN.1): Specification of basic notation”, July 2002
563 “Inter-Domain Management: Specification & Interaction Translation”, The Open Group, Specification C802, UK ISBN 1-85912-256-6, January 2000




The rules are condition/action chains. When an event occurs that triggers the evaluation of a rule's condition, and the condition is satisfied, the action is performed automatically. The basic idea is to describe the network and service objectives with network-device, network-service and business-level policy rules that have well-defined mappings from high-level policies to low-level policies. Eventually, policies are translated into network device configuration commands. The whole process of propagating actions from high-level to low-level policies, and of executing management operations at the network device level, is performed in an automated way. Thus, Policy-Based Network Management (PBNM) is expected to offer a more dynamic, automated and distributed way of management, and to contribute to automation and operational efficiency; a minimal sketch of such rule evaluation is given below.
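The following Python sketch illustrates the condition/action rule pattern described above in its simplest form. The class and function names are illustrative and do not correspond to PCIM or any other standardised information model.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

Event = Dict[str, Any]  # e.g. {"type": "link_load", "link": "A-B", "load": 0.92}

@dataclass
class PolicyRule:
    name: str
    condition: Callable[[Event], bool]   # evaluated when a triggering event arrives
    action: Callable[[Event], None]      # executed automatically if the condition holds

def evaluate(rules: List[PolicyRule], event: Event) -> None:
    # On every triggering event, fire the actions of all matching rules.
    for rule in rules:
        if rule.condition(event):
            rule.action(event)

rules = [
    PolicyRule(
        name="reroute-on-congestion",
        condition=lambda e: e["type"] == "link_load" and e["load"] > 0.9,
        action=lambda e: print(f"configure: move low-priority LSPs off {e['link']}"),
    ),
]

evaluate(rules, {"type": "link_load", "link": "A-B", "load": 0.92})
```

In a real PBNM system the action would of course be a device configuration command rather than a print statement, and the rules would be fetched from a policy repository.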

However, when considering the process of transforming policies from high-level business goals to device-specific enforcement rules, one should distinguish between human-made transformations and automated, machine-processed transformations. High-level business goals and service objectives are in the first place specified by humans. Typically, a network administrator will analyse these business goals and service objectives together with the network that realises the services. Based on the topology of the network, the administrator will assign roles to the various elements of the network, such as roles related to nodes, links and interfaces. From these roles and the analysis made, the administrator can translate the high-level goals into domain-specific informal policies. Next, the administrator translates these informal policies into formal policies, using formal policy information constructs that can be handled by a policy tool or server. This prepares for the further automated handling of the policies according to the architecture described below. (For a more detailed description of the policy translation processes, see 564.)

Standardisation bodies like the Distributed Management Task Force (DMTF) and the IETF are responsible for the standardisation of PBM. However, there are several approaches and frameworks. One PBM framework has been developed by the Resource Allocation Protocol working group in the IETF, which focuses on policies for network resource access and usage, covering both the IntServ and the DiffServ QoS models. Although this working group has developed the COPS (Common Open Policy Service) protocol 565 as an integral part of its framework, its reference architecture may be considered in a protocol-neutral way, as is done in the following.

The reference architecture is composed of four main functional entities, as shown in Figure 98; a minimal sketch of the PDP/PEP interaction follows the list.
• The Policy Management Tool is composed of policy editing, rule translation and validation functions.
• The policy rules, stored in the policy repository using defined schemata, provide a deterministic set of policies for managing resources in the policy domain.
• The Policy Decision Point (PDP) is composed of trigger detection and handling, rule location and applicability analysis, network- and resource-specific rule validation, and device adaptation functions.
• Policy Enforcement Points (PEPs) are responsible for the execution of actions, and may perform device-specific condition checking and validation.
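The sketch below illustrates, in plain Python and without the COPS wire protocol, the division of labour between a PDP and a PEP in the outsourcing style of operation described later in this section. All names and the toy admission rule are illustrative assumptions, not part of any standard.

```python
class PolicyDecisionPoint:
    """Central decision logic; a real PDP would read rules from the policy
    repository and be contacted by PEPs via COPS or a similar protocol."""
    def __init__(self, max_reserved_mbps: int):
        self.max_reserved_mbps = max_reserved_mbps
        self.reserved_mbps = 0

    def decide(self, request: dict) -> bool:
        # Admission decision outsourced by the PEP, e.g. on an RSVP request.
        if self.reserved_mbps + request["mbps"] <= self.max_reserved_mbps:
            self.reserved_mbps += request["mbps"]
            return True
        return False

class PolicyEnforcementPoint:
    """Lives in (or next to) the network element and enforces decisions."""
    def __init__(self, pdp: PolicyDecisionPoint):
        self.pdp = pdp

    def on_reservation_request(self, request: dict) -> str:
        allowed = self.pdp.decide(request)          # outsource the decision
        return "install reservation" if allowed else "reject request"

pdp = PolicyDecisionPoint(max_reserved_mbps=100)
pep = PolicyEnforcementPoint(pdp)
print(pep.on_reservation_request({"flow": "voip-42", "mbps": 30}))  # install reservation
print(pep.on_reservation_request({"flow": "bulk-7", "mbps": 90}))   # reject request
```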

Figure 98. Policy-based network management architecture (Policy Management Tool and Policy Repository/Directory accessed e.g. via LDAP; Policy Decision Point (PDP) / Policy Server; Policy Enforcement Points (PEPs) reached e.g. via COPS or SNMP; a PEP agent fronting policy-unaware devices via SNMP)

564 Y. Snir et al., “Policy QoS Information Model”, IETF, Internet Draft, May 2003. Work in progress
565 D. Durham, Ed., “The COPS (Common Open Policy Service) Protocol”, IETF RFC 2748, January 2000



One important question that needs an answer is how to deal with policy-unaware devices. The approach illustrated in the figure defines a PEP agent that translates the policies received via COPS into traditional management actions 566; these actions are then communicated to the policy-unaware device with, for example, SNMP.

Two different modes of execution are the outsourcing mode and the provisioning mode. In the outsourcing mode, the PEP, on the occurrence of an event such as an RSVP request, outsources or delegates the resource allocation and admission decision to the more centralised PDP (see e.g. 567). In the provisioning mode, policies are provisioned (semi-permanently) onto the PEPs; examples are policies controlling DiffServ-based interfaces (see e.g. 568).

PBM is so far primarily based on directory technology such as the Lightweight Directory Access Protocol (LDAP, see e.g. 569). New software technologies that correspond more naturally with the structure of policies, such as object-oriented technologies, should be considered as a possible way of improving implementation efficiency.

The SNMPconf working group in the IETF has developed an alternative to the COPS-based solutions 570. This approach defines SNMP-based managed objects to enable an SNMP-based solution for the distribution and execution of policies that are defined with respect to MIB modules. Early outputs of this working group show that the SNMP framework can provide all the capabilities required for configuration and monitoring, so this type of policy-based configuration management can provide an integrated approach to management. In this architecture, a policy manager residing in a management station or mid-level manager may play the role of a PDP, while the managed system may take the role of a PEP. The PolicyScript language has been developed to express policy conditions and actions in this setting.

Using policy-based configuration management with SNMP does not prevent the use of traditional, instance-specific configuration. The main advantage gained is that the traditional configuration methods can be used in combination with more powerful policy-based configuration operations. Using objects from the same name space for configuration with policies as for monitoring helps the error detection and recovery process 571.

The basic policy rules, conditions and actions are formalised in the Policy Core Information Model (PCIM) 572 and its extensions (PCIMe) 573 as a set of PolicyRule, PolicyCondition and PolicyAction classes, together with a set of aggregation definitions for aggregating rules into policy groups. PCIM and PCIMe, developed by the IETF, are derived from the Common Information Model (CIM) 574 provided by the DMTF. Besides a generic policy schema, CIM contains object-oriented UML schemata modelling managed objects with respect to, among other things, network components and protocols (including QoS, MPLS and routing). These generic Policy Information Models (PIMs) have been extended or specialised for domain-specific usage. One important example is the Policy QoS Information Model (QPIM) 575. QPIM defines an information model for QoS enforcement for differentiated and integrated services (e.g. modelling RSVP actions and rules for managing bandwidth and delay constraints). Note that these PIMs are protocol neutral.

On the other hand, there are also definitions of policy rules (or rule classes) that are COPS-specific. PRovisioning Classes (PRCs), which carry policy data for the provisioning mode of operation, are defined using the language Structure of Policy Provisioning Information (SPPI) 576, which is based on the SNMP-related Structure of Management Information (SMI) 577. Instances of these classes (PRIs) reside in a virtual information store called the Policy Information Base (PIB). A PIB definition is a collection of related PRCs, defined as an SPPI module. The Framework Policy Information Base (FPIB) 578 defines a set of common policy provisioning classes.

566 Wang Changkun, “Policy-based Network Management”, Proceedings of WCC - ICCT 2000, International Conference on Communication Technology, Vol. 1, pp. 101–105, August 2000
567 S. Herzog et al., “COPS usage for RSVP”, IETF RFC 2749, January 2000
568 K. Chan et al., “COPS Usage for Policy Provisioning (COPS-PR)”, IETF RFC 3084, March 2001
569 J. Strassner et al., “Policy Core LDAP Schema”, IETF Internet Draft, October 2002. Work in progress
570 S. Waldbusser et al., “Policy Based Management MIB”, IETF, Internet Draft, March 2003. Work in progress. http://www.ietf.org/internet-drafts/draft-ietf-snmpconf-pm-13.txt
571 S. Boros, “Policy-based network management with SNMP”, CTIT Technical Report Series, No. 00-16, ISSN 1381-3625, 8 pages, October 2000
572 B. Moore et al., “Policy Core Information Model -- Version 1 Specification”, IETF RFC 3060, February 2001
573 B. Moore, “Policy Core Information Model (PCIM) Extensions”, IETF RFC 3460, January 2003
574 Distributed Management Task Force, “Common Information Model (CIM) Specification”, Version 2.2, June 14, 1999
575 Y. Snir et al., “Policy QoS Information Model”, IETF, Internet Draft, May 2003. Work in progress
576 K. McCloghrie et al., “Structure of Policy Provisioning Information (SPPI)”, IETF RFC 3159, August 2001
577 K. McCloghrie et al., “Structure of Management Information Version 2 (SMIv2)”, IETF RFC 2578, April 1999
578 R. Sahita et al., “Framework Policy Information Base”, IETF RFC 3318, March 2003




The focus of the PIM, PIB and PBM work has thus far been on IP/MPLS nodes and their interfaces supporting either the IntServ or the DiffServ QoS model. Using PBM for the management of control plane solutions for intelligent optical networking, on the other hand, is a rather new and unexplored area.

It should be noted that PBM solutions will not replace existing management solutions; rather, they will complement traditional management solutions.

RM-ODP (Reference Model for Open Distributed Processing)

The joint ISO/ITU-T standardisation of RM-ODP, the Reference Model for Open Distributed Processing, aims at providing a co-ordinating framework for the standardisation of open distributed processing. This architectural framework defines a set of well-structured concepts supporting the distribution and interworking capabilities of open, heterogeneous distributed systems, as well as the portability of their components. The RM-ODP standards are organised into four parts 579 580 581 582.

The notion of (ODP) viewpoints is important. RM-ODP Part 1 states: “A (ODP) viewpoint is a subdivision of the specification of a complete system, established to bring together those particular pieces of information relevant to some particular area of concern during the design of the system”. The five RM-ODP viewpoints (viewpoints on the system and its environment) are:
• the enterprise viewpoint, which is concerned with the high-level “business” activities and logic of the specified system, focusing on actors, purpose, scope and policies;
• the information viewpoint, which is concerned with the information that needs to be stored and processed, the semantics of that information and the associated functionality;
• the computational viewpoint, which is concerned with distribution, addressing the functional decomposition into computational objects which interact at interfaces;
• the engineering viewpoint, which is concerned with the mechanisms and functions supporting the distribution of the computational objects and their interaction;
• the technology viewpoint, which is concerned with the choice of implementation and computing technology.

The viewpoints should not be considered as layers of functionality, nor should a fixed order be assigned to them by any design methodology.

Distribution transparencies deal with a number of concerns and aspects that are direct results of distribution, such as remoteness, failure, etc. The RM-ODP framework addresses these concerns by identifying generic means to make these aspects transparent to application designers and developers. Examples of distribution transparencies are access, location, migration, failure, transaction and persistence transparencies. Middleware solutions such as CORBA support several of these transparencies, inherently or optionally.

The RM-ODP work has significantly influenced developments both in general distributed processing technologies, such as the work within the OMG (CORBA and UML 583), and within TMN. In particular, in the last few years protocol-neutral information models have become important. Instead of worrying about the details and choice of management protocols, the focus can be on the functional content of the information models or “MIBs”, as suggested by the information viewpoint of RM-ODP. It then becomes a matter of detailed system structure and implementation design, as computational, engineering and technology aspects are addressed in the computational, engineering and technology viewpoints respectively. Moreover, RM-ODP introduces several concepts supporting system refinement and specialisation, as well as system evolution.

Note that the routing and signalling protocols and specifications developed by the IETF, such as GMPLS, do not consider such viewpoints. Instead of developing high-level, protocol-neutral specifications, they go directly from identified requirements to a specific byte- and message-oriented protocol specification. Nor do these protocols assume or take advantage of distribution transparencies. This is in contrast to the ITU, which also develops protocol-neutral specifications.

579 ITU-T Rec. X.901, “Information technology - Open distributed processing - Reference Model: Overview”, August 1997
580 ITU-T Rec. X.902, “Information technology - Open distributed processing - Reference Model: Foundations”, November 1995
581 ITU-T Rec. X.903, “Information technology - Open distributed processing - Reference Model: Architecture”, November 1995
582 ITU-T Rec. X.904, “Information technology - Open distributed processing - Reference Model: Architectural semantics”, December 1997
583 Unified Modelling Language, specified by the OMG, http://www.omg.org



A2.11.2.2 Policy-based management solutions for GMPLS


This section contains background information only and is more or less directly quoted from 584.

The network control, namely the ASON/GMPLS control plane, addresses real-time issues such as topology discovery, resource discovery, signalling and routing. In parallel, the management plane's operations are more time-consuming and deal with the various functions already described above. Policy-based management (PBM) allows these two operational levels to be connected in a unique manner (see also Figure 99). Furthermore, policy-based management is well adapted to the collaborative management of a large number of network elements. It also allows the capture of high-level management knowledge, which is stored in dedicated policy repositories.

By refining the network control in time and in space using PBM, it is possible to influence the behaviour of the control plane. On the other hand, the end-to-end support in service and network management must be matched to the end-to-end nature of network control via a specific end-to-end management solution that is sufficiently scalable and flexible, and smoother than classical telco-style management systems.

There are several sources of Information Models (IMs) and protocols applicable to policy-based management (Figure 100), where ICIM refers to the IPsec Configuration Information Model, QDDIM to the modelling of QoS datapath mechanisms, and IPSP to the IP Security Policy work in the IETF. For example, the IETF PCIM and PIBs, the DMTF CIM and TINA, as well as the SNMP and COPS protocols, may offer a good basis for the development of policy-based management.

Figure 99. Policy-based management principle applied to GMPLS

As mentioned before, one of the major motivations for developing a policy-based methodology for network management is to overcome issues related to the SNMP management framework, which is generally limited to periodic monitoring of the network (inventory management and data management subsystems). In fact, SNMP has been dominant in the areas of network status monitoring and statistics gathering, but has not been widely used for configuration management, i.e. provisioning management. This fundamental split makes problem resolution more costly and difficult with SNMP. Furthermore, although SNMP is generally regarded as the best option for monitoring, the proprietary command-line interface (CLI) is still widely used for configuration.

584 DAVID deliverable D132, “Specification of atomic functions and study of control plane issues including management inter-working”, IST-DAVID




Figure 100. Various information models for policy-based management

The COPS protocol is essentially an admission control protocol developed by the RSVP Admission Policy (RAP) working group at the IETF. COPS is a simple query-and-response protocol for exchanging policy information between a policy server (Policy Decision Point, PDP) and its clients (Policy Enforcement Points, PEPs). COPS policy services can also be supported in Resource ReSerVation Protocol (RSVP) environments.

Many studies have compared the two protocols to explain why COPS is better suited to this task than SNMP 585. Here it is important to understand that SNMP is being used today; COPS can never take SNMP's place, but could be used as a complement for policy-based management. SNMP has a number of weaknesses. For example, an application that configures a device using SNMP can never be sure that the configuration it set several minutes, hours or days ago is still in effect, because some other management application (or a human) might have modified the configuration more recently. It is therefore necessary for the management application to periodically check that the configuration is unchanged; a minimal sketch of such a check is given below. It is not our intention here to describe protocols and related issues in detail. Instead, we wish to demonstrate that policy-based management can match the flexibility required in control plane management. Accordingly, we do not extol the advantages of one protocol over its competitors, nor do we compare the efficiency of the information models.
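The following Python sketch illustrates the periodic re-check just described. The callable read_running_config stands in for whatever mechanism (an SNMP GET, CLI scraping, etc.) is actually used to fetch the device configuration, so all names here are placeholders rather than a real management API.

```python
import time
from typing import Callable, Dict

def monitor_config(read_running_config: Callable[[], Dict[str, str]],
                   intended: Dict[str, str],
                   interval_s: float = 60.0,
                   rounds: int = 3) -> None:
    """Periodically compare the device's running configuration with the
    configuration this application believes it installed earlier."""
    for _ in range(rounds):
        running = read_running_config()
        drift = {k: (intended.get(k), running.get(k))
                 for k in set(intended) | set(running)
                 if intended.get(k) != running.get(k)}
        if drift:
            print("configuration drift detected:", drift)
            # A real application would now re-apply the intended settings
            # or raise an alarm towards the management system.
        time.sleep(interval_s)

# Example with a fake device whose configuration was changed behind our back.
fake_device = {"ifSpeed.1": "100M", "queueProfile.1": "best-effort"}
monitor_config(lambda: fake_device,
               intended={"ifSpeed.1": "100M", "queueProfile.1": "premium"},
               interval_s=0.1, rounds=1)
```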

A2.11.2.3 Summary

It is increasingly accepted that policy-based management contributes efficiently to end-to-end configuration management by reducing the complexity of network management and by enforcing appropriate QoS and security constraints. Service providers may find a way to pre-program their networks through rules or policies that define how the optical network should automatically respond to various networking situations. However, the maturing GMPLS control plane for optical networks leads to a new control-management interaction scheme, in which the control plane is no longer under the exclusive authority of the management plane. Owing to a policy-enabled control plane, service providers can expect more automation and better-managed operations in provisioning, monitoring and service assurance for revenue-generating services.

Policy-based management has for many years been studied intensively, not as an add-on to conventional management systems but as part of the same management mechanisms, providing flexible end-to-end management functionality for a large set of managed elements, for instance along a route. Early requirements for policy-enabled MPLS state that, even though MPLS allows direct control through various embedded MIBs and SNMP, associating higher-abstraction-layer mechanisms with the controllability of the LSP life-cycle may bring additional operational advantages.

In essence, this policy-based management approach for MPLS focuses on life-cycle management (i.e. creating, modifying, deleting and monitoring) of Label Switched Paths, along with controlling access to those managed resources (LSP admission control) depending on the traffic in the network, as in ATM. Therefore, one of the major assumptions is that the policy management architecture used to control what is essentially the traffic engineering functionality of the control plane should be independent of the MPLS mechanisms used. Nevertheless, it is these mechanisms that are targeted by policy management.

585 S. Boros, “Policy-based network management with SNMP”, CTIT Technical Report Series, No. 00-16, ISSN 1381-3625, 8 pages, October 2000



A2.11.3 Control


The control plane is a relatively new entity in networks. Control is the means by which resources are allocated to actual traffic demands; it thus performs some functions similar to those of the management plane but, contrary to management, which is usually centralised, control is distributed and operates autonomously, configured by the management system.

A2.11.3.1 Architectural characteristics of control plane solutions

This section contains background information only and is more or less directly quoted from 586.

It was a great technological and operational step forward when transport networks such as SDH networks became capable of having their connections set up in an automated fashion based on network management technologies. In the years to come we will experience another important technological and operational advance as transport networks are supported by a distributed control plane and become automatically switched transport networks.

A2.11.3.1.1 Basic control plane building blocks

Traditionally, control plane components have been an integral part of the network elements, as shown in Figure 101 below. The two fundamental building blocks of the control plane are signalling and routing. The signalling building block is in charge of establishing connections by exchanging messages with the signalling components in other network elements. The routing building block is in charge of disseminating and discovering network topology, and possibly network resource information, to/from the routing components in other network elements. From this topology information, routing information can be computed to establish a routing (forwarding) table supporting forwarding, or to determine a connection path (route) used by the signalling function; a minimal sketch of this computation is given below. A third functional building block, which may be present in the control plane, is the link management function. This function is explained further below.
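As an illustration of the routing building block, the following Python sketch computes a forwarding table from disseminated topology information with a plain Dijkstra shortest-path search. The topology and node names are made up for the example; real routing protocols of course add many more mechanisms (areas, metrics, flooding, etc.).

```python
import heapq
from typing import Dict, Tuple

Topology = Dict[str, Dict[str, float]]  # node -> {neighbour: link cost}

def forwarding_table(topology: Topology, source: str) -> Dict[str, str]:
    """Return, for each reachable destination, the next hop to use from `source`."""
    dist = {source: 0.0}
    next_hop: Dict[str, str] = {}
    queue: list[Tuple[float, str, str]] = [(0.0, source, source)]
    while queue:
        cost, node, first_hop = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, link_cost in topology[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                # The first hop out of the source determines the forwarding entry.
                hop = neighbour if node == source else first_hop
                next_hop[neighbour] = hop
                heapq.heappush(queue, (new_cost, neighbour, hop))
    return next_hop

topology = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 1}, "C": {"A": 4, "B": 1}}
print(forwarding_table(topology, "A"))  # {'B': 'B', 'C': 'B'}
```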

Figure 101. Building blocks of a network element (the management plane with its operations systems and the NE agent part, the control plane with signalling, routing and link management, and the transport plane)

586 DAVID deliverable D132, “Specification of atomic functions and study of control plane issues including management inter-working”, IST-DAVID




In legacy telephony environments such as the traditional PSTN, a clear separation of the control, management and data/transport planes 587 is maintained in order to provide highly reliable voice services; each plane operates its own, separate circuits and protocols. In contrast, Internet technology collapsed the control and data planes into an IP layer used for transporting both control messages and regular data traffic. This allows a simplified network configuration and control. However, with the introduction of MPLS and connection-oriented technologies, the IP/MPLS control plane has become functionally richer; even so, the data/transport plane and the control plane are not split into separate networks.

A sensible next step is to enrich and adapt IP/MPLS-based control plane solutions for next-generation transport networks as well. However, additional requirements apply, which are identified further below. For instance, control messages may have to be carried separately from the transport network being controlled, and several options exist depending on the specific transport solution. One also has the choice of keeping the CP components in the network elements or separating the CP components from the transport plane elements (switching/forwarding elements). This is further elaborated below.

The notion of a control plane (instance or system) is used to designate the set of control plane component instances constituting an autonomous control system under the administration and management of a single administrative domain. Thus, a control plane system is configured and managed consistently under a single domain. Within an administrative domain, the control plane system may be further subdivided, e.g. by actions from the management plane. This allows the separation of resources into, for example, domains for geographic regions, which can be further divided into domains containing different types of equipment. Within each domain, the control plane may be further subdivided into routing areas for scalability, which may in turn be subdivided into sets of control components. The transport plane resources controlled by the control plane are partitioned to match the subdivisions created within the control plane, i.e. to match the control plane instances. In the ASON (Automatically Switched Optical Network) architecture (see the following subsection), the interconnection between domains, routing areas and, where required, sets of control components is described in terms of reference points.

A2.11.3.1.2 ASON high-level view

In traditional PSTN networks, as in IP networks, there is only a single networking layer capable of dynamically and flexibly providing connectivity; correspondingly, the topology and resources supporting such a service networking layer are perceived as a static server layer network. However, as transport networking technologies such as SDH, optical cross-connects and, in the future, optical burst/packet-switched solutions become supported by a distributed, dynamic control plane, the control plane solutions must handle a situation in which a hierarchy of networking layers provides dynamic switching or forwarding capabilities. Resource-efficient traffic engineering and cost-efficient network provisioning and operations are the main objectives for the forthcoming control plane solutions.

According to the ITU, the purpose of the Automatically Switched Optical Network (ASON) control plane (G.8080) 588 is to:
• facilitate fast and efficient configuration of connections within a transport layer network to support both switched and soft permanent connections;
• re-configure or modify connections that support calls that have previously been set up;
• perform a restoration function.

It should be sufficiently generic to support different technologies, differing business needs and different distributions of functions by vendors (i.e. different packaging of the control plane components). The ASON diagram (Figure 102) provides a high-level view of the interactions of the control, management and transport planes for the support of dynamic and flexible handling of connections of a layer network. Also included in this figure is the DCN (Data Communication Network), which provides the communication paths conveying both control and management information.

The management plane performs management functions for the transport plane, the control plane and the system as a whole, and provides coordination between all the planes. The following management functional areas are performed in the management plane: performance management, fault management, configuration management, accounting management and security management.

587 The notion of transport plane (or data/forwarding plane) should not be confused with the OSI transport layer (the fourth layer of the OSI layered communications model).
588 ITU-T Rec. G.8080/Y.1304, “Architecture for the automatic switched optical networks (ASON)”, November 2001



Figure 102. Relationship between architectural components (Figure 1/G.8080): layer networks (LN1, LN2, LN3) in the control and transport planes, the management plane with its layer-network and resource managers, and the DCN carrying both management communications and signalling

Figure 103. ASON's reference points: UNIs between users and provider networks, I-NNIs within a provider's ASON network, and E-NNIs between the ASON networks of different providers

As mentioned above, ASON's control plane is subdivided into domains that match the administrative domains of the network. The exchange of information between domains is done across the following abstract interfaces, also known as “reference points”:
• User Network Interface (UNI). The signalling between the client and the ASON network is carried out via the UNI. In order to establish an optical channel, the user has to specify attributes such as bandwidth, service class and the egress node to which it wants to be connected; functionality such as authentication and connection admission control is needed. (A minimal sketch of such a request is given below.)
• Exterior Network-to-Network Interface (E-NNI). This interface assumes an untrusted relationship between domains. In addition to authentication and connection admission control functionality, reachability/summarised network address information is exchanged.
• Interior Network-to-Network Interface (I-NNI). A possibly proprietary interface used within an operator's network. Typically, detailed topology/routing information is exchanged across such an interface.
These reference points are illustrated in Figure 103, which is adapted from G.807, Requirements for Automatic Switched Transport Networks (ASTN) 589.
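The sketch below shows, as a plain Python data structure rather than any standardised UNI signalling format, the kind of attributes a client would supply in a UNI connection request. All field names and the toy admission check are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class ServiceClass(Enum):
    BEST_EFFORT = "best-effort"
    GUARANTEED = "guaranteed"

@dataclass(frozen=True)
class UniConnectionRequest:
    """Attributes a client hands to the ASON network across the UNI."""
    client_id: str            # used by the network for authentication
    egress_node: str          # where the optical channel should terminate
    bandwidth_gbps: float     # requested capacity
    service_class: ServiceClass

def admit(request: UniConnectionRequest, free_capacity_gbps: float) -> bool:
    # Toy connection admission control: accept only requests that carry a
    # client identity and fit into the remaining capacity towards the egress.
    return bool(request.client_id) and request.bandwidth_gbps <= free_capacity_gbps

req = UniConnectionRequest("client-17", "node-paris-2", 10.0, ServiceClass.GUARANTEED)
print(admit(req, free_capacity_gbps=40.0))  # True
```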

589 ITU-T Rec. G.807/Y.1302, “Requirements for automatic switched transport networks (ASTN)”, July 2001



A2.11.3.1.3 Control plane interconnection models

The IP over Optical working group in the IETF has developed an IP-over-optical framework in which various control plane interconnection models are considered 590. These have been named the overlay, the augmented and the integrated (peer) interconnection models, and are presented in the following. The examples below show an IP network being supported by an optical transport network (OTN).

Figure 104. The overlay model: separate IP and OTN control planes, with a UNI between the IP layer and the OTN layer, NNIs inside the OTN, and IP and OTN control channels overlaid on the physical (fibre) topology.

Figure 105. The peer model: integrated IP/OTN boxes (IP-router forwarding engine plus OXC switch fabric) under a single IP/OXC controller, with control channels over the physical (fibre) topology and a logical (lightpath) topology above it, and customer premises equipment attached at the edge.

590 B. Rajagopalan et al., “IP over Optical Networks: A Framework”, IETF, Internet Draft, Work in progress. http://www.ietf.org/internet-drafts/draft-ietf-ipo-framework-04.txt



As shown in Figure 104, in the overlay model both the IP and OTN layers run their own control plane (instance). The IP layer acts as the client (or user) layer, while the OTN acts as the server layer. A UNI between the two layers therefore allows the client IP layer to request capacity (i.e. lightpaths) from the server OTN. This interface is being standardised by the Optical Internetworking Forum (OIF). Both control planes are completely independent of each other; in other words, the client layer's routing (IP routing such as OSPF) and possibly MPLS signalling are independent of the optical layer's control plane signalling (and routing). Of course, both control planes can be instantiated from the same control plane type (e.g. GMPLS), but the independence also allows the OTN to run ASON-compliant protocols. Note that the UNI interactions actually take place between the control components in the two control planes. The control components are illustrated as residing in computers separate from the network elements; this is only for illustration purposes, and several options are possible regarding the distribution of control components and their association with the transport plane.

In the peer model, shown in Figure 105, a single control plane controls both the IP and OTN layers. The result is that IP router forwarding engines and OXC switch fabrics are treated as single, logically integrated IP/OTN entities. Since the current MPLS control protocols would only require minor modifications to become GMPLS-compliant, it would be an interesting scenario to have a unified control plane taking over the control of both the IP/MPLS routers and the optical cross-connects. With a standardised control/management interface (e.g., GSMP) between the control plane and the transport plane, such a scenario should not be unrealistic. However, some traffic engineering support in the management plane is likely needed to initiate the manipulation of OTN resources by the control plane. So-called IP/OTN control channels are realised over the physical links between these logical IP/OTN entities. Lightpaths are treated as regular (optical) Label Switched Paths (LSPs) (in case GMPLS is assumed) and thus do not result in a new peering session between the end-points (i.e., no control channel is established over the lightpath). The peer model assumes dissemination of detailed topology/routing information and is therefore not a likely interconnection model for E-NNIs. However, inside an operator network the peer model is considered to have some advantages.

Finally, Figure 106 illustrates the augmented model, which allows a more automated solution than the overlay model while avoiding the dissemination of detailed topology/routing information. As such, the augmented model can also be applied at E-NNIs. It is quite similar to the overlay model, in the sense that both layers may have their own control plane instance. However, some control information, such as reachability information, may "leak" through the interface between both layers. More concretely, reference 591 states for what it calls the "interdomain interconnection model" that client layer reachability information is carried through the OTN, but OTN addresses are not propagated to the client network.

Figure 106. The augmented model

591 Krishna Bala, Thomas E. Stern, David Simchi-Levi, and Kavita Bala, "Routing in a Linear Lightwave Network", IEEE Communication Magazine, Vol. 3, No. 4, August 1995



The principle of leaking client layer reachability information from one side of the network to the other is similar to the principle of MPLS/BGP VPNs and is illustrated in Figure 107. Consider that IP router rA is attached via port opA to the OXC oxA, and that IP router rB is attached via port opB to OXC oxB. When router rB and OXC oxB run an E-BGP session over the UNI, OXC oxB learns the address of rB. More precisely, OXC oxB then knows that rB can be reached via its port opB. It advertises this relation via an I-BGP session to OXC oxA. OXC oxA forwards this BGP route over an E-BGP session to router rA, after removing any optical address from the route. In other words, router rA can easily learn the address of router rB, and the address resolution is kept inside the ASON (in contrast to the address resolution service specified in the OIF UNI 1.0). From this moment on, router rA can simply ask OXC oxA to establish a lightpath to router rB. It is the responsibility of OXC oxA to translate the address rB in the connect request to the appropriate optical port address.
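The sketch below mimics this exchange in a few lines of Python. It is a conceptual toy, not BGP: the names (oxA, oxB, opA, opB, rA, rB) simply follow the figure, and the functions are invented for illustration.

```python
# Toy model of client reachability leaking through the optical network.
# Each OXC keeps a table: client router address -> (owning OXC, optical port).
reachability = {}

def ebgp_learn(oxc: str, port: str, router: str) -> None:
    """OXC learns over the UNI (E-BGP-like session) which router sits on which port."""
    reachability[router] = (oxc, port)

def client_view() -> dict:
    """Towards client routers the optical part is stripped, so clients only
    ever see other routers' addresses, never optical port addresses."""
    return {router: None for router in reachability}

def connect(ingress_oxc: str, dst_router: str) -> str:
    """Ingress OXC translates the client address into an optical port address."""
    oxc, port = reachability[dst_router]
    return f"{ingress_oxc}: set up lightpath towards {port} on {oxc}"

ebgp_learn("oxB", "opB", "rB")     # oxB learns rB over the UNI
print(client_view())               # {'rB': None}: rA learns only that rB is reachable
print(connect("oxA", "rB"))        # oxA resolves rB -> opB on oxB
```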

Figure 107. Illustration of how the ASON carries the client reachability information from one side of the network to the other

Depending on the chosen model, the role of the management plane may differ, as may the functionality needed for management plane and control plane interworking.

A2.11.3.2 Architectural characteristics of ASON and GMPLS

This section contains background information and is more or less directly quoted from 592. It identifies some architectural differences between ASON and GMPLS, with a few comments on how the OIF has responded to these differences.

ASON is a reference architecture that does not define specific protocols, but rather the (abstract) components in an (optical) control plane and the interactions between them. GMPLS, on the other hand, is protocol specific: it extends the MPLS ideas to the optical domain and reuses existing Internet protocols with the appropriate extensions. There is thus a significant difference in the level of abstraction between the two approaches.

However, the ITU does define other, more specific documents addressing particular topics of the control plane, such as:

• Distributed Call And Connection Management (DCM) 593<br />

• Distributed Call And Connection Management (DCM) based on PNNI 593<br />

• DCM Signalling Mechanism Using GMPLS RSVP-TE 594<br />

592 DAVID deliverable D132, "Specification of atomic functions and study of control plane issues including management inter-working", IST-DAVID
593 ITU-T Rec. G.7713.1/Y.1704.1, "Distributed Call and Connection Management (DCM) based on PNNI", March 2003
594 ITU-T Rec. G.7713.2/Y.1704.2, "DCM Signalling Mechanism Using GMPLS RSVP-TE (DCM GMPLS RSVP-TE)", March 2003



• Distributed Call and Connection Management using GMPLS CR-LDP 595<br />

• Generalized Automatic Discovery Techniques 596<br />

• Protocol For Automatic Discovery In SDH And OTN Networks 596<br />

• Architecture And Requirements For Routing In The Automatically Switched Optical Network 597<br />

• Architecture And Requirements Of Link Resource Management For Automatic Switched Transport Network 598 (currently in draft status)

In some sense GMPLS can be considered as one possible realisation of the ASON framework. However, there are some fundamental differences between ASON and GMPLS, in particular with respect to routing, that make this statement only partially true. ASON considers each network layer (G.805 layer network) independently, whereas GMPLS speaks of "multi-technology" links or hierarchical LSPs; that is, network layers are considered together and GMPLS routing is accordingly "multi-layered". ASON requires each layer of the network to be treated separately, with layer-specific instances of the signalling, routing and discovery protocols. The requirement of keeping the layers separate is considered very important by the ITU, since it allows scalable administration of large networks in which each layer can operate independently. By contrast, GMPLS routing is, from this perspective, considered unscalable, since it greatly increases traffic in the control network and requires a large address space capable of accommodating multiple layers.

Similarly, there are some discrepancies between ASON and GMPLS regarding the separation between network layers with respect to link management and discovery solutions. Furthermore, there are discussions about whether control channel management should be considered part of link management. The functions fulfilled by LMP do not map exactly onto the ITU conception of discovery as described in 599. To address this, the OIF has developed neighbour discovery extensions to LMP as part of UNI 1.0. An important aspect of the OIF UNI is a service discovery mechanism that enables clients to determine the services available from the optical network.

From the general GMPLS point of view, all the nodes and links that constitute a GMPLS network share the same IP address space, and information is shared freely between control nodes. In other words, GMPLS implies a trusted environment and a set of peer network elements. GMPLS is thus naturally compatible with the peer interconnection model, which is not likely to be applicable to a UNI between untrusting domains; as such, there are UNI features that are not covered by GMPLS. ASON, on the other hand, has identified different reference points (UNI, I-NNI, E-NNI), and specifications will be developed with these reference points in mind. The ASON UNI hides addressing information pertaining to the interior of the network. This is a security requirement of the operators supporting ASON; therefore a separate address space needs to be created for users of the network in order to maintain complete separation of the user and network addressing spaces. GMPLS protocols may be adapted to reference points other than the I-NNI as well, but special considerations must be made to handle such reference points or interfaces.

Yet another significant difference between ASON and GMPLS is the notion of a call, which is an important part of the ASON approach and is not part of the GMPLS approach. A call is defined as an association between endpoints that supports an instance of a service. As an example, a call object may persist while its (first) supporting connection is released and a new connection is (re)established due to some operational requirement. Moreover, a multiparty call may have several associated connections. Service- or user-related information associated with the call can be separated out and handled only at the ingress/egress, thus lowering the burden on intermediate control nodes, which can limit their information processing to connections. Moreover, GMPLS does not yet provide (explicit) support for soft-permanent connections. However, IETF proposals exist to extend GMPLS signalling to include ASON call set-up and to support the capability to request SPC connections 600.
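To make the call/connection separation concrete, the sketch below models a call that can outlive any individual connection and can hold several connections at once. The class and method names are illustrative only and do not correspond to any standardised API.

```python
class Connection:
    """A single network connection (e.g. a lightpath) supporting a call."""
    def __init__(self, conn_id: str):
        self.conn_id = conn_id
        self.active = True

    def release(self) -> None:
        self.active = False

class Call:
    """Association between two endpoints supporting an instance of a service.
    The call persists even while its supporting connections are replaced."""
    def __init__(self, a_end: str, z_end: str):
        self.a_end, self.z_end = a_end, z_end
        self.connections: list[Connection] = []

    def add_connection(self, conn_id: str) -> Connection:
        conn = Connection(conn_id)
        self.connections.append(conn)
        return conn

# A call between rA and rB: the first connection is released and replaced,
# but the call object itself (and its service context) survives.
call = Call("rA", "rB")
first = call.add_connection("lsp-1")
first.release()                      # e.g. rerouted for operational reasons
call.add_connection("lsp-2")         # new connection under the same call
print(len(call.connections), first.active)   # 2 False
```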

However, there are ways of using GMPLS-based solutions such that they become (more) ASON compatible.<br />

One important step to define a GMPLS UNI was taken by the OIF, which produced a specification document<br />

(OIF UNI 1.0) describing how existing GMPLS protocols can be extended to provide UNI functionality. The<br />

OIF UNI enables clients to establish optical connections dynamically using signalling procedures compatible<br />

with GMPLS signalling.<br />

595 ITU-T Rec. G.7713.3/Y.1704.3, “Distributed Call and Connection Management using GMPLS CR-LDP”, March 2003<br />

596 ITU-T Rec. G.7714/Y.1705, “Generalized Automatic Discovery Techniques”, November 2001<br />

597 ITU-T Rec. G.7715/Y.1706, “Architecture and Requirements for Routing in the Automatic Switched Optical Networks”, June 2002<br />

598 ITU-T Draft Rec. G.7716, "Architecture And Requirements Of Link Resource Management For Automatic Switched Transport Network", current draft as of June 2003, subject to revision

599 ITU-T Rec. G.7714/Y.1705, “Generalized Automatic Discovery Techniques”, November 2001<br />

600 D. Papadimitriou, "Requirements for Generalized MPLS (GMPLS) Usage and Extensions For Automatically Switched Optical Network (ASON)", IETF, Internet Draft, draft-lin-ccamp-gmpls-ason-rqts-00.txt, expired April 2003




In the peer model there is a single control plane controlling both the client nodes (e.g. IP/MPLS nodes) and the transport network nodes (or the nodes can be hybrid multi-layer nodes). This implies a unified control plane, since the corresponding control nodes handle and coordinate across several network layers (G.805 layer networks). Strictly speaking, a single control plane encompassing several network layers implies a GMPLS-based solution, since such a construct is not compatible with ASON. However, an ASON-based solution does not prohibit a unified control plane in the sense that an ASON-compatible control node may deal with several network layers. But inter-layer coordination within a control node is an implementation issue as perceived from the ASON framework, and as such the issues of a unified control plane are outside the scope of that framework. Nevertheless, one can envisage a modification of the notion of "peer model" allowing an ASON-compatible peer model, and hence an ASON-compatible unified-control-plane-based solution.

The ASON framework 601 specifies a set of abstract, software-technology-neutral, interconnecting control plane components. Although not explicitly mentioned, the framework resembles a specification approach according to the computational viewpoint of RM-ODP (see above). Thus, ASON-based solutions can in principle be implemented with several software technologies, with or without special support for distributed processing. GMPLS protocols, on the other hand, are specific byte-oriented, message-based protocols.

There is an ongoing debate regarding the pros and cons of these approaches 602. Aspects such as scalability, manageability and addressing scheme must be considered. The ASON-GMPLS differences also reflect differing approaches and expectations regarding interoperability and business models. One of the main aims of GMPLS is to achieve multi-vendor interoperability between nodes; in fact, the first interoperability demonstration of GMPLS was carried out at the Next Generation Networks show held in Boston in October 2002 603. By contrast, the ASON community views full multi-vendor interoperability as both a low priority and unrealistic to achieve in the near term, and focuses instead on maintaining compatibility with currently existing transport network protocols.

The fundamental rationale behind GMPLS's business model is deployment as soon as possible. The main drivers in the IETF for rapid development of GMPLS signalling and routing protocols have been optical start-ups aiming to deploy cheap new networks and some large equipment vendors aiming to stay ahead of the competition.

ASON's business model is quite different. ASON maintains compatibility with existing transport network protocols, which is very attractive to large incumbent network operators, since a smooth upgrade path can be very important for them, especially in times of economic slowdown.

Link management

In traditional IP routing a link is considered a static resource (which is either available or unavailable). With the introduction of (G)MPLS and routing protocols extended to reflect traffic engineering, a link also has additional properties, such as bandwidth, Traffic Engineering (TE) attributes, and recovery-related properties. GMPLS introduces the notion of TE-links for such links. With the introduction of transport technologies such as SDH, DWDM, and OXCs, the control channels must be handled separately from the regular transport channels. Moreover, there may now be numerous links between adjacent nodes for the purpose of TE-based routing. It is simply impractical to manage and configure these links manually; likewise, it is impractical for a routing protocol to consider these links on an individual basis.

To support bundled TE links as well as efficient discovery and management of links, a Link Management Protocol (LMP) 604 is being developed by the IETF. Link management is a collection of procedures between adjacent nodes that provide local services such as control channel management, link connectivity verification, link property correlation, and fault management. Control channel management and link property correlation are mandatory procedures of LMP.

LMP control channel management is used to establish and maintain control channels between nodes. Control channels exist independently of TE links and can be used to exchange MPLS control-plane information such as signalling, routing, and link management information. An "LMP adjacency" is formed between two nodes that support the same LMP capabilities. Multiple control channels may be active simultaneously for each adjacency. A control channel can be either explicitly configured or automatically selected; however, LMP currently assumes that control channels are explicitly configured, while the configuration of the control channel capabilities can be dynamically negotiated.

601 ITU-T Rec. G.8080/Y.1304, “Architecture for the automatic switched optical networks (ASON)”, November 2001<br />

602 N. Larkin, “ASON AND GMPLS – The Battle Of The Optical Control Plane”. White paper, Aug.2002. Data Connection Limited<br />

603 This demonstration tested vendor interoperability using the GMPLS RSVP-TE protocol<br />

604 J. Lang et al., “Link Management Protocol (LMP)”, IETF, Internet Draft, draft-ietf-ccamp-lmp-03.txt, March 2002.<br />





Link property correlation is used to aggregate multiple data-bearing links (i.e. component links) into a bundled link and to exchange, correlate, or change TE link parameters. It allows, for instance, adding component links to a link bundle, changing a link's protection mechanism, changing port identifiers, or changing component identifiers within a bundle.
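As a rough illustration of link bundling, the sketch below aggregates component links into a TE link whose advertised bandwidth is the sum of its components. The data model is a deliberately simplified assumption and not the LMP message format.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentLink:
    link_id: str
    bandwidth_gbps: float
    protection: str          # e.g. "unprotected", "1+1"

@dataclass
class TELink:
    """Bundled TE link advertised to the routing protocol as a single entity."""
    te_link_id: str
    components: list[ComponentLink] = field(default_factory=list)

    def add_component(self, comp: ComponentLink) -> None:
        self.components.append(comp)

    def advertised_bandwidth(self) -> float:
        # The routing protocol sees one aggregate figure, not N individual links.
        return sum(c.bandwidth_gbps for c in self.components)

bundle = TELink("te-1")
bundle.add_component(ComponentLink("lambda-1", 10.0, "unprotected"))
bundle.add_component(ComponentLink("lambda-2", 10.0, "1+1"))
print(bundle.advertised_bandwidth())   # 20.0
```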

Link management, and more specifically the LMP protocol, is considered part of GMPLS. However, LMP may also operate independently of other GMPLS protocols. Note also that the notion of link management may be used in association with the management plane, but then with a somewhat different, although related, meaning. The management plane must, however, support management of LMP-related operations, which is further discussed in the following section.

A2.11.3.3 Routing / Traffic Engineering

Routing is the process of selecting a path through the network between two nodes wishing to communicate. Traffic engineering means directing traffic to places in the network where bandwidth/capacity is available. It involves measuring bandwidth availability, calculating appropriate routes and redirecting traffic. Routing and traffic engineering are thus related: traffic engineering can impose strict constraints on the routing.
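One common way in which traffic engineering constrains routing is constrained shortest path first (CSPF) computation: links that cannot satisfy the requested bandwidth are pruned before a shortest path is computed. The sketch below shows this idea on a toy topology; the graph, link costs and bandwidth figures are invented for illustration.

```python
import heapq

def cspf(links, src, dst, required_gbps):
    """Constrained shortest path: prune links below the required bandwidth,
    then run Dijkstra on what remains. links = {(u, v): (cost, free_gbps)}."""
    graph = {}
    for (u, v), (cost, free) in links.items():
        if free >= required_gbps:                  # TE constraint: prune
            graph.setdefault(u, []).append((v, cost))
            graph.setdefault(v, []).append((u, cost))
    dist, queue = {src: (0, [src])}, [(0, src, [src])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nxt not in dist or nd < dist[nxt][0]:
                dist[nxt] = (nd, path + [nxt])
                heapq.heappush(queue, (nd, nxt, path + [nxt]))
    return None

topology = {("A", "B"): (1, 2.5), ("B", "D"): (1, 2.5),    # short but congested
            ("A", "C"): (2, 10.0), ("C", "D"): (2, 10.0)}  # longer but free
print(cspf(topology, "A", "D", required_gbps=5.0))   # ['A', 'C', 'D']
```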

A2.11.3.3.1 Network vs. traffic engineering

When considering the challenges mentioned above, two crucial concepts can bring remedy: network engineering and traffic engineering (TE). The first concerns the placement of capacity and is a matter of long-term planning, whereas the second deals with the organisation of traffic in the network over shorter time scales. They can be associated with two corresponding approaches: one focusing on the development of high-capacity optical systems, and the other directed towards logical traffic management. Two main observations can be made: network planning requires intervention in the network infrastructure, whereas TE is demanding in terms of network management.

While network engineering is a more manual, policy-dependent planning practice, TE is moving towards automatic operation. TE concentrates on networking technologies and protocols and therefore attracts much current research interest. The great interest in TE also stems from the fact that the approach can help overcome many problems emerging in today's as well as next-generation networks. It may also bring a higher return on backbone infrastructure investment.

A2.11.3.3.2 TE definition

It should be noted that TE is an abstract, logical concept that should not be associated with a specific implementation. The TE definition provides the general idea without constraining any particular practical deployment.

ITU-T defines TE as follows:

"Traffic engineering (TE) is an indispensable network function which controls a network's response to traffic demands and other stimuli, such as network failures."

Furthermore, TE is said to comprise traffic management, capacity management, traffic measurement and modelling, network modelling, and performance analysis. ITU-T also defines methods for TE, involving network functions that support call routing, connection routing, QoS resource management, routing table management, and capacity management.

ITU-T has also developed a TE model that conforms to the definition of TE issued by the Traffic Engineering Working Group (TEWG) within the Internet Engineering Task Force (IETF).

In the main, TE techniques aim to optimise the mapping of logical traffic streams onto the physical network topology in order to provide an efficient and quality-aware solution. They address proper load distribution, eliminating congestion hot spots while taking scalability and flexibility into account. In addition, they enable traffic to be handled in an intelligent manner, so that best-effort streams and quality-demanding flows (e.g. of real-time applications) each receive the appropriate treatment.

In short, TE by means of explicit routing (ER) can be an answer to the problems arising from traditional shortest-path-first (SPF) routing. Operators can thus handle undesirable effects such as under- and over-utilisation of resources as well as congestion. With TE they can manage resources in such a way as to provide quality services, so that




customers experience the throughput or delay they require. TE practices apply especially well to backbone networks, where traffic aggregation is much higher than at the edges.

TE practices are most interesting to consider in mesh networks, since mesh structures present the most opportunities for manipulating traffic streams within the topology.

TE may employ classes of service for traffic in order to provide QoS, e.g. in terms of quantitative performance requirements such as end-to-end blocking, delay, and/or delay-jitter objectives.

There are three main TE functions indicated by ITU-T:

• Traffic management focuses on maximising network performance under all circumstances, including dynamic traffic alterations and network failures. It is concerned with routing 605 schemes, connection routing, address translation and QoS resource management. The control of these functions may take centralised, decentralised or combined approaches.

• Capacity management deals with the optimisation of network design and provisioning, with the aim of ensuring adequate performance at minimum cost. Network design is concerned with both routing and capacity design. Routing design takes care of the proper adjustment of routing tables 606 (automatic or manual) in order to provide appropriate service performance.

• Network planning practices are supposed to ensure that the network is designed and deployed with regard to anticipated traffic growth. This involves node and transport planning based on multi-year forecasts of demands for network capacity expansion.

A2.11.3.3.3 Traffic- and resource-oriented TE

TE can be seen as traffic- or resource-oriented (which could also be perceived as user- and operator-oriented, respectively). In the first case, TE practices concentrate on providing QoS for traffic streams, targeting the best traffic performance objectives: minimisation of packet loss and delay, maximisation of throughput, and conformance to SLAs. In resource-oriented TE the focus is on optimisation of resource utilisation, with objectives including efficient and easy bandwidth management. In fact, both approaches converge in the effort to minimise congestion in the network.

A2.11.3.3.4 Local and global TE

Another distinction considers TE from local and global perspectives. Local TE focuses on acting upon hot spots in normal and failure conditions, whereas global practices relate rather to load balancing intended to optimise overall network throughput.

Before proceeding further, we summarise the major required conceptual, functional, and architectural features of the GMPLS control plane for optical networks:

• LSP life-cycle management, i.e. set-up/modify/tear-down an LSP or more specifically LSP provisioning,<br />

may involve different resources along the whole path. This requires the co-ordination of different control<br />

plane functions at the ingress, transit, and egress nodes, and not only at path initiator and ending nodes. In<br />

addition, life-cycle management encapsulates the reporting of the administrative status of LSPs, the<br />

modification of LSP attributes and behavioural features such as rerouting property for network resource<br />

optimisation, alternative protection scheme, and holding priority. It also decides on specific network resource<br />

utilisation such as full or partial route configuration, resource sharing between multiple LSPs, preemption<br />

issues etc.<br />

• Call and connection admission control for LSPs. As described before, call and connection admission control is one of the key architectural functions in service delivery and is a typical target for policy-based management, as specified in the ASON standard. The role of connection control is to provide the information necessary for routing and admission that contributes to the establishment of a connection. For example, connection control is used to translate connection parameters of the client into characteristics of network resources. In parallel, the call control correlates, under constraints, a service request with the network resource capabilities for the (possibly many) connections of a client. The constraints are associated with service fulfilment, assurance, and other business-related criteria. The call control acts on connections in order to guarantee appropriate service delivery through service adaptation at both the ingress and egress locations. For instance, a request for redial is handled by the call control. In addition, call control contributes to client management, since it validates client requests.

605 Routing is the process of determination, establishment, and use of routing tables to select paths between an input port at the ingress network edge and an output port at the egress network edge.

606 Routing tables indicate the path choices and selection rules to select one path out of the route for a connection/bandwidth-allocation request.





• Given that the call control provided at ingress and egress nodes also has an interface to the management plane, the question of the separation of roles between network control and management functionality, regarding specific service control and management functions, was raised in the IETF. Indeed, although both service and network resource policies support the fulfilment of service requirements, call and admission control relies on a signalling protocol (except for the so-called permanent connections).

• Control-Management interoperation. This can be seen as an extension of the first point above; such interoperation is needed to ensure consistent control of the GMPLS functionality. In order to provision and manage customisable optical services, to deal with constantly changing customer needs, or to (re-)optimise network exploitation using business rules, the management plane must operate in conjunction with the autonomous control plane, which may need additional information to support operators' expectations.

• Especially from operators’ point of view, the control plane may then be subordinated to the decisions taken<br />

at the network management level. With carrier networks becoming larger, the functionality of signalling,<br />

routing, and connectivity configuration and supervision protocols in the control plane may operate under<br />

additional external constraints that are provided by network administrators.<br />

Figure 108. Proposed policy hierarchy including control plane policies<br />

Policy-based management is the solution to these interoperability requirements. It is an approach to network management that offers the flexibility to manage different network layers, network elements, LSPs and individual packets, while keeping management decisions coordinated within the network. The corresponding new policy architecture will have a policy hierarchy as shown in Figure 108, which introduces a so-called Control Plane Policy Layer between the device configuration policy and LSP life-cycle policy layers.

Although the GMPLS control plane does not transform the network technology, it clarifies to the management system the network layers that are being managed. In that way, the management plane has the opportunity to manage data and functional flows between the transport and control plane layers. Policy-based management provides a privileged tool for this specific multi-layer interaction. In addition, this type of control-management interaction contributes to alleviating management complexity and allows better FCAPS processes to be achieved.

A2.11.3.4 Resource reservation / management

RSVP-TE

RSVP operation can be summarised by its main characteristics: raw IP transport, maintenance of soft states, receiver-controlled reservation requests, and flexible reservation control (sharing, sub-flows). It should be stressed that RSVP signalling occurs between a pair of neighbouring routers rather than a pair of hosts. The new RSVP-TE features expand the basic functionality with the critically important labelling service and TE facilities. They enable the aggregation of host micro-flows into traffic trunks sharing a common LSP, which significantly reduces the number of RSVP states.

The practically important TE features include options for dynamic automatic rerouting, route pinning, LSP preemption priorities, local repair and session merging. RSVP-TE provides means for TE in terms of related objects, including Session_Attribute flagging.

Of particular importance is the 'make-before-break' principle applied whenever rerouting takes place. Moreover, rerouting is facilitated by appropriate reservation styles such as Shared Explicit (SE).




Figure 109: RSVP-TE operation with refreshed states

The number of refresh messages may prove troublesome in terms of signalling traffic volume and processing. In order to minimise this effect, RSVP extensions have been developed to simplify and compress the messaging. The extensions also address problems of possible latency and the unreliability of raw IP transport. There is always a trade-off between increasing reliability through more frequent refreshes and the resulting overhead volume.

RSVP-TE limitations include unicast, unidirectional LSPs; multicast options are for further study.
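The soft-state behaviour discussed above can be illustrated in a few lines of code: a reservation stays alive only as long as refresh messages keep arriving within the cleanup timeout. This is a conceptual sketch of the soft-state idea only, with invented timer values, not an implementation of the RSVP message formats.

```python
import time

class SoftStateReservation:
    """A reservation that expires unless refreshed (soft state)."""
    def __init__(self, lifetime_s: float = 3.0):
        self.lifetime_s = lifetime_s
        self.last_refresh = time.monotonic()

    def refresh(self) -> None:
        # A periodic Path/Resv-style refresh re-arms the state.
        self.last_refresh = time.monotonic()

    def alive(self) -> bool:
        # Without refreshes the state silently times out and is removed.
        return time.monotonic() - self.last_refresh < self.lifetime_s

resv = SoftStateReservation(lifetime_s=0.2)
print(resv.alive())                      # True: just installed
time.sleep(0.1); resv.refresh()
time.sleep(0.15); print(resv.alive())    # True: the refresh kept it alive
time.sleep(0.25); print(resv.alive())    # False: refreshes stopped, state expires
```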

A2.11.3.5 MPLS – its specializations and generalizations

A2.11.3.5.1 The MPLS concept

MPLS (Multi-Protocol Label Switching) is a networking concept based mainly on shifting all complex functionality to the edge of the network, leaving only simple tasks for the core network and hence enabling fast and efficient operation. The control plane (which takes care of, e.g., routing) and switching (packet forwarding) are completely decoupled, which yields the advantageous property that they can be chosen independently. MPLS is designed as a pure 'everything over everything' concept, hence its name. In reality, however, its predominant use and the majority of standardisation work are focused on carrying IP traffic with MPLS, owing to the ubiquity of the Internet.

Packets in MPLS are forwarded along Label Switched Paths (LSPs) that are determined by routing protocols based on predefined traffic classes called Forwarding Equivalence Classes (FECs). An FEC can be equivalent to a single entry in a conventional IP routing table, or it can be an aggregation of multiple such entries. An FEC can also be specified based on a number of additional constraints such as originating address, receiving port number and QoS parameters. These LSPs are defined in the switches by using labels, which are distributed by a Label Distribution Protocol (LDP) responsible for mapping between routing and switching. The MPLS standard does not specify one particular label distribution protocol; it just highlights the required properties. Currently, four protocols are under consideration, of which two are new and two are modifications of existing protocols (BGP and RSVP) 607 608 609 610.

A2.11.3.5.2 Label processing in MPLS

In MPLS, switches are generally called Label Switch Routers (LSRs). Ingress edge routers (or, more correctly, ingress edge LSRs) take care of attaching short, fixed-length labels to packets when they enter the MPLS domain. This includes the non-trivial task of determining to which FEC a given packet belongs. Within the core of the network forwarding is based on the label only, and before leaving the MPLS domain packets have their label removed by the egress edge LSR (see Figure 110).

607 Andersson, L., et al., "LDP Specification", Internet draft, draft-ietf-mpls-ldp-08.txt, work in progress, June 2000
608 Y. Rekhter, "Carrying Label Information in BGP-4", Internet Draft, draft-ietf-mpls-bgp4-mpls-04.txt, work in progress, January 2000
609 Jamoussi, B., et al., "Constraint-Based LSP Setup using LDP", Internet draft, draft-ietf-mpls-cr-ldp-03.txt, work in progress, September 1999
610 Braden, R., et al., "Resource ReSerVation Protocol (RSVP) -- Version 1 Functional Specification", RFC 2205, September 1997




Figure 110: The MPLS label is used only within one domain. By attaching different labels at the ingress LSR,<br />

different routes through the network for the same destination can be selected, which allows for traffic<br />

engineering.<br />

The labels are generally not kept constant along an LSP, and thus a path through the network is defined by a sequence of labels, all of which are assigned by the LDP. In the core switches only the labels are examined. What distinguishes this method from conventional IP routing is the loose coupling between the label and the destination address, as well as the lookup scheme within the switches themselves. The labels used by MPLS require an exact match in the lookup tables, which is a much simpler operation than the longest prefix match (LPM) used for ordinary IP routing: OSPF builds a routing table in each LSR, and based on this information (and possibly additional information) the label distribution protocol builds another table (the NHLFE table) in which the label is used as the key. The outcome of a table lookup is the outgoing port number and the outgoing label, which replaces the label contained within the packet before the packet is forwarded to the designated output port. This label replacement operation is usually called label swapping and is the most common packet modification operation in MPLS. In addition, when working with multiple domains in a network, the single label may be replaced by a stack of labels, with only the top label being used within one particular domain. At domain boundaries label swapping is insufficient and must be complemented by more complex operations such as label pushing and popping.
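The exact-match lookup and swap operation described above can be captured in a few lines; the table contents below are invented for illustration and do not correspond to any particular figure in this report.

```python
# Hypothetical NHLFE-style table: (incoming interface, incoming label) ->
# (outgoing interface, outgoing label). Exact match, no longest-prefix lookup.
nhlfe = {
    ("A", 2): ("D", 3),
    ("B", 5): ("C", 7),
    ("B", 9): ("D", 7),
}

def label_swap(in_if: str, packet: dict) -> tuple[str, dict]:
    """Swap the top label and return (outgoing interface, modified packet)."""
    out_if, out_label = nhlfe[(in_if, packet["labels"][-1])]
    packet["labels"][-1] = out_label          # swap: replace the top label
    return out_if, packet

def push(packet: dict, label: int) -> dict:   # used at domain boundaries
    packet["labels"].append(label); return packet

def pop(packet: dict) -> dict:
    packet["labels"].pop(); return packet

pkt = {"payload": "IP packet", "labels": [5]}
print(label_swap("B", pkt))   # ('C', {'payload': 'IP packet', 'labels': [7]})
```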

A number of schemes have been devised to simplify this label distribution scheme (e.g., 611) or even to avoid label distribution completely 612.

611 Ghani, "On IP-over-WDM integration", IEEE Communications Magazine, March 2000
612 H. Christiansen, T. Fjelde, H. Wessing, "Novel label processing schemes for MPLS", Optical Networks Magazine, November/December 2002




A2.11.3.5.3 MPLS assessment

One of the major benefits of the MPLS concept is its ability to perform traffic engineering, i.e., to control how traffic flows through the network, which is one of the prerequisites for providing QoS guarantees on connections. The other major advantage is protocol independence: to support transport of a new protocol, only edge devices need to be upgraded. This feature will make the transition to, e.g., IPv6 smoother.

A2.11.3.5.4 GMPLS

This section contains background information only and is more or less directly quoted from 613.

In regular MPLS, labels are represented as integers attached to IP packets in an additional shim header. However, a colour (i.e. a wavelength channel) can also be interpreted as a label. The concept of Generalised MPLS (GMPLS) therefore allows a label to be represented as an integer, a timeslot in a TDM frame, a wavelength or waveband on a fibre, a fibre in a cable, etc. The example in Figure 111 illustrates this principle.

Figure 111. GMPLS concept

The idea is to reuse the same protocol suite as adopted in regular MPLS to set up and tear down LSPs, in order to control switched rather than permanent connections through (optical) transport networks. Customers trigger the set-up and tear-down of switched connections, while the network operator manages permanent connections through its network management system. When the transport network is enhanced with a control plane, the network management system can trigger this control plane to set up or tear down so-called soft-permanent connections.
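The generalisation of the label can be pictured as a small tagged union: the same signalling machinery carries a label whose interpretation depends on the switching technology. The representation below is a conceptual sketch, not the GMPLS label encoding defined by the IETF.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class PacketLabel:      # regular MPLS: an integer in a shim header
    value: int

@dataclass
class TimeslotLabel:    # TDM: a timeslot within a frame
    slot: int

@dataclass
class WavelengthLabel:  # lambda switching: a wavelength on a fibre
    nm: float

@dataclass
class FibreLabel:       # fibre switching: a fibre within a cable
    fibre_index: int

GeneralisedLabel = Union[PacketLabel, TimeslotLabel, WavelengthLabel, FibreLabel]

def describe(label: GeneralisedLabel) -> str:
    """The control plane treats all of these uniformly as 'labels'."""
    return f"{type(label).__name__}: {label}"

for lab in (PacketLabel(5), TimeslotLabel(3), WavelengthLabel(1550.12), FibreLabel(7)):
    print(describe(lab))
```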

A2.11.3.5.5 GMPLS: routing and signalling protocols

This section contains background information only and is more or less directly quoted from 614.

Emerging requirements brought about by the growth of data traffic and the need for reconfigurability in optical networks have led to the introduction of GMPLS as the control plane solution for next-generation optical infrastructures. It allows the automated control of generalised Label Switched Paths closer to the optical layer than the management plane does.

613 DAVID deliverable D132, "Specification of atomic functions and study of control plane issues including management inter-working", IST-DAVID
614 DAVID deliverable D132, "Specification of atomic functions and study of control plane issues including management inter-working", IST-DAVID



Even though GMPLS is an extension of MPLS, the problem of integrating network control and management becomes more pronounced due to the different operational management concepts. Optical networks may contain entirely photonic, hybrid, and opaque network nodes. Nevertheless, distributed intelligence for fast and flexible optical bandwidth provisioning and efficient network utilisation remains the key area of investigation today.

As mentioned above, an important aspect of GMPLS is the reuse of the MPLS signalling protocols to set up and tear down LSPs. Mainly two signalling protocols for label distribution exist:

• Reference 615 specifies the Constraint-based Routed Label Distribution Protocol (CR-LDP), which is an extension of the Label Distribution Protocol (LDP) specified in 616.

• Reference 617 specifies the Resource reSerVation Protocol with Traffic Engineering extensions (RSVP-TE), which extends RSVP, specified in 618, with support for LSP tunnels.

It seems that the IETF is no longer willing to support (CR-)LDP, although 615 is dated fairly recently (January 2002). An important distinction between the two protocols is that RSVP is a soft-state protocol (it requires regular refresh messages to be exchanged), while (CR-)LDP is a hard-state protocol (the call, link, or path, once established, is not torn down until requested). Network operators prefer hard-state approaches because soft-state refresh overhead could introduce scalability problems. However, contributors to the IETF have taken steps to address this issue by adding extensions to the RSVP protocol; a possible solution for RSVP is to set a refresh time long enough that it effectively acts like hard state 619. It is important to note that both RSVP and (CR-)LDP allow the source to explicitly specify the transit nodes through which the LSP is to be set up.

Typically a routing protocol is also run in the network. In (G)MPLS networks the focus is on link-state routing protocols such as OSPF 620 or IS-IS. In the following we focus on the OSPF protocol. First of all, each router monitors the status of its incident links. This network status is then flooded throughout the whole network. By collecting all this status information in its Link-State Database, each router in the network is able to keep a complete and up-to-date view of the network. Based on this overview, each router is able to take a well-founded decision on how to route traffic across the network. Of course, this overview still allows populating traditional IP routing tables for hop-by-hop shortest path routing. For scalability reasons an overview of only a part of the network is kept; by introducing a routing hierarchy, other parts of the network are summarised. In a typical IP network, the Border Gateway Protocol (BGP) works between Autonomous Systems (ASs), while Interior Gateway Protocols (IGPs) work within single ASs. OSPF is an example of such an IGP. OSPF allows an additional routing hierarchy to be introduced by splitting the AS into multiple areas, connected to each other via a central backbone area.

GMPLS networks may consist of multiple network technologies. In other words, a GMPLS network may be a multi-layer network: an LSP in a lower layer may function as a logical link in a higher network layer. For this purpose, the link-state information also specifies the multiplexing/switching capability of the advertised (logical) link. This multiplexing/switching capability identifies where the LSP belongs in a predefined LSP hierarchy, which is currently still under standardisation 621.
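A simple way to picture such an LSP hierarchy is as an ordering of switching capabilities, where an LSP in a 'coarser' layer can serve as a link for LSPs of any 'finer' layer. The ordering below (packet, layer-2, TDM, lambda, fibre) follows the commonly used GMPLS hierarchy, but the helper itself is only an illustrative assumption and is not part of the draft cited above.

```python
# Switching capabilities ordered from finest (packet) to coarsest (fibre).
HIERARCHY = ["PSC", "L2SC", "TDM", "LSC", "FSC"]

def can_nest(inner: str, outer: str) -> bool:
    """An LSP switched at 'outer' capability can carry LSPs of any finer
    capability 'inner' (e.g. packet LSPs inside a wavelength LSP)."""
    return HIERARCHY.index(inner) < HIERARCHY.index(outer)

print(can_nest("PSC", "LSC"))   # True: packet LSPs ride inside a wavelength LSP
print(can_nest("LSC", "TDM"))   # False: a wavelength cannot nest inside a timeslot
```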

Although the Link Management Protocol (LMP) is considered to be part of GMPLS, it can also operate independently of other GMPLS protocols and be applied within the management plane (with a slightly different but related meaning). Therefore, we consider this protocol to be beyond the scope of this chapter and do not discuss it in more detail. We merely mention that it involves a set of procedures between adjacent nodes providing local services such as control channel management, link connectivity verification, link property correlation, and fault management.

615 RFC3212, 'Constraint-Based LSP Setup using LDP', IETF, January 2002
616 RFC3036, 'LDP Specification', ftp://ftp.isi.edu/in-notes/rfc3036.txt
617 RFC3209, 'RSVP-TE: Extensions to RSVP for LSP Tunnels', IETF
618 RFC2205, 'Resource ReSerVation Protocol (RSVP) - Version 1 Functional Specification', September 1997
619 L. Berger, et al., "RSVP Refresh Overhead Reduction Extensions", IETF RFC2961
620 J. Moy, "OSPF Version 2", RFC2328, April 1998, ftp://ftp.rfc-editor.org/in-notes/rfc2328.txt
621 K. Kompella et al., "LSP Hierarchy with Generalized MPLS TE", September 2002, Internet draft (work in progress), http://www.ietf.org/internet-drafts/draft-ietf-mpls-lsp-hierarchy-08.txt




A2.11.3.5.6 GMPLS: control plane models

This section contains background information only and is more or less directly quoted from 622.

The peer, overlay and augmented models have already been described. Table 16 summarises and compares the main characteristics of the three models.

Feature/Model: Peer, Augmented 623, Overlay

Scalability:
Peer: number of routing adjacencies O(N); scalability improved with link bundling and unnumbered links.
Augmented: IP level O(N), since each IP router is adjacent to the optical node to which it is attached rather than to the other devices; OTN level O(N²), since optical nodes store the IP routers' reachability information.
Overlay: number of routing adjacencies O(N²); IP level O(N²), since IP routers are logically adjacent to the rest of the IP routers; OTN level O(N), since optical nodes only store information about physically adjacent nodes.

Topology discovery:
Peer: IP routing sees the optical paths.
Augmented: IP routers do not see the physical topology; specific routing exchanges with the underlying transport network.
Overlay: complete separation between IP and OTN through the UNI/NNI interfaces.

Addressing:
Peer: common IP address space.
Augmented: IP address space in both IP and OTN.
Overlay: operator dependent.

Signalling:
Peer: single signalling in both IP and OTN.
Augmented: separate signalling protocols.
Overlay: separate signalling protocols.

Routing:
Peer: single instance.
Augmented: separate instances; separate areas (BGP or OSPF/IS-IS) between IP and OTN.
Overlay: separate instances; no exchange of routing information between IP and OTN.

Protection and restoration:
Peer: GMPLS restoration.
Augmented: separated mechanisms; no coordination between IP and OTN layers 624.
Overlay: separated mechanisms; no coordination between IP and OTN layers.

Management:
Peer: management integration for LSP provisioning supervision.
Augmented: separated network views and supervision.
Overlay: separated network views and supervision.

Table 16: Comparison of GMPLS control plane models

A2.11.3.5.7 GMPLS for controlling (advanced) optical networking technologies

There are at least two major issues when studying how to control such advanced optical networks. First of all, before the network can be controlled, it must be possible to properly represent its architecture. Secondly, in contrast to digital electronic network technologies, optical networking technologies are often (still) analogue, so the network control should take into account (and thus keep track of) possible physical impairments. The following subsections elaborate on these issues.

A2.11.3.5.8 MPLS based schemes (VPLS, Martini)

The Internet Engineering Task Force (IETF) is currently defining E-Line and E-LAN services running over MPLS. The E-Line service, denoted "draft Martini" after its author, uses MPLS connections as Pseudo Wires (PWs) to transport Ethernet frames. The PW concept is defined by the IETF; an Ethernet PW allows Ethernet protocol data units to be carried over, e.g., MPLS networks. The operation of MPLS Martini tunnels (PWs) is shown in Figure 112.

622 DAVID deliverable D132, "Specification of atomic functions and study of control plane issues including management inter-working", IST-DAVID
623 The augmented model has been described in some Internet drafts. A specific routing approach using BGP is considered; however, it should be noted that other routing approaches might be equally possible.
624 If electrical and optical layers use separate protection mechanisms then the optical mechanism should be faster and completely independent. The main reasons to avoid interactions between electrical and optical protection mechanisms are as follows: a) the optical layer should be able to serve transparently any kind of electrical layer; b) network operators should be able to decide if they need (or not) a protection mechanism in a certain layer; c) interaction between protection layers implies higher costs and lower flexibility for operators in order to choose the network equipment provider.



Figure 112: MPLS Pseudo Wire

The Customer Edge (CE1) transmits an Ethernet packet to CE2. The ingress Provider Edge (PE1) pushes a label<br />

(52) that indicates this specific PW. Then, the packets are tunnelled from PE1 to PE2 across the provider (P)<br />

routers P1 and P2. The tunnel is a label switched path with labels 5 and 13. The outer label is popped in the<br />

penultimate hop.<br />
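The two-level label stack described above can be traced in a few lines of code. The sketch follows the labels used in Figure 112 (PW label 52, tunnel labels 5 and 13) and is a conceptual walk-through, not a packet-level implementation.

```python
# Trace an Ethernet frame across the pseudo wire of Figure 112.
frame = {"payload": "Ethernet frame from CE1 to CE2", "labels": []}

frame["labels"].append(52)   # PE1 pushes the PW label identifying this pseudo wire
frame["labels"].append(5)    # PE1 pushes the tunnel label towards P1
print("PE1 -> P1:", frame["labels"])          # [52, 5]

frame["labels"][-1] = 13     # P1 swaps the outer (tunnel) label
print("P1 -> P2:", frame["labels"])           # [52, 13]

frame["labels"].pop()        # P2, the penultimate hop, pops the outer label
print("P2 -> PE2:", frame["labels"])          # [52]

pw = frame["labels"].pop()   # PE2 uses the PW label to find the egress port
print("PE2 delivers to CE2 via PW", pw)
```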

The IETF-defined Virtual Private LAN Service (VPLS) is an E-LAN service based on MPLS. It is an extension of the Martini draft, which is only point-to-point. With VPLS support, the MPLS network basically constitutes an emulated Ethernet bridge. Bridging functionality, such as MAC address learning, is performed in the PE routers; the core P routers do not contain any VPLS functionality. The following section describes in more detail the mechanisms required to establish a VPLS service.
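Since the PE routers behave as an emulated Ethernet bridge, their forwarding can be pictured as classic MAC learning: the source MAC address of each received frame is associated with the pseudo wire (or local port) it arrived on, and unknown destinations are flooded. The sketch below is a generic illustration of that bridging behaviour, not of any vendor's VPLS forwarding plane.

```python
class VplsPe:
    """Minimal emulated-bridge behaviour of a VPLS PE router."""
    def __init__(self, ports):
        self.ports = set(ports)          # local attachment circuits + pseudo wires
        self.mac_table = {}              # learned MAC address -> port

    def receive(self, in_port: str, src_mac: str, dst_mac: str) -> list[str]:
        """Learn the source, then forward: known unicast to one port,
        unknown destinations flooded to every other port."""
        self.mac_table[src_mac] = in_port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

pe = VplsPe(ports={"ce-port", "pw-to-PE2", "pw-to-PE3"})
print(pe.receive("ce-port", "aa:aa", "bb:bb"))    # unknown dst: flood both PWs
print(pe.receive("pw-to-PE2", "bb:bb", "aa:aa"))  # now known: ['ce-port']
```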

A2.11.3.5.9 VPLS signalling

The IETF has two different drafts on VPLS: VPLS-LDP and VPLS-BGP. The main difference lies in how circuits are established between the PE routers, as summarised in Table 17.


TYPE        DISCOVERY   SIGNALLING
VPLS-BGP    BGP         BGP
VPLS-LDP    -           LDP

Table 17: VPLS types

VPLS-LDP is basically an extension of Martini tunnels that also uses LDP for signalling. Tunnels must be specified manually between all the PE routers that participate in the VPLS instance. VPLS-BGP, on the other hand, uses the Border Gateway Protocol (BGP) for topology discovery and signalling: all PE routers are BGP peers in this case, and when, e.g., a new site joins a VLAN, BGP announces this information to the other PE routers. Furthermore, BGP announces the label used for that VLAN, so it is used for discovery and signalling at the same time. Tunnels between PE routers are typically established based on ordinary IP/MPLS procedures, e.g. OSPF and LDP.

There is a potential scalability problem with the full mesh of BGP connections. This problem can, however, be solved by introducing what is called a BGP Route Reflector (RR), as shown in Figure 113.


Figure 113: BGP Route Reflector (RR)

There are many similarities between VPLS and L3 MPLS/BGP VPNs, and one of the main benefits of using IP/MPLS is the broad range of services that can be offered. Furthermore, MPLS has support for traffic engineering, and schemes exist for fast recovery from failures.

A2.11.3.6 Redundancy and resiliency

Resiliency has become very important with the increasing reliability demands imposed on modern networks. One fundamental requirement for protection is the existence of two disjoint paths between source and destination.

The preferred topologies for resilient networks are rings, because despite their inherent simplicity they provide disjoint paths between any two nodes on the ring. In addition, it turns out that arbitrary topologies can be decomposed into rings, and thus the methodologies developed for ring networks can be applied to any network (see Figure 114).

The advantages of using rings are:<br />

• Easy protection. Many protection schemes already exist.<br />

• Simple topology<br />

• Any topology can be decomposed into rings (see Figure 114 below)<br />

Figure 114: An arbitrary topology can be decomposed into rings. This is one of the reasons why rings are so<br />

interesting. The red nodes indicate where bridges between the rings must be implemented 625<br />

625 T. E. Stern, K. Bala, "Multiwavelength Optical Networks – A Layered Approach", Addison Wesley Longman, 1999



However, ring networks are being replaced by mesh networks that can provide higher efficiency and robustness. The survivability of networks is achieved by providing backup paths. Two mechanisms are used to supply these backup paths: in protection, the paths are pre-computed and established prior to a failure; in dynamic restoration, by contrast, the backup paths are computed and established using remaining resources in the network only after a failure has occurred.

The evaluation criteria for protection and restoration methods are:<br />

• Reliability<br />

• Robustness<br />

• Resource usage<br />

• Recovery time<br />

Traditionally, optical networks have been controlled in a centralized structure, but more dynamic traffic patterns are creating the need for distributed and highly flexible control mechanisms, where the network can be self-organizing by setting up and tearing down connections as they are required. In order to recover from a network failure in a distributed control plane, the failure first needs to be discovered, and a backup path with sufficient resources has to be allocated before the traffic can be switched to the identified backup path.

In the field of survivable networks, researchers and industry are currently focusing on the following areas:

• Fault detection schemes<br />

• Algorithms for survivable routing and resource allocation<br />

• Analysis and optimal design of network topologies<br />

• Integration of quality demands in survivable networks<br />

• Analysis of reliability and availability<br />

• IP-centric control and GMPLS mechanisms<br />

<strong>A2.</strong>11.3.6.1 Protection and restoration<br />

Protection refers to the concept of switching traffic from broken to alternative, pre-planned routes. Protection should be fast in order to minimize data loss. The price for this rapid protection is often the relatively high capacity required to accomplish it. To remedy this problem, protection is usually accompanied by restoration, which is a longer-term, dynamic rerouting of the connections, i.e., traffic streams in the entire network are reorganized to increase the overall utilisation of the network. Figure 115 depicts two basic ways of carrying out protection: link protection, which is the simplest to manage but requires the most capacity, and path protection.

The resources needed for protection can be allocated in a number of ways. Figure 116 depicts the 1+1, 1:1 and 1:N protection schemes, respectively. 1+1 protection requires double capacity because the traffic is always – also in the absence of failures – sent along both the working and the protection path. With 1:1 protection, the protection path is taken into use only in case of failure, i.e., it can otherwise be used for, e.g., low-priority traffic. 1:N protection is a generalised case of 1:1 protection in which N working paths share one protection path.
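The capacity trade-off between these schemes can be illustrated with a small sketch (not part of the original text): 1+1 and 1:1 reserve one protection path per working path, while 1:N shares one protection path among N working paths.

```python
# Illustrative sketch (not from the report) of the capacity trade-off between the
# protection schemes described above. For a set of working paths, 1+1 and 1:1 both
# reserve one protection path per working path (1+1 additionally transmits on it at
# all times), while 1:N lets N working paths share a single protection path.

def protection_paths(n_working: int, scheme: str, n: int = 1) -> int:
    """Number of protection paths reserved for n_working working paths."""
    if scheme in ("1+1", "1:1"):
        return n_working            # dedicated protection path per working path
    if scheme == "1:N":
        return -(-n_working // n)   # one shared protection path per group of N (rounded up)
    raise ValueError(f"unknown scheme: {scheme}")

for scheme in ("1+1", "1:1"):
    print(scheme, protection_paths(10, scheme))
print("1:N (N=5)", protection_paths(10, "1:N", n=5))   # 10 working paths -> 2 shared protection paths
```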

Protection can be carried out at any (also logical) layer in a layered topology, e.g. at the IP level or at the optical level. Within the optical layer, protection can be accomplished in a number of ways: either on a per-link (point-to-point) basis or, alternatively, with more sophisticated methods that exploit the underlying network's topology. One obvious example is using an underlying ring topology, but more advanced schemes that use ring covers of arbitrary topologies can also be used. A brief overview of these protection schemes is provided below. Figure 117 shows an overview of various protection schemes.

<strong>A2.</strong>11.3.6.2 Physical layer protection<br />

As indicated in Figure 117, a vast number of protection schemes exists. A brief overview of a number of physical layer protection schemes is provided in the following section.

Dedicated path protection (DPP)<br />

Dedicated path protection, which is typically applied to mesh networks, protects entire paths through the network. DPP can be fast (1+1 DPP just requires that the receiving node switch to the protection light path) and robust in the case of multiple faults (provided they do not simultaneously affect lines belonging to the working and protection light paths of the same connection).


Figure 115: Link and path protection<br />

Figure 116: Basic protection schemes<br />


Figure 117: Overview of protection schemes with various granularities (diagram: IP-layer schemes (IP routing and MPLS LSP protection) and optical-layer schemes at OCh and OMS level; dedicated schemes such as DPP (1+1/1:1 path), DLP (1+1/1:1 line), DP-WSHR (OUPSR, OCh/DPRing) and OULSR; shared schemes such as SPP (1:N path), SLP (1:N line), SP-WSHR (OBPSR, OCh/SPRing) and SL-WSHR (2F-BLSR, 4F-BLSR, OMS/SPRing); at connection, lightpath and lightpath-bundle granularity)

Dedicated line protection (DLP)<br />

Dedicated line protection protects each link in the network individually. It reserves protection wavelengths for<br />

each line utilized by working light paths. Similar to OCh DPP, DLP may require more spare capacity allocation<br />

than other schemes, but can be even faster than DPP. Indeed, in the case of 1:1 protection, the restoration<br />

completion time is faster than 1:1 DPP because signalling is confined within the area around the faulty line.<br />

However, this comes at the cost of extra capacity required.<br />

Dedicated-path-switched WSHR (DP-WSHR)<br />

This is also called optical unidirectional path-switched ring (OUPSR) or OCh dedicated protection ring (OCh/DPRing); it is the DPP equivalent for ring networks. DP-WSHR restoration is fast (on the order of milliseconds or a fraction of a millisecond).

Optical unidirectional line-switched ring (OULSR)<br />

The OULSR approach is similar to DP-WSHR. It utilizes two counter-rotating fibres, one for working light<br />

paths and the other for protection light paths. The difference is that, in the OULSR case, all the light paths<br />

passing through the failed line are jointly switched over the protection fibre. With respect to DP-WSHR, this<br />

scheme requires the same number of wavelengths, but requires less expensive devices while achieving similar<br />

restoration times.<br />

Shared-path protection (SPP)<br />

In shared-path protection, protection wavelengths are shared by a number of line- and node-disjoint working

light paths and thus SPP achieves more efficient utilization of spare resources than DPP, at the price of more<br />

complex control and longer restoration times (on the order of 100 ms). In addition, if two or more faulty<br />

connections share the same protection wavelengths, only one can be recovered. This scheme is also termed 1:N<br />

path protection.



Shared-line protection (SLP)<br />

SLP applies the SPP technique locally to the faulty line. In this case, it is possible to make use of shared<br />

protection resources among different failure scenarios, thus yielding better resource utilization than achieved<br />

with DLP. The restoration completion time is generally faster than that of SPP because of locally limited<br />

signalling. This scheme is also termed 1:N line protection.

Shared-path WSHR (SP-WSHR)<br />

SP-WSHR - also termed optical bi-directional path-switched ring (OBPSR) or OCh shared protection ring<br />

(OCh/SPRing) — represents the SPP equivalent in ring networks. This scheme’s peculiarity is non-loopback<br />

switching. In SP-WSHR, in case of failure, each working light path is switched to the protection light path at its<br />

source node. Therefore, the recovered traffic reaches the destination node only along the protection light path.<br />

SP-WSHR is the most efficient among the WSHR protection techniques in terms of spare resource utilization,

but it requires complex control and signalling. Restoration time may also be affected for the same reason.<br />

Bi-directional shared line-switched WSHR (SL-WSHR)<br />

SL-WSHR is physically implemented with either two fibres (optical two fibre/ BLSR, O-2F/BLSR) or four<br />

fibres (optical four-fibre/ BLSR, O-4F/BLSR). In either case, working light paths and protection wavelengths<br />

may be carried using both directions of propagation. Its peculiarity is loopback switching. Upon failure, the<br />

working light paths are switched, at one failure end, to the protection wavelengths of the counter-rotating fibre.<br />

When they reach the other end of the failing span they are looped back along their original working wavelengths<br />

to reach their destination nodes. For each direction of propagation the number of protection wavelengths is<br />

determined by the largest number of working light paths in any line flowing in the opposite direction (defined as<br />

the ring load). The SL-WSHR scheme is simple and fast (switching completion time on the order of tens of

milliseconds) because the switching mechanism is promptly activated upon fault detection without requiring any<br />

further signalling.<br />

<strong>A2.</strong>11.3.6.3 Higher layer protection<br />

As described previously, protocols running above the physical layer (e.g., SDH or IP) have their own view of<br />

the network’s topology. Hence, protection/restoration can be carried out in these layers independently of the<br />

physical layer protection.<br />

A number of SDH protection schemes exist. Equipment is available and commonly used. Moreover SDH<br />

protection is fast and fully transparent to higher layer protocols.<br />

IP routing protocols have built-in recovery schemes, i.e., they will automatically reroute around failed<br />

links/nodes. However, due to the distributed nature of the IP routing protocols (OSPF, BGP) their convergence<br />

time is slow and thus yields slow recovery times.<br />

The overall problem in higher layer protection is to select in which layer the protection should be performed. In the lower layers, only coarse-grained protection can be performed, because at these layers traffic flows in high-capacity trunks. This yields fewer flows to protect and thus simpler control. On the other hand, it cannot be precisely defined which traffic should be protected, because a fraction of a traffic trunk cannot be protected. The

solution is to perform protection in higher layers, which will give a better resource utilization. This, however,<br />

comes at the expense of the increased burden of keeping track of the much larger number of flows. When<br />

protecting a lower layer, all higher layers are automatically protected.<br />

<strong>A2.</strong>11.3.6.4 Multi layer survivability<br />

As seen in the previous sections, a number of protection methods exist at various levels. In principle, schemes from all layers can be applied simultaneously; this is called multi-layer survivability. Multi-layer survivability takes the best from each scheme and thus achieves the optimum in terms of protection and resource utilization. However, this is rather complicated, requires inter-vendor interoperability, and is thus rarely seen in practice.


<strong>A2.</strong>11.3.6.5 MPLS protection - Local repair 626<br />

Local repair is the MPLS recovery method that has drawn most attention recently. It may involve restoration or protection switching implemented with a local scope. In general, the alternative path may be either chosen by means of routing protocols or configured manually. Often the implementation is such that the backup path is already pre-configured and the only remaining step is to map the traffic to the new labels associated with the backup path interface. High speed is then achieved because no notification is required: the switchover takes place at the point of fault detection.

In many conditions, rerouting is recommended for the local restoration case. This is mainly due to utilization efficiency, economic reasons and the manual configuration burden of the local repair protection switching approach. Otherwise, practical local protection approaches can be used when some vulnerable segments of the network require protection (e.g. a particular AS). There is still the concern of the redundant resources involved, but it may be sensible for some demanding scenarios. Another suggestion for increasing efficiency is extending the local scope to a segment approach.

<strong>A2.</strong>11.3.6.6 Link repair<br />

There are two local repair implementation schemes. One of them is link repair, providing recovery for a vulnerable physical link. The failure is bypassed by a virtual tunnel so that there is minimal service disruption. In the case of protection switching, the required backup capacity is a large cost to bear for each LSP on every link, and there remain concerns regarding the troublesome configuration. Moreover, only the link is protected – not an entire node.

Figure 118: Link local repair<br />

In link repair, due to label space requirements, the tunnelling is implemented by means of label stacking. The two nodes interconnected by the failed link (the repair points) do not need to change the labelling for the protected traffic. They perform the mapping as it would have been done normally, and additionally they handle (push or pop) one more label on the stack. Obviously the label-to-interface mapping is changed for tunnelled traffic (the tunnel enters the node via a different port); this can easily be handled by using a global (per-platform) label space instead of a per-interface label space. The link repair labelling scheme is shown in Figure 118. Node C redirects traffic into a tunnel (chosen manually in the protection switching case or by means of protocols in the restoration case), swapping the label into 6 as usual and additionally pushing a new label on top of the original one. Within the tunnel, the outer labelling is kept completely independent. Node D will receive a packet with label 6 – just as expected under normal conditions. When a global label space is used, the nodes do not care which interface is used.
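A small, illustrative sketch of this label stacking behaviour is given below (not from the report; the bypass-tunnel label 100 and the incoming label 17 are invented, while label 6 follows Figure 118): node C keeps its normal swap to label 6 and simply pushes a tunnel label on top when the link towards D is down, so node D always receives label 6.

```python
# Illustrative sketch (not from the report) of the label stacking used for link repair.

BYPASS_TUNNEL_LABEL = 100   # hypothetical label of the bypass tunnel around the failed link

def node_c_redirect(stack: list, link_failed: bool) -> list:
    """Node C: normal swap to 6 for the protected LSP, plus a pushed tunnel label if C-D is down."""
    stack = [6] + stack[1:]                     # ordinary swap for the protected LSP
    if link_failed:
        stack = [BYPASS_TUNNEL_LABEL] + stack   # label stacking: tunnel label on top
    return stack

def bypass_tunnel_egress(stack: list) -> list:
    """Last node of the bypass tunnel pops the tunnel label before delivery to D."""
    assert stack[0] == BYPASS_TUNNEL_LABEL
    return stack[1:]

def node_d_receive(stack: list) -> None:
    assert stack[0] == 6, "node D expects label 6, with or without link repair"
    print("node D sees label stack:", stack)

node_d_receive(node_c_redirect([17], link_failed=False))
node_d_receive(bypass_tunnel_egress(node_c_redirect([17], link_failed=True)))
```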

626 The term 'local repair' is also referred to in the literature as local recovery, fast reroute protection or fast rerouting; probably due to simplifications in terminology, many terms in this area are used inconsistently.


<strong>A2.</strong>11.3.6.7 Node repair<br />

To eliminate the risk of an entire device failure, there is a fast reroute scheme called node repair. Here, the labelling is more complex: even with label stacking, one label must be preserved – the one that is expected by the next-hop node following the failed device. This way the labelling order of the primary path is maintained for the tunnelled, protected part, which means that the failure is invisible to nodes other than those adjacent to the failed one. A major concern is matching the internal labelling for the backup paths. Such label management can be achieved with the RSVP-TE extensions providing the Record Route object, which passes information about the LSP's hops upstream within the RESV message. Thus, the upstream node knows all the downstream hops and allocated labels.

Figure 119: Node local repair<br />

The situation is pictured in Figure 119. When node D fails, LSR C switches the traffic, adding a label on the stack. Node C cannot perform the normal label swapping, since node E expects label 42 (negotiated with D) instead of 6. LSR C by itself is not capable of handling the situation; it must be informed by the protocols about the label that is expected by node E. Then, tunnelling can proceed in the independent fashion.

Due to the costly redundancy, node protection is probably best suited for those nodes and LSPs that really require it. Assuming full node protection for N attached bi-directional links, N*(N-1) aggregated unidirectional LSPs may exist within the node, and assuming that each LSP requires protection, at least N*(N-1) backup paths would be necessary.
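The N*(N-1) figure can be reproduced with a trivial calculation (an illustrative sketch, not part of the report):

```python
# Illustrative sketch (not from the report) of the N*(N-1) argument above: a node with N
# attached bi-directional links can carry one aggregated unidirectional LSP per ordered
# pair of links, and protecting each of them requires at least one backup path.

def transit_lsps(n_links: int) -> int:
    """Aggregated unidirectional LSPs that may cross a node with n_links bi-directional links."""
    return n_links * (n_links - 1)

for n in (3, 4, 8):
    print(f"N={n}: up to {transit_lsps(n)} transit LSPs, hence at least {transit_lsps(n)} backup paths")
```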

<strong>A2.</strong>11.3.6.8 Detours<br />

The innovative detour scheme arranges protection paths for a given part of the network, so that eventually each node or link has some means of backup. It is then useful and efficient to configure the backup paths so that they simultaneously cover both a link and a node on the working path; they may also provide recovery for many LSPs sharing the links. Such an approach has been proposed in a draft referred to as Automatic protection using Detours. It requires extensions to RSVP-TE in order to let the protocol signal protection detours automatically once an LSP is established or the topology changes. This approach involves more intelligent computations and knowledge distributed among the nodes participating in the recovery. The Record Route object (RRO) should carry additional TE information, so that each protection link has well-managed and economized resources reserved. The proposed configuration establishes detours around every router on the path and around the link between the penultimate and the egress router. The case is illustrated in Figure 120.

Given these prerequisites, RSVP-TE should signal the detour paths within PATH message objects: RRO, FastReroute and a new one, Detour. This method also involves appropriate label allocation issues and is adequate only for unidirectional paths. There remains the question of the idle capacity involved in the detours.


<strong>A2.</strong>11.3.6.9 Further standardization efforts<br />

There are multiple standard principles for providing resiliency with a specific network technology. However, the heterogeneity of future networks makes the issue extremely complex. Lately, the Common Control and Measurement Plane (CCAMP) working group within the IETF has developed a framework for standards regarding recovery schemes and signalling extensions that would support link and path recovery. Their focus is on providing resiliency for heterogeneous networks that could be united under a common GMPLS control plane.

Figure 120. Local repair using detours<br />

<strong>A2.</strong>11.3.6.10 Protection and restoration in GMPLS<br />

It is believed that GMPLS can enable network recovery schemes that are more cost effective than today's 1+1 and ring configurations. GMPLS offers both end-to-end path-level and local span-level repair capabilities, and it presents many enhancements to MPLS recovery techniques. Among many others, these involve an improved notification process, with the possibility of signalling the fault to a particular node on the LSP.

GMPLS recovery remains a huge topic that is out of the scope of this report. Since GMPLS standardization efforts are still work in progress, the real applicability of the standard is an open question. The breakthrough that can help further practical GMPLS development is the maturing of MPLS technology and the stabilization and consolidation of optical research.

<strong>A2.</strong>11.3.7 SONET/SDH<br />

Many carriers have already spent huge amounts of money building SONET or SDH infrastructures, and this has

paved the way for next generation SONET, which is more optimized for data transport. The Generic Framing<br />

Procedure (GFP) together with Virtual Concatenation (VCAT) and Link Capacity Adjustment Scheme (LCAS)<br />

has introduced more efficient ways of transporting data in SONET based transport networks. The following<br />

sections will discuss these next generation components and show possible Ethernet over SONET (EoS) network<br />

scenarios.<br />


<strong>A2.</strong>11.3.7.1 Generic Framing Procedure<br />

GFP provides a generic mechanism to adapt traffic from higher-layer client signals for carriage over a transport network. Client signals may be PDU-oriented (such as IP/PPP or Ethernet MAC) or a block-code oriented constant bit rate stream (such as Fibre Channel or ESCON/SBCON). GFP uses a variation of the HEC-based frame delineation mechanism defined for Asynchronous Transfer Mode (ATM) 627 . Two kinds of GFP frames are defined: GFP client frames and GFP control frames.

The GFP specification consists of both common and client-specific aspects. Common aspects of GFP apply to all GFP-adapted traffic and are specified in clause 6 of the GFP Recommendation; client-specific aspects are specified in clauses 7 and 8. Currently, two modes of client signal adaptation are defined for GFP.

• A PDU-oriented adaptation mode, referred to as Frame-Mapped GFP (GFP-F).<br />

• A block-code oriented adaptation mode, referred to as Transparent GFP (GFP-T).<br />

Figure 121 illustrates the relationship between the higher-layer client signals, GFP, and its transport paths.<br />

Figure 121: GFP relationship to client signals and transport paths (diagram: client signals such as Ethernet and IP/PPP are mapped via the GFP client-specific aspects (payload dependent) and the GFP common aspects (payload independent) onto SDH VC-n paths, OTN ODUk paths or other octet-synchronous paths; source: ITU-T G.7041/Y.1303)

In the Frame-Mapped adaptation mode, the Client/GFP adaptation function may operate at the data link layer (or<br />

higher layer) of the client signal. Client PDU visibility is required. This visibility is obtained when the client<br />

PDUs are received from either the data layer network (e.g., IP router fabric or Ethernet switch fabric), or e.g., a<br />

bridge, switch or router function in a transport network element (TNE). In the latter case, the client PDUs are<br />

received via, e.g., an Ethernet interface.<br />

For the Transparent adaptation mode, the Client/GFP adaptation function operates on the coded character<br />

stream, rather than on the incoming client PDUs. Thus, processing of the incoming codeword space for the<br />

client signal is required.<br />

Considerable bandwidth savings can be obtained when transporting Ethernet frames using GFP-F compared to GFP-T. However, GFP-F terminates Ethernet control frames such as Bridge Protocol Data Units (BPDUs), whereas a frame-based approach that carries all frames, including control frames, is desirable. Steps are being taken to create the necessary improvements.
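As a rough illustration of why GFP-F is bandwidth efficient for Ethernet transport, the sketch below computes the per-frame encapsulation efficiency (this is not part of the report; the header sizes used, a 4-byte core header, a 4-byte payload header and an optional 4-byte payload FCS, are commonly cited GFP-F values and should be verified against ITU-T G.7041).

```python
# Rough sketch (not from the report) of GFP-F encapsulation efficiency: GFP-F adds only a
# small, fixed per-frame overhead to each Ethernet frame. Header sizes are assumptions.

CORE_HEADER = 4       # PLI + cHEC (assumed)
PAYLOAD_HEADER = 4    # type + tHEC (assumed)
PAYLOAD_FCS = 4       # optional payload FCS (assumed)

def gfp_f_efficiency(frame_len: int, use_fcs: bool = False) -> float:
    """Fraction of transmitted GFP-F bytes that are client (Ethernet) bytes."""
    overhead = CORE_HEADER + PAYLOAD_HEADER + (PAYLOAD_FCS if use_fcs else 0)
    return frame_len / (frame_len + overhead)

for length in (64, 512, 1518):
    print(f"{length}-byte frame: {gfp_f_efficiency(length):.1%} efficient")
```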

<strong>A2.</strong>11.3.7.2 Virtual Concatenation.<br />

Two methods for concatenation are defined: contiguous and virtual concatenation. Both methods provide<br />

concatenated bandwidth of X times container-N at the path termination. The difference is the transport between<br />

the path terminations. Contiguous concatenation maintains the contiguous bandwidth throughout the whole<br />

transport, while virtual concatenation breaks the contiguous bandwidth into individual VCs, transports the<br />

individual VCs and recombines these VCs to a contiguous bandwidth at the end point of the transmission.<br />

Virtual concatenation requires concatenation functionality only at the path termination equipment, while<br />

contiguous concatenation requires concatenation functionality at each network element.<br />

627 ITU-T Rec. I.432.1



As an example, consider contiguous concatenation of VC-4s. The possible combinations are denoted VC-4-Xc, with X = 4, 16, 64, 256. Thus, the bandwidth granularity is quite coarse; this problem is basically solved with the introduction of virtual concatenation in next generation SONET. In the case of virtual concatenation of, e.g., VC-4s (VC-4-Xv, X = 1, 2, 3, …, 256), a much finer granularity in bandwidth assignments is possible.

Table 18 shows bandwidth efficiencies for Ethernet over SONET/SDH both with and without virtual concatenation. Where virtual concatenation is not available, contiguous concatenation might be used instead, as in the case of Gigabit Ethernet, but this still leads to lower efficiency.

ETHERNET BANDWIDTH | SONET/SDH - WITHOUT VIRTUAL CONCATENATION | SONET/SDH - WITH VIRTUAL CONCATENATION
Ethernet 10 Mb/s | STS-1/VC-3 (21%) | VT1.5-7v/VC-11-7v (89%)
Fast Ethernet 100 Mbit/s | STS-3/VC-4 (67%) | VT1.5-64v/VC-11-64v (98%)
Gigabit Ethernet 1000 Mb/s | STS-48c/VC-4-16c (42%) | STS-3c-7v/VC-4-7v (95%) or STS-1-21v/VC-3-21v (98%)

Table 18: Bandwidth efficiencies with virtual concatenation
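The efficiencies in Table 18 can be reproduced with a simple calculation (an illustrative sketch, not part of the report; the payload capacities used are approximate standard C-n/VT payload rates and are assumptions of the sketch rather than figures quoted above).

```python
# Sketch (not from the report) reproducing the bandwidth efficiencies in Table 18.
# Payload capacities in Mbit/s are approximate, assumed values.

PAYLOAD_MBPS = {"VC-11/VT1.5": 1.600, "VC-3/STS-1": 48.384, "VC-4/STS-3c": 149.76}

def efficiency(client_mbps: float, container: str, x: int = 1) -> float:
    """Client rate divided by the (virtually or contiguously) concatenated payload rate."""
    return client_mbps / (x * PAYLOAD_MBPS[container])

print(f"10M over VC-3:        {efficiency(10, 'VC-3/STS-1'):.0%}")        # ~21%
print(f"10M over VC-11-7v:    {efficiency(10, 'VC-11/VT1.5', 7):.0%}")    # ~89%
print(f"100M over VC-4:       {efficiency(100, 'VC-4/STS-3c'):.0%}")      # ~67%
print(f"1000M over VC-4-16c:  {efficiency(1000, 'VC-4/STS-3c', 16):.0%}") # ~42%
print(f"1000M over VC-4-7v:   {efficiency(1000, 'VC-4/STS-3c', 7):.0%}")  # ~95%
print(f"1000M over VC-3-21v:  {efficiency(1000, 'VC-3/STS-1', 21):.0%}")  # ~98%
```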

<strong>A2.</strong>11.3.7.3 Link Capacity Adjustment Scheme<br />

The LCAS Recommendation specifies a link capacity adjustment scheme that should be used to increase or<br />

decrease the capacity of a container that is transported in an SDH/SONET network using Virtual Concatenation.<br />

LCAS in the virtual concatenation source and sink adaptation functions provides a control mechanism to hitlessly

increase or decrease the capacity of a VCG link to meet the bandwidth needs of the application. It also provides<br />

the capability of temporarily removing member links that have experienced a failure. This is useful if containers<br />

in the virtually concatenated group are routed along diverse paths.<br />

<strong>A2.</strong>11.3.7.4 EoS Location and options<br />

The operator might choose from several options for EoS scenarios with different locations of the EoS interface.

The interface between Ethernet and SONET can be located in either the Ethernet switching equipment or in the<br />

transport equipment. The two options are illustrated in Figure 122.<br />

Figure 122: EoS service location examples (diagram: two ADMs interconnected by a SONET/SDH link, with Ethernet links on the client side; in option a) the EoS interface is located in the Ethernet switching equipment, in option b) in the ADM)

The first option, shown in Figure 122a, has the EoS interface located in the Ethernet switching equipment. In this case, the transport equipment does not have to deal with mapping the Ethernet frames carried in the SONET/SDH payload. This option integrates Ethernet switching, EoS and Virtual Concatenation in one box, completely decoupled from the transport network. Data and transport are managed separately and can be owned by different companies.



Figure 122b shows the second option, with the EoS interface located in the Add Drop Multiplexer (ADM). This approach integrates all EoS functionality into the transport equipment, and data and transport can be managed

from the same system, which might reduce operational costs. From an operator perspective, the drawback might<br />

be that data and transport must be delivered from the same vendor, reducing competition.<br />

<strong>A2.</strong>11.3.8 Summary<br />

Evolution in network control and management technology is constantly producing new technology-specific<br />

models and operational schemes. Stated simply, the network management challenge for a GMPLS network is to<br />

devise a management mechanism in line with the real-time dynamic features of the control plane and without<br />

introducing unnecessary overlapping in control-management functionality.<br />

By itself, a common control plane does not fully meet all provisioning needs. Since the network must maintain<br />

its service level, a provisioned path needs to be inventoried for further management operations, including<br />

monitoring and accounting. It is evident that the NMS operations must be governed by a set of policies, which<br />

activates information flow between the two planes' operations and keeps them coherent at any point during the

lifetime of a provisioned network resource. In other words, to enable simple and automated service provisioning,<br />

the NMS must work in conjunction with the GMPLS-based control plane, but in a hierarchical relationship.<br />

There is a drive in the telecom community toward CP solutions based on IP and MPLS protocols. The notion of<br />

Label Switched Path (LSP) from MPLS is generalised in Generalised MPLS (GMPLS) 628 , as specified by IETF,<br />

where a (generalised) LSP can be any kind of circuit- or connection-oriented path. The LSP label does not have<br />

to be explicit in terms of a label in a header; rather a time-slot or an optical wavelength can also be used as (part<br />

of) a label. Thus GMPLS can also be used for paths or connections based on Layer 2 and/or Layer 1 transport<br />

and switching technologies. Hence GMPLS supports the vision of a dynamic and flexible multi-layer transport network.
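The notion of a generalised label can be illustrated with a small sketch (not part of the report; the class names and example values are invented): the same LSP abstraction can be keyed by a packet label, a time-slot or a wavelength.

```python
# Small sketch (not from the report) of the idea that a generalised LSP label need not be
# an explicit header field: it can equally identify a time-slot or a wavelength.

from dataclasses import dataclass
from typing import Union

@dataclass
class PacketLabel:          # classic MPLS shim label
    value: int

@dataclass
class TimeslotLabel:        # e.g. an SDH/SONET time-slot on a port
    port: str
    timeslot: int

@dataclass
class WavelengthLabel:      # e.g. a lambda in a WDM system
    port: str
    wavelength_nm: float

GeneralizedLabel = Union[PacketLabel, TimeslotLabel, WavelengthLabel]

def describe(label: GeneralizedLabel) -> str:
    return f"{type(label).__name__}: {label}"

for lbl in (PacketLabel(52), TimeslotLabel("stm16-1", 7), WavelengthLabel("wdm-3", 1552.52)):
    print(describe(lbl))
```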

<strong>A2.</strong>11.4 Application level signalling<br />

Traditionally, telecom signaling schemes have been based on a centralized model, where the intelligence and call control reside in a network entity rather than in the communication end-points. This paradigm has been extended to VoIP applications with the MGCP (Media Gateway Control Protocol) protocol and its variants (it is used, for example, in the IPCABLECOM architecture, where it is called Network Control signaling). A media gateway controller takes care of application signaling and transmits simple commands to the user terminal. Extensions of MGCP could be imagined for multimedia applications; however, it does not scale very well, SIP and H.323 being better alternatives.

Currently there is a lot of work going on regarding combining networks, or at least bringing features from one network to another. The converging networks are mobile networks, the telephony networks (mainly IN services) and packet-switched networks (primarily, of course, the ubiquitous Internet). For instance, there are ongoing efforts to port IN services from the telephony network to the Internet, because they tend to be very useful when doing Internet telephony (e.g. VoIP).

<strong>A2.</strong>11.4.1 Common issues with signaling<br />

It is likely that a variety of application signaling protocols will coexist, based on centralized or decentralized paradigms. Information such as session set-up, release, control (in streaming applications), session description and security information is generally conveyed by these protocols. More than standardizing a unique protocol, the issue may rather be to define a unique description for this information, so that it can be conveyed between the layers and the network entities.

<strong>A2.</strong>11.4.2 Services, migration and interconnection to legacy networks<br />

Today, the signaling system of choice in the telephone network is SS7. SS7 together with IN services provides advanced features, which are desirable in future IP-based networks as well. A number of proposals exist for how to integrate existing signaling and IN services with other networks.

628 E. Mannie et al., “Generalized Multi-Protocol Label Switching Architecture”, IETF, Internet Draft, May 2003. Work in progress.<br />

http://www.ietf.org/internet-drafts/draft-ietf-ccamp-gmpls-architecture-07.txt<br />



<strong>A2.</strong>11.4.3 SIP<br />


Figure 123: Network convergence.<br />

The Session Initiation Protocol (SIP) is an application-layer control signaling protocol for creating, modifying and terminating sessions with one or more participants. SIP has attracted a lot of attention because of its simplicity and its ability to support rapid introduction of new services, mainly due to its URL-based addressing scheme and its support of MIME types of data. There are two major architectural elements in SIP: the server and the User Agent (UA). Three different server types exist: a redirect server, a proxy server and a registrar server. The UA resides at the SIP end station and contains two components: a User Agent Client (UAC), which is responsible for issuing SIP requests, and a User Agent Server (UAS), which responds to such requests. It is in the realm of SIP where most of the discussions on IN services in VoIP networks and on interoperability with legacy SS7-IN are happening. The following paragraphs briefly comment on them.
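To make the UAC/UAS roles concrete, the following illustrative sketch (not part of the report) builds a minimal SIP INVITE request as a UAC might issue it; all addresses, tags and branch values are invented, and a real implementation must of course follow RFC 3261.

```python
# Illustrative sketch (not from the report) of a minimal SIP INVITE request built by a UAC.
# Addresses, tags and branch values are invented; a real INVITE would also carry an SDP body.

def build_invite(caller: str, callee: str, call_id: str) -> str:
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.org;branch=z9hG4bK-example",
        "Max-Forwards: 70",
        f"From: <sip:{caller}>;tag=1234",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        f"Contact: <sip:{caller}>",
        "Content-Length: 0",   # no SDP body in this simplified sketch
        "",
        "",
    ]
    return "\r\n".join(lines)

print(build_invite("alice@example.org", "bob@example.net", "a84b4c76e66710"))
```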

SIP for Telephones (SIP-T) focuses on how SIP should be used to provide ISUP transparency across PSTN-IP interconnections. This is achieved using both translation and encapsulation of ISUP messages into SIP messages. At SIP-ISUP gateways, SS7 ISUP messages are encapsulated within SIP; since intermediaries such as proxy servers that make routing decisions for SIP requests cannot be expected to interpret the encapsulated ISUP, the relevant information is also translated into SIP headers so that the requests can be routed.

<strong>A2.</strong>11.4.4 H.323<br />

Initially targeted at multimedia conferences over LANs that do not provide guaranteed quality of service (QoS), ITU-T H.323 has evolved towards the MAN and WAN environments. A typical H.323 network is composed of a number of zones interconnected via a WAN. Each zone consists of a gatekeeper (GK), a number of terminal endpoints (TE) and a number of multipoint control units (MCU) interconnected by a LAN or MAN. The GK is an H.323 entity providing address translation and controlling access to the network for the rest of the elements. The MCU is an endpoint providing the capabilities for multipoint conferences. H.323 is not a single protocol but an umbrella covering a wide range of protocols.

<strong>A2.</strong>11.4.5 PINT & SPIRITS<br />

The Services in the PSTN/IN Requesting Internet Services (SPIRITS) Working Group of the IETF Transport Area addresses how services supported by IP network entities can be started from IN (Intelligent Network) requests, as well as the protocol arrangements through which the PSTN can request actions to be carried out in the IP network in response to events (IN triggers) occurring within the PSTN/IN. The SPIRITS architecture was born as a response to a previous work called PSTN/Internet Internetworking (PINT), which supports services initiated in the reverse direction, from the Internet to the PSTN.

<strong>A2.</strong>11.4.6 MGCP<br />

As described at the beginning of this section, MGCP (Media Gateway Control Protocol) and its variants follow the traditional centralized telecom signaling model: the intelligence and call control reside in a network entity rather than in the communication end-points, with a media gateway controller taking care of application signaling and transmitting simple commands to the user terminal. MGCP is used, for example, in the IPCABLECOM architecture, where it is called Network Control signaling. Extensions of MGCP could be imagined for multimedia applications; however, it does not scale very well, SIP and H.323 being better alternatives.

<strong>A2.</strong>11.5 Conclusions, comparisons and roadmaps<br />

The control plane interconnecting models for IP/OTN (i.e. overlay, augmented and peer) present different<br />

advantages and drawbacks for operators. For instance, incumbents view ASON compliant models favourably<br />

(i.e. overlay and augmented), since ASON's strong focus on maintaining compatibility with existing transport<br />

network protocols is essential for operators with large centralised or proprietary optical networks.<br />

The ASON platform is not protocol specific, since any control plane protocol that fits the requirements of this

architecture, such as GMPLS (overlay and augmented models) or PNNI, could be potentially applied to it.<br />

We envisage the following evolution or roadmap for applying the various control plane-interconnecting models.<br />

As a first step the overlay model will be implemented taking advantage of the simplicity of this interconnecting<br />

model. Implementations of the overlay model are already available for offering services on circuit-based<br />

transport networks. The next likely evolution step will be the augmented model that would enable the<br />

automation of service discovery and configuration via signalling and thus would lead to significant OPEX<br />

savings. We expect the augmented model to be important in both inter- as well as intra-domain settings.<br />

Several implementation variants of the different models can be envisaged; however, it should be noted that it is a

great technology jump from simple overlay solutions to solutions implementing the augmented model. As an<br />

even further evolution step in intra-domain settings, a greater level of automation and control efficiency, as well<br />

as better traffic engineering and resource utilisation, can be achieved by introducing unified control planes.<br />

Whether intra-domain unified control plane solutions will be implemented as GMPLS-based peer-model or as an<br />

"ASON-compatible peer-model" will depend on the further evolution of both ASON and GMPLS standards.<br />

Today PNNI for ASON 629 presents some technical advantages over GMPLS, especially regarding hierarchical and

inter-domain optical routing. However, there is also ongoing work for improving GMPLS, and many believe<br />

that eventually an ASON control plane can also be realised using GMPLS-based protocols. Nonetheless, there<br />

are still many challenging issues, in particular with respect to inter-domain aspects.<br />

<strong>A2.</strong>11.6 Links to other IST projects<br />

The information in this deliverable was collected mainly from the following IST projects:<br />

• MUPBED (signaling in optical networks – GMPLS)<br />

• GEMINI (IN in packet switched networks)<br />

• FLEXINET (signaling)<br />

• ESTA (FP5, metro networks, VPLS etc.)<br />

• NGNI<br />

• DAVID<br />

• LION<br />

• WINMAN<br />

629 ITU-T Rec. G.7713.1/Y.1704.1, “Distributed Call and Connection Management (DCM) based on PNNI”, March 2003<br />



<strong>A2.</strong>11.6.1 MUPBED<br />


The main objective of MUPBED is to investigate and to demonstrate advanced network technologies and<br />

solutions that will help to build future ultra-broadband research networks, which are fundamental building<br />

blocks to ensure the competitiveness of research in Europe. “The advanced applications and collaborative<br />

systems of the research and scientific community increase the requirements for the communication networks<br />

interconnecting the various research centres and users”, says MUPBED project co-ordinator Jan Spaeth, Marconi. “Based on a

strong consortium of leading European players and on a strong asset of advanced test bed environments,<br />

MUPBED has all the prerequisites to make a real step forward towards the future of research networks.” To<br />

achieve this goal, MUPBED will carefully investigate requirements of advanced multimedia research<br />

applications and collaborative systems like GRIDs, based on which appropriate network architecture solutions<br />

will be developed, taking latest technological progress into account. One key aspect will be the introduction of<br />

latest ASON/GMPLS (Automatically Switched Optical Networks/Generalised Multi-Protocol Label Switching)<br />

control plane technologies into research networks in a multi-domain environment. These technologies allow a<br />

flexible and dynamic configuration of connections by distributed control processes in the network. Furthermore,<br />

the integration of high-demand applications and flexible communication networks requires new solutions that<br />

will be investigated by MUPBED. An important contribution of MUPBED is that these concepts will be<br />

implemented and demonstrated in a large, pan-European test bed environment, comprising interconnected sites<br />

in various countries such as Denmark, Germany, Italy, Poland, Spain, and Sweden, as well as being evaluated on<br />

a theoretical basis. To achieve a broad coverage of user requirements as well as a broad dissemination of project<br />

results this platform will also be offered to other users that are not part of the project consortium. Apart from<br />

demonstration and dissemination of the results to a broad public, the achievements of the project will also be<br />

used to drive standardization in this area forward, which is a key issue to achieve cost-efficient and interoperable

solutions.<br />

<strong>A2.</strong>11.6.2 FLEXINET<br />

The FLEXINET concepts and architecture are applicable to various access network technologies<br />

(GSM/GPRS/UMTS, WLANs, V5, etc.), but will be focused on the mobile and wireless operator needs (UMTS<br />

& WLAN) for packet switched applications.<br />

Figure 124: FlexiNET generic network architecture topology (diagram: FlexiNET programmable services access nodes (AAN, FUAN, FSAN) and the data gateway access node (DGWN) interconnecting UMTS Node B/BTS and RNC/BSC equipment, WLAN access points, core UMTS/GSM networks, backbone IP networks, storage area networks, third-party application servers and legacy telecommunications platforms via the FlexiNET generic data and applications interface buses and an interworking bus with legacy switch platforms)


<strong>A2.</strong>11.6.3 IST-LION project<br />


Automatic Switched Optical/Transport Networks (ASONs/ASTNs) were one of the major study subjects in the<br />

LION project (Layers Interworking in Optical Networks). The LION studies covered the management of<br />

ASTNs.<br />

The management approach adopted in LION is based on the fact that the ASTN concept will be introduced in<br />

the networks only if migration from existing transport network solutions is supported by the management<br />

concept. Therefore, the ASTN management functionality is clearly separated from the underlying transport<br />

network management functionality:<br />

• by developing a separate information model for the control plane functionality, and<br />

• by reusing for the description of the transport network the existing information models.<br />

This separation is reflected in the information model depicted in Figure 125. The ASTN information model (highlighted in the figure by the shaded area) only comprises resources that are part of the control plane network (in other words, resources that are directly linked to the CpNE). Obviously, the control plane and the transport

plane are not completely independent from each other. The trails configured via the control plane (AstnTrails)<br />

will bind resources in the transport network and should therefore be visible to the management system of the<br />

transport network. This means that every time an AstnTrail is set up in the control plane (via soft permanent or<br />

switched scheme), the corresponding TNW trails and connections have to be created on the transport network<br />

level.<br />

This separation allows developing one general management interface for a generic ASTN that can be applied to<br />

any kind of transport network. Furthermore, the management functionality implemented for supervision and<br />

configuration of transport networks can be reused.<br />

Figure 125: Overview of transport and control plane managed resources (diagram relating the ASTN information model (CpNE, AstnTTP, AstnCTP, AstnTrail, AstnConnection, AstnCrossConnection) to the transport network information model (TnwNE, transport network trails and connections, terminating TTPs and CTPs with switching capability, e.g. OchTTP/OchCTP) and to the client objects (ClientNE, ClientCTP, ClientConnection); TnwNE/CpNE: Transport Network / Control Plane Network Element)

The approach described above allows vendors to reuse implementations of management functionality already developed. For network operators, the advantage is that the management functionality can be aligned to well-known procedures of existing transport networks, and hence the need to adapt the operational processes can be minimised.

For soft-permanent connections, LSP set-up is triggered by a Network Management System (NMS) sending to<br />

the ingress node the appropriate information such as source and destination node ID, port IDs and payload type.<br />

The NMS could also specify an explicit route to the destination. This option has been implemented in the LION<br />

test-bed.<br />

The ingress node receives these parameters directly from the NMS, whereas the egress node is provided with this information by NNI signalling messages. The last sub-object of the Explicit Route Object (ERO) in the PATH message is used for this purpose; it is loaded with the node identifier and the port ID to be connected to the client node.

For soft-permanent LSPs, the tear-down procedure is initiated by NMS and not independently by a node. Two<br />

different cases are possible: tear-down commands sent to the source node or sent to the destination node. In both<br />

cases the NMS sends a message indicating the action to be performed (that is, "Teardown soft-permanent LSP") and the necessary parameters (that is, sender address, LSP ID and Tunnel ID) to the involved node.

When the source node receives such a message from NMS, it sends a PathTear message downstream towards the<br />

destination node that releases reserved resources as well as the set-up cross-connections along the way. This is<br />

the option that has been implemented in the LION test-bed.<br />

If the destination node receives a “Teardown soft-permanent LSP“ message from NMS, it sends a ResvTear<br />

message upstream towards the source node that releases just the set-up cross-connections along the way but not<br />

the reserved resources. When the source node receives the ResvTear message, it will send a PathTear message to the

destination node releasing the reserved resources.<br />
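The two tear-down cases can be summarised with a small, illustrative sketch (not part of the report; node names are invented, while the message names follow the text above).

```python
# Sketch (not from the report) of the two soft-permanent LSP tear-down cases described
# above, modelled as simple message sequences along an LSP given as a list of nodes.

def teardown_via_source(path: list) -> list:
    """NMS command to the source node: a single PathTear travels downstream and
    releases both the reserved resources and the cross-connections."""
    return [("NMS", path[0], "Teardown soft-permanent LSP")] + [
        (path[i], path[i + 1], "PathTear (release resources + cross-connections)")
        for i in range(len(path) - 1)
    ]

def teardown_via_destination(path: list) -> list:
    """NMS command to the destination node: a ResvTear travels upstream releasing only the
    cross-connections, then the source node sends a PathTear releasing the resources."""
    msgs = [("NMS", path[-1], "Teardown soft-permanent LSP")]
    msgs += [(path[i], path[i - 1], "ResvTear (release cross-connections)")
             for i in range(len(path) - 1, 0, -1)]
    msgs += [(path[i], path[i + 1], "PathTear (release reserved resources)")
             for i in range(len(path) - 1)]
    return msgs

lsp = ["ingress", "transit", "egress"]   # invented node names
for case in (teardown_via_source(lsp), teardown_via_destination(lsp)):
    for sender, receiver, message in case:
        print(f"{sender} -> {receiver}: {message}")
    print("---")
```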

<strong>A2.</strong>11.6.4 IST WINMAN project<br />

The main objective of the WINMAN project was the definition and implementation of an integrated solution for the management of heterogeneous IP-based networks, with emphasis on IP over WDM. The premise of the WINMAN work was that such a solution is expected to facilitate network support for sophisticated services, ranging from real-time multimedia services (VoIP, Video on Demand) to IP-based VPNs.

The management architecture proposed by the WINMAN project exhibits a two-layer hierarchy. The lower layer contains the technology-specific Network Management Systems (NMS): IP-NMS and WDM-NMS; these take care of the particular needs associated with each network, while at the same time providing a highly abstracted, technology-neutral view at their northbound interfaces. The higher layer features the so-called Interdomain Network Management System (INMS), in charge of coordinating the management of the multiple network technologies involved.

The INMS is the central point of the architecture proposed by WINMAN: it provides a single point of access for all systems residing in the TMN service management layer via CaSMIM, which is a Service Management Layer (SML) to Network Management Layer (NML) interface specified by the TeleManagement Forum. Both the CaSMIM and INMS information models are designed to capture, in a generic way, only the entities and parameters relevant at this reference point, thus hiding technology- and vendor-specific details.

The interface between the two sublayers is again based on CaSMIM, which allows NMSs of any technology to be plugged into the INMS's southbound interface, provided that they support the defined interface.

The WINMAN approach for providing IP connectivity services over an optical transport network proposes to extend the telecom-style network management model to the IP layer, which, when complemented with MPLS capabilities, uses a connection-oriented packet-switched forwarding paradigm. The appropriate synergy and integration of the IP and optical layers is achieved with management functions capable of performing integrated provisioning of Label Switched Paths (LSPs) over optical channels, as well as integrated multi-layer fault and performance management.

The WINMAN project considers this approach the best candidate for control-management interworking in the mid-term, as it provides a friendlier evolution from existing deployments. The gradual gain in maturity of control plane implementations may lead to a transition period where the control and management planes interact with each other; eventually a control-plane-centric approach could become dominant in the IP over optical area. Nevertheless, the WINMAN consortium opted to leverage available control plane functionality where appropriate, in order to reduce development efforts when reliable alternatives were already in place. This was the case for path establishment in the IP/MPLS layer, accomplished by means of the RSVP-TE protocol. The modularity and flexibility of the WINMAN software allow deciding what functionality to move to the control plane almost on a function-by-function basis. For instance, if similar mechanisms had been available in the WDM domain, path establishment and even path computation functions could have been delegated to the control plane. Policies were defined to support handling this situation with minimal impact on the WINMAN solution components.
