“Network Coding and Its Implications on Optical Networking”
This was an invited talk by Muriel Médard, from MIT. Some notes:
1. Multicast capacity can be achieved with network coding. Minimum-cost multicast also becomes possible when coding is allowed.
2. What about achieving the network coding advantage by coding only at a subset of nodes? The cost of coding in every node would be prohibitive…
3. Some interesting papers on this topic: “Information-theoretic framework for network management for recovery from non-ergodic link failures” (Ho, Médard, et al.), and the concept of 1+N protection using network coding over p-cycles (Kamal, 2006).
4. Another interesting paper: Menendez and Gannett, 2008, show that network coding can lead to significant savings in backup resources. Only bitwise XOR is considered, as it can be handled in the optical domain (in the electronic domain we don’t have this limitation, but then we need OEO conversion…). See the sketch after these notes.
5. Her group has shown that it is possible to get the benefits of coding even without all nodes doing it.
6. Finally, another paper uses an evolutionary approach (Infocom 2007) to minimise the coding cost (the number of coded lambdas).
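Note 4’s bitwise-XOR protection is easy to illustrate. Here is a minimal sketch (my own toy example in Python, not their implementation): several working streams share one protection path that carries their XOR, and a failed stream is recovered by XOR-ing the protection signal with the surviving ones.

```python
def xor_streams(*streams):
    """Bitwise XOR of equal-length byte strings."""
    out = bytearray(len(streams[0]))
    for s in streams:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

# Three working paths protected by one shared path carrying their XOR.
d1, d2, d3 = b"\x01\x02", b"\x10\x20", b"\x0f\xf0"
protection = xor_streams(d1, d2, d3)

# If the path carrying d2 fails, the receiver rebuilds it from the rest:
# protection XOR d1 XOR d3 = d2.
assert xor_streams(protection, d1, d3) == d2
```

The nice property is that one protection signal can cover N working paths instead of N dedicated backups, and since it is plain XOR it can, in principle, be done in the optical domain.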
All this network coding stuff could be of interest for multicast…
“CAPEX and OPEX in Aggregation and Core Networks”
This was yet another invited talk by Claus Gruber, Nokia. Some notes:
1. From the slides: “Video services will drive exponential growth… especially from IPTV”.
2. Revenues and traffic used to be correlated, but they are now becoming decoupled. So network costs have to be kept low… otherwise we’re done.
3. How to reduce CAPEX? Some examples: a) transport bits at the lowest layer – it’s cheaper, b) automate the network, using GMPLS as the control plane together with path computation elements, c) use virtualisation techniques to enable fast, cost-efficient deployment of new services, d) find the optimal mix of intermediate grooming/routing and transport bypass.
4. How to reduce OPEX? Reduce energy consumption. Internet technologies represent 1 to 4% of energy consumption today…
5. IP/MPLS routers consume the most power (about 1 kW per 100 Gbps). Next in the list come L2 switches, followed by SDH nodes, and finally ROADM nodes…
6. Extend traffic engineering decisions with energy profile information. Try to find the paper “Energy profile aware routing”. I think this guy stole my idea in this paper… OK, he was first! :) But making routing “green” seems a good thing, and it’s definitely a hot topic.
“Architectures for Energy-Efficient IPTV Networks”
Jayant Baliga presented a paper with a new energy consumption model of IPTV storage and distribution (IPTV in his talk meant VoD). Some notes:
1. Energy is becoming an issue – OPEX, greenhouse gases. He focused on video delivery over the public Internet.
2. Some per-bit energy figures: access/metro – Ethernet switch 9 nJ/b, broadband network gateway 140 nJ/b, router 26 nJ/b; core – core router 17 nJ/b, optical OXC 0.02 nJ/b; data centre – edge router 26 nJ/b, server 430 nJ/b; etc.
3. Packets traverse an average of 13 hops (3 hops in the metro + 9 in the core + 1 in the data centre). See the rough tally after these notes.
4. They study how many data centres should be used. The conclusion is that unpopular movies should stay in just a few data centres, say 1 or 2, but a very popular one should be replicated in several – he said 200 was a good number.
5. Optical bypass reduces power consumption.
6. P2P is efficient at low downloads per hour (unpopular content), but for high-demand content it is not efficient at all (because upstream speeds are low).
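Putting notes 2 and 3 together gives a feel for where the energy goes. A back-of-the-envelope tally in Python (the hop-to-component mapping is my own assumption, just for illustration):

```python
# Per-bit energy along the ~13-hop path of note 3, using figures from
# note 2. Mapping hops to components is my own guess.
energy_nj_per_bit = {"ethernet_switch": 9, "core_router": 17, "server": 430}
path = (["ethernet_switch"] * 3   # 3 metro hops
        + ["core_router"] * 9     # 9 core hops
        + ["server"])             # 1 data-centre hop
total = sum(energy_nj_per_bit[hop] for hop in path)
print(f"roughly {total} nJ/b end to end")  # ~610 nJ/b
# The server dominates, and the optical OXCs (0.02 nJ/b) are negligible,
# which is exactly why optical bypass (note 5) pays off.
```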
“Power Saving Architectures for Unidirectional WDM Rings”
Another paper by our colleagues from Italy, presented by Piero Castoldi. The paper compares the optimal power-saving designs of unidirectional WDM rings using three well-known architectures: first-generation, all-optical, and multi-hop (hybrid). Notes:
1. Are multi-hop architectures able to reduce OPEX as well (we know they reduce CAPEX)?
2. They have analysed power savings… using an ILP formulation. These are the values they used (from recently published work): power of an electronic interface = 197 mW, power of an optical interface = 189 mW.
Main problem: it is an ILP formulation, so it is not scalable. Heuristics are needed… maybe we can collaborate with them again? :)
The first author of this paper was Siamak Azodolmolky. He presented a novel offline physical-layer-impairment-aware routing and wavelength assignment algorithm for transparent all-optical networks, considering dedicated path protection. He started by presenting a network planning and operation tool they have developed, focusing his presentation on a specific part of it: the impairment-aware lightpath routing component. They consider several impairments (node crosstalk, PMD, XPM, FWM, etc.) and calculate a Q-factor. This works only in offline settings (i.e., for network planning); in an online setting they would have to change the way the Q-factor is calculated, because today it is very computationally intensive.
“Impact of Topology and Traffic on Physical Layer Monitoring in Transparent Networks”
This was an invited talk by Dan Kilper, Bell Labs, Alcatel-Lucent, USA. He analysed the benefit of optical performance monitoring with respect to network topology and traffic patterns. In OEO networks we can monitor things at the regenerator sites, but in transparent optical networks this is more complicated. So the question is: where to place monitoring sites? They have tested some schemes and shown that costs can be reduced significantly by placing monitoring sites at specific locations.
“Optical Multi Domain Routing”
This was also an invited talk. The speaker was Xavi Masip.
He reviewed the current limitations in multi-domain routing as well as some of the research lines in the optical area. Some notes:
1. Mixing routing, multi-domain, and optics makes things very complex. Today all ISPs are glued together with BGP, so changing it is very complicated – no one wants to lose connectivity. And BGP works.
2. Why multilayer? To optimise across layers: we have the IP layer on top, then a lower layer for connection-oriented packet networks, and finally the photonic transport layer. With multilayer optimisation one can reduce costs and power needs.
3. Today we don’t have multi-domain optical routing; BGP takes care of everything. There is some work in this area, but more research is needed.
4. Check work on Optical BGP (OBGP), and also OBGP+. Could be interesting.
5. Do we need multi-domain optical routing? Answer from pure transport (optical) people: no (upper layers can do that). Answer from IP layer people: definitely not (we can do that, just provide connectivity). Answer from carriers: are we supposed to share info?… (confidentiality is very important)
“PCE Communication Protocol for Resource Advertisement in Multi-Domain BGP-Based Networks”
The presenter was Francesco Paolucci. They propose new PCEP messages to announce inter-domain resource information that is typically not advertised by BGP for scalability reasons. These messages enable effective PCE-based path computations while preserving network stability, scalability, and intra-domain information confidentiality. Some notes:
1. BGP doesn’t advertise alternative solutions: it applies tie-breaking rules and only advertises one route, with no alternative paths – for scalability reasons.
2. Proposed solution – extend PCEP with a new message that carries route information not advertised by BGP. They also propose another message carrying the bandwidth availability for inter-domain traffic engineering.
3. This is meant for a limited set of domains (they consider only 20), not the entire Internet… so, they claim, no worries about scalability. I was still worried about it, though: they don’t really address the issue, they just limit themselves to 20 domains (I’m not sure why).
“On Resource Provisioning for Dynamic Multi-Domain Networks”
Xiaolan J. Zhang gave an interesting talk where she studied the performance of multidomain resource dimensioning/routing techniques with limited information sharing, and provided motivation for considering fairness issues. Some notes:
1. Imagine your network is well dimensioned, but your “neighbour” network is under-provisioned. Your connections to that network are rejected because of them, so you don’t have a big incentive to keep a well-dimensioned network… One domain can hurt the performance of other domains.
2. A global shortest-path routing solution prefers larger domains – large domains have an advantage over smaller ones.
3. So they propose a global SPF where the costs are normalised by the size of the domain. This improves performance for the smaller networks without causing problems in the big ones (see the sketch below).
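I did not catch the exact normalisation they use, so this is only a sketch of the idea in Python (the data structures and the normalisation form are my assumptions): a plain Dijkstra where each hop’s cost is divided by the size of the domain it belongs to, so a global SPF stops structurally favouring large domains.

```python
import heapq

def normalized_spf(adj, domain_of, domain_size, src, dst):
    """Dijkstra with hop costs scaled down by the entered node's domain
    size. adj: {node: [(neighbour, base_cost), ...]}."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in adj.get(u, []):
            nd = d + cost / domain_size[domain_of[v]]  # the normalisation
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]                 # assumes dst is reachable from src
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```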
“Avoiding Path-Vectors in Multi-Domain Optical Networks”
Marcelo Yannuzzi showed that a modified version of a path vector protocol can drastically reduce blocking and converge significantly faster, while exchanging fewer routing messages, both under failure-free conditions and during convergence. More notes:
1. OBGP+: a modified path vector protocol (ENAW), with a cost that depends on the number of available lambdas (see my guess at its flavour after these notes).
2. OBGP+ performs way better than OBGP in terms of blocking, number of messages exchanged, convergence time in case of failure, and routing advertisements (churn).
3. Conclusions: even minor modifications are enough to outperform a plain path vector like OBGP. Avoid plain path vectors for the inter-domain RWA protocol.
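I didn’t note the exact form of the ENAW cost metric, so take this only as a guess at its flavour (the function shape is entirely my assumption): a link cost that grows as free wavelengths run out, steering new lightpaths away from nearly full links.

```python
def lambda_aware_cost(base_cost, free_lambdas, total_lambdas):
    # Hypothetical wavelength-availability-dependent cost: a link with no
    # free lambdas is unusable for a new lightpath, and a nearly full
    # link becomes progressively more expensive.
    if free_lambdas == 0:
        return float("inf")
    utilisation = 1.0 - free_lambdas / total_lambdas
    return base_cost * (1.0 + utilisation)
```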
This was a great day at OFC. Several very, very interesting talks. Now I start to understand why they say OFC is the top conference in optical comms… I went to the plenary session in the morning, with three excellent talks, and then in the afternoon I decided to go to the Symposium on the Future Internet. There were also a couple of nice talks there.
Plenary Session
"The Growth of Fiber Networks inIndia"
The speaker was Shri Kuldeep Goyal, Chairman and Managing Director of Bharat Sanchar Nigam Ltd. (BSNL), India (the largest service provider in India, government owned). It was a very interesting presentation. The first slides were striking: he started by showing a world map at the usual geographical scale, i.e., bigger countries appear bigger on the map. Then he changed the scale of the map to suit his points, which was really neat. For example, he showed the map at a population scale, and we could see India inflating hugely. Then he showed the same map at yet other scales: broadband connections (India was very thin here), films watched (India was even more inflated here than in the population map), etc.
Some notes:
1. BSNL is offering IPTV in 30 cities today. It has 25k users at the moment.
2. They are now starting to move to FTTH.
3. They offer 610 TV channels!
4. In January 2009 there were 15 million new mobile phone users in India.
5. They use IP/MPLS in their core network.
6. 85% of the broadband technology in India is ADSL.
7. He expects that, by 2011, 5% of all households will have FTTH.
8. For him IPTV is one of the main drivers of FTTH.
9. In Japan they already have FTTH in 25% of households.
10. He wants to offer HDTV soon.
“The Changing Landscape in Optical Communications”
This was a very dynamic and also very interesting talk by Philippe Morin, the President of Metro Ethernet Networks, Nortel. Some notes:
1. One of the three “global megatrends” he mentioned in his talk was HDTV. He believes the demands for HDTV video are fuelling the next wave of bandwidth growth.
2. Talking about business applications, he thinks the top service will be telepresence/videoconferencing (to reduce costs by decreasing the number of flights).
3. He mentioned Oprah’s “A New Earth” classes. More than 1 million people were watching online simultaneously, from 133 different countries, in HQ at 1.5 Mbps. This is really asking for multicast… :)
4. Another example was Obama’s inauguration, with millions of people watching simultaneously. Real time is definitely needed, in his (and my) opinion.
5. He says that anything less than HD will be unacceptable in two years’ time. And don’t forget 3DTV, doubling the bandwidth needs…
6. Another nice example of the need for live video was the NCAA championship, which offers HQ streaming to the PC.
7. He showed a nice figure with the evolution of TDM until 1995/2000, then the evolution of WDM from 2000 to 2010, and he thinks the next thing is coherent techniques.
8. Next frontier: terabit per lambda, and access technology cost going down one order of magnitude.
“Getting the Network the World Needs”
The speaker was Lawrence Lessig, a professor at Stanford Law School. I have to be honest: this was the best talk I have ever seen. It was absolutely amazing. Impressive. Superb. He was a Cambridge student (did his masters there), so I guess that explains the high quality a bit… lol :)
It is very difficult to explain his style, because it is very different from what we are used to – you can find examples on YouTube, it is really cool! Someone who is able to play a full video in the middle of a talk is certainly delivering a great presentation.
By the way, with all the excitement I almost forgot... I didn’t take notes because I was completely absorbed by the presentation, but I can say that the main message of the talk was this: copyright needs to change.
Again, superb.
Now moving to the Symposium on the Future Internet…
“Future Internet: Drastic Change, or Muddling Through?”
The speaker was Professor Andrew Odlyzko, University of Minnesota. This was basically the same talk Professor Odlyzko gave some months ago in Cambridge, so it was not new for me. Nevertheless, it is always interesting to hear him. And I leave some brief notes:
1. Huge potential sources of additional Internet traffic: a) storage, and b) broadcast TV. In 2006, in the US, Internet traffic per capita was 2 GB/month, but TV consumption per capita was 40 GB/month – assuming 3 hours/day at 1 Mbps, no HDTV… so it’s a soft figure (quick sanity check after these notes).
2. Net neutrality “is about streaming movies” (Jim Cicconi, AT&T, 2006)
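The quick sanity check promised in note 1 (3 hours/day at 1 Mbps):

```python
# 3 hours/day at 1 Mb/s, over a 30-day month.
bits_per_day = 3 * 3600 * 1e6
gb_per_month = bits_per_day / 8 / 1e9 * 30
print(f"{gb_per_month:.1f} GB/month")  # 40.5 GB -- the talk's ~40 GB
```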
“Internet Evolution into the Future”
Another great talk, this time by Lawrence Roberts, one of the fathers of the Internet (responsible for the first packet network, ARPANET), now Chairman of Anagran (a company he also founded).
It is always exciting to listen to one of the founding fathers of the Internet (some time ago I attended a talk by Vint Cerf, and that was also exciting!). In 1965, at MIT, Roberts carried out the first packet network experiment, and he later managed the first packet network, ARPANET. Some notes:
1. He showed a nice slide with the original Internet design and its main activities: file transfer and email. At that time, only the packet destination was examined – no source checks, no QoS, no security, best effort, video not feasible. And guess what: not much has changed since then!
2. But now some things are changing: voice is moving to packets, video is moving to packets… Now the edge is broad, and the core is narrow!
3. P2P is a problem, because it exploits TCP unfairness – multiple flows mean multiple capacity, since flows are treated equally (equal capacity per flow)… so it congests the network, with 5% of users using 80% of the capacity (toy example after these notes).
4. So he proposes changes to the Internet: 1) fairness (multi-flow – P2P – applications overload the network), 2) security (user authentication, source checks), 3) emergency services (secure preference priorities), 4) cost and power (make the network green), 5) quality (video and voice require lower jitter and loss).
5. What do we need? Equal capacity for equal pay.
6. Today all security is left to the computer. The network doesn’t even verify the source address (there is no way to determine who sent spam…). Goal: the network secures each connection (flow) – user and computer IDs are sent to the network for verification, and the network checks whether the source address is correct.
7. Another proposal: flow rate management – an interesting subject: control the rate of each flow individually, and ensure that congestion does not occur by controlling rates. Use rate control instead of random drops.
8. Why flow rate management now? Memory cost has come down faster than processing cost – and flow rate management is memory-based…
9. Cost and power: today network equipment is packet-based (every packet is re-examined)… with flow rate management we process flows (just look up the flow record). With flows (not packets), power, size, and cost come down 3 to 5 times…
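The toy example promised in note 3, showing how per-flow fairness rewards whoever opens more flows (the numbers are made up for illustration):

```python
# One bottleneck link, per-flow fairness, 20 users; user 0 is a P2P
# client with 50 parallel flows, everyone else runs a single flow.
def shares(k_flows, n_users, capacity=1.0):
    total_flows = k_flows + (n_users - 1)
    per_flow = capacity / total_flows        # equal capacity per flow
    return k_flows * per_flow, per_flow      # heavy user vs. the rest

heavy, single = shares(k_flows=50, n_users=20)
print(f"P2P user gets {heavy:.0%}, every other user {single:.1%}")
# -> P2P user gets 72%, every other user 1.4%
```

This is exactly the behaviour his fairness and flow-rate-management proposals (notes 7-9) target: cap capacity per user or per flow record, rather than letting the flow count decide.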
“Building a zero carbon Internet”
A presentation by Bill St. Arnaud, from CANARIE, Canada. Some notes:
1. They are building “zero carbon” data centres connected by optical networks, powered by wind, hydroelectric power, etc. These are built in remote locations.
2. Optical networks, especially at 100G and 1000G, show only a modest increase in power consumption – way better than electronics.
3. Funding in the UK will be linked to reductions in carbon emissions.
4. They are involved in a project, PROMPT, that tries to lower power costs by following the “sun and wind”…
5. Research ideas: a) dynamic all-optical networks with solar- or wind-powered optical repeaters, b) wireless mesh ad-hoc networks with mini solar panels at the nodes, c) new shortest-energy-path Internet architectures with servers, computers, and storage collocated at remote renewable energy sites… This is a great topic, and everybody at OFC was talking about being “greener”. Maybe we could engineer traffic to make the network green? Now that would be a cool idea…
OFC is a very big conference, with several interesting things happening at the same time, so we have to be very judicious in our choices. Some of the talks are quite good, but many of them are really uninteresting, or the speakers are poor, or just very nervous. So I will only take notes on the talks I found worthy somehow... which will probably leave about half of them out.
On this first day of technical presentations I perceived a trend: the optical community is trying to tie itself more closely to the Internet community. There was a talk by someone from the National Science Foundation, and another from the GENI project, where this was quite clear. But let me leave here some details on a number of talks.
"Global Load Balancing of Zero-Bandwidth TE LSPs in MPLS Networks"
The first author of this paper is Filippo Cugini (from CNIT), but the presenter was Francesco Paolucci, from Scuola Superiore Sant’Anna, Italy. Cugini is one of the co-authors of the paper I will present tomorrow. It is interesting to note that Francesco's group is probably the "winner" of this year's OFC. They have 11 papers here, which is certainly the highest number for a single institution. Our paper is part of the list, since besides Cugini, Alessio Giorgetti and Prof. Castoldi are also co-authors.
They proposed an iterative algorithm to achieve global load balancing of zero-bandwidth TE LSPs (keeping it simple, these are low-priority LSPs). They realised that commercial MPLS routers show very poor balancing for this type of traffic. The algorithm is neat, and it seems to work, but I had some doubts about its scalability. I don’t think they dove into this problem in depth. I will probably chat with them over dinner one of these days and try to clarify this... :)
"Availability-Guaranteed Connection Provisioning with Delay Tolerance in Optical WDM Mesh Networks"
The first author and presenter was Cicek Cavdar, from Istanbul Technical Univ. (ITU), Turkey. They have exploited delay tolerance to decrease blocking in availability-guaranteed shared-path-protected optical WDM networks. They reduce the blocking probability without consuming additional resources. This is the first proposal using the time dimension to help mitigate this problem.
"LSP Request Bundling in a PCE-Based WDM Network"
The first author of this paper is Jawwad Ahmed, Royal Inst. of Technology, Sweden, but I am not sure if he was the one presenting it (I didn’t pay attention at the beginning of the session). They propose bundling LSP requests to improve network optimisation, at the expense of an increased connection setup delay. Basically, requests are sent to a centralised element (the PCE), and it may make sense to bundle them to reduce the overhead in the network. Their results showed that doing this makes sense for connection holding times of 20 seconds or less. But 20 seconds is a very low figure for a core network, I think. I was not totally convinced by their results, and I think this topic is worth exploring a bit more.
"IP and Optical Integration in Dynamic Networks"
This was an invited talk by Ori A. Gerstel, from Cisco Systems (I attended a short course by the same speaker on Sunday, but the talk was more interesting). The main question addressed was: "how do we get a fully dynamic optical network?". He discussed the real-world needs for such dynamism, and how to enable the IP/optical capabilities that will make this technology deployable. Some notes:
1. He thinks this will be a stepwise evolution (not a revolution). The steps he mentioned that interest me most are a) the creation of an automated control plane inside the optical network, b) the possibility of lightpath setup triggered from the router, and c) optical packet switching. About this last point he is very sceptical. Me too, at least for the short-to-medium term: where are the optical buffers and the packet header processing at the optical level?
2. Some papers worth reading: "Handling IP traffic surges...", by Pongpaibool et al., and "Towards deployment of signalling based approaches...", by Salvadori et al. (ECOC 2008).
"Optical Communication Challenges for a Future Internet Design"
Another invited talk, this time a more "political" one. The speaker was Darleen Fisher, from the National Science Foundation (NSF), USA. She talked about the challenges faced by the optical research community, mainly in the context of FIND (Future Internet Design) and its clean-slate approach to network research. It was interesting to note that the NSF is attempting to bring the optical networking people together with the Internet people. One example is the DOCS project, which mixes people from the optical area with Internet researchers (like Nick McKeown).
The questions at the end were quite interesting, with two optical researchers asking the speaker why optical research doesn't get more money, and specifically why it gets way less money than wireless projects... :)
"Survivable Logical Topology Design for Distributed Computing in WDM Networks"
Xiang Yu, from SUNY Buffalo, USA, addressed the problem of designing a logical topology for a distributed computing application that survives one computing-cluster failure and one fibre-link failure in WDM networks. As this is an NP-hard problem, he presented an MIP formulation (not scalable) and also an efficient heuristic. Unfortunately, he didn't manage his time well and skipped the most important part - the results. The little time he had left for them was used to state the obvious, so in the end it was not useful at all. This was an example of a talk that, for not being well prepared, failed to deliver a message that could have interested the audience.
"Experimental Demonstration of SIP and P2P Hybrid Architectures for Consumer Grids on OBS Testbed"
Lei Liu, from the Key Lab of Optical Communication and Lightwave Technologies (OCLT), Ministry of Education, Beijing Univ. of Posts and Telecommunications, presented a paper proposing three types of SIP and P2P hybrid architectures for consumer grids. The idea of a hybrid architecture was interesting, turning the centralised client/server model of SIP into a P2P one. Unfortunately, again, the speaker spent all his time explaining the architectures and skipped the results section...
"GENI: Overview and Plans"
This was an invited presentation where, again, one could see the attempt to bring the optical community together with the computer networks community. GENI, the Global Environment for Network Innovations, is a suite of experimental network research infrastructure being planned and prototyped in work sponsored by the NSF, and was presented by Kristin Rauschenbach, from BBN Technologies. The idea is that researchers can use a "slice" of the infrastructure for long-running, realistic experiments, so radically different experiments will run simultaneously, in parallel. They aim to glue together heterogeneous infrastructure (avoiding technology "lock-in").
On the optics side, the GENI goal is to give computer people direct access to optics (through virtualisation of optical capabilities and programmable optics). Someone asked what the cost is for a single researcher using GENI, and the speaker answered that the cost would probably be part of NSF funding contracts. So if a researcher wants to use a "slice" as part of his or her work, they do not seem to have a "cost table" for that. At least not yet.
"Light-Mesh: An Evolutionary Approach to Optical Packet Transport in Access Networks"
Ashwin Gumaste, from the Indian Inst. of Technology at Bombay, India, presented a paper proposing an alternative to PONs. PONs require lots of optical fibre (hence, high costs), so they propose a "light mesh" concept with the objective of reducing the number of fibre links needed. They have built a testbed and were able to achieve optical packet transport using mature off-the-shelf components. Though their system is prone to collisions, with the algorithms they use (CSMA/CD-like) they seem to achieve low delays.
This Sunday the conference programme consisted only of short (3-hour) courses and some workshops.
The short courses are very interesting, but they have a cost, which is always a problem when you're a PhD student... but there are advantages to being a student: for courses that are not full, students receive an e-mail asking if they want to participate free of charge. :) That is what happened this time, and I enrolled in 3 short courses:
SC216 An Introduction to Optical Network Design and Planning, Jane M. Simmons; Monarch Network Architects, USA
This was a very interesting course. It started by explaining the most important optical network elements, and then moved on to algorithms - for routing, regeneration, and wavelength assignment - which was very interesting and which I was not expecting in an OFC course. It also covered traffic grooming. With all these things mixed together, there is a real need for good algorithms and protocols to make the network as efficient as possible. Some notes:
1. Long haul networks use roughly 80 lambdas per fibre, and regional/metro-core networks roughly 40. These are US figures, I guess, but I was impressed by such high values. Some of the networks I am aware of (from network providers) carry only something between 10 and 40 Gbps, which probably means they use only 1 to 4 lambdas... I guess these may be normal values for the walled network of a single provider.
2. Today there is no wavelength conversion in optical add-drop multiplexers; wavelength conversion is only possible in OEO (Optical - Electrical - Optical) architectures. So the assumption we used in the paper I will present here makes complete sense: we assumed there was no wavelength conversion in our all-optical network scenario...
3. The optical reach (the distance an optical signal can travel before needing regeneration) of legacy networks is 500-600 km. Today we can reach 2000-4000 km.
4. Whether or not regeneration is needed has immense cost implications in a long haul network. For this reason, a good routing mechanism should: a) among all available paths, pick the one with the fewest regenerations, and b) break ties using the shortest path in number of hops (see the sketch after these notes).
5. Dynamic path routing may not be the best option in a long haul network - it can lead to too much "wavelength interference".
6. Using the noise figure instead of number of hops or distance as the metric for shortest path algorithms is probably the best option.
7. When a node performs regeneration, it is possible to have wavelength conversion in that node (because there is OEO conversion in that node, so we can make use of that).
8. Routing and wavelength assignment: doing it in separate steps (run shortest path, then check whether there are available wavelengths satisfying the wavelength-continuity constraint) is probably better than doing it all in a single step (also sketched after these notes).
9. Grooming (bundling sub-wavelength traffic to form well-packed wavelengths) switches should be deployed at the edge, not in the core. And we probably don't need all nodes to be grooming sites (maybe 20 to 40% is enough).
10. In a metro network you usually have a lot of fibres, so it's different from a long haul one, where you have fewer fibres but several lambdas per fibre.
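Notes 4 and 8 are simple enough to sketch. Below is a toy Python version (my own, with a made-up reach figure and data structures): pick the candidate path with the fewest regenerations, breaking ties by hop count, and then assign the first wavelength that is free on every link of the chosen route.

```python
def regens_needed(segment_lengths_km, reach_km=2500):
    """Toy regeneration count: regenerate whenever the accumulated
    distance would exceed the optical reach (real planning tools use
    impairment/Q-factor estimates rather than plain distance)."""
    regens, run = 0, 0.0
    for seg in segment_lengths_km:
        if run + seg > reach_km:
            regens, run = regens + 1, 0.0  # regenerate before this hop
        run += seg
    return regens

def best_path(candidate_paths):
    # Note 4: fewest regenerations first, then fewest hops.
    return min(candidate_paths, key=lambda p: (regens_needed(p), len(p)))

def first_fit_lambda(path_links, free_lambdas_on, num_lambdas=80):
    # Note 8, second step: lowest-index wavelength free on *every* link
    # (wavelength-continuity constraint); None means blocking.
    for w in range(num_lambdas):
        if all(w in free_lambdas_on[link] for link in path_links):
            return w
    return None

paths = [[800, 900, 700], [1200, 1400], [600, 600, 600, 600]]
print(best_path(paths))  # [800, 900, 700]: 0 regenerations, 3 hops
```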
SC243 Next Generation Transport Networks: The Evolution from Circuits to Packets, Ori A. Gerstel; Cisco Systems, USA
This was the course I was most interested in, but unfortunately it was the least interesting. There were two main reasons for that, I think. The first was that most of the subject was not new to me, and since everything was touched on only superficially, I learned nothing new. The second was that the speaker wasn't particularly inspired, and the way he approached the subject was just not interesting (although one could see he really is an expert on the topic...).
Ori Gerstel mainly talked about the three types of multiplexing used in transport networks: time division (SDH and related), wavelength (WDM), and packet (MPLS). He talked about the possible evolution paths of all these technologies, and how they are somehow being brought closer together. I was hoping he would go deeper into MPLS and GMPLS, but he didn't say much beyond the very generic stuff.
There was, however, a very interesting point. He mentioned a study that looked at household needs in the US in 2010. The study assumed these services would soon be offered to every household: HDTV, SDTV, PVRs, and VoIP. The conclusion was that twenty such homes would generate more traffic than the entire Internet backbone carried in 1995! And he finished this slide using the same words I have recently used in my own talks: "the bottleneck is moving to the aggregation and core parts of the network".
SC114 Passive Optical Networks (PONs), Paul Shumate; IEEE Lasers & Electro-Optics Society, USA
This was an excellent course by Paul Shumate. The objective of the course was to explain why PONs are becoming the key network approach to deliver Fiber to the Home (FTTH). Some notes:
1. The normal splitting ratio in current PONs is between 1:8 and 1:32. Although in the US they are satisfied with these values, in Europe there is growing interest in much larger ratios, up to 1:2048 (the European SuperPON). See the loss figures after these notes.
2. The US seems more interested in this technology than Europe. The main reason is that in Europe the copper distances are shorter, and the copper is better (due to the reconstruction after World War II), so VDSL is seen as a good option.
3. The speaker showed the nice picture from the Claffy et al. paper "The nature of the beast: recent traffic measurements from an Internet backbone", where the authors concluded that most packets in the Internet were 1500 bytes long or less.
4. FTTH status: a) 4M homes already connected in the US (2008); b) highly successful in Asia, with 3M new subscribers each year in Japan (probably 40-50M users by 2010/2011); c) growing interest in Europe, especially Scandinavia, but also France, the UK, Spain, etc.
5. How much bandwidth is needed per home? In the near future, with HDTV, many channels, TiVo, etc.: at least 50-100 Mbps.
6. Studies are now suggesting that symmetry is needed (asymmetric connections will soon be a thing of the past).
7. Average broadband speeds: Japan = 60 Mbps, Korea = 45 Mbps, Finland = 20 Mbps (the top three). Far below come the US = 5 Mbps and the UK = 3 Mbps. Portugal is in between, with 8 Mbps.
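A quick calculation behind note 1's splitting ratios: every ideal 1:N power split costs 10*log10(N) dB of optical budget, which shows why the jump to SuperPON-scale ratios is hard.

```python
import math
# Ideal splitter loss for a 1:N PON split.
for n in (8, 32, 2048):
    print(f"1:{n:<4} split ≈ {10 * math.log10(n):4.1f} dB")
# 1:8 ≈ 9.0 dB, 1:32 ≈ 15.1 dB, 1:2048 ≈ 33.1 dB -- a hint of why
# very large ratios call for optical amplification in the network.
```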
I will take notes and write a short report every day, so if you are interested in what's hot in optical networking and optical communications, I invite you to watch this space over the next few days.