What's on IPTV's neighbourhood?

Tuesday, 1 December 2009

CoNEXT, Day 1

I am attending CoNEXT these days, and as usual I will keep some notes on the talks I find most interesting.


ReArch'09

During the morning I decided to attend the ReArch workshop. Some interesting talks in the morning sessions, with Mark Handley and Van Jacobson figuring at the top of the list.

Keynote Presentation by Mark Handley (University College London)

A similar talk to the one Mark gave when he received the Royal Society-Wolfson Research Merit Award (as far as I can remember).

The Internet control architecture can be divided into three parts: routing, congestion control, and traffic engineering. These three have been somewhat disconnected because they were not planned together: they are an accumulation of fixes to specific problems (TCP with Jacobson's addition of congestion control, BGP for routing, etc.). The control plane was built from incremental fixes, where the power of legacy played an important role.

It would be nice to have an architecture where all control protocols fit well.

Main question: is it possible to change the Internet architecture in a planned way, so as to achieve long-term goals?

Looking back at the Internet control architecture, are there opportunities to change the game? In routing, it is nearly impossible to replace BGP: the network effect would be huge. For congestion control, it is nearly impossible to replace TCP. But there is lots of stuff to do in between these “layers”.

A wish list for control mechanisms: very high robustness (no downtime, robust to attack), load-dependent routing (move traffic away from congestion), a diverse traffic mix, and sane economics (reward network operators for investment).

Also, consider the problem of multi-homing. Multi-homing provides redundancy (more than one path provides robustness). However, routing is not a good way to access that redundancy. It would be better to access the redundancy at the transport layer, where congestion control can see it. With multi-homing (multipath), servers can do load balancing. This is the idea of pooling resources: several links working as if they were a single pool. Multipath TCP pools the resources of multiple links.
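To make the pooling idea concrete, here is a toy sketch in Python of a sender with two subflows shifting its rate towards the less congested path. This is only an illustration of the resource-pooling effect, not the coupled congestion control algorithm that Multipath TCP actually uses, and the path names, capacities and background loads are invented numbers.

    # Toy illustration of resource pooling across two paths (hypothetical numbers).
    def loss_fraction(load, capacity):
        """Fraction of offered traffic that a path cannot carry."""
        return max(0.0, (load - capacity) / load) if load > 0 else 0.0

    capacity   = {"path_a": 10.0, "path_b": 10.0}  # Mbit/s, made up
    background = {"path_a": 9.0,  "path_b": 3.0}   # other users' traffic
    rate       = {"path_a": 4.0,  "path_b": 4.0}   # our two subflows

    for _ in range(50):
        loss = {p: loss_fraction(background[p] + rate[p], capacity[p]) for p in rate}
        # Move a little traffic from the more congested path to the less congested one.
        worse, better = sorted(rate, key=lambda p: loss[p], reverse=True)
        shift = min(0.1, rate[worse])
        rate[worse] -= shift
        rate[better] += shift

    print(rate)  # most of our traffic ends up on the lightly loaded path_b

After a few dozen iterations most of the sender's traffic sits on the lightly loaded path: the two links behave like one shared pool, which is the effect described above.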

Economics: what is the marginal cost of a packet? If no one else wanted to send at that time, no cost. It only makes sense to charge if our traffic displaces another user’s traffic. We should be charging for congestion.

Common models – rate-based or volume-based charging (x gigabytes per month) – don't offer the right incentives. They are economically inefficient. Also, some apps are latency sensitive, while others only care about long-term throughput. Charging for congestion volume would thus make sense, encouraging machine-to-machine traffic to move to uncongested times.

ISPs can't charge for congestion because they can't see it properly. But end systems can – and some routers too. Bob Briscoe's re-feedback and re-ECN ideas (congestion exposure) let traffic indicate to an ISP the congestion it will see downstream. Congestion exposure is thus an enabler for sane economics.
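As a back-of-the-envelope illustration of why congestion volume and plain volume give different answers, here is a small Python sketch. The user names, traffic figures and marking fractions are entirely made up; congestion volume is approximated as bytes sent times the fraction of those bytes that saw a congestion mark.

    # Hypothetical numbers: congestion volume ~= bytes sent * fraction ECN-marked.
    users = {
        # (bytes sent over a month, fraction of those bytes congestion-marked)
        "overnight_bulk_backup": (500e9, 0.0001),  # huge volume, uncongested hours
        "peak_time_streaming":   (50e9,  0.02),    # modest volume, congested hours
    }

    for name, (volume, marked) in users.items():
        congestion_volume = volume * marked
        print(f"{name}: {volume / 1e9:.0f} GB sent, "
              f"{congestion_volume / 1e9:.2f} GB congestion volume")

The backup sends ten times more data yet causes only a twentieth of the congestion volume, so congestion-based charging would bill it far less than the peak-time flow – exactly the incentive to move machine-to-machine traffic to uncongested times.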

VoCCN: Voice Over Content-Centric Networks

Van Jacobson, Diana Smetters, Nicholas H. Briggs, Michael Plass, Paul Stewart, James D. Thornton, Rebecca L. Braynard (Palo Alto Research Center (PARC), USA)

Everybody knows that content-based networking is great for content dissemination, but can't handle conversational or real-time traffic: everybody is half right... In fact, content networking is more general than IP, and can do anything that IP can.

To prove this, they implemented VoCCN, a functional equivalent of VoIP built on CCN.

VoCCN: why bother? VoIP works badly with multipoint, multiple interfaces and mobility. VoIP wants to talk to an IP address, but in a voice call we want to talk to a person. Also, VoIP security is poor (only SSL between proxies).

With CCN, there is no need to map from a user to an address: we have very flexible names. And the sender can encrypt its identity, with only the receiver able to decrypt it. The user does not need to reveal its identity to the network. This supports secure VoIP.
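The key point is that a call targets a name rather than an address. Here is a minimal in-process Python sketch of that rendezvous pattern; it does not use the real CCNx API, and all name components and function names are invented for illustration.

    # Toy name-based rendezvous: the caller asks for a name, the callee answers it.
    import hashlib

    def call_name(callee, caller, call_id):
        """Hierarchical name under which the call is set up (invented scheme)."""
        return ("voccn", callee, "invite", caller, call_id)

    pending_interests = {}

    def express_interest(name, on_data):
        """Caller requests whatever data matches `name` (toy, single process)."""
        pending_interests[name] = on_data

    def publish(name, data):
        """Callee publishes data under `name`, satisfying any matching interest."""
        if name in pending_interests:
            pending_interests.pop(name)(data)

    # Caller side: only the callee's name is needed, never her IP address.
    name = call_name("alice", "bob", hashlib.sha256(b"call-42").hexdigest()[:8])
    express_interest(name, lambda data: print("call accepted:", data))

    # Callee side: alice answers under the same name (in CCN the data, and even
    # the caller's identity inside the interest, could be encrypted for alice).
    publish(name, "alice's session parameters")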

They built the voice app on top of CCN (instead of IP), and in the end its performance was very similar to VoIP. They flipped frantically from one Ethernet connection to another: the shift was almost imperceptible, and no packets were lost.

All this is available as open source: www.ccnx.org. The VoCCN linphone code should be there by the end of the week.

Classifying Network Complexity

Michael H. Behringer (Cisco, France)

Network complexity is increasing: everybody knows that. But what is “network complexity”?

You need some complexity to achieve a certain level of robustness. How to deal with complexity? Divide and conquer (layering; object orientation: “classes” matter, not instantiations), shifting complexity (e.g., away from the human – make simple user interfaces – look at the iPhone), meta-languages, and structural approaches (reduce dependencies by design).

The “complexity cube” idea: a cube with three axes that represent the three areas of complexity: operator, physical network, network management.

Future work: quantitative metrics, impact of the rate of change, investigate human factors.

IP Version 10.0: A Strawman Design Beyond IPv6

Kenneth Carlberg (SAIC, USA); Saleem Bhatti (University of St Andrews, United Kingdom); Jon Crowcroft (University of Cambridge, United Kingdom)

Once upon a time: the Internet was unknown to the general public, best effort was the only game in town, and people used telnet. But now: there is a need for security, and a need for a Next Generation IP.

We were running out of address space. CIDR and NAT are near-term solutions. There is also the associated routing table size explosion. Solutions: new lookup algorithms to reduce the impact, and faster hardware. But multi-homing has renewed the problem...

Next Generation IP: 1) Simple IP (Steve's IP) – minimise the header, adding more extensibility (one or more Next Headers), a flow ID, and a larger flat address structure. 2) The P Internet Protocol (Paul's IP) – change addressing to a locator and identifier split, with a hierarchical and variable-length locator (implied source routing). Finally, 3) the grand compromise of '94: Simple IP Plus, i.e. Simple IP with the hierarchical addresses of Paul's IP: IPv6.

Critique of IPv6: not much of an architectural change. 1) Large 128-bit address (still conflates locator & identifier; providers still cling to NATs – they have no economic incentive to migrate). 2) Same-size diff-serv field. 3) Multiple next headers (encapsulations or MPLS). 4) End-to-end flow labels (the “market” uses islands to cut through routing (MPLS)). Note: a recent report shows IPv6 traffic is 1/100 of 1% of all IP traffic... Main questions: does “more” qualify as architectural change? Where are the “must have” features?

Four significant discussions on the “locator/identifier split”: 1) 1977: TCP and mobility; 2) 1992-93: Paul's IP work; 3) 1996: O'Dell's 8+8 proposal; 4) 2007: IAB report. Three efforts now: HIP, LISP and ILNP.

Multi-homing problem: provider-independent prefixes tend to be popular, but are non-aggregatable.

IPv10 design: retain the minimalism and extensibility of IPv6. Incorporate the identifier/locator split. Besides headers, also introduce tails (changing the state insertion model: temporary headers and tails...). In the header: header navigation and forwarding info. In the trailer: trailer navigation, end-to-end info, diff-serv.

Impact of tails: change the end-to-end model of constructing headers (facilitate temporary insertion of overhead info). Avoid inefficient encapsulation. Foster the need to go beyond current ASIC header lookup limitations.
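To visualise the header/trailer split, here is a rough Python sketch of what such a packet could look like. The field names and layout are my own guesses for illustration, not the actual IPv10 strawman format.

    # Guessed packet layout: hop-by-hop info up front, end-to-end info in the tail.
    from dataclasses import dataclass, field

    @dataclass
    class Header:                 # read by routers: forwarding information
        dst_locator: str          # hierarchical, possibly variable-length locator
        src_locator: str
        nav: list = field(default_factory=list)   # "header navigation"

    @dataclass
    class Trailer:                # only read at the ends: end-to-end information
        dst_identifier: str       # stable endpoint identifiers (locator/ID split)
        src_identifier: str
        diffserv: int = 0
        nav: list = field(default_factory=list)   # "trailer navigation"

    @dataclass
    class Packet:
        header: Header
        payload: bytes
        trailer: Trailer

    # Temporary state insertion: an ingress router pushes transit info into the
    # header and an egress router pops it again, with no re-encapsulation.
    def push_transit_info(pkt, info):
        pkt.header.nav.append(info)

    def pop_transit_info(pkt):
        return pkt.header.nav.pop()

    pkt = Packet(Header("isp-a/pop3/r7", "isp-b/pop1/r2"), b"hello",
                 Trailer("node-1234", "node-5678"))
    push_transit_info(pkt, "te-path=42")
    print(pop_transit_info(pkt))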

Should we be more radical in our design? Are there any must have features in IPv10?


ACM CoNEXT Student Workshop

During the afternoon we had the poster session, where I presented the poster “Relative Delay Estimator for Multipath Transport”. After that, a nice panel session offering advice to PhD students, which included a discussion on what is best: to publish in conferences or in journals? Contrary to most other fields, CS researchers tend to prefer conferences... However, there were some important points in favour of journals - namely the idea of creating a "scientific archive". Keshav, for instance, strongly defended journals, and invited everybody to publish their research in CCR... :) Also, TON wants to accept longer papers and reduce the cost of adding extra pages. We may see a move towards journal papers in the meantime. A nice idea seems to be to publish in a top conference, and then to publish a longer version as a journal paper. Let's do it! :)
