What's on IPTV's neighbourhood?

Wednesday 19 August 2009

SIGCOMM Day 2

Keynote Speech

Great keynote speech by the winner of the SIGCOMM award (well deserved!): Jon Crowcroft. Here are the slides:

http://conferences.sigcomm.org/sigcomm/2009/pecha-kucha-dozen.pdf

Session 1: Wireless Networking 1 (Chair: Brad Karp, University College London)

Cross-Layer Wireless Bit Rate Adaptation

Mythili Vutukuru (MIT), Hari Balakrishnan (MIT), Kyle Jamieson (UCL)

· Wireless channels are time-varying, due to large-scale attenuation, small-scale fading and interference.

· So we need online bit rate adaptation: varying modulation and coding.

· Current approaches are frame-based and SNR-based algorithms.

· These have problems: they are slow and need lookup tables. So the authors propose SoftRate: it uses per-bit confidences, needs no lookup tables, and yields an interference-free BER estimate (a toy rate-selection sketch follows after this list).

· The SoftPHY design is more general than earlier work.

· It adapts to the channel accurately and quickly, is robust to collision losses, and gives 2x gains over existing protocols.

· They propose using a soft-output decoder (instead of the normal decoder) in the receiver, together with a new rate-adaptation protocol, SoftRate.

· They built a GNU Radio implementation with USRP. The physical-layer results came from real traces, and ns-3 was used for the TCP experiments. They used a channel simulator for some scenarios (like travelling on a train).

· Good results predicting the BER of the channel.

· The comparison was made with other approaches: static-best (the best rate for each packet), SNR-based and frame-based algorithms.

· Compared to the optimum (static-best): SoftRate was within 10% of optimal.

· Compared to frame-based: up to 2x over the best frame-based algorithm (these adapt very slowly).

· Compared to SNR-based: 4x over untrained SNR-based algorithms.
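
My own toy sketch of the rate-selection idea (not the authors' code): pick the highest bit rate whose predicted frame delivery probability, derived from the BER estimate that SoftPHY's per-bit confidences provide, is still acceptable. The per-rate BER scaling and the 90% threshold below are made-up illustrations.

```python
# Toy SoftRate-style rate selection. Assumes the PHY exposes per-bit
# confidences ("SoftPHY hints") from which an interference-free BER
# estimate has already been computed; everything else is illustrative.

RATES = [6, 9, 12, 18, 24, 36, 48, 54]  # 802.11a/g-style rates in Mbit/s

def frame_delivery_prob(ber, frame_bits=1500 * 8):
    """Probability that a frame survives at the given BER,
    assuming independent bit errors (a simplifying assumption)."""
    return (1.0 - ber) ** frame_bits

def pick_rate(ber_estimate, min_delivery=0.9):
    """Choose the highest rate whose predicted delivery probability is
    still acceptable. The BER penalty for denser modulations is made up."""
    best = RATES[0]
    for i, rate in enumerate(RATES):
        ber_at_rate = min(1.0, ber_estimate * (1 + i))  # hypothetical scaling
        if frame_delivery_prob(ber_at_rate) >= min_delivery:
            best = rate
    return best

print(pick_rate(1e-6))  # clean channel -> picks the highest rate (54)
print(pick_rate(2e-6))  # noisier channel -> backs off (18)
```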

SMACK - A SMart ACKnowledgment Scheme for Broadcast Messages in Wireless Networks

Aveek Dutta (University of Colorado at Boulder), Dola Saha (University of Colorado at Boulder), Dirk Grunwald (University of Colorado at Boulder), Douglas Sicker (University of Colorado at Boulder)

· Question: Can we reduce the ACK time for broadcast/multicast scenarios?

· Instead of each user answering in its own time slot, multiple users respond at the same time, using OFDM, to reduce the ACK time.

· The objective is to speed up group communication, like route discovery, neighbour info, etc.

· Nodes are assigned unique subcarriers, and they send a tone on them to say “yes”.

· No packet transmission + concurrent response = faster ACK

· They have implemented the system.

· In summary, the main idea: PHY-layer signalling can be used to build new protocols for wireless networks.
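
To make the tone idea concrete, here is a toy NumPy decoding sketch (my interpretation, not the authors' implementation): every node gets one OFDM subcarrier, responders put a tone on theirs, and the sender reads a bitmap of responders off an FFT of the received samples. The sizes, noise level and threshold are invented.

```python
import numpy as np

N_SUBCARRIERS = 64
FFT_LEN = 64

def make_ack_signal(responding_nodes):
    """Each responding node contributes a tone on its own subcarrier."""
    spectrum = np.zeros(FFT_LEN, dtype=complex)
    for node in responding_nodes:
        spectrum[node] = 1.0
    return np.fft.ifft(spectrum) * FFT_LEN

def decode_acks(samples, threshold=0.5):
    """Return the set of nodes whose subcarrier carries significant energy."""
    energy = np.abs(np.fft.fft(samples)) / FFT_LEN
    return {k for k in range(N_SUBCARRIERS) if energy[k] > threshold}

responders = {3, 17, 42}
rx = make_ack_signal(responders) + 0.05 * np.random.randn(FFT_LEN)
print(decode_acks(rx))  # expect roughly {3, 17, 42}
```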

White Space Networking with Wi-Fi like Connectivity – Best paper award

Paramvir Bahl (Microsoft Research), Ranveer Chandra (Microsoft Research), Thomas Moscibroda (Microsoft Research), Rohan Murty (Harvard University), Matt Welsh (Harvard University)

· Main objective: How to build a wireless network using the white spaces?

· Spectrum allocation: there is more spectrum for broadcast TV than for WiFi.

· Moving from analog TV to digital TV.

· White spaces: unoccupied TV channels. Let's use them!

· We must not interfere with the TV stations and wireless microphones that use that part of the spectrum: so we can use a channel only if no incumbent is using it.

· So we get more spectrum (3x that of 802.11g) and longer range (at least 3 to 4x WiFi).

· Goal: deploy infrastructure wireless networks that give good throughput without interfering with incumbents (TV and microphones).

· Problems: fragmentation of spectrum (so we have variable channel widths), location impacts spectrum availability (spectrum exhibits spatial variation), and there is also temporal variation (incumbents appear/disappear over time).

· They’ve built the WhiteFi system, evaluated by deploying prototypes and by simulation.

· How do new clients know which channels to use (discovery time)? They infer it by analysing for how long the amplitude of the received signal stays raised. They achieve a 2x reduction in discovery time for a 30 MHz channel width.

· Spectrum assignment algorithm: they implement MCHAM, a multi-channel airtime metric. It considers not only whether a channel has room, but also how much (a rough scoring sketch follows below).
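
As a rough illustration of the idea (not the actual WhiteFi algorithm), one can score each candidate contiguous span of free TV channels by how much airtime it offers, width times idleness, and skip spans containing incumbents:

```python
# Rough sketch of airtime-style channel selection for white spaces.
# The scoring and channel data below are invented for illustration.

def score_span(channels, free, idle_fraction):
    """channels: candidate contiguous TV channels; free/idle_fraction:
    per-channel incumbent occupancy and measured idle airtime."""
    if not all(free[c] for c in channels):
        return 0.0  # never use a span with a TV/mic incumbent
    width_mhz = 6 * len(channels)  # US TV channels are 6 MHz wide
    avg_idle = sum(idle_fraction[c] for c in channels) / len(channels)
    return width_mhz * avg_idle  # more width and more idle airtime is better

def pick_span(free, idle_fraction, max_width=3):
    chans = sorted(free)
    best, best_score = None, 0.0
    for w in range(1, max_width + 1):
        for i in range(len(chans) - w + 1):
            span = chans[i:i + w]
            if span[-1] - span[0] != w - 1:
                continue  # not contiguous
            s = score_span(span, free, idle_fraction)
            if s > best_score:
                best, best_score = span, s
    return best, best_score

free = {21: True, 22: True, 23: False, 24: True, 25: True}
idle = {21: 0.9, 22: 0.8, 23: 0.1, 24: 0.6, 25: 0.7}
print(pick_span(free, idle))  # expect the 21-22 pair
```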

Session 2: Datacenter Network Design (Chair: Stefan Saroiu, Microsoft Research)

PortLand: A Scalable Fault-Tolerant Layer 2 Data Center Network Fabric

Radhika Niranjan Mysore (University of California San Diego), Andreas Pamboris (University of California San Diego), Nathan Farrington (University of California San Diego), Nelson Huang (University of California San Diego), Pardis Miri (University of California San Diego), Sivasankar Radhakrishnan (University of California San Diego), Vikram Subramanya (University of California San Diego), Amin Vahdat (University of California San Diego)

· PortLand is a single logical layer-2 data centre network fabric. It separates host identity (IP) from host location (a “PMAC”; see the sketch after this list).

· Data centres are growing in scale

· Goals for data centre network fabrics: plug and play, scalability, small switch state, seamless VM migration

· Layer 2 data centre fabrics. Advantages: plug and play, and seamless VM migration.

· Layer 3 data centre fabrics. Advantages: scalability, small switch state.

· With flat addresses you need about 100 MB of on-chip memory (150 times what we put in a chip today).

· Other network fabrics: SEATTLE (SIGCOMM 2008) – problems: large switch tables and a broadcast-based routing protocol. Also VL2 (SIGCOMM 2009).

· PortLand: plug and play + small switch state.

· Main assumption: data centre networks are hierarchically structured; they are multi-level, multi-rooted trees.

· They impose a hierarchy (the PMAC addressing) on this multi-rooted tree.
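
A small sketch of the PMAC idea. The field split (pod:16, position:8, port:8, vmid:16 bits) follows my reading of the paper; the point is that the 48-bit pseudo-MAC encodes where the host sits in the tree, while its IP and real MAC stay unchanged (edge switches rewrite between the two).

```python
# Sketch of encoding/decoding a PortLand-style PMAC. Field widths are
# as I noted them from the talk; treat them as an assumption.

def make_pmac(pod, position, port, vmid):
    value = (pod << 32) | (position << 24) | (port << 16) | vmid
    return ":".join(f"{(value >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

def parse_pmac(pmac):
    value = int(pmac.replace(":", ""), 16)
    return {
        "pod": (value >> 32) & 0xFFFF,
        "position": (value >> 24) & 0xFF,
        "port": (value >> 16) & 0xFF,
        "vmid": value & 0xFFFF,
    }

pmac = make_pmac(pod=5, position=2, port=1, vmid=7)
print(pmac)             # 00:05:02:01:00:07
print(parse_pmac(pmac))
```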

VL2: A Scalable and Flexible Data Center Network

Albert Greenberg (Microsoft Research), Navendu Jain (Microsoft Research), Srikanth Kandula (Microsoft Research), Changhoon Kim (Princeton), Parantap Lahiri (Microsoft Research), David A. Maltz (Microsoft Research), Parveen Patel (Microsoft Research), Sudipta Sengupta (Microsoft Research)

· Cloud service data centres need to be agile (assign any server to any service) and must scale out.

· VL2: the first DC network that enables agility in a scale-out fashion.

· They analysed a large cluster and realised that traffic patterns are highly volatile and unpredictable – so any optimisation would have to be done frequently and rapidly.

· We need a huge L2 switch, or an abstraction of one

· VL2 achieves agility at scale via 1) L2 semantics, 2) uniform high capacity between servers, and 3) performance isolation between services.

· Lessons: 1) randomisation can tame volatility (see the sketch below), 2) add functionality where you have control, 3) there’s no need to wait.
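
A caricature of lesson 1 in code (illustrative, not the paper's exact mechanism): spread each flow over a randomly chosen intermediate switch, keeping the choice stable per flow by hashing the 5-tuple so packets stay in order.

```python
import hashlib

# Hypothetical set of intermediate (core) switches; names are made up.
INTERMEDIATE_SWITCHES = [f"int-{i}" for i in range(8)]

def pick_intermediate(src, dst, sport, dport, proto="tcp"):
    """Hash the 5-tuple so every packet of a flow takes the same random
    intermediate, while different flows spread across all intermediates."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha1(key).digest()[:4], "big")
    return INTERMEDIATE_SWITCHES[digest % len(INTERMEDIATE_SWITCHES)]

print(pick_intermediate("10.0.1.5", "10.0.9.7", 51515, 80))
print(pick_intermediate("10.0.1.5", "10.0.9.7", 51516, 80))  # likely different
```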

BCube: A High Performance, Server-centric Network Architecture for Modular Data Centers

Chuanxiong Guo (Microsoft Research Asia), Guohan Lu (Microsoft Research Asia), Dan Li (Microsoft Research Asia), Haitao Wu (Microsoft Research Asia), Xuan Zhang (Tsinghua University), Yunfeng Shi (Peking University), Chen Tian (Huazhong Universtiy of Science and Technology), Yongguang Zhang (Microsoft Research Asia), Songwu Lu (UCLA)

· A novel network architecture for container-based, modular data centres.

· BCube design goals: high network capacity for various traffic patterns (one-to-one unicast, one-to-all and one-to-several groupcast, and all-to-all data shuffling); use only low-end commodity switches; graceful performance degradation.

· BCube is a server-centric network (see the addressing sketch after this list).

· They compare their system with Tree, Fat-Tree, and DCell+, achieving higher performance.
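
The addressing sketch mentioned above, as I noted it: a BCube_k built from n-port switches has n^(k+1) servers, each labelled by a (k+1)-digit base-n address and having k+1 ports; servers whose addresses differ only in digit i connect through a level-i switch, and the servers themselves relay traffic. A minimal sketch under those assumptions:

```python
from itertools import product

def servers(n, k):
    """All server addresses of a BCube_k built from n-port switches."""
    return list(product(range(n), repeat=k + 1))

def level_neighbours(addr, level, n):
    """Servers reachable from addr through its level-`level` switch:
    same address except for the digit at position `level`."""
    return [addr[:level] + (d,) + addr[level + 1:]
            for d in range(n) if d != addr[level]]

n, k = 4, 1                      # BCube_1 with 4-port switches: 16 servers
print(len(servers(n, k)))        # 16
print(level_neighbours((2, 3), level=0, n=n))  # differ in digit 0
print(level_neighbours((2, 3), level=1, n=n))  # differ in digit 1
```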
