What's on IPTV's neighbourhood?

Thursday, 3 December 2009

CoNEXT, Day 3

Again, some cool and useful papers, and a nice panel at the end of the day moderated by Paul Mockapetris (the inventor of DNS). Other info: the next CoNEXT will take place in Philadelphia, from Nov. 30 to Dec. 3, 2010.

Content Availability in Swarming Systems: Models, Measurements and Bundling Implications

D. Menasche (University of Massachusetts at Amherst), A. Rocha (Universidade Federal do Rio de Janeiro), B. Li (Tsinghua University), D. Towsley, A. Venkataramani (University of Massachusetts at Amherst)

This paper received the best paper award.

BitTorrent: a huge success because of its self-scalability property. However, there is a problem: the availability of less popular content.

Swarm: set of peers interested in the same content. Publisher (seed): interested in dissemination. Peers (leechers): interested in download.

Their setup: monitors on around 100 PlanetLab nodes; 45 thousand swarms monitored for 8 months.

Finding 1: content unavailability is severe; 40% of swarms are unavailable 50% of the time. Swarms are more available during the first month after they are published.

Finding 2: bundling is common (a single swarm consisting of multiple files – songs in an album, episodes of a TV show). In music, around 75% of swarms are bundles.

Finding 3: bundled content is more available. Among all books, for instance, 62% of swarms had no seeds, but for collections of books the figure drops to 36%. “Friends” episodes: 52 swarms, 23 with at least one seed (21 of these bundles) and 29 with no seeds (only 7 of these bundles).

For the availability model, they ask the question: “How long is the content available without a publisher?” They use an M/G/∞ queueing model for this.

Other questions the model tries to answer: What is the fraction of time in which content is unavailable? What is the mean download time of peers?
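These questions lend themselves to a quick back-of-the-envelope simulation. Below is a minimal Monte Carlo sketch (my own illustration, not the paper's model): peers arrive as a Poisson process and hold the content for a random time, and the content counts as available whenever at least one of them is online.

```python
import random

def unavailability_fraction(arrival_rate, mean_stay, horizon=100_000.0, seed=0):
    """Fraction of time in [0, horizon] during which no peer holds the content.

    Peers arrive as a Poisson process with rate `arrival_rate` and stay online
    for an exponentially distributed time with mean `mean_stay` (any other
    holding-time distribution could be substituted, as in an M/G/inf queue).
    """
    rng = random.Random(seed)
    intervals = []
    t = 0.0
    while True:
        t += rng.expovariate(arrival_rate)       # time of the next peer arrival
        if t >= horizon:
            break
        stay = rng.expovariate(1.0 / mean_stay)  # how long this peer stays online
        intervals.append((t, min(t + stay, horizon)))

    covered, current_end = 0.0, 0.0              # merge the "available" intervals
    for start, end in sorted(intervals):
        start = max(start, current_end)
        if end > start:
            covered += end - start
            current_end = end
    return 1.0 - covered / horizon

# Example: a peer every 10 time units on average, each staying 5 on average.
print(unavailability_fraction(arrival_rate=0.1, mean_stay=5.0))
```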

Questions for the validation part: 1) can bundling reduce download time? 2) What is the impact of bundling if different files have different popularities?

Answers: 1) bundling reduces download time (idle waiting plus active download time), but only up to a point; the model captures this trend. 2) Bundling reduces download time for unpopular content at the expense of popular content.

FairTorrent: Bringing Fairness to Peer-to-Peer Systems

A. Sherman, J. Nieh, C. Stein (Columbia University)

Problem: 1) free riders and low-contributing peers consume bandwidth and cause slower downloads, and 2) high contributors lose out. Not fair.

Inherent flaw of existing schemes: all are based on rate allocation. This is inherently imprecise and performs poorly in realistic scenarios. What if we don’t use rate allocation?

Idea: the FairTorrent algorithm. 1) For leechers: each leecher maintains a deficit counter for each other leecher, and always sends data to the leecher with the lowest deficit. The effect is fast rate convergence. 2) For seeds: pick the next leecher using round robin, which evenly splits bandwidth among leechers.
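A minimal sketch of the leecher-side rule as described (class and method names are mine, not from the paper): keep a per-peer deficit of bytes sent minus bytes received, and always serve the peer with the lowest deficit.

```python
from collections import defaultdict

class FairTorrentLeecher:
    """Toy sketch of FairTorrent's leecher-side scheduling.

    deficit[p] = bytes sent to peer p minus bytes received from p.
    Whenever we can upload a block, we send it to the peer with the
    lowest deficit, i.e. the peer we currently "owe" the most.
    """
    def __init__(self):
        self.deficit = defaultdict(int)

    def on_block_received(self, peer, nbytes):
        self.deficit[peer] -= nbytes

    def pick_next_peer(self, interested_peers):
        return min(interested_peers, key=lambda p: self.deficit[p])

    def on_block_sent(self, peer, nbytes):
        self.deficit[peer] += nbytes

# Usage: call on_block_received after each download, pick_next_peer over the
# peers that still want data before each upload, then on_block_sent.
```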

Properties: 1) fast convergence of upload/download rates, 2) resilience, 3) fully distributed, 4) requires no estimates of peers’ intended rate allocations.

Results: Really fast convergence compared to other systems. Download times also better.

More info at http://www.cs.columbia.edu/~asherman/fairtorrent

Scaling All-Pairs Overlay Routing

D. Sontag, Y. Zhang (MIT), A. Phanishayee, D. Andersen (Carnegie Mellon University), D. Karger (MIT)

The goal: find the best routes in full-mesh overlay networks. Motivation: rapidly route around failures (resilient overlay networks) or find lower-latency paths (e.g., Skype).

Prior approaches to scaling routing: choose from a few alternate paths (by eliminating the high-latency ones, for instance). They show how to find optimal routes in a more scalable manner.

Scaling to large overlay networks: where is the bottleneck? 1) Each node probes all others, so 2) per-node communication scales linearly. Link-state routing then sends this info to all nodes: quadratic scaling.

They present an algorithm that finds all optimal routes with only n√n per-node communication. They prove in the paper that their algorithm is theoretically the best possible.

Their approach to one-hop routes: send link-state info to a subset of nodes only, not to all. These are the rendezvous nodes (RNs). Each RN then computes the best routes for the nodes it heard from, and sends these recommendations back to those same nodes. So, with fewer rendezvous servers, you have less communication.

How to assign rendezvous servers? 1) Any two nodes should have a rendezvous server in common. 2) They use quorum systems to provide this property (specifically, a grid quorum). Each node has 2√n rendezvous servers.
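A minimal sketch of the grid-quorum assignment (my own illustration, assuming n is a perfect square and nodes are laid out row-major on a √n × √n grid): a node's rendezvous set is its row plus its column, so any two sets intersect.

```python
import math

def rendezvous_set(node, n):
    """Rendezvous servers of `node` in a sqrt(n) x sqrt(n) grid quorum.

    Nodes 0..n-1 are placed row-major on the grid; a node's quorum is every
    node in its row plus every node in its column (about 2*sqrt(n) nodes),
    so any two quorums always share at least one node.
    """
    side = int(math.isqrt(n))
    assert side * side == n, "this sketch assumes n is a perfect square"
    row, col = divmod(node, side)
    row_members = {row * side + c for c in range(side)}
    col_members = {r * side + col for r in range(side)}
    return row_members | col_members

# Any two nodes have a rendezvous server in common:
n = 16
assert all(rendezvous_set(a, n) & rendezvous_set(b, n)
           for a in range(n) for b in range(n))
```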

Summary: round 1) size-n link-state tables are sent to 2√n nodes; round 2) size-2√n recommendation tables are sent to 2√n nodes.

Strategies to deal with rendezvous server failures: built-in redundancy (every pair of nodes has 2 rendezvous servers in common).

Multi-hop algorithm: repeat the one-hop algorithm using modified link costs.

They applied the algorithm to find optimal one-hop routes in a Resilient Overlay Network (RON). Key question: how do rendezvous failures affect bandwidth requirements? They deployed this on PlanetLab with 140 nodes for 2 hours. They observed link failures: on average, 50% of nodes experience over 8 concurrent link failures. Double rendezvous failures are infrequent.

StarClique: Providing Privacy Guarantees in Social Networks using Graph Augmentation

K. Puttaswamy, A. Sala, B. Zhao (University of California, Santa Barbara)

Social apps are growing. You share valuable info with friends (recommendations, social bookmarking). Example: a social web caching app. By sharing your web cache with friends, you improve performance, and it also works as a collaborative phishing filter.

Major concern of users of these apps: privacy. 1) Friends receive data, but if a friend’s computer is compromised, the info can be used for other purposes; 2) botnets are targeting social network credentials; 3) effective spamming via social engineering; 4) the rise of effective phishing attacks.

Naive defense: remove user identity from the shared data. But it is still possible to mount a social intersection attack: several attackers intersect their views of the social network topology. Properties of the attack: 1) it needs 2 or more attackers, 2) it is application independent, 3) it works within the application’s bounds.

They measured the attack’s effectiveness (on Facebook), and realised that more than 70% of the users can be exactly identified by other users... so the attack can be really effective.
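To make the attack concrete, here is a toy version (my own sketch, with hypothetical users): colluding attackers who received the same anonymised item simply intersect their friend lists.

```python
def candidate_senders(attacker_friend_lists):
    """Social intersection attack, in miniature.

    Each colluding attacker received the same anonymised item from one of
    its friends. Intersecting the attackers' friend lists leaves the set of
    users who could have sent it; a singleton means exact identification.
    """
    candidates = set(attacker_friend_lists[0])
    for friends in attacker_friend_lists[1:]:
        candidates &= set(friends)
    return candidates

# Toy example with hypothetical users:
attacker1_friends = {"alice", "bob", "carol"}
attacker2_friends = {"alice", "dave"}
print(candidate_senders([attacker1_friends, attacker2_friends]))  # {'alice'}
```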

Now, the solution to defend against this attack: 1) intuition: increase the number of common friends (more common friends lead to better privacy); 2) adapt the k-anonymity approach; 3) anonymise the social graph to achieve k-anonymity.

Cost of the solution: users receive additional data, and app servers process and send additional data.

Goals of their system model: minimise the number of latent edges added, and keep the latent edges close in social distance. They evaluated their system on 6 different traces from previous studies (Facebook, Flickr, etc.).

Lockr: Better Privacy for Social Networks

A. Tootoonchian (University of Toronto), S. Saroiu (Microsoft Research), Y. Ganjali (University of Toronto), A. Wolman (Microsoft Research)

The state of privacy on Online Social Networks (OSNs): 1) no control over what Facebook does with my social info; 2) users need to duplicate their social info; 3) users must register to see friends’ content; 4) privacy terms can be contradictory, confusing, and make some users very nervous. The overall sense of privacy is really not very good.

Lockr’s goal: today users have no control over their social info; Lockr wants to put users in control of their info. How? By using social attestations. You can say that a certain user is your “friend”, “colleague”, or “drinking buddy”. The social network is created by passing out attestations. You can store your content anywhere, but you set a social Access Control List when you upload it. Everything revolves around you. There is also an attestation verification protocol (used when you download info from a user).

Lockr works with any OSN. It also works with P2P systems, for instance.

Privacy benefits: 1) users form one social network using social attestation; 2) they continue sharing content via the normal OSN or P2P mechanisms.

What privacy problems are solved by Lockr?

1) “Users have no control over their social info”: with Lockr, users can choose where to store their attestations.

2) “Users must register to see friends’ content”: Lockr removes this; users just show their attestations (the only requirement is mapping an ID to a public key).

3) “Social deadlock”, or “who goes first?” (in P2P, each user wants to verify the other’s attestation first): in Lockr you also have a relationship key (all your friends hold your “friends” relationship key).

4) “Hard to revoke attestations”: in Lockr, attestations have expiration dates. A follow-up issue is that expired attestations could still have valid relationship keys, so: a) rotate relationship keys daily (until the attestation expires); but that means lots of keys, so b) they use one-way hash chains.

5) “OSN sites can now ‘sell’ attestations”: they use zero-knowledge proofs for attestation verification. You keep the signature of your attestation (never sharing it), but you can prove to anyone that you hold a valid signature without sending it.
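On point 4b, here is a minimal sketch of how a one-way hash chain can stand in for a list of daily keys (my own illustration; Lockr's exact chain construction and direction of derivation may differ): each day's key is the hash of the previous day's, so a single value lets you derive all later keys but none of the earlier ones.

```python
import hashlib

def daily_key(seed: bytes, day: int) -> bytes:
    """Relationship key for a given day from a one-way hash chain.

    key(0) = SHA-256(seed), key(d) = SHA-256(key(d-1)). Anyone holding
    key(d) can derive key(d+1), key(d+2), ... but not earlier keys, so a
    single value stands in for all the remaining days of an attestation.
    (Sketch only; Lockr's actual construction may differ.)
    """
    key = hashlib.sha256(seed).digest()
    for _ in range(day):
        key = hashlib.sha256(key).digest()
    return key

seed = b"friends-relationship-secret"   # hypothetical seed
assert daily_key(seed, 5) == hashlib.sha256(daily_key(seed, 4)).digest()
```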

Is Lockr a real system? Yep - they implemented Lockr Center as a Facebook app, and added Lockr support for BitTorrent and Flickr.

Code: http://www.lockr.org

ARES: An Anti-jamming REinforcement System for 802.11 Networks

K. Pelechrinis, I. Broustis, S. Krishnamurthy (University of California, Riverside), C. Gkantsidis (Microsoft Research, Cambridge)

Launching DoS attacks in WiFi networks is easy: jammers can disrupt communication. How to deal with jammers? Frequency hopping was a possibility, but it is weak (a WiOpt 2009 result: only 4 jammers can block the entire 5 GHz spectrum).

So, how to alleviate this without frequency hopping?

Main contributions: 1) fixed rates are often preferable in the presence of a jammer; 2) clear channel assessment (CCA) is important.

1) Interaction between random jamming and rate control. Jamming effects last beyond the jamming period when rate control is used: in the presence of an intermittent jammer, the rate control algorithm might be slow to converge to the optimal rate. Remedy: fixed rate assignments immediately increase throughput, so it is important to decide when to perform rate adaptation. This can be done analytically, and they show that fixed rates provide high throughput gains. However, perfect knowledge is not realistic, so they include a Markovian Rate Control (MRC) scheme that does not require knowledge of any of the parameters needed for the analytical solution. This can be tuned to be close to optimal.

2) Rate control removes transient jamming effects. What about constant jamming effects? Solution: power control. Power adaptation helps only when the transmitter is not in the jammer’s range and low transmission rates are used; increasing power helps when the transmission rate is low.

How to deal with high-power jammers (compared to a legitimate wireless card)? The transmitter needs to ignore jamming signals, and the receiver must decode the legitimate packet. Tuning the CCA threshold helps here. There are, however, side effects: the transmitter may unintentionally become a jammer itself, and the receiver can mistake legitimate signals for noise. So, they present a simple heuristic for setting the CCA threshold on the link.

By using these observations on rate and power control they built ARES. It works like this: if a jammer is detected, check whether CCA is tunable. If so, invoke power control; if that doesn’t solve the problem, do rate control. If CCA is not tunable, just do rate control.
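In code, that decision logic might look roughly like this (a hedged sketch; the function and parameter names are mine, not from ARES):

```python
def ares_react(jammer_detected, cca_is_tunable,
               apply_power_control, apply_rate_control, link_recovered):
    """Sketch of ARES's reaction logic as described in the talk.

    If a jammer is detected and CCA is tunable, try power control first;
    fall back to (or go straight to) rate control otherwise. The callables
    are placeholders for the real measurement and adaptation machinery.
    """
    if not jammer_detected:
        return
    if cca_is_tunable:
        apply_power_control()
        if link_recovered():
            return
    apply_rate_control()
```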

They implemented their system in an indoor testbed. With rate control alone it improves performance by up to 100%. With a mobile jammer the system can increase performance (using power control) by up to 150%. Note: adding more APs reduces the benefits.

ThunderDome: Discovering Upload Constraints Using Decentralized Bandwidth Tournaments

J. Douceur, J. Mickens, T. Moscibroda (Microsoft Research), D. Panigrahi (MIT)

Motivation: upload constraints limit performance at scale in online games, for instance. How to discover these constraints? Bandwidth estimation is non-trivial...

ThunderDome provides estimates that are fast, accurate and decentralised.

Many tools exist for one-way bandwidth discovery, e.g. packet-rate probing (Pathload, PTR). What is the problem with these? 1) Directional ambiguity; 2) what is the proper probing schedule? 3) how to deal with probing error?

Solution: 1) pair-wise bandwidth probes: measure the bandwidth in both directions (X→Y and Y→X) between pairs of nodes to determine upload speeds; 2) decentralised bandwidth tournaments; 3) probe aggregation.

Participant model assumptions: 1) peers gather and then stay for tens of minutes (e.g. games); 2) hosts have public IP addresses or cone NATs; 3) hosts do not lie. Network model assumptions: 1) peers are connected by a hub-and-spoke topology; 2) bandwidth constraints are asymmetric.

The pair-wise bandwidth probe: hosts X and Y transmit to each other. With a single exchange they can get the upload speed of the loser. You do these between pairs of nodes (the “tournament”), and in the end you have the upload speed of every host (except one).
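Here is a toy, error-free version of such a tournament (my own sketch; the real system has to handle probing error and coordination): each pairwise probe reveals the upload speed of the slower host, the "loser", and the winner advances.

```python
def bandwidth_tournament(uploads):
    """Toy single-elimination bandwidth tournament.

    `uploads` maps host -> true upload capacity (hidden from the algorithm
    in reality; here it just simulates the pairwise probes). Each probe
    reveals the smaller of the two upload speeds, i.e. the loser's, and
    winners advance until only one host remains unmeasured.
    """
    estimates = {}
    contenders = list(uploads)
    while len(contenders) > 1:
        next_round = []
        # Pair hosts up; an odd host out gets a bye into the next round.
        for i in range(0, len(contenders) - 1, 2):
            x, y = contenders[i], contenders[i + 1]
            loser, winner = sorted((x, y), key=lambda h: uploads[h])
            estimates[loser] = uploads[loser]   # the probe reveals this value
            next_round.append(winner)
        if len(contenders) % 2:
            next_round.append(contenders[-1])
        contenders = next_round
    return estimates, contenders[0]             # all speeds but the champion's

speeds, unknown = bandwidth_tournament({"a": 1.0, "b": 0.5, "c": 2.0, "d": 0.8})
print(speeds, unknown)   # {'b': 0.5, 'd': 0.8, 'a': 1.0} c
```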

But there may be measurement errors: a probe may misjudge the one-way bandwidth, confusing the uploads of the two hosts. This occurs when the separation between upload and download speed is too small (quite common on dial-up). To recover from this, losing nodes stay involved in the “tournament”: basically, more samples are added.

They also have a system to coordinate the “tournament”, basically building a tree. These coordinators can be regular peers, so it is mostly decentralised.

Where the Sidewalk Ends: Extending the Internet AS Graph Using Traceroutes From P2P Users

K. Chen, D. Choffnes, R. Potharaju, Y. Chen, F. Bustamante (Northwestern University), D. Pei (AT&T Labs Research), Y. Zhao (Northwestern University)

Currently the publicly available AS graph is incomplete. ISPs do not want to disclose their connectivity.

Existing approaches: passive (BGP routing tables/updates), active (traceroutes). Limitation: a small number of vantage points (VPs, i.e. ASes with monitors inside).

A promising platform should: 1) have more VPs, 2) allow active measurements, 3) scale with the Internet’s growth. So, P2P seems a natural fit.

Their approach: 1) increase the number of VPs using P2P users, 2) run active traceroutes from the edge of the Internet, 3) map IP paths to AS paths and extract AS links. They also offer many corrections for false AS links, bad traceroute results, etc.
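The IP-to-AS mapping step can be sketched as follows (my own illustration; `ip_to_asn` is a placeholder lookup built in practice from BGP data, and the paper's corrections for false links are far more involved):

```python
def as_links_from_traceroute(ip_path, ip_to_asn):
    """Turn a traceroute IP path into a set of AS-level links.

    Hops that don't resolve to an AS are skipped, consecutive identical
    ASes are collapsed, and each remaining adjacency becomes a candidate
    AS link (stored as an unordered pair).
    """
    as_path = []
    for hop in ip_path:
        asn = ip_to_asn.get(hop)
        if asn is None:
            continue                       # unresolved hop: skip it
        if not as_path or as_path[-1] != asn:
            as_path.append(asn)
    return {tuple(sorted(pair)) for pair in zip(as_path, as_path[1:])}

# Toy example with a made-up mapping:
mapping = {"10.0.0.1": 64500, "10.0.0.2": 64500,
           "10.0.1.1": 64501, "10.0.2.1": 64502}
print(as_links_from_traceroute(
    ["10.0.0.1", "10.0.0.2", "10.0.1.1", "10.0.2.1"], mapping))
# {(64500, 64501), (64501, 64502)}
```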

It seems a more complete AS map is now available online, which is cool:

http://aqualab.cs.northwestern.edu/projects/SidewalkEnds.html

Exploiting Dynamicity in Graph-based Traffic Analysis: Techniques and Applications

M. Iliofotou, M. Faloutsos (University of California, Riverside), M. Mitzenmacher (Harvard University)

Focus on the network-wide behaviour of Internet apps (IP-to-IP communication patterns), and build Traffic Dispersion Graphs (TDGs). The goal of these graphs is behavioural profiling (assigning apps to different groups, and detecting unusual behaviours).

Previous work: static graphs to capture network-wide behaviour. These can distinguish client-server from P2P apps. But how to distinguish different flavours of these two?

Here they use dynamic graph characteristics to extract more app properties. The graphs are dynamic because nodes and edges change with time. Questions that arise: how to express the dynamicity of traffic graphs? What metrics to use?

How to express dynamicity: 1) the graph is represented as a sequence of snapshots taken over time; 2) use metrics that quantify differences between graphs.

Metrics: static, like average degree, degree distribution, etc., but also binary ones that allow checking how nodes or edges change with time.

To quantify dynamic behaviour, they create a new metric, detailed edge similarity, which captures both persistent and non-persistent edges, for instance.
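As a rough illustration of what an edge-similarity metric between snapshots could look like (a Jaccard-style sketch of my own; the paper's "detailed edge similarity" is likely defined differently):

```python
def edge_similarity(snapshot_a, snapshot_b):
    """Jaccard-style similarity between the edge sets of two graph snapshots.

    Each snapshot is a set of undirected edges (frozensets of two IPs).
    Values near 1 mean edges persist across snapshots; values near 0 mean
    the graph is highly dynamic between the two observation windows.
    """
    if not snapshot_a and not snapshot_b:
        return 1.0
    return len(snapshot_a & snapshot_b) / len(snapshot_a | snapshot_b)

g1 = {frozenset(p) for p in [("1.1.1.1", "2.2.2.2"), ("1.1.1.1", "3.3.3.3")]}
g2 = {frozenset(p) for p in [("1.1.1.1", "2.2.2.2"), ("4.4.4.4", "3.3.3.3")]}
print(edge_similarity(g1, g2))   # 1 shared edge out of 3 -> ~0.33
```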

Application: detecting polymorphic blending. Action: an app mimics the traffic of another app to avoid detection. Effect: the traffic of the polluting app is added to the target app’s traffic, which changes the dynamics of the graphs.

They looked at many metrics (both static and dynamic) to see which detect polymorphic blending better. For small amounts of blending, the dynamic metrics are more sensitive and raise more alarms. However, the best results come from mixing static and dynamic metrics.

RFDump: An Architecture for Monitoring the Wireless Ether

K. Lakshminarayanan, S. Sapra, S. Seshan, P. Steenkiste (Carnegie Mellon University)

Popularity causes crowding. Look at the WiFi unlicensed spectrum: what if we turn on the microwave oven? How do we troubleshoot such problems?

Sniffers don’t work well at the physical layer due to interference from all these sources. How could we sniff such a wireless network? One possibility would be to add lots of interface cards (WiFi, Bluetooth, etc.).

What about using software defined radio (SDR)? SDR hardware exposes physical layer info and supports programmable analysis modules.

Challenges of SDR: 1) how to process 256 Mbps of info in real time? 2) How to differentiate between samples? The requirements: 1) real time, 2) multi-protocol support, 3) extensibility.

Another naive solution: demodulate all signals received. The good thing is that it is protocol-extensible. Problem: the real-time capability is missing; demodulation is costly, and every demodulator has to process everything. How to make it more efficient?

A better solution: use an energy filter after the SDR. This filter decides which signals to pass on (removing noise, for instance), so demodulators do less work. But this is only OK when medium utilisation is low; if it is high, it cannot keep up in real time.

Their solution: RFDump. After the energy filter, insert a fast detector that determines which demodulator each signal should be sent to. This solves everything: it is protocol-extensible and real-time. Detectors can be fast because they can tolerate false positives and some delay.

The tricky thing is how the detector works. How to detect which protocol it is? 1) Check timing: 802.11 uses SIFS and DIFS, Bluetooth uses TDD slots; 2) check how the phase changes: 802.11b uses DBPSK, Bluetooth uses GMSK; 3) also check frequency: look at the channel width, 802.11b uses 22 MHz, Bluetooth 1 MHz.
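A toy dispatcher in the spirit of those detectors might look like this (thresholds and feature names are invented for illustration; the real detectors work on raw SDR samples):

```python
def classify_signal(channel_width_mhz, modulation_hint):
    """Toy protocol detector in the spirit of RFDump's fast detectors.

    Uses coarse, cheap features (here just channel width and a modulation
    hint) to decide which demodulator should see the samples. False
    positives are acceptable: the demodulator does the final check.
    """
    if channel_width_mhz > 10 or modulation_hint == "DBPSK":
        return "802.11b demodulator"
    if channel_width_mhz <= 2 or modulation_hint == "GMSK":
        return "Bluetooth demodulator"
    return "unknown: send to all demodulators"

print(classify_signal(22, "DBPSK"))   # 802.11b demodulator
print(classify_signal(1, "GMSK"))     # Bluetooth demodulator
```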

They implement this on GNU Radio and the USRP SDR platform, and achieve pretty nice results.

Panel: What will be the origin of the next network revolution?

Moderator: Paul Mockapetris (inventor of DNS, ISI)

Members: Bruce Davie (Cisco), Van Jacobson (one of the main contributors to TCP/IP, PARC), Dirk Trossen (BT Research), Kenjiro Cho (IIJ, Japan), Max Ott (NICTA)

More info here: http://conferences.sigcomm.org/co-next/2009/panel.php

In general, everyone seemed to agree that research needs more money, that betting on risky projects is important (not only incremental research), that more cooperation from industry is needed (it has been decreasing), and that more freedom is needed (don’t expect great results in six months).

An interesting point by Keshav: it is easy to get money from industry, but then the company doesn’t have anyone to follow the research project. “No one answers my calls!” The company seems to give the money but then doesn’t care about the results. Bruce Davie agreed, saying that it is cheaper for Cisco to give that money “away” than to find a person inside the company to follow the research work.
