Invited talks
Roger Wattenhofer, Information Technology and Electrical Engineering Department, ETH Zurich
There are two fundamental philosophies for building fault-tolerant distributed systems. One is mentioned in every other newspaper: Satoshi Nakamoto's blockchain is a fault-tolerant data structure for organizing transactions. Its main advantages are its simplicity and scalability; its main disadvantage is that a blockchain is only eventually consistent. On the other hand, we have the various consensus and agreement protocols designed by the distributed systems community, e.g., Paxos. These protocols usually provide strong consistency, but have scalability issues. Can we marry the two worlds in a natural way, inheriting the best features of both?
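To illustrate the "data structure" view of a blockchain mentioned in the abstract, here is a minimal, hypothetical sketch (not Nakamoto's actual protocol, and omitting proof-of-work and networking entirely): each block commits to its predecessor by hash, so any tampering with history is detectable.

```python
import hashlib
import json

def block_hash(block):
    # Serialize deterministically, then hash the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    # Each block stores the hash of its predecessor, so altering any
    # earlier block invalidates every block that follows it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txs": transactions})

def verify(chain):
    # Recompute every link; a single altered block breaks the chain.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, ["alice->bob:5"])
append_block(chain, ["bob->carol:2"])
assert verify(chain)
```

The structure makes history tamper-evident, but it says nothing about which of two competing chain tips is authoritative; resolving that is where eventual consistency (longest-chain rules) or classical agreement protocols such as Paxos come in.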
Roger Wattenhofer is a full professor at the Information
Technology and Electrical Engineering Department, ETH Zurich, Switzerland.
He received his doctorate in Computer Science in 1998 from ETH Zurich. From
1999 to 2001 he was in the USA, first at Brown University in Providence, RI,
then at Microsoft Research in Redmond, WA. He then returned to ETH Zurich,
originally as an assistant professor at the Computer Science Department.
Roger Wattenhofer's research interests span a variety of
algorithmic and systems aspects of computer science and information
technology, e.g., distributed systems, positioning systems, wireless
networks, mobile systems, and social networks. He publishes in different
communities: distributed computing (e.g., PODC, SPAA, DISC), networking
(e.g., SIGCOMM, MobiCom, SenSys), and theory (e.g., STOC, FOCS, SODA,
ICALP). He recently
published the book "Distributed Ledger Technology: The Science of the
Blockchain".
Luís Rodrigues, INESC-ID, IST, U. Lisboa
The problem of ensuring consistency in applications that manage replicated data is one of the main challenges of distributed computing. Most consistency criteria require updates to be applied and made visible respecting causality. Techniques to keep track of causal dependencies, and to subsequently ensure that updates are delivered in causal order, have been widely studied and typically offer the following tradeoff: either resort to small amounts of metadata, which permits high message throughput at the cost of increased latency, or resort to large amounts of metadata, which permits lower latencies while sacrificing throughput. This talk reports on research aiming at breaking this tradeoff by providing simultaneously high throughput and low latency, even in the face of partial replication.
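One classic way to track the causal dependencies the abstract refers to is a vector clock, one entry per replica; this is a generic textbook sketch of the causal-delivery check, not the speaker's technique, and all names here are illustrative.

```python
N = 3  # number of replicas (illustrative)

def can_deliver(msg_clock, sender, local_clock):
    # A message is causally ready at a replica when (a) it is the next
    # event expected from its sender, and (b) the replica has already
    # delivered everything the sender had seen when it sent the message.
    if msg_clock[sender] != local_clock[sender] + 1:
        return False
    return all(msg_clock[i] <= local_clock[i]
               for i in range(N) if i != sender)

# Replica 0 has delivered one update from replica 1.
local = [0, 1, 0]
# An update from replica 2 that depends on replica 1's first update
# is deliverable here:
ready = can_deliver([0, 1, 1], sender=2, local_clock=local)
# The same style of update arriving at a replica that has seen nothing
# from replica 1 must be buffered until its dependency arrives:
early = can_deliver([0, 1, 1], sender=2, local_clock=[0, 0, 0])
```

The vector itself is the "metadata" in the tradeoff: full vectors (one entry per replica, or per partition under partial replication) give precise dependency information and thus low delivery latency, while compressing them to a single scalar or a small digest shrinks messages but forces conservative delaying of deliveries.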
Luís Rodrigues is a Professor at the Departamento de Engenharia Informática,
Instituto Superior Técnico, Universidade de Lisboa. He is a member of the
Distributed Systems Group at INESC-ID Lisboa, where he also serves as head
of the board of directors.
His current interests include fault-tolerant
distributed systems, concurrency, replicated data management, cloud
computing, dynamic networks, information dissemination, and autonomic
computing. He has more than 200 scientific publications in these areas.
He is co-author of two books on distributed computing.