Paving digital highways

9 April 2025
As global data generation grows exponentially, the need for efficient computer processing methods has never been greater. Bernhard Haeupler has returned to Europe from the United States to carry out research on distributed and parallel computing – with unmatched freedom to explore ideas in his own way.

By Bryony Baker


You can think of computer networks like roads – in the same way that highways and streets connect cities and allow traffic to flow, networks connect computers and enable data to travel between them.

As networks have grown, connecting billions of devices across greater distances or tens of thousands of processors across a data centre, efficient distributed computation has become crucial. This is where network algorithms come in: they ensure that computers can work together across vast and complex systems like the internet.

‘Unlike traditional algorithms that run on a single machine, distributed algorithms allow multiple computers to work together, efficiently processing large volumes of data. Optimising these roads ensures that information moves quickly and smoothly, just like well-planned transport routes,’ explains Haeupler.


How algorithms behave with big data


Theoretical algorithmic research measures efficiency by analysing how an algorithm performs, or ‘scales’, as the size of the problem it is solving grows. To evaluate an algorithm’s performance, theoretical computer scientists examine how many computational steps it requires as the problem size increases. This concept, known as asymptotic complexity, helps computer scientists understand how an algorithm will behave when faced with large datasets.
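
To make the idea of scaling concrete, here is a small sketch invented for this article (the functions and step counts are hypothetical, not taken from Haeupler’s work): it simply counts the steps two imaginary algorithms would take as the problem size grows.

```python
# Illustrative only: the functions and numbers below are invented for this
# article, not taken from the project - they just count abstract "steps".

def linear_scan_steps(n):
    """A hypothetical algorithm that touches each item once: about n steps."""
    return n

def pairwise_compare_steps(n):
    """A hypothetical algorithm that compares every pair: about n*(n-1)/2 steps."""
    return n * (n - 1) // 2

for n in (10, 1_000, 1_000_000):
    print(f"n = {n:>9,}: linear = {linear_scan_steps(n):>13,}, "
          f"pairwise = {pairwise_compare_steps(n):>18,}")

# As n grows, the pairwise algorithm's step count explodes while the linear
# one grows in proportion to the input - exactly the behaviour that
# asymptotic complexity captures.
```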

Some instances are easy for computers to solve quickly, while others take a long time. To make sure an algorithm works well even in the toughest situations, computer scientists use a method called worst-case analysis, which examines how it performs on the most difficult instances. This is the only way to guarantee that an algorithm always performs well - and not just when one is lucky, points out Haeupler.
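
A minimal sketch of why the lucky and worst cases differ, again with an example invented for this article: a simple linear search finishes in one step on a lucky input but must examine every item on its hardest input, and worst-case analysis reports the latter figure.

```python
# Minimal sketch, invented for this article: counting steps of a linear search
# on an easy (lucky) input versus its worst-case input.

def linear_search(items, target):
    """Return (index of target or -1, number of items examined)."""
    for steps, item in enumerate(items, start=1):
        if item == target:
            return steps - 1, steps
    return -1, len(items)

data = list(range(1_000))
print(linear_search(data, 0))    # lucky case: found at once, 1 step
print(linear_search(data, 999))  # worst case: all 1 000 items examined

# Worst-case analysis guarantees the second figure, so performance holds
# even when one is not lucky.
```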

However, in distributed computing, even the best algorithms can sometimes be slow - not because they fail in practice, but because rare, artificially constructed networks can slow down any algorithm, even one that runs smoothly most of the time. Networks with an excessive number of edge crossings and highly unnatural bottleneck structures will drag down even the best-designed algorithm.

 

The quest for universally optimal algorithms


The more fine-grained notion of universal optimality was suggested back in 1998 to address this conundrum: develop algorithms that adapt to the specific network they are running on. Instead of merely performing as well as possible in the worst-case scenario, a universally optimal algorithm would be as fast as the best possible algorithm for any given network. If the network is well-structured, the algorithm should run significantly faster; if the network is complex, the algorithm will run slowly - because it has no choice - but it would still be as fast as that network's constraints allow.
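
One informal way to write this down - a sketch of the general idea rather than the project’s formal definition - is that on every possible network topology the algorithm’s running time should be close to the best achievable on that topology:

```latex
% Informal sketch: A is universally optimal if, for every network topology G,
% its worst-case running time on G is within a small (e.g. polylogarithmic)
% factor of that of the best algorithm tailored specifically to G.
\[
  \forall G:\qquad
  T_{A}(G) \;\le\; \tilde{O}\!\Bigl( \min_{A'} \, T_{A'}(G) \Bigr),
  \qquad\text{where } T_{A}(G) = \max_{\text{inputs } x \text{ on } G} \mathrm{time}_{A}(G, x).
\]
```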

However, despite this notion being proposed over 25 years ago, no progress had been made in designing universally optimal algorithms, leaving it uncertain whether such strong guarantees were even achievable. Haeupler’s research directly tackles this universal optimality question for a wide array of important and classical problems, such as computing optimal flows or shortest paths. The goal of his ERC-funded project, DistOpt-BydWorstCase, is to ensure that algorithms perform well in real-world situations without being held back by rare, unlikely worst cases - operating as efficiently as possible under any, and indeed all, conditions.

The big question this notion of universal optimality addresses is how to design an algorithm that provably adapts to and works well on real-world networks - despite the lack of a clear, formal definition of what a “real-world” network actually is. Developing accurate models or mathematical definitions for the natural networks of interest remains a major challenge in the field. Without such definitions, it is difficult to design algorithms that offer provable performance guarantees. Universally optimal algorithms sidestep this challenge by achieving strong performance without requiring a predefined network structure.


Real world applications


All this may seem far removed from everyday life, but it has a significant impact on day-to-day experiences.

Take something as simple as using Google Maps - when you request directions, the system runs complex shortest-path computations across massive networks in real time. Similarly, every phone call, video stream, or online transaction relies on algorithms that ensure smooth and responsive experiences. If video calls had a two-second delay, live conversations wouldn’t be possible.
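
As a toy illustration of the kind of computation involved - a made-up road graph and Dijkstra’s classic shortest-path algorithm, not Google’s actual system or the algorithms developed in the project:

```python
# Toy sketch of a shortest-path computation of the kind route planners run,
# on a made-up road graph; illustrative only.
import heapq

def dijkstra(graph, source):
    """Return shortest travel times from source to every reachable node."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (dist[v], v))
    return dist

# Edges are (neighbour, minutes of travel time) - all values are made up.
roads = {
    "home":      [("junction", 5), ("ring_road", 9)],
    "junction":  [("home", 5), ("centre", 7)],
    "ring_road": [("home", 9), ("centre", 4)],
    "centre":    [("junction", 7), ("ring_road", 4)],
}
print(dijkstra(roads, "home"))
# {'home': 0, 'junction': 5, 'ring_road': 9, 'centre': 12}
```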

 
And faster algorithms don’t just mean quicker results - they also translate into lower energy consumption and a reduced need for computing infrastructure. If a company can make its computations 10 % faster, that translates into 10 % less energy used and 10 % fewer processors or data centres that need to be built and maintained to handle the same workload. At the scale of global tech giants, which spend tens of billions of dollars on computing infrastructure and billions on electricity, even small efficiency gains can lead to massive savings.


Although the transition from fundamental research on algorithms to practical applications takes time - sometimes many years - the influence is undeniable. As Haeupler explains, ‘sometimes, an algorithm problem just feels important - like an aspect of the world that should be understood better, even if I don’t yet know where it will lead.’

 

Returning to Europe


An ERC grant has given Haeupler the opportunity to return from Carnegie Mellon University in Pittsburgh, United States, to teach and lead a research group at ETH Zurich, as well as to accept an exciting opportunity to spend half of his time helping to build and grow a world-class computer science research institute in Sofia, Bulgaria.

When approached about this project, he was immediately drawn to the idea of encouraging mathematical talent to pursue higher studies and research. The country has a strong tradition of supporting gifted students through specialised high schools, summer programmes, and participation in the International Math Olympiad. However, it currently offers limited opportunities for pursuing world-class research or PhD-level graduate education.

To address this, the Bulgarian government is investing over €85 million in the Institute for Computer Science, Artificial Intelligence, and Technology (INSAIT) to create a world-class research environment that draws talent from around the world and gives national students and researchers a reason to stay.


The power of a restless mind


Building a world-class research institute from scratch requires the ability to embrace a certain level of chaos, explains Haeupler. And it is the kind of environment he thrives in, because his research journey has been shaped by something he did not fully understand at first - having ADHD. He now realises that his curiosity, ability to hyperfocus and unconventional way of thinking are all tied to his brain working differently.

Often, an algorithm is so fundamentally intriguing that his brain simply refuses to let it go, and the problem will run in the back of his mind for weeks or months. One of the biggest lessons he shares about neurodiversity is that, for him, success is not just about pushing through the challenges of living in a society and world designed for neurotypical minds. It is about focusing unapologetically on your strengths and actively shaping an environment where you can be happy, make your uniqueness shine and make the world a better place.

Being able to carry out his research without the requirement of a formal professorship or the need to follow rigid academic structures gives Haeupler the freedom to work independently - in a way that suits his distinct way of thinking.

‘It's hard to overstate the impact of the freedom and flexibility an ERC grant is giving me’

This freedom, granted by the ERC, has allowed Haeupler to return to Europe, take on new opportunities to retain and attract talent in Eastern Europe and thrive on pursuing a less conventional career path - shaping both his research and the future of collaborative computing.

 

Biography


Bernhard Haeupler is currently a faculty member at INSAIT, the Institute for Computer Science, Artificial Intelligence and Technology in Sofia, Bulgaria. He received his PhD from MIT in 2013. Previously, he was a professor in the Department of Computer Science at Carnegie Mellon University, as well as a senior researcher at ETH Zurich. His research interests focus on algorithm design, distributed computing, and (network) coding theory. His research, spanning more than 100 papers, has been recognised with numerous awards, including the ACM-EATCS PhD Dissertation Award in Distributed Computing, the George M. Sprowls Award at MIT, and others. His research has been funded by a Sloan Research Fellowship, an NSF CAREER Award, and an ERC Starting Grant.
 

Project information

DistOpt-BydWorstCase
Distributed Optimization Beyond Worst-Case Topologies
Researcher: Bernhard Haeupler
Host institution: Swiss Federal Institute of Technology Zurich (ETH Zurich), Switzerland
Call details: ERC-2020-STG, PE6
ERC funding: 1 500 000 €