Finding the Best Match Between Clients and Freelancers on Online Platforms

Professor Jiaming Xu develops algorithms to infer important information from complex network data.

November 21, 2022
Big Data
Jiaming Xu, Duke University's Fuqua School of Business

During the past few years, the world has seen the rise of the gig economy, a labor market that heavily relies on temporary work by independent contractors and freelancers.

According to a recent survey, 59 million Americans, or more than one-third of the entire U.S. workforce, have performed freelance work in the past year. Many find those gigs through online platforms such as Upwork, TaskRabbit, or Fiverr, which help connect customers with freelance service providers.

One of the greatest challenges these platforms face is finding the best match between customers and freelancers. Clients often have specific needs that not all workers can properly fulfill. This type of problem is one of the many lines of research being developed by Jiaming Xu, associate professor in decision sciences at Duke University’s Fuqua School of Business.

Xu’s main research interest is developing algorithms to infer useful information from network data. “We encounter many different kinds of networks in business applications, engineering, and even the natural sciences,” he says. “The key question is how to extract useful information from these networks to guide downstream decision making.”

These networks, as they are found in the real world, tend to be very large and complex, sometimes involving millions of nodes and different types of links between them. On top of that, the observed data may be noisy or partial. “I work to develop scalable algorithms that can run very fast and, at the same time, extract this type of information even when there's only a very weak signal in the data,” Xu says.

Dealing with uncertainties

In the case of freelance platforms, matching clients and service providers can be especially difficult because of the uncertainties inherent in the process. First, the platform doesn't know, before a service is performed, how well a given freelancer will complete a task assigned by a client. In other words, the client's payoff is unknown.

Another issue is that the population of clients is very dynamic. They usually arrive at the platform to fulfill a certain need, stay for some time and depart after obtaining the service. The statistics for arrivals and departures of customers are also unknown beforehand. Furthermore, each freelancer has a limited capacity to fulfill services, a constraint that also needs to be considered. “That's the second uncertainty—how to match customers with freelancers in a way that does not cause congestion in the system,” Xu says.

Along with his co-authors—Wei-Kang Hsu, a machine learning algorithm engineer currently at Apple, Xiaojun Lin, a professor of electrical and computer engineering at Purdue University, and Mark R. Bell, also a professor of electrical and computer engineering at Purdue University—Xu looked into this problem in the paper “Integrated Online Learning and Adaptive Control in Queueing Systems with Uncertain Payoffs,” published by the journal Operations Research.

“We studied this as an online matching problem,” he says. “The goal is to find this match and, at the same time, learn the unknown payoffs and also make sure the system is stable and not congested. Then we can maximize the total payoff for the online platform.”

In an ideal scenario, the platform would gradually learn each client's preferences through trial and error. In the real world, however, the system can't afford too many errors. If a client's needs go unfulfilled, they will simply leave the platform after a few attempts, so the learning must be fast. “The challenge is that you want to somehow very quickly learn the customer’s preferences based on the feedback or the outcome of the assignments,” Xu says.

In machine learning, this dilemma is known as the exploration vs. exploitation tradeoff. If you keep exploring new matches, you may sacrifice customer satisfaction. But if you never explore, you may miss the chance to find the best possible match. “That's why you want to explore, but not too much, because you may end up losing a lot of the payoff or benefit.”

Thinking optimistically

To help solve this dilemma, Xu and his colleagues used the upper confidence bound (UCB) algorithm, which balances exploration and exploitation to reach the best result as fast as possible.

Under that approach, when the performance of a potential match is unknown, the algorithm optimistically assumes a higher chance that it is a good match. In other words, when uncertainty is high, the estimated results are optimistically “inflated.” After you have had the chance to observe the performance of a match repeatedly, you don't need to inflate the results as much, because there's higher confidence that you are observing something close to that match's true average performance.

“You always pick the best match based on the inflated results, not the actual observed results. This is called the upper confidence bound and that's basically how we learn the preferences of the customer while doing the matches,” Xu says.
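The inflation step described above can be sketched in a few lines of Python. This is a generic UCB index, not the authors' exact policy from the paper; the bonus formula's exploration constant and the toy payoff numbers are illustrative assumptions.

```python
import math

def ucb_index(mean_payoff, n_observations, t, c=2.0):
    """Inflated ("optimistic") payoff estimate for one client-freelancer pair.

    mean_payoff: average observed payoff of this match so far
    n_observations: how many times this match has been tried
    t: total number of assignments made so far
    c: exploration constant (an assumption; tunes how much we inflate)
    """
    if n_observations == 0:
        return float("inf")  # never tried: maximally optimistic
    bonus = math.sqrt(c * math.log(t) / n_observations)
    return mean_payoff + bonus

def pick_match(stats, t):
    """Pick the freelancer with the highest INFLATED estimate, not the
    highest raw average; rarely tried matches get a larger bonus."""
    # stats: {freelancer: (mean_payoff, n_observations)}
    return max(stats, key=lambda f: ucb_index(*stats[f], t))

stats = {"alice": (0.70, 50), "bob": (0.65, 3)}
print(pick_match(stats, t=53))  # prints "bob": his few trials earn a big bonus
```

Note that "bob" wins here despite a lower observed average, because his estimate is still highly uncertain; as his trial count grows, his bonus shrinks and the raw averages take over.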

Matching fairly

While finding the best possible match for each customer, the algorithm must also account for the limited capacity of every service provider and the uncertainty in client arrivals. Simply matching greedily to maximize the current estimated payoffs turns out to be highly suboptimal. “We formulate this as an optimization problem. There are capacity constraints for every server, and you have to make sure that you do not violate them. In addition, every client is associated with a utility function of the received service rate, and you need to maximize both the total utilities and the estimated matching payoffs.”

The utility function promotes fairness in matching, which is desirable in two ways. First, it keeps an eye on the future, striking the right balance between current and future payoffs. Second, it controls the learning processes of all clients in a fair manner, so that even clients with low estimated payoffs still receive some service and get their payoff estimates improved.
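The role of a concave utility can be seen in a tiny allocation example. Under logarithmic utilities (the textbook proportional-fairness objective, used here as a stand-in for the paper's more general utility functions), splitting one freelancer's capacity among clients has a closed-form optimum in which every client with positive weight gets a strictly positive service rate. The client names and weights below are hypothetical.

```python
def proportionally_fair_rates(weights, capacity):
    """Maximize sum_i w_i * log(r_i) subject to sum_i r_i <= capacity.

    The closed-form optimum is r_i = capacity * w_i / sum(w), so even
    a client with a low weight (e.g. a low estimated payoff) receives
    a positive rate and keeps generating feedback for learning.
    """
    total = sum(weights.values())
    return {c: capacity * w / total for c, w in weights.items()}

# Even the client with the weaker estimate keeps getting served:
rates = proportionally_fair_rates({"c1": 3.0, "c2": 1.0}, capacity=8.0)
print(rates)  # {'c1': 6.0, 'c2': 2.0}
```

A purely greedy rule would hand all 8 units of capacity to "c1"; the concave utility instead guarantees "c2" a positive rate, which is exactly what lets its payoff estimate keep improving.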

To evaluate the algorithm's performance, Xu and his colleagues calculated its regret, which compares the results of the new algorithm with those of an oracle that knows all of the clients' dynamics and preferences beforehand. “We showed that the regret is very small and it decreases if you run the system for a longer time,” Xu says. The regret also decreases if a particular customer assigns multiple tasks; in that case, the system becomes increasingly good at learning the client's preferences.
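The shrinking-regret effect can be demonstrated with a small, fully deterministic simulation. This is a generic two-option UCB bandit with noise-free payoffs, not the paper's queueing model; the payoff values 0.7 and 0.5 and the horizons are illustrative assumptions.

```python
import math

def run_ucb(means, horizon, c=2.0):
    """Run UCB on arms with fixed (noise-free) payoffs and return the
    total regret versus an oracle that always plays the best arm."""
    n = [0] * len(means)          # times each arm was tried
    total = [0.0] * len(means)    # accumulated payoff per arm
    best = max(means)
    regret = 0.0
    for t in range(1, horizon + 1):
        def index(a):
            if n[a] == 0:
                return float("inf")  # untried arms are maximally inflated
            return total[a] / n[a] + math.sqrt(c * math.log(t) / n[a])
        arm = max(range(len(means)), key=index)
        n[arm] += 1
        total[arm] += means[arm]
        regret += best - means[arm]
    return regret

# Per-round regret shrinks as the system runs longer, mirroring the
# finding that regret decreases over a longer horizon.
short = run_ucb([0.7, 0.5], horizon=200) / 200
long_ = run_ucb([0.7, 0.5], horizon=2000) / 2000
print(short > long_)  # prints True
```

Because the suboptimal option is chosen only about logarithmically often, the average regret per round falls toward zero as the horizon grows.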

The main contribution of this paper is to propose a solution that tackles the uncertainty inherent in these types of platforms. Previous work in the literature assumed a scenario where the arrival rates of different types of customers to the platform and the matching payoffs were known beforehand. “In our case, we don't need to know this information. We can dynamically allocate our assignments in response to these different arrival rates and matching payoffs. That's the interesting thing about our algorithm and policy.”

Xu says he is especially drawn to studying networks because many systems and platforms with business applications can be modeled as networks. One of his lines of research is network data privacy and how easily information can be traced back to individual users. “Networks are visually very appealing because you can actually draw the nodes, the edges, and easily explain them to an audience,” he says. “At the same time, there is very deep mathematics behind them.”

This story may not be republished without permission from Duke University's Fuqua School of Business.
