An Open Science Initiative to Explore the Attack Vectors of Quadratic Funding
In our previous articles on the Gitcoin Grants research collaboration at BlockScience, we openly explored research directions aimed at identifying collusion patterns in the Grants ecosystem using network science techniques implemented in cadCAD. In this article, we explore hypothetical collusion scenarios that exploit the mechanisms of Quadratic Funding and Pairwise Quadratic Funding, and propose to mobilize the community to uncover more. These hypothetical attack vectors will then be used to test the effectiveness of the “Optimality Gap” algorithm, our proposed data science approach to detecting collusion patterns. Our next article will detail the design and challenges of the Optimality Gap for a more technically inclined audience.
Gitcoin’s mission is to help open source developers collaborate and financially benefit from their contributions to community grants via Quadratic Funding (QF) matching of sponsor funds.
As identified in an earlier article, Quadratic Funding is prone to several attack vectors. The most prominent among them are sybil attacks, in which an attacker creates many fake accounts to game the system, and collusion, in which malicious real users secretly coordinate among themselves to game it. Detecting these attacks is made harder by the fact that their transaction patterns overlap with those of legitimate, organic contributions to grants.
In the following article we will briefly explain the Quadratic Funding algorithm, demonstrate a few types of attack vectors, and briefly introduce the Optimality Gap as a metric for flagging potential colluders.
Our goal is to help the community to understand the mechanism of QF as well as potential ways to design and mitigate attack vectors, so that the community can have stronger detection tools for reporting potential attacks.
Open source projects, like other public goods, receive less funding than the value they generate for others because of the abundant (open source) nature of the products they deliver. The reason is that public goods are non-excludable (no one can be prevented from using them, even without paying) and non-rivalrous (my use of the public good does not diminish yours).
For example, national defense, city parks, and public roads and buildings are all public goods. The difficulty with public goods is that their non-excludable nature creates a strong temptation to free-ride rather than pay one's fair share. Just like that slacker in your high school group project who didn't contribute anything but still got credit for all your hard work, free riders are the reason we don't have adequately supported public goods in our society.
Traditionally, the free-rider problem is addressed by enclosing the public good via taxation or privatization (e.g. land ownership or copyright law). However, these solutions typically require a centralized institution, such as a government, corporation, or non-profit committee, to coercively collect payments and allocate funds.
Yet, large centralized institutions are not necessarily efficient at allocating resources — they often don’t have the best idea of what is the most important public good, or how much support each project actually needs. This information comes more effectively from the bottom up, rather than from the top down.
In 2018, Vitalik Buterin, Zoë Hitzig, and Glen Weyl proposed a new solution to this problem: Quadratic Funding. Its matching algorithm quadratically favors (and thus disproportionately rewards) small donors, on the premise that small donors as a whole may have greater collective insight into which open source projects are most beneficial to the community.
Also, encouraging small contributions is healthy for the long-term development of the Ethereum community ecosystem, much of which is producing public goods and thus suffers from the same revenue model difficulties discussed above.
Gitcoin uses QF as its fund matching algorithm in the ongoing Gitcoin Grants ecosystem, one of the primary sources of funding for Ethereum projects and startups.
Quadratic Funding takes the square root of each community contribution to a grant, sums those square roots, and then squares the sum. The grant agency (Gitcoin) then pays the difference between this "QF" total and the raw donations out of a matching pool funded by large institutional donors like the Ethereum Foundation and other prominent DeFi projects.
As a result, this algorithm disproportionately awards matching funds to grants that have a lot of small donors over grants that have few large donors, in effect giving more credence to the number of people supporting a project rather than the number of dollars supporting it.
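In code, the core QF formula can be sketched as follows. This is a minimal illustration only; Gitcoin's production implementation adds pool caps, normalization, and the pairwise adjustments discussed below.

```python
import math

def qf_match(contributions):
    """Quadratic Funding sketch: a grant's funded total is
    (sum of square roots of its contributions) squared.
    Returns the matching amount, i.e. that total minus the raw donations."""
    raw = sum(contributions)
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total - raw

# A grant raising $100 from many small donors attracts far more
# matching than one raising the same $100 from a single large donor.
print(qf_match([100]))      # one $100 donor      -> 0.0 matched
print(qf_match([1] * 100))  # a hundred $1 donors -> 9900.0 matched
```

The second example makes the "credence to number of people" point concrete: same dollars raised, vastly different matching.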
To understand how to protect against the attack vectors inherent in Quadratic Funding, it helps to first think like an attacker. Uncovering the different patterns of attack allows us to design simple yet rigorous tools for detecting them in a live, potentially high-stakes funding environment with millions of dollars on the line.
In this section, we will explore different ways to game Quadratic Funding.
As suggested in Vitalik’s blog posts, the QF mechanism is vulnerable to sybil and collusion attacks. An attacker could create a fake grant, donate to it themselves, and collect the matching funds as “interest”. Since a larger number of contributions results in more matching funds, the simplest form of attack is to split a contribution across multiple accounts that all donate back to the attacker’s own grant.
In the figure above we can see that as the green squares (original contributions) decrease in size but increase in number, the ratio of yellow area (matching funds) to green area (donations) increases.
In the most extreme case, with infinitely many contributions of infinitesimal amounts, a small original contribution can theoretically attract a massive amount of matching funds, provided it is split across many accounts.
Of course, many of the above attack vectors are already mitigated by security measures the Gitcoin team has put in place, which we’ll discuss below.
As one example of this kind of attack, see the below tweet from the @GitcoinDisputes account regarding collusion on a Gitcoin Grant in July 2020.
To mitigate the sybil attack vector, which involves creating fake accounts to fish for matching funds, Gitcoin currently uses sybil detection algorithms (assigning users a Sybil Score), and incentivizes user identity verification on a number of different platforms (notably including BrightID, POAP, and 3Box), offering increased grant matching for each verification layer added.
This has the effect of ensuring additional grant matching funds go towards identities that have a higher probability of being a real person. However, there are other forms of collusion attacks that are beyond the scope of mere sybil attacks.
While creating fake accounts to attract matching funds can be prevented by sybil-resistant design, colluders can easily up their game by coordinating a group of real accounts to “mine” Gitcoin matching funds and split the “interest” among the group.
To mitigate the collusion attack vector, Gitcoin adopted a new ‘Pairwise Funding’ mechanism, as proposed by Vitalik. The basic idea is to reduce matching for grants whose contributors appear highly coordinated in donating to the same pool of grants. If a pair of donors donates to a similar set of grants, the matching funds attributable to that pair are discounted accordingly. You can find the details of the Pairwise Funding mechanism here.
There may be further considerations regarding the Pairwise mechanism — this solution assumes that organic contributions tend to have very little overlap with each other, but we often see the opposite in organic communities who support each other’s work. The Pairwise mechanism may also penalize ‘Collections’ of grants, which are a tool so users can donate to a preselected list of projects. Like any meticulously designed solution to address a specific type of collusion, there are always other ways to game it.
To further mitigate the collusion attack vector, Gitcoin has adopted a flagging mechanism wherein users who have been asked to cheat can report the cheaters to the Gitcoin dispute resolution process. It only takes one user reporting collusion to “bust” the whole scheme, and around 35 flags were reported in Round 8.
Another way to fly under the radar of sybil detection and Pairwise Funding is to split existing grants into many grants, or to create fake grants and coordinate real accounts to donate to them.
As long as colluders can find a way to split a large “contribution” into many small accounts and recollect it via small grant proposals, they will be able to attract far more matching funds, since the QF mechanism disproportionately rewards the number of participants over the amount each participant donates.
For example, a malicious actor can split their grant into several small grant proposals, and split their initial “seed” money into dozens of small contributions, each donating to these small grant proposals without overlap.
While this type of collusion has not yet been reported by the community, it is very likely that, with sufficient incentive and an ever-growing matching pool, such colluders will emerge.
This type of behavior might also be noticed in tightly-knit, organic communities like the Commons Stack or Metagame, where lots of small interconnected groups collaborate closely to fund and achieve larger goals. We should be careful that any system mechanisms introduced to reduce collusion minimize negative impacts on organic community engagement, or we risk disenfranchising the users we are seeking to serve in the first place.
Readers may have noticed by now the endless game of hide-and-seek between attackers and grant moderators. As new mechanisms to prevent these attacks are put in place (such as MACI designs, sybil resistance, Sybil Scores, community flagging, or Pairwise Funding), colluders come up with increasingly complex strategies, from fake grants and accounts to mass coordination of bad actors. It is important to note here that any custom-designed analytical solution targeting a specific type of collusion can (and likely will) be gamed by another carefully designed collusion strategy.
It is for this reason that we recommend algorithmic governance policies that are simple and generalized, yet rigorous and mathematically sound. We aim for hard fought simplicity.
“Simple, clear purpose and principles give rise to complex intelligent behavior. Complex rules and regulations give rise to simple stupid behavior.” — Dee Hock
We propose that instead of trying to identify and catch every type of collusion, we design a mechanism that makes highly efficient collusion unprofitable. When the cost of collusion exceeds the benefit, malicious actors are naturally disincentivized from colluding.
First, we want to identify which patterns of grant contributions maximize the matching funds attracted. The ultimate purpose of collusion is to attract as much matching funding as possible from a limited original contribution.
Thus, instead of trying to enumerate countless collusion strategies and design specific solutions to address each one, we instead approach the collusion problem from the other end. By flagging grants that have highly efficient matching fund contribution patterns as suspicious, we are guaranteed to catch all grants that are siphoning away large amounts of matching funds, so we can pass them along to the Gitcoin dispute process to review.
Second, we aim to empower the collective intelligence of the Gitcoin team and community to catch collusion. While colluders gang up to game the system, we believe that by explaining the QF mechanism and its collusion patterns in plain terms, and providing tools to recognize and act on that information, we can empower and coordinate the community as a whole to stand up against the bad apples.
The Optimality Gap is a measure of how “optimized” a given community is toward attracting matching funds. The basic idea is that it is always possible to rearrange and recombine the contributions associated with a community (or subgraph) so as to maximize the total funds funneled away from the general pool.
On a conceptual level, highly efficient communities will have a relatively low optimality gap, while most organic communities will sit somewhere around the median. Given that colluders have a deliberate strategy, we can hypothesize that their grants will in general be highly efficient.
But how exactly do we define the Optimality Gap, or a community? For the first, we define the Optimality Gap as the difference between the maximal matching funds and the actual matching funds within a preselected neighborhood subgraph. Given the existing grants, contributors, and their original contributions, the Optimality Gap is the difference between the matching funds the subgraph could have attracted, had its contributors donated in a different pattern, and the matching funds it actually attracted.
As for the community, there are several ways of defining it, such as community detection algorithms or unsupervised learning. This is an open research problem; for now we use a heuristic, proxying a community by its Neighbors Subgraph, essentially a degrees-of-separation method for determining the relevant community to examine.
Making use of the Optimality Gap allows for the generation of signals for flagging suspiciously optimized grants for closer inspection. You can find more technical detail in our next blog post.
But we aren’t satisfied doing the analysis ourselves — we aim to empower researchers in the Gitcoin community to use these tools to test and iterate their own research questions to unlock the wisdom of the crowd in exploring how to mitigate the attack vectors of Quadratic Funding.
Aside from our work on the simulation and cadCAD model, we have been hosting public coding sessions and open academic sessions to share our work with the community, and the Token Engineering Academy even spun up a research group full of mathematicians and data scientists analyzing different research questions along the same lines of inquiry.
Beyond just being win-win, we believe these kinds of open collaborations produce multiplicative benefits: the #OpenScience infrastructure and models built with these tools allow us to stand on the shoulders of giants. As a small example, in building the Gitcoin cadCAD model, Danilo ended up submitting a PR to the NetworkX repository with an improvement to their open source network visualization library. Open source repositories are made richer and more useful by each such contribution, and the true power of Open Source is that the models we produce today can be iterated on and taken further in the future than we are able to take them now.
For more information on how to get involved, find calendar links & past recordings on this Notion doc: https://www.notion.so/blockscience/Gitcoin-Modelling-Co-Lab-TL-DR-Public-4e115525dbc0430289bb78d60ad79a04
As demonstrated, our solution can detect the most efficient collusion schemes, but now we want to test it. All the code we’ve produced in this research collaboration is open source, and we believe in the wisdom of the crowd and the power of #OpenScience.
In the repo below, you can find implementations of a few of the attack vectors discussed in this article; test them out with cadCAD, and even build your own! We would love to see people design their own collusion patterns, play with these algorithms, and propose pull requests with any improvements. If you need help, jump into the TE/Gitcoin Research channel on Discord!
Our ongoing research aims to identify possible collusion patterns. In the first stage of the research, we constructed simple attack scenarios, implemented them in our cadCAD model, and tested whether they could be caught by the Optimality Gap algorithm. In upcoming research, we plan to scale up to flagging collusion attack scenarios by making full use of live Gitcoin data.
As always, we encourage the community to explore and experiment with the Gitcoin cadCAD model repository, where you can access much of the data explored in this analysis.
GitHub Repository: https://github.com/gitcoinco/gitcoin_cadcad_model
Stay tuned for our next research digest in this exciting collaboration:
Previous articles on this topic can be found here:
Special thanks to Kevin Owocki and Michael Zargham for edits on this piece, and all the hard work of contributors from the Token Engineering Academy, TE Commons, and cadCAD communities in making this open research initiative an ongoing success.