An increasing number of proposed applications on top of Ethereum rely on some kind of incentivized, multi-party data provision – whether voting, random number collection, or other use cases where getting information from multiple parties to increase decentralization is highly desirable, but where there is also a strong risk of collusion. A RANDAO can certainly provide random numbers with much higher cryptoeconomic security than simple block hashes – and certainly better than deterministic algorithms with publicly known seeds – but it is not infinitely collusion-proof: if 100% of RANDAO participants collude with each other, they can set the outcome to whatever they want. A more controversial example is the prediction market Augur, where decentralized event reporting relies on a highly advanced version of a Schelling scheme: everyone votes on the result, and everyone in the majority is rewarded. The theory is that if you expect everyone else to be honest, your incentive is to be honest as well in order to be in the majority, so honesty is a stable equilibrium; the problem is that if more than 50% of participants collude, the system breaks down.
The fact that Augur has an independent token provides a partial defense against this problem: if the voters collude, the value of the Augur token can be expected to fall to near zero as the system comes to be seen as useless and unreliable, so the colluders lose a large amount of value. However, it is certainly not a complete defense. Paul Sztorc's Truthcoin (and also Augur) includes an additional defense which is quite economically clever. The core mechanism is simple: rather than awarding a fixed amount to everyone in the majority, the amount awarded depends on the level of disagreement among the final votes; the greater the disagreement, the more the majority voters receive, and an equally large amount is taken out of the minority voters' security deposits.
The intent is simple: if you receive a message from someone saying "hey, I am starting a collusion; even though the actual answer is A, let's all vote B", in a simpler scheme you might well be inclined to go along. In the Sztorcian scheme, however, you may well conclude that this person is actually going to vote A, and is trying to convince only a small percentage of people to vote B in order to steal some of their money. The scheme thus creates distrust, which makes collusion harder. However, there is a problem: precisely because blockchains are such excellent tools for cryptographically secure agreement and coordination, it is very difficult to make collusion impossible to prove.
To see why, consider the simplest possible design for how vote reporting in Augur might work: there is a period during which everyone can send a transaction supplying their vote, and at the end an algorithm computes the result. This approach, however, is fatally flawed: it gives everyone the incentive to wait as long as possible, so as to see all the other players' answers before answering themselves. Taking this to its natural equilibrium, everyone would vote in the last possible block, giving the miner of that last block near-total control over the outcome. A scheme in which the end arrives randomly (for example, at the first block that exceeds 100x the usual difficulty threshold) mitigates this somewhat, but still leaves a great deal of power in the hands of individual miners.
The standard cryptographer's response to this problem is a hash-commit-reveal scheme: every player first chooses a secret s[i] that determines their response R[i], and there is a period during which everyone must submit H(s[i]), where H can be any pre-agreed hash function (such as SHA3). Then everyone must submit their R[i], and the values are checked against the previously provided hashes. For two-player rock-paper-scissors, or any other purely zero-sum game, this works great. For Augur, however, it still leaves open the opportunity for credible collusion: users can voluntarily reveal their R[i] in advance, and others can verify that it actually matches the hash value they provided on-chain. Allowing users to change their hashes before the hash-submission period expires accomplishes nothing: a user can always lock up a large amount of money in a specially crafted contract that releases it only if no one submits a Merkle-tree proof, terminating in a previous block hash, showing that the vote was changed – thereby committing not to change their vote.
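To make the mechanics concrete, here is a minimal commit-reveal sketch in Python (the function names and the 32-byte nonce are my own illustrative choices; the scheme itself only assumes a pre-agreed hash function such as SHA3):

```python
import hashlib
import os

def commit(response: int) -> tuple[bytes, bytes]:
    """Commit phase: hash the response together with a large random nonce.

    The nonce prevents anyone from recovering a low-entropy response
    (e.g. a single bit) by brute-forcing hash preimages.
    """
    nonce = os.urandom(32)
    digest = hashlib.sha3_256(bytes([response]) + nonce).digest()
    return digest, nonce  # publish digest now; keep the nonce secret

def reveal_ok(digest: bytes, response: int, nonce: bytes) -> bool:
    """Reveal phase: check the (response, nonce) pair against the commitment."""
    return hashlib.sha3_256(bytes([response]) + nonce).digest() == digest

# A voter commits to voting 1, then reveals.
d, n = commit(1)
assert reveal_ok(d, 1, n)      # honest reveal verifies
assert not reveal_ok(d, 0, n)  # a changed vote does not
```

Note that this same verifiability is exactly what makes credible collusion possible: anyone shown `(response, nonce)` before the reveal phase can check it against the on-chain digest.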
New solution?
However, there is another way to solve this problem, one that has not yet been adequately explored. The idea is this: instead of making pre-revelation costly inside the primary game itself, we introduce a parallel game (a mandatory one, backed by the oracle participants' security deposits) such that anyone who reveals any information about their vote to anyone else in advance opens themselves up to the risk of being (probabilistically) betrayed – without any way to prove that it was that specific person who betrayed them.
In its simplest form, the game works as follows. Suppose there is a decentralized random-number-generation scheme in which every user must flip a coin and supply either 0 or 1 as input, and suppose we want to discourage collusion. What we do is simple: we allow anyone to register a bet against any player in the system (note the "anyone" and "any player": non-players can join as long as they put up a deposit), essentially saying "I am confident that this person will vote 0" (or 1). The rules of the bet are simply that if the target ends up voting the predicted value, the bettor takes a given amount from the target's deposit, and otherwise the bettor pays the target the same amount; bets can be placed during an intermediate stage between commitment and revelation.
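A minimal sketch of the settlement rule, assuming the even-money structure described above (the function name and stake handling are my own illustrative choices):

```python
def settle_binary_bet(predicted_vote: int, actual_vote: int, stake: float) -> float:
    """Return the bettor's profit: +stake if the prediction was right,
    -stake (paid to the target) if it was wrong."""
    return stake if predicted_vote == actual_vote else -stake

# A bettor who predicted a 1-vote correctly gains the stake...
assert settle_binary_bet(1, 1, 10.0) == 10.0
# ...and loses the same amount if the target actually voted 0.
assert settle_binary_bet(1, 0, 10.0) == -10.0
```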
Pre-revealing any information to any other party can now become very expensive: even if you only convince someone else that you will vote 1 with 51% probability, they can still probabilistically take coins from you, and over many repetitions of the scheme they will win on average. Note that the other party can bet anonymously, and can therefore always pretend that it was some passing gambler, not them, who placed the bet. To strengthen the scheme further, we can require that you bet against N different players at the same time, with the players pseudorandomly selected from a seed; if you want to target one specific player, you can do so by trying different seeds until you get your desired target alongside a few others, but there will always be at least some plausible deniability. Another potential improvement, though one with its own costs, is to require bets to be registered only between commitment and revelation, and to be revealed and executed only long after many rounds of the game have taken place (assuming security deposits take even longer than that to withdraw, this works).
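To see why even a 51% leak is costly, consider the expected value of the even-money bet against a voter with a known lean (a simple illustrative calculation, not part of any specified protocol):

```python
def expected_profit(p_predicted: float, stake: float) -> float:
    """Expected profit from an even-money bet on a vote that the bettor
    believes will be cast with probability p_predicted."""
    return p_predicted * stake - (1 - p_predicted) * stake

# Against a genuinely fair coin there is no edge; a leaked 51% lean
# yields a small but positive edge that compounds over repeated rounds.
assert expected_profit(0.5, 1.0) == 0.0
assert abs(expected_profit(0.51, 1.0) - 0.02) < 1e-9
```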
Now, how do we translate this into an oracle scenario? Consider again the simple binary case: users report either A or B; some fraction P of them, unknown before the end of the process, will report A, and the remaining 1-P will report B. Here, we change the scheme somewhat: bets now say "I am confident that this person will vote X with probability greater than P." Note that the language of the bet should not be taken to imply knowledge of P; rather, it expresses the opinion that, whatever the probability that a random user votes X, the specific user targeted by the bettor will vote X with higher probability than that. The betting rules, processed after the voting phase, are that if the target votes X the bettor receives (1-P) per unit bet, and otherwise the bettor pays P per unit bet.
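One settlement rule with the desired property – zero expected value against a voter who is no more predictable than the population at large – can be sketched as follows (the function name and payout normalization are illustrative assumptions):

```python
def settle_scaled_bet(target_voted_x: bool, p_population: float, stake: float) -> float:
    """Settlement after the voting phase: the bettor wins stake*(1-P) if
    the target voted X, and loses stake*P otherwise, where P is the
    fraction of all voters who voted X (known only after voting ends)."""
    if target_voted_x:
        return stake * (1 - p_population)
    return -stake * p_population

# Against a voter exactly as likely to vote X as the population average,
# the bet is fair: P*(1-P) - (1-P)*P == 0.
p = 0.7
ev_random = p * settle_scaled_bet(True, p, 1.0) + (1 - p) * settle_scaled_bet(False, p, 1.0)
assert abs(ev_random) < 1e-9
```

Only a bettor with genuine private information – a belief that this particular voter's probability of voting X exceeds P – has a positive expected profit here.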
Note that in the normal case, profit here is even more assured than in the binary RANDAO example above: most of the time, if A is the truth, nearly everyone votes A, so the bets would be low-risk profit grabs even if the colluders used complex zero-knowledge protocols to provide only a probabilistic assurance that they would vote for a particular value.
A technical side note: if there are only two possibilities, why can't you determine R[i] from H(s[i]) just by trying both options? The answer is that users actually publish H(R[i], n), and later reveal (R[i], n), for some large random nonce n, so there is far too much space to enumerate by brute force.
As a further point, note that this scheme is more or less a superset of Paul Sztorc's counter-coordination scheme described above: if someone convinces someone else to falsely vote B when the actual answer is A, they can secretly bet against them using that information. In particular, profiting from others' dishonesty would no longer be a public good but a private good: an attacker who dupes another person into a false collusion would capture 100% of the profit, so there would be even more suspicion about joining any collusion that cannot be cryptographically proven.
Now, how does this work in the linear case? Suppose that users are voting on the BTC/USD price, so they need to supply not a choice between A and B but a numeric value. The lazy solution is simply to apply the binary approach in parallel to each binary digit of the price; a better solution, however, is range betting. Users can make bets of the form "I am confident that this person will vote between X and Y with higher probability than the average person"; in this way, revealing even an approximate value of your vote to anyone else is likely to be costly.
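A range-bet settlement can be sketched the same way as the binary version, paying out against the fraction of the whole population whose votes landed in the named range (the function name and all concrete numbers here are illustrative):

```python
def settle_range_bet(vote: float, low: float, high: float,
                     p_population: float, stake: float) -> float:
    """Range bet: the bettor claims the target will vote inside
    [low, high] with higher probability than the average voter.
    p_population is the fraction of all votes that landed in the range,
    known only after the voting phase."""
    in_range = low <= vote <= high
    if in_range:
        return stake * (1 - p_population)
    return -stake * p_population

# Suppose 30% of voters reported a BTC/USD price in [230, 240]:
# a correct range bet pays 0.7 per unit staked, a wrong one costs 0.3.
assert abs(settle_range_bet(235.0, 230.0, 240.0, 0.3, 1.0) - 0.7) < 1e-9
assert abs(settle_range_bet(250.0, 230.0, 240.0, 0.3, 1.0) + 0.3) < 1e-9
```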
Problems
What are the weak points of the scheme? Perhaps the largest is that it opens up an avenue for second-order griefing against other players: although one cannot, in expectation, force other players to lose money in this scheme, one can certainly expose them to risk by betting against them. This could open the door to blackmail: "do what I want or I will force you to gamble with me." That said, the attack comes at the cost of the attacker exposing himself to risk as well.
The simplest mitigation is to limit the amount that can be bet, perhaps even scaling the limit with the distance between the claimed probability and P: for example, if P = 0.1, allow bets of up to $1 saying "I am confident this person will vote X with probability 0.11", up to $2 at probability 0.12, and so forth (mathematically inclined users may note that mechanisms such as the logarithmic market scoring rule are a good way to implement this efficiently). In this case, the amount of money you can extract from someone is quadratic in the level of private information you have about them, and performing large amounts of griefing is guaranteed, over the long run, to cost the attacker money rather than merely exposing him to risk.
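A sketch of the bet-size cap, assuming a linear schedule in the distance from P (the step size and scaling constant are illustrative assumptions; the logarithmic market scoring rule mentioned above would implement a smoother version of the same idea):

```python
def max_bet(claimed_prob: float, base_prob: float, scale: float = 100.0) -> float:
    """Cap on the bettable amount, growing linearly with how far the
    claimed probability sits above the population rate P. With P = 0.1
    and scale = 100, this allows $1 at 0.11, $2 at 0.12, and so on."""
    return max(0.0, claimed_prob - base_prob) * scale

assert abs(max_bet(0.11, 0.10) - 1.0) < 1e-9
assert abs(max_bet(0.12, 0.10) - 2.0) < 1e-9
assert max_bet(0.05, 0.10) == 0.0  # no betting below the base rate
```

Because the cap grows linearly with the claimed edge, the total amount extractable by betting the maximum at every allowed level is roughly the area under that line – quadratic in the bettor's informational advantage, as noted above.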
The second weakness is that if users are known to rely on particular sources of information – especially for more subjective questions like "vote on the price of token A / token B" rather than simple binary events – then those users become exploitable. For example, if you know that some users have a history of listening to Bitstamp and others to Bitfinex for their voting information, then once you have the latest feeds from both exchanges you can probabilistically extract some amount of money from a participant based on your estimate of which exchange they are listening to. Hence, exactly how users would respond to this situation remains a research problem.
Note that such events are a complicated issue in any case; failure modes such as everyone centralizing on one particular exchange are quite likely to arise even in simple Sztorcian schemes without this kind of probabilistic griefing. Perhaps a multi-tiered scheme, with a second-level "appeals court" of votes at the top that is invoked so rarely that centralization effects never actually materialize, may mitigate the problem – but it remains largely an empirical question.






















