I love coordination. For the last 15 years I have been involved in many projects to either build consensus (coordinating with others) or forecast (coordinating with reality), whether that was getting people to vote on complex topics on Facebook as a teenager or writing prediction market questions.
Sometimes such projects are called “epistemics” in my circles. Or relatedly, “truth-seeking”.
Here are some epistemics projects I am excited about:
Prediction markets and Nate Silver - It looks to me like forecasting was responsible for 0.1-5% of the Democrats dropping Biden from their ticket. There were a number of ways the world with forecasting was different from the one without it: surfacing the badness of the debate more quickly, making clear how weak Biden was, and showing when Kamala was gaining momentum[1]. It seems plausible to me that, without Nate Silver and prediction markets[2], in some universes Biden is still the nominee, and he is a very poor one. And whether I like Trump or hate him, I want his opposition to be competent. To affect this seems soooo valuable[3].
X Community Notes (Twitter Community Notes) - Twitter has a collaborative process by which factchecks appear. But rather than relying on some misinformation non-profit, community notes only appear if groups that would normally disagree agree on the correction. This is great and is something I would perhaps pay a billion dollars to create, given that X is one of the largest platforms in the world. For all Musk's faults, he has pushed this and it is to his credit. I think someone could run a think tank to lobby X and other orgs into even better truth-seeking[4].
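To make that mechanism concrete, here is a toy sketch of the "bridging" rule in Python. This is not X's actual algorithm (which, as I understand it, infers viewpoint factors from rating history rather than using fixed clusters); the clusters, threshold and function names here are invented for illustration.

```python
# Toy sketch of bridging-based note ranking (not X's real implementation).
# A note is shown only if raters from viewpoint clusters that normally
# disagree BOTH, on average, rate it helpful.
from statistics import mean

def note_is_shown(ratings, threshold=0.7):
    """ratings: list of (cluster, helpfulness) pairs, cluster in {"A", "B"},
    helpfulness in [0, 1]. Requires every cluster to clear the threshold."""
    by_cluster = {"A": [], "B": []}
    for cluster, helpfulness in ratings:
        by_cluster[cluster].append(helpfulness)
    # An empty cluster or a low average in either cluster hides the note.
    return all(group and mean(group) >= threshold
               for group in by_cluster.values())

print(note_is_shown([("A", 0.9), ("A", 1.0), ("B", 0.2)]))  # False: one-sided
print(note_is_shown([("A", 0.9), ("B", 0.8)]))              # True: bridged
```

The design point is that one-sided enthusiasm can never push a note over the line; only agreement across the usual divide can.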
The Swift Centre (large conflict of interest, I am paid to forecast for them) - It is a forecasting consultancy that manages to stand largely (entirely?) without grant funding, just getting standard business gigs. If I were gonna suggest consultants to help a company improve its truth-seeking process, I'd recommend us. The Swift Centre is professional, works quickly and provides both quantitative and qualitative forecasts. They have worked with DeepMind and the Open Nuclear Network.
Discourse mapping - The same discussions happen over and over, and we don't move forward. Personally I'm really excited about trying to find consensus positions so that 'locked' focus can be released for more important stuff. Here is the site my team mocked up for Control AI, but I think we could have similar discourse mapping for the AI bill SB 1047, for how to build more housing, or for how to fix immigration.
The EA Forum's AI Welfare Week - I enjoyed a week of focus on a single topic and the voting tool (below) was great. I agree with Effective Altruism as an ideology[5], but I find the community suffers from some kind of hive-mind ADHD, and its executive function (EA elites) is not as good as I used to think. I reckon if we did about 10 of these we might really start to get somewhere. Perhaps with clustering people into groups based on their positions on initial spectra (a rough sketch of this follows).
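As a sketch of what that clustering might look like (my assumption about one way to do it, not how the Forum's tool actually works): treat each participant's slider answers as a vector and cluster the vectors, e.g. with k-means. The spectra and numbers below are made up.

```python
# Hypothetical sketch: group voters by their positions on initial spectra
# (0-100 sliders), so later debate weeks can target each cluster's cruxes.
import numpy as np
from sklearn.cluster import KMeans

# One row per participant; columns are sliders such as "AI welfare should
# be an EA priority" and "ramp up spending immediately" (invented labels).
positions = np.array([
    [90, 80, 85],
    [85, 75, 90],
    [15, 20, 10],
    [20, 10, 15],
    [55, 50, 45],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(positions)
for participant, label in enumerate(kmeans.labels_):
    print(f"participant {participant} is in cluster {label}")
```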
Sage's Fatebook.io - a tool for quickly making and tracking forecasts. It is the only tool I've found where, when I show it to non-forecasting business people, they say "oh, what's that, can I use that?". I think Sage should charge for this and try to push it as a standard SaaS product.[6]
What have I missed here? What epistemics projects are you excited about?
Longer version.
[1] There was an additional, faster signal that Biden's debate performance was terrible, alongside the reactions of the political class. And the Democratic political class couldn't so easily hide or coerce this reaction.
Biden's chances seemed to recover twice during this time (see graph above). Without Nate Silver and liquid markets (Polymarket and Betfair) signalling he was likely to lose, he might have stabilised.
Being able to see when Kamala was recovering gave her momentum. These markets made a clear distinction between the times when either candidate seemed to be leading and the times when it was neck and neck.
[2] Honourable mention to Ezra Klein, who, like Silver, said this months ago and put his credibility on the line.
[3] As always, I wish we could have the benefits of prediction markets without the costs of addiction.
[4] I've chatted at length with the Community Notes team and they are very receptive to ideas.
[5] Which I'd roughly define as acknowledging the following:
Scale matters
Consciousness matters, regardless of where, when or what it is
Some altruistic interventions are much much better than others
Using these three axioms, we can think about how to cause the most good to happen per $.
[6] Though maybe we let Adam finish his honeymoon first. Congratulations to the happy couple!