Experiments in making reasoning really explicit and careful
Copy-pasted from a doc, not organized:
- Paul Christiano and Katja’s ‘Structured Case Project’
- AI Impacts: arose out of the relative failure of the Structured Case Project. Still involves some attempts at relatively watertight argument, e.g. https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/
  - Generally intended as an experimental epistemic object:
    - A collection of editable pages
    - Each page is a modular node attempting to answer a question
    - Each page has a summary answer at the top; the rest of the page supports that answer
    - Lower-level pages answer questions that help answer those on higher-level pages (see the sketch below)
  - Turning into a wiki: wiki.aiimpacts.org/
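A rough sketch of that page structure in Python (illustrative only, not AI Impacts' actual schema; the example question and answers are made up, loosely based on the discontinuous-progress page linked above):

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    """A modular node: one question, a summary answer, and supporting pages."""
    question: str
    summary_answer: str  # shown at the top of the page
    supports: list["Page"] = field(default_factory=list)  # lower-level pages

# Illustrative content only, not AI Impacts' actual answers.
sub = Page(
    question="How often has technological progress been discontinuous?",
    summary_answer="Rarely, on the trends examined.",
)
top = Page(
    question="How likely is discontinuous progress around the development of AGI?",
    summary_answer="Hard to say; large historical discontinuities are rare.",
    supports=[sub],  # lower-level page supports the higher-level answer
)
```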
- Someone made software that did the thing Katja keeps asking for, and it somehow didn't seem great, which is some evidence against Katja's intuitions here
  - Probably this?
- Attempt to analyze what is actually hard about making structured arguments
  - Katja has done some of this; it still feels mysterious
- Improving people’s epistemics
  - Pastcasting - forecasting questions that have already resolved, so you get instant feedback
  - Calibration - see how well-justified your own confidence is (see the sketch after this list)
    - Open Phil has one
    - Critch (I think?) made an app
  - Historical base rates - much thinking requires knowing roughly how often things have happened in the past
    - Seems like Our World in Data might do some work here
    - There is at least one other effort
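For concreteness, a minimal sketch of what such calibration tools measure, using made-up forecast data; each forecast is a (stated probability, actual outcome) pair:

```python
# Minimal calibration check: group forecasts by stated confidence and
# compare against the observed frequency of correct outcomes.
# Forecasts: (stated probability, whether the event actually happened).
forecasts = [(0.9, True), (0.9, True), (0.9, False),
             (0.6, True), (0.6, False), (0.6, False)]

def calibration_table(forecasts):
    buckets = {}
    for p, outcome in forecasts:
        buckets.setdefault(p, []).append(outcome)
    for p, outcomes in sorted(buckets.items()):
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {p:.0%} -> observed {observed:.0%} ({len(outcomes)} forecasts)")

# Brier score: mean squared gap between probability and outcome (lower is better).
brier = sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

calibration_table(forecasts)
print(f"Brier score: {brier:.3f}")
```

A well-calibrated forecaster's stated probabilities roughly match the observed frequencies, and the Brier score summarises overall accuracy in a single number.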
- Increasing summarisation - making it easier to get a basic understanding of EA/LessWrong topics
  - Nonlinear offer a range of prizes for summaries of such work
  - Nathan Young
- Fighting ‘Moloch’
  - What is Moloch exactly?