
Research

(My CV tends to have the most up-to-date list of my publications, while this page is more discursive.)

My research combines two areas: severe uncertainty and scientific models, particularly as they interact in the context of public policymaking. I study mostly climate science and meteorology, though in the past two years I have become interested in broader questions about expertise and its role in decisions. My work draws on general philosophy of science, philosophy of climate science, decision theory, and formal epistemology.

My PhD thesis, Policymaking under scientific uncertainty, is available online via LSE eTheses.

Severe uncertainty 

Agents are in situations of severe uncertainty when they face significant difficulties in forming reliable beliefs about their situation. Put more formally, decision theorists sometimes say that agents in severe uncertainty lack the resources to form probabilistic assessments of the possibilities they face. For some agents, this involves understanding what the possibilities are but lacking the information to assign them probabilities. For others, it involves a lack of awareness of the relevant possibilities. In my research on this topic, I am interested in developing decision-theoretic and epistemic accounts of, and tools for, realistic agents who don’t meet the exacting idealisations of theories like Bayesianism.
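To give this a formal gloss (a minimal sketch using the standard imprecise-probability picture, not the machinery of any one paper of mine): where the precise Bayesian has a single credence function, the severely uncertain agent is often modelled with a set of them, so that acts are evaluated by intervals rather than point values.

```latex
% A minimal sketch: the severely uncertain agent is represented by a
% set of credence functions rather than a single one (illustrative only).
\text{Precise agent:} \quad \text{a single credence function } P.
\text{Severely uncertain agent:} \quad \text{a set } \mathcal{P} \text{ of credence functions,}
\text{so an act } a \text{ is evaluated by the interval }
\Big[\, \min_{P \in \mathcal{P}} \mathbb{E}_P[u(a)],\ \max_{P \in \mathcal{P}} \mathbb{E}_P[u(a)] \,\Big].
```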

In my work on unawareness and awareness growth, I have been thinking about problems with, and extensions to, “reverse Bayesianism”, a proposal due to the economists Edi Karni and Marie-Louise Vierö. I have argued that reverse Bayesianism should be understood as a rule for extending an agent’s probabilities from an old space of possibilities to a new one, and not as a belief revision rule. I have also defended my version of reverse Bayesianism against recent challenges due to Katie Steele and Orri Stefánsson.
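The core constraint is easy to state: as awareness grows, the relative likelihoods of the old possibilities should be preserved. A toy illustration (the numbers are hypothetical, chosen only for the arithmetic):

```latex
% Reverse Bayesianism as ratio preservation (illustrative numbers).
\text{Old space } \{A, B\}: \quad P(A) = 0.6, \quad P(B) = 0.4.
\text{A new possibility } C \text{ comes into view, with } P'(C) = 0.2.
\text{Ratio preservation: } \frac{P'(A)}{P'(B)} = \frac{P(A)}{P(B)} = \frac{3}{2},
\text{so } P'(A) = 0.48 \text{ and } P'(B) = 0.32.
```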

I am currently engaged in a year-long research project, entitled Rational foundations for longtermism, which explores some of these issues. Longtermism is a growing family of views that advocate decision-making focused on the very long run. A longtermist about value thinks that the majority of the good or bad that might result from a decision lies in the long run. A longtermist about decision-making thinks that we ought therefore to make our decisions with a primary focus on the very long run. Such decisions are, naturally, subject to severe uncertainty. Current longtermist thinking is heavily associated with the Effective Altruism and Rationalist movements, and often takes place within a consequentialist ethical framework. Perhaps as a result, longtermists often express their views in the language of expected utility theory (EUT). I am sceptical both that EUT is the right normative theory for such decisions and that it provides agents like ourselves with the tools needed to think them through.

Older work extending Bayesianism includes a paper in Synthese about expert deference. I criticise two approaches to expert testimony: supra-Bayesianism and deference as a constraint on priors. I then develop an alternative model of expert deference: deference as a belief revision schema. Inspired by Richard Jeffrey’s strategy for modelling uncertain belief revision, I show how deference can be treated as an exogenous constraint on a posterior belief state in a way that does not require any particular priors. I argue that this approach resolves, or partially mitigates, the problems I raise for the two more standard approaches.
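The analogy can be made concrete with Jeffrey’s familiar rule (a sketch of the underlying idea, not of the paper’s full schema): the expert’s announced probabilities over a partition are imposed exogenously on the posterior, while beliefs conditional on each cell are left untouched.

```latex
% Jeffrey conditioning over a partition {E_1, ..., E_n}, with the new
% probabilities q_i supplied exogenously (here, by the expert).
P'(H) \;=\; \sum_{i=1}^{n} P(H \mid E_i)\, q_i,
\qquad q_i = P'(E_i) = \text{the expert's announced probability of } E_i.
```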

Works in progress:

  • A paper arguing that we should separate out several processes in awareness growth, and that Anna Mahtani’s problem cases put pressure on a stage I call “awareness revision”, rather than challenging reverse Bayesianism, as she proposed. [Accepted]
  • A paper arguing that Steele and Stefánsson’s arguments against reverse Bayesianism fail. [Under review, draft here]
  • A paper arguing that longtermism is subject to a particularly severe version of the problem of cluelessness, due to increasing unawareness about the future. [slides from a presentation to GPI]

Models and policymaking

Models are a central scientific tool, and they are increasingly used to inform policymakers. But modelling is a subtle and complex methodology, and involves significant and often poorly understood uncertainty. In my work on models, I consider how they should be used by policymakers, or by scientists advising policymakers. I am particularly interested in the expert/non-expert interface, and the way model results are understood and used across that interface. 

My main piece of work on this is my paper Making Confident Decisions with Model Ensembles, co-authored with Richard Bradley and Roman Frigg. The paper came out of a research collaboration with in-house scientists at an insurance company, who were concerned that the natural catastrophe models they bought from risk consultancies carried significant but unquantified uncertainties. In particular, they were concerned about an ensemble of models used to price hurricane insurance for the USA’s Atlantic coast. We applied the “confidence” decision theory developed by Brian Hill to the case, adapting it to take input from a collection of models. The purpose of the study was to show that a decision-theoretic, as opposed to a scientific or epistemic, approach could help users manage the uncertainty in the model ensemble. Our method generalises to other cases, and provides an interesting contrast to current discussions of ensembles of global climate models such as CMIP6.
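To convey the shape of the method (a schematic sketch only: the function names, the maximin rule, and the toy numbers are mine, not the paper’s actual construction), one can nest the ensemble into sets of probability distributions, let the stakes fix how large a set must be consulted, and then choose cautiously relative to that set.

```python
# Schematic sketch of a confidence-style decision rule over a model
# ensemble. All names and numbers are illustrative assumptions.

def expected_utility(prob, utilities):
    """Expected utility of an act whose outcome probabilities are `prob`."""
    return sum(p * u for p, u in zip(prob, utilities))

def choose_with_confidence(nested_sets, acts, stakes_level):
    """Pick an act given a nested family of probability sets.

    nested_sets: list of sets of probability vectors, ordered from
        smallest (high confidence) to largest (low confidence); higher
        stakes call for a larger, more cautious set.
    acts: dict mapping act names to utility vectors over outcomes.
    stakes_level: index into nested_sets, fixed by how much is at stake.
    """
    probability_set = nested_sets[stakes_level]

    # Cautious rule over the selected set: maximin expected utility.
    def worst_case(utilities):
        return min(expected_utility(p, utilities) for p in probability_set)

    return max(acts, key=lambda act: worst_case(acts[act]))

# Toy ensemble of three hurricane models, each a distribution over
# {no landfall, minor landfall, major landfall} (numbers invented):
ensemble = [(0.7, 0.2, 0.1), (0.6, 0.25, 0.15), (0.5, 0.3, 0.2)]
# Nested sets: high confidence consults only the central model; lower
# confidence (higher stakes) admits the whole ensemble.
nested = [ensemble[1:2], ensemble]
acts = {"price low": (10, 2, -50), "price high": (6, 5, 0)}
print(choose_with_confidence(nested, acts, stakes_level=1))
```

The design point this is meant to illustrate is that caution scales with stakes: high-stakes decisions consult a wider, less committal set of distributions.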

Together with Frigg and Bradley, I also developed a more accessible, public-oriented paper titled Environmental Decision-Making Under Uncertainty. It is forthcoming in Wenceslao J. González (ed.): Current Trends in Philosophy of Science. A Prospective for the Near Future. Synthese Library. Cham: Springer.

In 2020, I worked with Richard Bradley on an analysis of the UK’s Covid-19 response. We were interested both in the government’s use of models and in its decision procedure. Our first output was the paper Following the Science: Pandemic Policy Making and Reasonable Worst-Case Scenarios, which discusses various aspects of the UK government’s interaction with experts on its SAGE advisory panel. We criticised two aspects of the government’s use of scientific advice. First, pandemic policymaking is not based on a comprehensive assessment of policy impacts. Second, the focus on reasonable worst-case scenarios as a way of managing uncertainty results in a loss of decision-relevant information and does not provide a coherent basis for policymaking.

Policymaking, values, and climate science

Policymakers seek input from scientists to make informed decisions. This requires understanding both what the science says is likely to happen and how uncertain that is. However, scientists developed their tools for exploring and communicating uncertainty with their scientific peers in mind. As a result, policymakers may not know how to use scientific information, and so not know what to believe or how to act. I explore how to improve this situation, looking at how science is done, how expert advice is elicited, and how results are communicated.

One area of recent interest is the usability challenge for climate information, and in particular climate services. “Climate services” refers to climate science done to support local or regional adaptation efforts and other policymaking. In the scientific literature, concern is rising that the information provided by mainstream, academic climate science is often not taken up by decision-makers. This has led to calls for climate information to be made more “usable”. Rather than being conceived of as a communication issue, this is increasingly framed as a call for changes to how climate science is done—so that its products are more credible, salient, and legitimate to users (Kirchhoff, Lemos, and Dessai 2013). One issue is how the science incorporates stakeholder interests and whether doing so will undermine the objectivity of science (Parker and Lusk 2019; Lusk 2020). With Julie Jebeile, I recently argued that climate science needs to radically rethink how it approaches research initiation and collaboration, in order to produce usable scientific information. This paper, entitled Usability of climate information: Toward a new scientific framework, appeared in WIREs Climate Change.

I also have an expository piece, entitled Roles for scientists in decisions, forthcoming in a collected volume. 

Modelling in philosophy

In a meta-philosophy project, I am exploring the idea that philosophy does and should make use of modelling. I think that philosophers are increasingly engaged in modelling: we build idealised representations of agents, cases, and concepts to facilitate our inquiry. This has recently been recognised (Godfrey-Smith 2006; Williamson 2017), but the scope and desirability of modelling is contested. I apply philosophy of science to analyse the methodology of modelling in philosophy, aiming to illuminate and improve our practice.

This project began as a methodological reflection on my own work during my PhD. The first area of philosophy that I thought about was thus decision theory and formal epistemology. In Normative Formal Epistemology as Modelling, forthcoming in BJPS, I argue that normative work in decision theory and formal epistemology should be understood as modelling. I do so by developing a dilemma for the methodologist trying to rationalise current practice: either philosophers are constructing theories or they are constructing models. If their products are theories, then formal epistemologists are doing very poorly indeed. So, I argue, they must be modelling. I discuss how normativity can be incorporated into an account of modelling, which is typically understood as a part of positive science. 

Recently, I have extended this analysis to a larger normative domain: ethics. In Modelling in Normative Ethics, in Ethical Theory and Moral Practice, I argue that elements of modelling are widespread in ethics. I propose that we can construe general moral “theories”, like utilitarianism, as models—idealised, domain-specific, workings out of particular rightmakers and goodmakers—and that doing so offers us (1) a plausible interpretation of these theories, (2) an explanation of the diversity of theories on offer, and (3) an explanation of their limitations. There are several consequences to this view. First, we get a new way of understanding what is going on (and going wrong) in the theory/anti-theory debate in ethics. Second, we get a new way of understanding impossibility theorems in population ethics, and their bearing on ethics as a whole. Finally, I claim that the fact that ethicists have (unknowingly) been modelling comes with certain methodological constraints for them. Most notably, models are not sensitive to counterexamples in the way that much of ethical theory is taken to be.

With a collaborator, I am developing a follow-up paper arguing that modelling is a good method for normative ethics. In this “manifesto” we outline various uses of models and defend their fruitfulness as part of the ethicist’s methodological toolbox.

Works in progress [contact me for drafts]

  • Co-authored paper making a plea for more modelling in first-order normative ethics

Physics research

My Master’s research was in particle physics.