Abstract: A long history of thought holds that stubbornness can be good for science.  If individual scientists stick to their theories, even when they are not the most promising, the entire community will consider a wide set of options, engage in debate over alternatives, and, ultimately, develop a better understanding of the world.  This talk looks to network modeling to address the question: is intransigence good for group learning?  The answer will be nuanced.  A diverse set of models show how some intransigence can improve group consensus formation.  But another set of results suggests that too much intransigence, or intransigence of a stronger form, can lead to polarization and poor outcomes.
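
A minimal sketch of the kind of network-epistemology model the talk draws on may help fix ideas. The toy below is a Zollman-style two-armed-bandit setup of my own devising: the complete network, payoff values, update rule, and parameters are all illustrative assumptions, not the specific models discussed in the talk.

```python
import random

def simulate(n_agents=20, n_stubborn=5, p_high=0.6, p_low=0.4,
             n_rounds=100, trials=10, seed=0):
    """Toy model: agents choose between a familiar action with known payoff 0.5
    and a novel action whose payoff is either p_high (the truth here) or p_low.
    'Stubborn' agents keep testing the novel action even when their credence
    that it is better has dropped below 0.5."""
    rng = random.Random(seed)
    credence = [0.01 + 0.98 * rng.random() for _ in range(n_agents)]  # P(novel is better)
    stubborn = set(range(n_stubborn))

    for _ in range(n_rounds):
        evidence = []
        for i in range(n_agents):
            if i in stubborn or credence[i] > 0.5:
                # The novel action really is better, so outcomes follow p_high.
                evidence.append(sum(rng.random() < p_high for _ in range(trials)))
        for s in evidence:  # complete network: everyone sees all the evidence
            lr = (p_high ** s * (1 - p_high) ** (trials - s)) / \
                 (p_low ** s * (1 - p_low) ** (trials - s))
            for i in range(n_agents):
                odds = credence[i] / (1 - credence[i]) * lr
                credence[i] = min(max(odds / (1 + odds), 1e-12), 1 - 1e-12)
    return sum(c > 0.5 for c in credence)  # agents who end up believing the truth

print(simulate(n_stubborn=0), simulate(n_stubborn=5))
```

Comparing runs with and without stubborn agents shows the mechanism at issue: agents who keep testing the unpopular option keep evidence flowing to the whole community, which is how mild intransigence can help the group; the stronger forms of intransigence mentioned in the abstract would correspond to agents who also stop updating their credences.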

(Sponsored by the History of Philosophy Group) Aristotle sometimes makes remarks about relations between kinds of living organisms. Such remarks are not typically taken to express what I call 'natural facts' about the kinds—facts of the sort that an Aristotelian science would attempt to explain. After all, it is hard to see how Aristotle could account for such facts, given that he does not believe that species evolved (or co-evolved) to stand in such relations, and does not think they were created by any providential deity or intelligent designer. However, simply ignoring these remarks isn't appealing, either. I propose that Aristotle has a much richer conception of the natures or essences of living beings than has traditionally been thought, and consequently he does have the resources to explain the relations between living kinds using the principles that his science countenances.

In her Foundations of Physics (1740 & 1742), Du Châtelet makes important contributions to the philosophy of space and time. Recent scholarship has begun to investigate her chapter on space, but – so far as I know – there hasn’t been a careful investigation of her chapter on time. I start that here. Her discussion of time opens with a short paragraph asserting an analogy between space and time, and there are some clear parallels between her treatments of the two. But there are also some differences – as there should be, since time is different from space. I pinpoint where time and space are the same for Du Châtelet, and where they differ. It turns out that Du Châtelet’s metaphysics and epistemology of space and of time are analogous in much deeper ways than a first reading suggests. However, there are also two important disanalogies. The first is that space concerns coexistence whereas time concerns non-coexistence. That’s not a surprising disanalogy, but it plays a very interesting role in Du Châtelet’s analysis of how we get our ideas of space and time, with implications for her metaphysics of space and time. The second concerns how we measure space and time. This is a disanalogy with a long history, and one that persisted for another 150 years. I explain the form it takes in Du Châtelet’s account.

Instances of the law of large numbers are used to model many different physical systems. In this paper, I propose and defend a non-standard interpretation of those instances. Roughly put, according to this interpretation, the law of large numbers is best understood as being about typicality. In particular, the content of that law, when used to model physical systems, is that the probability of an event typically—rather than probably—approximates the frequency with which that event occurs.
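
For reference, the standard statement of the relevant instance of the law (my formulation, not necessarily the paper's) is: for an event A with probability p and relative frequency f_n(A) over n independent trials,

```latex
\forall \varepsilon > 0: \quad
\lim_{n \to \infty} P\bigl(\,\lvert f_n(A) - p \rvert > \varepsilon\,\bigr) = 0 .
```

The orthodox gloss reads the outer P as "probably" (the frequency probably stays close to p for large n); the typicality reading defended in the paper instead treats the exceptional sequences of trials, those on which frequency and probability come apart, as an atypical, negligibly small set.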

Scientists aim to produce effects that can be reproduced at other times and places. Historical research has shown how difficult this can be and has raised questions about the differences between the replication of an effect and the reproduction of an experiment. In this talk we'll explore the question of whether we today can reproduce experiments performed in the past that are seen to have discovered influential, novel effects. And we'll ask just what we can learn in so doing about the nature of scientific practice itself.

A central task of developmental psychology and philosophy of science is to show how humans learn radically new concepts. Famously, Fodor has argued that it is impossible to learn concepts that are not definable in terms of concepts one already has. Critics of Fodor, inspired by Quine, have proposed alternative accounts of what concepts are like and how they are learned, but these accounts have been criticized as underspecified, circular, and unduly nativist. I will argue that there are learning processes that can generate genuinely novel concepts, and that we can understand these processes through an examination of several modern machine learning algorithms. These algorithms begin by mapping inputs onto a feature space with an underlying geometry, and then yield transformations of that feature space that generate new similarity structures, which can in turn underlie conceptual change. This framework provides a tractable, empiricist-friendly account that unifies and shores up various strands of the neo-Quinean approach.
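
To make the feature-space picture concrete, here is a deliberately tiny sketch; the items, features, and "learned" weights are all invented for illustration, and no claim is made about the particular machine learning algorithms the talk examines.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: math.sqrt(sum(xi * xi for xi in x))
    return dot / (norm(u) * norm(v))

# Hypothetical items in a raw perceptual feature space:
# [lives_in_water, streamlined_body, nurses_young]
items = {
    "whale": [1.0, 0.9, 1.0],
    "shark": [1.0, 0.9, 0.0],
    "bear":  [0.0, 0.3, 1.0],
}

# A learned re-weighting of the feature space (weights made up here,
# standing in for what a trained model would extract from data).
weights = [0.1, 0.1, 2.0]
relearned = {k: [w * x for w, x in zip(weights, v)] for k, v in items.items()}

for label, space in [("raw space", items), ("learned space", relearned)]:
    print(label,
          "| whale~shark:", round(cosine(space["whale"], space["shark"]), 2),
          "| whale~bear:", round(cosine(space["whale"], space["bear"]), 2))
```

In the raw space the whale groups with the shark; after the transformation it groups with the bear. The point is only that a transformation of the underlying geometry changes which items count as similar, and that kind of restructuring is what is supposed to underwrite conceptual change.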

As Allan Franklin has long argued, calibration plays an important role in the epistemology of experiment. When investigating new phenomena, scientists rely on surrogate signals to calibrate their instruments and thereby help justify their ultimate results. This talk pursues two new ideas for the epistemology of experiment literature on calibration. First, I evaluate a striking recent proposal (from 21-cm cosmology research) to largely sidestep surrogates and lump calibration parameters together with other unknowns under a unified Bayesian framework. Would such an approach lose the justificatory power provided by a more traditional approach to calibration? Second, I discuss the relationship between calibration and the ‘commissioning phase’ of experiments, arguing that the latter deserves further attention from philosophers of experiment.
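
Schematically, and as my reconstruction rather than the talk's own formalism, the contrast is between the traditional two-step procedure, in which calibration parameters c are first fixed from surrogate data d_s and then plugged in, and a unified approach in which c is inferred jointly with the signal parameters theta and then marginalized over:

```latex
% Traditional two-step approach: fix the calibration from surrogate data, then plug it in
\hat{c} = \arg\max_{c}\; p(c \mid d_s), \qquad
p(\theta \mid d, \hat{c}) \propto p(d \mid \theta, \hat{c})\, p(\theta)

% Unified Bayesian approach: treat c as just another unknown
p(\theta \mid d) \propto p(\theta) \int p(d \mid \theta, c)\, p(c)\, \mathrm{d}c
```

The justificatory question can then be put, roughly, as what is lost, if anything, when the surrogate-based warrant for a fixed calibration is replaced by a prior over c and a marginalization step.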

When one higher-level phenomenon is ontologically reduced to some lower-level phenomena, what does this entail about the ontological status of the phenomenon being reduced? For instance, if composed entities are reducible to their components, then does this mean that the composed entities do not exist? And if so, how can we continue referring to the reduced higher-level phenomenon in our talk and theories? There are two popular strategies used to regiment reduction: grounding and truthmaking. I will examine these strategies and propose that ontological reductionism is best formulated in terms of minimal truthmakers. I will then put this strategy to use in a case study at the biology-chemistry interface.

​The problem of detecting multiple changes at the same site in a DNA sequence is a fundamental epistemic challenge facing anyone who wishes to infer how a DNA sequence has evolved.  In response to this problem, biologists first formulated a range of models of sequence change, then a number of methods for choosing among those models, and then automated the process in a series of computer programs.  This paper analyzes the results of that automation in terms of how many users made incorrect inferences with these software packages.  I argue that the division of labor necessary in science creates certain responsibilities of expertise that could have prevented these kinds of errors but come at a cost of limiting the "democracy" of scientific inquiry.
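
For concreteness, the simplest member of the family of models alluded to here is the Jukes–Cantor model; its distance correction illustrates the multiple-hits problem (this is standard background, not a summary of the paper's argument). If p is the observed proportion of sites at which two aligned sequences differ, the estimated number of substitutions per site is

```latex
\hat{d} = -\tfrac{3}{4}\,\ln\!\left(1 - \tfrac{4}{3}\,p\right),
```

which exceeds p because some sites will have changed more than once, and which blows up as p approaches 3/4, the level of difference expected between unrelated sequences under this model. Model-selection software of the kind discussed in the paper chooses among models ranging from this one to much richer ones.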

 Eliminative reasoning is an appealing way to justify a theory: observations rule out all the competitors, leaving one theory standing. This only works, however, if we have taken all the alternatives into account. There have been long-standing debates in philosophy regarding the upshot and limitations of eliminative arguments. In this talk, I will defend the virtues and clarify the limitations of eliminative reasoning, based on seeing how it has been used in gravitational physics. I will consider one case study of eliminative reasoning in detail, namely efforts to show that general relativity (GR) provides the best theory of gravity in different regimes. Physicists have constructed parametrized spaces meant to represent a wide range of possible theories, sharing some core set of common features that are similar to GR. I draw three main points from this case study. First, the construction of a broad space of parametrized alternatives partially counters the “problem of unconceived alternatives” (due to Duhem and Stanford). Second, this response is only partially successful because the eliminative arguments have to be considered in the context of a specific regime. Solar system tests of gravity, using the PPN framework, favour GR — or any competing theories that are equivalent to it within this regime. But, third, eliminative arguments in different regimes may be complementary, if theories that are equivalent in one regime can be distinguished in other regimes. These three points support a qualified defense of the value of eliminative reasoning.
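
As background on the parametrized spaces mentioned here (standard textbook material rather than the talk's own analysis): in the parametrized post-Newtonian (PPN) framework, each metric theory of gravity is summarized by ten parameters, and general relativity corresponds to one point in that space, most importantly

```latex
\gamma_{\mathrm{GR}} = \beta_{\mathrm{GR}} = 1 .
```

Solar-system measurements (for instance the Cassini light-delay test) pin gamma down to within a few parts in 10^5 of the GR value, which is why any surviving competitor must agree with GR within this regime even if it differs elsewhere.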

​As the title suggests, I will be arguing that Inference to the Best Explanation is a form of non-deductive reasoning in mathematics. I will have something to say about the roles that non-deductive reasoning plays in mathematical practice, about the nature of IBE in science, about (one kind of) explanation in mathematics, and about the way that IBE operates in mathematical research. I will also discuss how IBE in mathematics can be reconciled with a Bayesian picture of the confirmation of mathematical conjectures.
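
The Bayesian picture mentioned at the end can be stated schematically (a generic schema, not the talk's specific proposal): for a conjecture C and newly verified instances or consequences E,

```latex
P(C \mid E) \;=\; \frac{P(E \mid C)\,P(C)}{P(E)} \;>\; P(C)
\quad \text{whenever}\quad P(E \mid C) > P(E),
```

so checking instances that would be unlikely unless the conjecture were true raises its probability even in the absence of a proof; the reconciliation at issue is how the explanatory considerations driving IBE fit into this kind of updating.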

In this paper I ask whether there is a definition of delusion which encompasses how the word ‘delusion’ is used in lay talk, where delusions are implausible or mistaken beliefs, and how ‘delusion’ is used in psychiatry, where delusions are symptoms of mental disorders. Using a variety of examples, I show that often talked-about features of delusions—such as being false, bizarre, or pathological—should not be regarded as defining features because they are not necessary conditions for a belief to be delusional. Next, I propose a unified notion of delusion as a belief that is irresponsive to counter-evidence and central to a person’s identity.

I discuss the openness of the future in a relativistic setting in which there are deterministic laws. I argue against many kinds of common wisdom.

For more than a century, historians of astronomy argued that Giordano Bruno’s theory that the stars are suns surrounded by planets was not a reason why the Roman Inquisition condemned him to be burned alive in February 1600. However, a systematic analysis of all extant primary sources from Bruno’s trial, plus hitherto unknown sources, surprisingly shows that Bruno’s cosmology of innumerable worlds was the main canonical, formal “heresy” that led to his execution. For Catholics, heresies were crimes against God. By studying treatises on heresies and Catholic Canon Law, I found that Bruno’s beliefs about the existence of many worlds and about the soul of the universe had been officially categorized as heretical before he even voiced them. Previous accounts of Bruno’s trials had not taken these key facts into account. I will show that Galileo’s most prominent critics in 1616 and 1633 were very concerned about Bruno’s condemned “Pythagorean” views. The Copernican theory was connected to the pagan belief that the Earth is animated by a soul, a view that Cardinal Bellarmine rejected in his writings. Moreover, by 1616, nine prominent individuals linked the plurality of worlds to Galileo’s telescopic discoveries. Such concerns also affected the censorship of Copernicus’s work in 1620. These considerations have seemed entirely absent from Galileo’s trial of 1633, yet I have found an extensive, previously unanalyzed and unpublished Latin manuscript by Melchior Inchofer, the Inquisition consultant who provided the most critical expert opinion against Galileo, which explicitly censures Galileo’s works as offensive, scandalous, and temerarious, especially for defending the “Pythagorean” heresies about the soul of the universe and of many inhabited worlds.

This is an HPS-style outline of a new interdisciplinary project at the interface of philosophy of language, linguistics, and artificial intelligence. I aim to explore the complicated relationship between human and machine translation. The project includes: (i) a theoretical part focused on the representation of linguistic meaning in various human, machine, and hybrid human-machine translation systems; and (ii) a practical part focused on the different forms of human-machine symbiosis in technical (non-literary) translation areas and ways of improving them. I see the two parts of the project as closely related: a better understanding of the theoretical foundations of the mechanisms and processes at work in human and machine translation may suggest new ways of leveraging their strengths and overcoming their weaknesses; on the other hand, a closer look at how human and machine translation interact in real life may offer new insights into how physical systems represent linguistic meaning and, more ambitiously, what linguistic meaning consists in. In this talk I intend to introduce some problems of this kind in a historical context, based on a brief history of machine translation and an overview of recent developments. My primary goal is to raise awareness of this research agenda and to convey the importance of applying the conceptual tools of analytic philosophy, logic, and cognitive science to the analysis of the current situation in the translation industry. 

The emerging conversation around “big data” biology or “data-centric” biology (Leonelli 2016) and its implied contrast, “small data” hypothesis-driven biology, needs enriching because there are other ways biological research is reorganizing around data in this age of online databased scientific knowledge. I call one of these other ways “dataset-centric biology.” In this talk, I will describe a data journey drawn from a case study of human population genomics research. The case is part of a larger project on what has been called the “re-situation” of scientific knowledge (Morgan 2014). In this larger project, we seek to track a variety of knowledge “objects”: not only facts but also data, models and software. I offer a tentative model of data journeys to interpret the case. The model comprises three kinds of components: scientific data structures, data representations, and data journey narratives.

Concepts of levels of organization and their use in science have received increased philosophical attention of late, including challenges to the well-foundedness or widespread usefulness of levels concepts. One kind of response to these challenges has been to advocate a more precise and specific levels concept that is coherent and useful. Another kind of response has been to argue that the levels concept should be taken as a heuristic, to embrace its ambiguity and the possibility of exceptions as acceptable consequences of its usefulness. In this talk, I suggest that each of these strategies faces its own attendant downsides, and that the pursuit of both strategies (by different thinkers) compounds the difficulties. That both kinds of approaches are advocated is, I think, illustrative of the problems plaguing the concept of levels of organization. I end by suggesting that the invocation of levels can mislead scientific and philosophical investigations just as much as it informs them, and that levels should be explicitly treated as one limited heuristic or axis of analysis among many.

One tradition in moral philosophy depicts human moral behavior as unrelated to social behavior in nonhuman animals. Morality, on this view, emerges from a uniquely human capacity to reason. By contrast, recent developments in the neuroscience of social bonding suggest instead an approach to morality that meshes with ethology and evolutionary biology. According to the hypothesis on offer, the basic platform for morality is attachment and bonding, and the caring behavior motivated by such attachment. Oxytocin, a neurohormone, is at the hub of attachment behavior in social mammals and probably birds. Not acting alone, oxytocin works with other hormones and neurotransmitters and circuitry adaptations. Among its many roles, oxytocin decreases the stress response, making possible the trusting and cooperative interactions typical of life in social mammals. Although all social animals learn local conventions, humans are particularly adept social learners and imitators. Learning local social practices depends on the reward system because in social animals approval brings pleasure and disapproval brings pain. Acquiring social skills also involves generalizing from samples, so that learned exemplars can be applied to new circumstances. Problem-solving in the social domain gives rise to ecologically relevant practices for resolving conflicts and restricting within-group competition. Contrary to the conventional wisdom that explicit rules are essential to moral behavior, norms are often implicit and picked up by imitation. This hypothesis connects to a different, but currently unfashionable tradition, beginning with Aristotle’s ideas about social virtues and David Hume’s 18th-century ideas concerning “the moral sentiment”.

Struggling to make sense of persistent vaccine hesitancy and refusal, commentators routinely bemoan scientific illiteracy among the general publics or fret over a destructive cultural embracing of “anti-intellectualism” and the resulting “death of expertise”. This is allegedly part of a larger cultural war on science that threatens the future of liberal democracies. Science, it is assumed, cuts through partisan politics, and the publics fail insofar as they refuse to see this. This talk challenges popular framings of vaccine hesitancy as “public misunderstanding of science” and “death of expertise”, demonstrating instead that public resistance stems from poor trust in scientific institutions. Working with an understanding of science as socially situated highlights the importance of trust and credibility in the successful operations of science—both within research communities and in relation to the publics. Public mistrust of science is thereby not a problem with the publics but a problem of scientific governance; specifically, a failure of scientific institutions to maintain the credibility required to achieve their social and epistemic aims. The talk ends with general recommendations regarding how vaccine outreach efforts can be modified in light of this insight.

Biological brains are increasingly cast as 'prediction machines': evolved organs whose core operating principle is to learn about the world by trying to predict their own patterns of sensory stimulation. Rich, world-revealing perception of the kind we humans enjoy occurs, these stories suggest, when cascading neural activity becomes able to match the incoming sensory signal with a multi-level stream of apt 'top-down' predictions. This blurs the lines between perception, understanding, and imagination, revealing them as inextricably tied together, emerging as simultaneous results of that single underlying strategy. In this talk, I first introduce this general explanatory schema, and then discuss these (and other) implications. I end by asking what all this suggests concerning the fundamental nature of our perceptual contact with the world.
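
As a cartoon of the core operating principle (a toy, one-level illustration of my own; real predictive-processing models are hierarchical and far richer):

```python
def perceive(sensory_stream, gain=0.2):
    """Toy prediction-error loop: maintain an estimate of a hidden cause,
    predict the incoming signal from it, and revise the estimate in
    proportion to the prediction error."""
    estimate = 0.0
    for sensation in sensory_stream:
        prediction = estimate            # top-down prediction of the input
        error = sensation - prediction   # bottom-up prediction error
        estimate += gain * error         # update driven by the error, not the raw input
        yield estimate

noisy_input = [1.0, 0.8, 1.2, 1.1, 0.9, 1.0, 1.05]
print([round(e, 2) for e in perceive(noisy_input)])
```

On each cycle the top-down prediction is compared with the incoming signal, and it is the residual error rather than the raw input that drives revision of the internal model; the blurring of perception, understanding, and imagination comes from the same generative machinery producing the predictions in all three cases.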

2016: John Norton (Professor, University of Pittsburgh), '1, 3, 5, 7, ... What's Next?'

2015: Melinda Fagan (Professor, University of Utah), 'Explanation, Unification, and Mechanisms'

Existing efforts to understand human moral psychology in evolutionary terms fail to explain why we externalize or objectify moral demands. I argue that this distinctive tendency emerged as a way to establish and maintain a crucial connection between the extent of our own motivation to adhere to a given moral demand and the extent to which we demand that appealing partners in cooperative or exploitable forms of social interaction adhere to it as well. This hypothesis is supported with a diverse array of empirical findings and used to explain a number of otherwise somewhat puzzling features of human social interaction.