Some remarks on John Turri's Philosophy Bombs — Ryan Tanner

A couple of days ago, over at the Facebook group Serious Philosophy, someone (James Ragsdale) posted a link to this article at Daily Nous about philosopher John Turri dropping some mad philosophy bombs in a recent interview. I thought I’d archive my FB remarks here with some minor edits and notes, with maybe an aim to follow up later. My comments are quick and dirty and focus on just a couple of lines from the Daily Nous piece and some passages from a PLOS ONE article by Turri and Wesley Buckwalter. The gist is that I think the claims Turri makes are questionable, and that something is off about how X-Phi is conducted and the conclusions experimental philosophers invite us to reach. In particular, Turri wants to reject, on X-Phi grounds, the propositions that the justified-true-belief account of knowledge is the commonsense view, and ditto for ought-implies-can in the context of moral judgments. I just don't buy it.


Re: Knowledge being justified true belief:

> there was never any evidence that JTB was the “commonsense” view either, and recent work by experimental philosophers, particularly Christina Starmans and Ori Friedman, shows that it is not the commonsense view.

Note: Here's what Starmans and Friedman conclude in their paper on the matter: "The findings suggest that the lay concept of knowledge is roughly consistent with the traditional description of knowledge as justified true belief, but with the caveat that people also require that the belief be based on authentic, rather than apparent, evidence [like in some Gettier cases]."

Now, what am I misunderstanding? Is the claim really that the commonsense lay view is that knowledge is true, authentically justified belief? I mean, sure: that shows that Gettier examples ultimately generate more refined intuitions about how people attribute knowledge. That's what they were supposed to do, wasn't it? That doesn't show the JTB account isn't commonsense or that it's exotic; lots of people readily believe it if you lay it out for them. It just shows you can advance strange cases suggesting it isn't complete. So what? I really feel like I must be missing something.

I'm gonna look up the cited authors, but my knee-jerk response is: Oh, please. You can get a first-day undergrad class who've never done conceptual analysis on anything in their lives to agree that knowledge is JTB in about six minutes. Knowledge-is-JTB isn't an exotic view.

Re: his ought-implies-can experiments:

> The results were absolutely clear: commonsense morality implicitly rejects “ought implies can.”

I've only glanced at his paper w/Buckwalter, but it strikes me that in their experiments they're asking the wrong questions and so getting results of uncertain importance. That is, the questions are far too complex and loaded to illuminate anything about commonsense morality. E.g., for experiment 1, they offered this example:

> Walter promised that he would pick up Brown from the airport. But on the day of Brown’s flight, Walter is [in a serious car accident/suffering from clinical depression]. As a result, Walter is not [physically/psychologically] able to pick up Brown at the airport.
>
> In the Physical condition, for instance, participants were asked, “Please choose the option that best applies” from the responses below:
>
>   1. Walter is obligated to pick up Brown at the airport, and Walter is physically able to do so.
>   2. Walter is obligated to pick up Brown at the airport, but Walter is not physically able to do so.
>   3. Walter is not obligated to pick up Brown at the airport, but Walter is physically able to do so.
>   4. Walter is not obligated to pick up Brown at the airport, and Walter is not physically able to do so.
>
> [...] The overwhelming majority of participants judged that Walter is obligated to pick up Brown despite the physical or psychological inability to do so.

If you're going to try to figure out whether people think OIC is a matter of commonsense morality, why not just ask, "Given Walter can't pick up Brown [because of whatever], what should he do?" Or how about this: "If you were with Walter, what would you tell him he should do?"

If they give an answer that isn't "He should pick up Brown," that would suggest that people do take inability into account when judging what someone should do, no? Why not follow up with, "Why wouldn't you tell him to pick up Brown?"

If this is characteristic of X-Phi, then it seems to me that experimental philosophers don't appreciate that probing commonsense thinking requires simple language and simple concepts (the kind people use every day): questions about what people should do when they can't, rather than questions about what moral "obligations" they are subject to when they've been afflicted with "inabilities".

The experiment also assumes that moral obligation implies "ought", which as a philosopher I buy as a technical truism, but which is probably held much more tenuously as a matter of common sense than OIC is.

Consider that "obligation" can easily be conflated, commonsensically, with other technical philosophical concepts like "what is most morally fitting" or "what is morally required", which don't necessarily imply 'ought' or (by extension) 'can'. People might latently understand that promise-making requires promise-keeping (in the non-'ought'-implying sense), and they might easily state this in terms of "obligation", but that's not the same as saying a crippled man ought (morally) to keep his promise when he can't. It just means lay people don't necessarily understand obligation in terms of oughts. Again, if you want to know whether people think 'can' matters for moral 'ought', why not just ask the subject what Walter (morally) should do, given that he can't pick up Brown?