Wednesday, August 29, 2007

Moral Deliberation

Hi all... Thought I'd start off the year's first true post...

So, I'm working on a paper for Steve Wall and I'd love a bit of help on a couple of questions...

First: Can we think of any reasons NOT to admit the rationality of a certain degree of skepticism with regard to our own moral deliberative capacities? It seems to me almost obvious (which is a sure sign I've overlooked something) that we ought to recognize the possibility that we're wrong when we come to moral judgments. Moreover, I'm thinking, it's got to be almost rationally necessary to recognize this possibility... i.e., it's a rational defect not to acknowledge the possibility that we're wrong. N.B. I'm just assuming the falsity of expressivism and the like, which turn all moral assertions into truths no matter what. [EDITED: In deference to the well-resting of Charles Stevenson in his grave... I retract the "into truths no matter what", leaving it in place only so all can see my shame. For laziness' sake, I'll hope the reader knows what I was after and leave it at that.]

Second: I'm wondering about the "starting points" we take our cues from in moral deliberation. After a fashion, for instance, Rawls construes these as our "considered judgments," but we might well think that that's not really the best place to start... But, it seems to me, we've certainly got to start from 'somewhere', since denying that seems to make our moral deliberations 'a priori' in such a way that, carrying them out, we'd be unable to come to justified conclusions about concrete moral matters. Clearly we do come to such conclusions, so are we starting from something like Rawlsian considered judgments? An abstract background conception of the "good life" that, for the purposes of the deliberative process, has to take the form of an unchallenged assumption (if only provisionally)? How do we start the process rolling, so to speak? And, relatedly, what justifies using the starting points that we do, whatever they are? Is there a "best" place to start? Would starting from there, wherever it is, and deliberating with perfect rationality, necessarily yield true moral judgments? Or is even the 'best' place still inadequate to that task?

Third: Assuming the rational demand for a certain degree of moral skepticism, can we nevertheless say that perhaps our moral conclusions, imperfect though they might be, at least might have 'something' right about them (think of a Millian partial truth)? That is, does skepticism necessarily imply that we're entirely wrong, or just that we're always at least 'partially' wrong in some way, and we just don't know which part it is? Moreover, can we justify, on such grounds, the prima facie acceptance of other, disagreeing moral views as themselves probably partially right? Can we then go on to think that, perhaps, in encounters of this sort with others in situations of disagreement, we might have some rational hope of making at least some epistemic 'progress' even if we think that we'll never get morality "totally right"?

Sorry it's a bit inchoate at the moment... if it makes anybody feel better about having to sort it out, it's only a bit more coherent in the fully-expanded version of the paper as it stands right now...

I'd love anybody's thoughts, and any suggestions for articles to look at would be more than welcome too....

Thanks! Happy start-of-the-year!
Corwin

7 comments:

  1. Expressivism doesn't turn all moral assertions into truths "no matter what"!

    It would be closer to the truth to say that it turns them into imperatives. It doesn't even turn them into equally good imperatives (though of course bad imperatives are understood as those of which I do, and one should, disapprove).

    Sorry to disregard the point of your post, but I just rolled over in my grave.

  2. Charles:

    Sorry for my hasty circumlocutions... I was typing with my fingers more than my head and turned a sloppy phrase... My apologies...

  3. 1) I can't think of any reasons to think that our capacities are infallible. We might be tempted to say that a particular view is the "best that we can do for now." Is Wall urging that you find some of these, or are you just curious about it yourself?

    2) I'd have to know more about your meta-ethical position to address the idea of a "true moral judgment." As for an ideal starting point for moral deliberation, I don't think that there is a best place to start. The "doing" of philosophy as I understand it is the process of moving from wherever you are towards a better position relative to the world around you. Thus, the fact that one starts out with a particular view has no bearing on the final "product" that one might come away with. The language may be a little clumsy there, but I hope the point comes across.

    With regard to (3), I don't think that skepticism requires that you think that you are constantly wrong. It requires that you entertain the idea that you might be wrong.

  4. Wall isn't urging me... but these questions are related to the paper I'm writing, and I always get nervous when I think that something is obvious...

    I may have phrased what I was asking in #2 a little awkwardly... so whatever awkwardness results is entirely my fault. Certainly, for instance, we hope to make progress with regard to our moral views. But when we start deliberating at all, I'm thinking, not all starting points are created equal. I could, for instance, start from a random declaration like "Belching is morally valuable" and go from there. Or I could (perhaps better) start from a reflection on various examples of lives that I think to be good, and try to 'extract' something from these... Or I could think that, whatever morality is, it's got to be "universalizable" and see what I can get out of that... None of these are exclusive, of course... but does that make it any clearer what I'm trying to get at... something like, how and where we find the presuppositions we use... and whether any sources are "better" than others...

    Certainly you're right that skepticism doesn't demand that... sorry, I was writing much less clearly than I'd hoped at the time... I'm really trying to get at the question of what 'reason' may demand of us, granting at least a modest acknowledgment of fallibility, in cases of moral disagreement and conflict. What degree of 'credence' or 'respect' do I owe the positions of others, if they're no more likely to be right than I am?

  5. (About Comment 2)

    I don't really know if there is a better or worse place to start. That really just falls out of my idea that philosophy is a process/skill that can be learned.

    The move from "Belching is morally valuable" to "Belching is morally neutral" might take longer for one person than another. This might be based on their philosophical skill, the range of options that they take to be worthy of consideration, or some other factors. I kinda doubt that there is a particular discrete number of steps between a proposition and a conclusion that varies according to starting place. Also, what is your original judgment (maybe the suspicion that belching is valuable) that makes you examine other sorts of valuable things for comparison? I suspect that you don't start with a blank slate.

    (About Comment 3)
    Why do you think that your propositions aren't any more likely to be right than theirs? I might be missing something, but I think that there are answers that are "better" and "worse" than others with regard to the context. The fact that you might each be wrong doesn't entail that you are each equally likely to be wrong. At least I don't think so...

  6. Let us agree that our moral judgment is quite fallible. So right there is a reason for me to believe that any particular moral judgment of mine might be wrong. But this does not on its own indicate that I always have MOST reason to hold any particular judgment as possibly wrong. I can imagine a circumstance in which someone with fallible moral judgments has most reason to believe herself to be infallible.

    Consider some kind of soldier or firefighter whose job requires quick action and snap judgments. Any hesitation or waffling on her part could result in catastrophe, so she trains herself to act on instinct when on the job. I know that her judgment is fallible. Perhaps she even understands this in a quiet moment, but when on the job she has most reason to believe her moral judgment to be infallible.

    This is probably rare. Perhaps I've described something psychologically impossible, but it seems realistic to me. I'm sometimes in the business of childcare/education, and when I'm in charge of a group of kids I certainly enter a different mindset as to the fallibility of my judgment. In circumstances of great responsibility I act with an enhanced sense of confidence. Maybe others know what I'm talking about.

    I'm pointing to a hierarchy of reasons here. Many things may be reason-giving, but sometimes they will conflict, and some reason-giving feature loses out. I may have theoretical reasons to believe in the fallibility of my moral judgment, but these may be overridden by other reasons.
