Bull by the Horns


You might find it difficult to believe, but this blog started, seven years ago, as a place to put essays on philosophy.

After four rather laboured posts, I thought "I'm not actually any good at this, but I do have a real life - so I'll write about that instead", deleted the essays and started again.

I'd also discovered the philosophical aspect of marxism, which seemed to neatly cut through all the imponderables and knotty questions I spent my teens and twenties bashing my head against.

Now though, I've belatedly realised that:
(1) the philosophy is 19th century hermetic mysticism,
(2) it's rubbish,
(3) it's actually nothing to do with the political project of marxism, and
(4) however insightful the project's other theories may be, it's moribund.

So, I'm picking up some of my old books again, and I'm setting myself the task of writing one informal philosophically inspired essay per week - perhaps on Philosophical Phridays. And we'll see where it goes.

Starting with one of my favourite topics.

"On Bullshit" is a 1986 essay by philosopher Harry Frankfurt. He treats bullshit as a verb rather than a noun. Instead of asking "What is bullshit?", he asks "What is it to bullshit someone?"

His answer is that it's to attempt to persuade, without caring whether the argument is sound, logical or even meaningful. That means it's quite possible to bullshit someone with nothing but clear, reasoned arguments based on evident truths. But it's equally possible to use half-truths, emotive language, empty clichés, appeal to authority, intimidating technical terms, insinuation... or just plain lies.

A bullshitter is someone who doesn't care about truth or the ethics of debate - if the truth is useful, they'll use it, but they'll try to win by any methods available.

The obvious examples are advertising and politics. An advertiser might truthfully claim that users of toothpaste brand X have 66% fewer cavities than the average, according to a recent study - while failing to mention the study had a grand total of three subjects, and had to be done 50 times to get the result they wanted. An election candidate might constantly mention that their competitor was once accused of adultery - as though the accusation proved guilt, and as though it somehow related to the competitor's competence.

So, is it possible to bullshit oneself? Frankfurt doesn't ask, but there seems no reason why not. Indeed, we've all watched people take a few seconds to invent rationalisations after they've realised what they're doing or saying is wrong. People are actually very squeamish about violating their ethics, so they avoid the problem by building their ethics out of rubber.

But is it bullshit to use violence or the threat of violence? What about threatening someone with financial ruin if they don't join your Ponzi scheme? Or eternal damnation if they don't join your religion? That's certainly persuasion.

Indeed, what about threatening to stab someone if they don't give you their money? A mugger may be lying about their willingness to use a knife, but it's difficult to describe being robbed as being bullshitted.

What about actually being stabbed? What about torturing detainees to "persuade" them to tell what they know?

Again, Frankfurt doesn't go there, but there is a crucial implicit distinction - between being persuaded, by whatever means, to believe something, and being persuaded, by possibly identical means, to do something.

The Ponzi seller and the preacher both use threats and promises to try to make you believe you should join their scheme, but you may or may not act on that belief. The interrogators don't really care what you believe - they just want you to tell them what they think they already know, and not resist whichever government is in power at the time.

To borrow terms from a different philosopher - Louis Althusser - bullshit is ideological, as opposed to repressive.

But the distinction isn't entirely clear cut, for two reasons:
(1) There'd be no point in changing someone's belief if that didn't sometimes change their behaviour as a result. Frankfurt says bullshit is everywhere, but doesn't go on to say that the only reason it's everywhere is that people who want you to do things for them are everywhere.

(2) It's emotion, not belief, that moves us. And the reason people sometimes act on their beliefs is that beliefs are intimately mixed up with feelings. The advertiser doesn't want you to believe the product is effective - they want you to desire it, for whatever reason, even if you believe it's ineffective. So much so that you'll buy it. How often have you bought something which you knew wouldn't do its job as well as a competitor, but felt a kind of loyalty to?

To be bullshitted is to have your emotional buttons pushed - including buttons connected to your feelings about truth, rigour and logic. In fact, I suggest that Harry Frankfurt had it the wrong way round: The bullshitter doesn't bullshit us into believing. Rather, they bullshit us into feeling, hoping this will lead to us acting, with beliefs later catching up with both as a by-product.

19 comments:

  1. To be bullshitted is to have your emotional buttons pushed - including buttons connected to your feelings about truth, rigour and logic.

In that one sentence you captured and clarified my problem with the church.

    More Philosophical Phridays please!

  2. Yay for Fillosoffical Frydaze!

    To be bullshitted is to have your emotional buttons pushed - including buttons connected to your feelings about truth, rigour and logic.

The hypothetical toothpaste ad you mention is clearly bullshit, but I'm not sure it's trying to press my emotional buttons. It's trying to persuade me to believe the product is better for my teeth, and it's doing so by persuading me there's empirical evidence it is objectively better. Hence the cloak-and-dagger stuff to do with sample sizes and cherry-picking of results, which are attempts to manipulate the evidence itself, not my response to it. Which I think makes the ad a carefully constructed attempt to appeal to my faculty of reason, doesn't it, not an attempt to appeal to my emotions?

  3. @Shaz: Thanks, and glad I could help.

I've no idea what I'll write about next. But probably it'll be a train of thought tangentially triggered by reading something else. Probably by Christopher Hitchens, who I'm trawling through at the moment.



    @Aethelread: The most obvious response I can make is the sentence you quote.

    Do people have emotions about reason? Of course they do.

    Creationists wouldn't try to give the impression of 'creation science' if science didn't have associations of reliability and truth.

Indeed, some creationists, rather than use the tactic of trying to usurp the kudos of science, try to destroy that kudos, because it's a threat to their beliefs.

    The only reason for a toothpaste advertiser to present evidence - manipulated or not - is to use the positive resonances that evidence has to give the product an aura of desirability.

    But what would happen if they did that in a time or culture where evidence, quantification and skepticism were not valued? Then they'd do better by associating the product with sex, or holiness, or the notion of a wholesome nuclear family, or whatever.

Incidentally, I took the toothpaste example from reality - a book I'm sure you've come across, called 'How to Lie with Statistics'.

  4. @Kapitano

    Hope you don't mind me extending this to Shut-the-fuck-up-already Saturdays, but I'm enjoying the discussion. Feel free to ignore me as an irrelevant annoyance. :o)

    Do people have emotions about reason? Of course they do.

    Agreed. But this is not to say that the faculty of reason is nothing more than 'emotions about reason', or that it can't be directly appealed to by an advert. A person might be influenced by a 'sciencey' ad because they enjoy the feelings of reassurance and intellectualism it conveys. But they might be influenced by what appears to be an objective study comparing the merits of different toothpastes because they want to buy the toothpaste that has been objectively demonstrated to be the best - not for any emotionally-mediated reason, but just because they want the toothpaste that is most effective at doing its job. This is, after all, something that can be objectively measured and quantified, and decisions can be reached on this basis by the faculty of reason, independently of any emotional response. And the faculty of reason can be led into making faulty decisions by erroneous or misleading data - hence why the advertisers go to all the trouble and expense of manufacturing bullshit data, rather than opting for an appeal to the emotions.

    But what would happen if they did that in a time or culture where evidence, quantification and skepticism were not valued? Then they'd do better by associating the product with sex, or holiness, or the notion of a wholesome nuclear family, or whatever.

Advertisers - bullshitters in general - will of course choose the technique that is most likely to persuade the bullshittee, and if reason becomes devalued they'll stop appealing to reason. But that doesn't mean that appeals to reason, when they were done, were appeals to emotion. It just means that appeals to reason have ceased to be effective, and the bullshitters have adopted new tactics.

  5. @Aethelread: Shut-the-fuck-up-already Saturdays

    No need to worry there. Philosophy involves discussion, pretty much by definition.

    they might be influenced by what appears to be an objective study comparing the merits of different toothpastes because they want to buy the toothpaste that has been objectively demonstrated to be the best - not for any emotionally-mediated reason

To want the toothpaste which cleans best is to place the highest value on good cleaning - and therefore a lower value on cheapness, pleasant flavour, having a nice colour etc. Indeed, there are plenty of competing values which have nothing to do with the toothpaste itself - it may be made by a company which recognises pension rights for domestic partners as well as spouses, or it may have an association with patriotism, or youth or sexiness.

It may just be fashionable - how many of us hand over our cash for that and no other reason? Or you might vaguely feel that you're disrespecting the memory of your dear dead father if you don't use his preferred brand.

    Some of these values may be more sensible than others, some not completely conscious, and some may be downright idiotic, but they're all emotional.

    (When I was very young, I used to like 'Blue Minty Gel' toothpaste, mainly because it had a viscous, almost dough-like consistency, which made it move differently from any other paste. I used to drip it from a great height onto the brush, in a long, gooey strand. Hey, I was six.)

    It's certainly true that the most sensible way to choose a toothpaste is to discard all these other considerations and ask only 'does it work?'. But that doesn't make the high valuing of reason an unemotional state - it just makes it a sensible, productive, useful, emotional state.

    And therefore one which can be seduced or subverted by sciency lies and pseudoscientific gibberish - as you point out.

    that doesn't mean that appeals to reason, when they were done, were appeals to emotion.

    Well, as I hope I've said, being appealed to is an emotional state.

    In fact, I'm not sure what it would mean to appeal to someone's reason without also appealing to their emotions about their reason.

If the Vulcans of Star Trek really did repress all their emotions, it's difficult to see what could motivate them to...repress all their emotions.

  6. @Kapitano

    And now I take it into Still-wittering-on Sundays... ;o)

    Some of these values may be more sensible than others, some not completely conscious, and some may be downright idiotic, but they're all emotional. [...] that doesn't make the high valuing of reason an unemotional state - it just makes it a sensible, productive, useful, emotional state.

I'd draw your attention to certain words you've used here: 'sensible', 'productive', 'useful'. What do you mean by them? On first impression, they look like an assertion that some values produce quantifiable real-world effects that can be assessed independently of the emotional significance that is attached to them. Doesn't that imply the existence of a faculty of reason that can make these rational, non-emotional assessments? And doesn't that in turn raise the possibility that the high valuing of reason is (or may be for some) not an emotional position, but a rational one - i.e., that reason is highly prized because it tends to facilitate the generation of beneficial real-world effects that can be objectively quantified?

I'm not sure what it would mean to appeal to someone's reason without also appealing to their emotions about their reason. If the Vulcans of Star Trek really did repress all their emotions, it's difficult to see what could motivate them to...repress all their emotions.

    Well, I agree: no-one can repress what they don't have. A better way of thinking about what an appeal to reason that didn't also involve an appeal to emotions about reason might be like is to try and imagine an entity that is actually emotionless. One possibility for such an entity might be a computer that has been programmed with a rather better learning algorithm than has so far been developed, and then left to develop its own rules for answering any question asked of it. Such a computer would observe empirical data, and draw conclusions from that data untainted by any emotional bias.

    If I put a computer of this type in charge of choosing which toothpaste I bought, any advertiser wanting to influence my toothpaste buying behaviour would have to appeal not to me, but to my computer - and that computer doesn't have emotions, only a faculty of reason. So appeals to emotion - telling the computer that choosing a particular brand would make it feel younger, or sexier, or more patriotic - would fail. But appeals to reason - feeding the computer data which demonstrates a particular brand is associated with quantifiably better oral health - would work. Depending on how naive the computer was - whether or not it knew enough to be sceptical - it might still be taken in by bullshit data, even though it didn't have any emotional buttons to be pressed.
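For concreteness, that naive chooser can be sketched in a few lines of Python. Everything below - the brand names, the cavity rates, the selection rule - is invented purely for illustration; it's a toy model, not a claim about any real system:

```python
# A toy model of the hypothetical emotionless chooser: it picks the
# brand with the lowest reported cavity rate, and nothing else.
# All brands and figures are invented for this example.

def choose_toothpaste(trial_data):
    """Return the brand with the lowest reported cavity rate."""
    return min(trial_data, key=trial_data.get)

# Honest data from a large, fairly-reported trial:
honest = {"BrandA": 0.21, "BrandB": 0.18, "BrandC": 0.25}

# Bullshit data: the same world, but BrandC reports only its one
# lucky three-subject trial out of the fifty it ran.
cherry_picked = {"BrandA": 0.21, "BrandB": 0.18, "BrandC": 0.05}

print(choose_toothpaste(honest))         # BrandB
print(choose_toothpaste(cherry_picked))  # BrandC
```

The chooser has no emotional buttons at all, yet cherry-picked data flips its decision - which is exactly the bullshitting-without-emotion scenario being described.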

  7. Doesn't that imply the existence of a faculty of reason that can make these rational, non-emotional assesments? And doesn't that in turn raise the possibility that the high valuing of reason is (or may be for some) not an emotional position, but a rational one

    Two points, which I'll try to expand on below.

    First, you talk about 'rational, non-emotional assessments', so you're thinking of an emotional assessment as by definition an irrational one - likely to lead to false conclusions.

    "the high valuing of reason is not an emotional position, but a rational one"

    There are two senses of 'rational' here:
    (1) (of a conclusion) produced by the faculty of reason, as opposed to emotion, intuition, prejudice, habit etc.
    (2) (of a conclusion) analysed in retrospect as being factually correct.

    Second, you talk about 'the high valuing of reason', but valuing is an emotional act.

    "an entity that is actually emotionless...a computer that has been programmed with a...learning algorithm"

Computers don't have emotions, but they have something else that drives them - programs. The coded algorithm, complete with data input, data parsing and decision trees, is what drives them.

What humans have instead is motivations, internal states which provoke motion. It's no accident that the word 'emotion' contains 'motion' - emotions are what move you. Without emotions we may indeed have good reasons for doing things, but we'd have no motives.

    "Doesn't that imply the existence of a faculty of reason"

    You and I are using this faculty right now. But what prompts us to do so?

    You're thinking of reason as like a car that travels along the roads - a network of decisions taken according to available data. And you're thinking of emotion as a set of winds which try to blow it off course - into irrational, off-road territory.

    But what's powering the car? The petrol, I maintain, is also emotional.

If you want to know just how emotion-soaked humans are, recall or look up the famous Cotard and Capgras syndromes.

In fact, I may write about them next Friday.

  8. Allow me to weigh in on Moron Monday. :)

Even by appealing to our reason, the bullshitters are also appealing to our emotions. A "just the facts ma'am" sell can make us feel more intellectual than the average consumer.

Human beings perceive everything through the lens of ego, which is strongly coloured by emotion. So no matter how much that toothpaste ad might appeal to reason, it's still going to end with a brighter-than-life smile sending the message that using brand X will make you smile too.

    What a fascinating discussion going on here ... and that's no bullshit. :)

  9. @Kapitano

In fact, I may write about them next Friday.

    I hope so - or anything else that takes your fancy!

    First, you talk about 'rational, non-emotional assessments', so you're thinking of an emotional assessment as by definition an irrational one - likely to lead to false conclusions.[...] You're thinking of reason as like a car that travels along the roads - a network of decisions taken according to available data. And you're thinking of emotion as a set of winds which try to blow it off course - into irrational, off-road territory.

    Actually, I was glossing what you wrote. Your use of the words 'sensible', 'useful' and 'productive' seemed to me to imply a method of assessment that was independent from emotionally-inflected assessments. The division between a non-emotional assessment that allowed an outcome to be described, in your words, as 'sensible', 'productive' and 'useful' and an emotional assessment (that leads, by implication, to silly, non-productive and useless outcomes) was one that seemed to be implied in your own statements. Hence why I drew attention to it, and invited you to clarify whether or not my sense of what you seemed to be saying was accurate. I note that you've criticised the concept, but haven't yet addressed the issue of whether it was implied in your own comments.

    (2) (of a conclusion) analysed in retrospect as being factually correct.

This seems to be a definition of the word 'true' rather than the word 'rational'. A conclusion can, by coincidence, turn out to be true, even if it has been reached by thoroughly irrational means. For example, a fortune teller will almost certainly reach an accurate conclusion about the future at least once in their career, but this would not imply the conclusion was rational: it just happened to be coincidentally correct.

    Computers don't have emotions, but they have something else that drives them - programs. The coded algorithm, complete with data input, data parsing and decision trees, is what drives it.

    Absolutely. Hence why I suggested this (in reply to your statement 'I'm not sure what it would mean to appeal to someone's reason without also appealing to their emotions about their reason') as a thought experiment for modeling what an appeal to a pure faculty of reason would look like. I went on to suggest that a pure faculty of reason, if it was sufficiently naive, could still be misled by bullshit data. And if a faculty of pure reason can be bullshitted, even though it has no emotional buttons to be pressed, this would seem to undermine your conclusion in the original post that 'To be bullshitted is to have your emotional buttons pushed'.

    you talk about 'the high valuing of reason', but valuing is an emotional act.

    I realise this is your key contention, but you haven't yet presented a convincing argument for it. Forgive me if I again draw attention to the alternative motivation for valuing reason that I suggested in my last comment: 'the possibility[...] that reason is highly prized because it tends to facilitate the generation of beneficial real-world effects that can be objectively quantified'. What about the faculty of pure reason I suggested: the self-taught computer that has no emotions? If it decides to place the highest value on the rational approach, why does it do so? It can't be guided by emotion because, as we've agreed, it doesn't have emotion. It seems to me the only possible explanation of this is that reason can be valued by something other than emotion - the faculty of reason. You're obviously certain I'm wrong, so I'm looking forward to you demonstrating precisely why and how I'm wrong!

  10. @Shaz

A "just the facts ma'am" sell can make us feel more intellectual than the average consumer.[...] no matter how much that toothpaste ad might appeal to reason, it's still going to end with a brighter-than-life smile sending the message that using brand X will make you smile too.


I agree that it can make us feel more intellectual. I also agree that many (but not all) toothpaste ads will incorporate explicitly emotional cues. My point is that this isn't the only way such an advert can function: it can also appeal to the faculty of reason independently of emotion. I don't have to enjoy a feeling of intellectual superiority to have better teeth - I just have to use the toothpaste that is best for my oral health. This is something that can be objectively assessed and presented to me in the form of data - opening up the possibility that I can choose my toothpaste on a purely rational basis.

    What a fascinating discussion going on here

    I'm glad I'm not the only person enjoying it! All hats off to Kapitano for starting and sustaining it, I think. :o)

  11. @Aethelread

My point is that this isn't the only way such an advert can function: it can also appeal to the faculty of reason independently of emotion.

    I agree that, in theory, it can. However, I've yet to see one that does.

    From a rational point of view, roughly 97% of the products we buy are unnecessary. Without emotional appeal, there would be no need for advertising ... or religious denominations.

    Emotions are inherent to the human condition. From a logical point of view, there is no need for us to enjoy sex. In the cold hard light of rational thinking, the sole reason for engaging in sex is to ensure the survival of the species. From an emotional standpoint, there are countless reasons to have sex.

    ...I can choose my toothpaste on a purely rational basis.

Ah, but can you? Like Kapitano pointed out earlier, there is an emotional aspect to all choices. If three brands of toothpaste have been factually proven to do exactly the same thing, how will you choose which one to use? I suspect there will be some very illogical factors, e.g. taste or texture, in your final decision.

  12. Aethelread: "valuing is an emotional act."
    I realise this is your key contention, but you haven't yet presented a convincing argument for it.


    It's rather difficult to argue for an idea which one finds self-evidently true. But that, I suppose, is one of the key reasons to do philosophy.

    I really don't know how to esteem something, or have a preference for one thing over another, without having an emotion about it.

I suppose it's possible to have a top ten list of toothpastes, arranged in order of effectiveness - however we define that. The effectiveness could be quantified and ranked according to some criteria. And we could then have a rule whereby the top-scoring toothpaste is automatically the one chosen.

All this could be done by a sufficiently advanced machine. But where does the motive come from to formulate the criteria or perform the tests? From a desire (emotion) to have a good toothpaste. And antecedent to that, a desire to take care of one's teeth.
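As a rough sketch, that 'top of the ranking wins' rule might look like this - the weights, brands and scores are all invented for the example:

```python
# The ranking rule itself is mechanical, but notice that the WEIGHTS
# table had to come from somewhere: someone decided that cavity
# protection matters more than flavour or price. That decision is
# where values enter the procedure. All figures are invented.

WEIGHTS = {"cavity_protection": 0.7, "flavour": 0.2, "price": 0.1}

pastes = {
    "BrandA": {"cavity_protection": 9, "flavour": 3, "price": 5},
    "BrandB": {"cavity_protection": 6, "flavour": 9, "price": 8},
}

def score(features):
    """Weighted sum of a paste's feature scores."""
    return sum(WEIGHTS[k] * v for k, v in features.items())

# Rank from best to worst, then apply the rule: the top-scoring
# toothpaste is automatically the one chosen.
ranked = sorted(pastes, key=lambda name: score(pastes[name]), reverse=True)
best = ranked[0]
```

The machine executes the rule without feeling anything - but the weights encode the desires that set the whole procedure going in the first place.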

    But why should we care about the state of our teeth? Because without teeth, eating will, in the future, become difficult and diet restricted. So why should we care about that? We want to reduce the prospect of our future selves living in discomfort.

    And why should I care about my 60 year old self? Or indeed myself at this moment? You can push back the question 'why?' as far as you like, but you never get beyond issues of preferring some futures over others, of wanting to have something and/or avoid another.

    We can easily justify wanting something because 'it's rational', but even accepting that leaves the question 'Why do you want the rational thing?'.

    The response 'Humans are creatures of reason. It's natural to want what's rational' ... is likely to raise a laugh more than anything else.

    Bringing it back down to earth, you and I both know about clinical depression, and the kind of emotional draining involved. We both know how difficult it can be to summon the willpower to get out of bed, even when we know it's very strongly in our best interests to do so.

  13. Aethelread: What about the faculty of pure reason I suggested: the self-taught computer that has no emotions? If it decides to place the highest value on the rational approach, why does it do so? It can't be guided by emotion because, as we've agreed, it doesn't have emotion.

Such a computer would have to be designed. When you say it's self-taught, you're saying it's been provided with the capacity, not just to gather data, but to filter it (by what criteria?), analyse it (looking for what?), and make decisions (based on movement towards what stated goal?).

All these questions in parentheses are questions of values - specifically, the programmer's values. If, as I've argued, these values are emotions, then the emotionless computer is just replicating the programmer's values, but in algorithmic form. The computer is imitating or modelling the programmer.

    Incidentally, I don't think this is possible even in principle, because there's no way for a computer program to be 'conflicted'. Computers might receive data they don't have the programming to process, but they can't be uncertain whether their input should be processed one way or another. They can't be vague and they can't be hypocrites.

Okay, let's say the computer has a different kind of algorithm - one which lets it make its own values, and modify them according to the consequences of however it acts in the light of data and earlier values.

No sneaking in of basic assumptions of what's true or false, meaningful or meaningless, important or trivial, desirable or not. None of what's known as 'frontloading' in Artificial Intelligence circles. Our machine starts with a blank slate.

    I suggest the result would be...an insane computer. One which started by arbitrarily deciding on an initial set of non-contradictory values, then applying its actions in the light of these values and its data input, and modifying these same values, using the values themselves as a guide to how they should be modified.

In short, a computer which evolves its mindset. Indeed, a machine with something very closely analogous to emotions - though without the kind of emotional messes humans have where they have several incompatible emotions simultaneously.

If it's now presented with a challenge to recommend a brand of toothpaste to us, then there is no reason why it should prefer cavity protection over colour, or indeed over latent heat or time of day. And no reason why, having ranked the candidates, it shouldn't choose number 8 over number 1.

  14. @Shaz

    I agree that, in theory, it can.

    If you agree that it's possible to conceive of a decision being made on a rational basis without recourse to emotion then you're agreeing with my central argument. It's important to remember that, as in any philosophical discussion, we're talking about abstract concepts here, even if we do make reference to real-world scenarios like toothpaste adverts to illuminate a point we're trying to make. So if you agree 'in theory', that actually means you agree, full stop.

If three brands of toothpaste have been factually proven to do exactly the same thing, how will you choose which one to use? I suspect there will be some very illogical factors, e.g. taste or texture, in your final decision.

    If there are three toothpastes that can be objectively quantified as being absolutely equal then, clearly, I will have to choose between them on some other basis. That basis may be emotional, but I could also make my choice by some random means - throwing a three-sided dice perhaps. Again, the key point is that it's conceptually possible for the decision to be made on a basis that has nothing to do with emotion.

    @Kapitano

    But that, I suppose, is one of the key reasons to do philosophy.

    Indeed. That's why I'm pushing this - not because I want to prove myself right (or you wrong). I don't really care which of us (or if neither of us) is right, I just want to think about whether something that seems to be true actually is true. I hope that's coming across, and I don't just seem like I'm being argumentative and unpleasant. :o)

    You can push back the question 'why?' as far as you like, but you never get beyond issues of preferring some futures over others, of wanting to have something and/or avoid another.

    This is absolutely right: we can keep pushing this question back, but pushing it back doesn't mean we answer it. The crux of the issue remains the same: are emotional reasons the only reason to prefer one course of action over another? In other words, are decisions always made for emotional reasons, or can they be made by the faculty of reason without having recourse to emotion?

    We can easily justify wanting something because 'it's rational', but even accepting that leaves the question 'Why do you want the rational thing?'. The response 'Humans are creatures of reason. It's natural to want what's rational' ... is likely to raise a laugh more than anything else.

    I agree, that would raise a laugh - it's clearly at odds with observed reality. But that wouldn't be my response. My response would be that it's possible to want the rational thing because it can be objectively demonstrated to be best; that it's conceptually possible to make the decision for rational reasons that have nothing to do with emotion.

    As for the clinical depression point - yes, absolutely. But our periodic inability to apply our faculties of reason doesn't mean that our faculties of reason don't exist. I hope, by the way, that your mention of depression doesn't mean that it's currently looming large in your life. If it is, and there's anything I can do to help, please let me know.

  15. @Kapitano, cont.

    If, as I've argued, these values are emotions, then the emotionless computer is just replicating the programmer's values, but in algorithmic form.

    Her values, yes - but if, as I've argued, her values are (or may be) the product of her faculty of reason rather than her emotions then the computer is not necessarily replicating emotion.

    I suggest the result would be...an insane computer. One which started by arbitrarily deciding on an initial set of non-contradictory values, then applying its actions in the light of these values and its data input, and modifying these same values, using the values themselves as a guide to how they should be modified.

    I agree there's a problem here. But I don't think the problem is to do with the computer's lack of emotion, I think it's to do with its lack of understanding of the concept of mortality. Human beings can, quite rationally, prefer a course of action that defers death, because we understand (independently of how we feel about it) that death equals oblivion - i.e. the end of everything, for us as individuals. Without being guided by that understanding, the computer will not have the same backstop for formulating its rational principles.

    A computer could presumably be programmed to operate as though it understood the concept of mortality, but I'm not sure it could ever actually understand it. Since its understanding of the world (its consciousness, if we use that word) is algorithmic, it can be precisely replicated by another computer running the same algorithm. So, for the computer, delaying physical death is not a fundamental necessity for continuing thought, in the way it is for biological entities like us. (I'm assuming toothpaste choice is ultimately guided by the wish to defer death, which, on the face of it, seems melodramatic. But worsening teeth will lead to a worsening diet, which will lead to malnutrition, which will lead to decreased immune function, which will lead to a heightened risk of death, so it's not completely absurd. There's a reason to deprecate poor oral hygiene that is separate from the desire to avoid discomfort. And, actually, I'm not sure the desire to avoid discomfort, or discomfort itself, are emotions in the way that happiness and sadness are - but that's a separate discussion.)

    If it's now presented with a challenge to recommend a brand of toothpaste to us, then there is no reason why it should prefer cavity protection over colour, or indeed over latent heat or time of day. And no reason why, having ranked the candidates, it shouldn't choose number 8 over number 1.

    You've successfully demonstrated a shortcoming in my thought experiment - my concrete example of a hypothetical 'pure faculty of reason' isn't actually a pure faculty of reason, since it can come to irrational decisions. But that doesn't mean it's impossible to conceive of a pure faculty of reason, neither does it mean that the computer is making irrational decisions on the basis that it has been emotionally swayed.

    Although you argue that the arbitrary values that sway the computer's judgement are 'very closely analogous to emotions', that's only true of the role they fulfil. Arbitrary values are what cause (or may cause) irrational decision-making in computers, and emotions are what cause (or may cause) irrational decision-making in humans, but they remain separate concepts. Both can contaminate a decision-making process, rendering it less than rational, but they're not the same thing. And it remains possible to conceive of a pure faculty of reason that is not susceptible to either contaminant - perhaps a human being who understands the concept of mortality, but has suffered specific brain damage rendering them incapable of feeling emotion.

  16. Aethelread:

    it's possible to want the rational thing because it can be objectively demonstrated to be best

    Fair enough. But how can I care about what's the best, if I can't care about anything? Why should I even ask?

    Given a starting imperative - say, "Take care of your health" - it's easy to derive a set of others, even to build a complex, nuanced moral system out of it. But where does the imperative come from, and what's to stop you starting with "Never eat more than five potatoes in February"?

    More on this below.

    Human beings can, quite rationally, prefer a course of action that defers death, because we understand (independently of how we feel about it) that death equals oblivion

    What is rational about avoiding death?

    If death equals oblivion, but I have no fear of oblivion, what cause do I have to try to avoid or postpone it?

    In fact, sometimes suicide is precisely the most rational action to take.

    I'm not sure the desire to avoid discomfort, or discomfort itself, are emotions in the way that happiness and sadness are - but that's a separate discussion.)

    This is the question of why pain bothers us. I think the answer is to be found in the medical use of morphine - it doesn't lessen the painful sensations, but it seems to stop the patient minding that it hurts.

    In recreational morphine use, the user can - as William Burroughs said - stare at his own foot for hours on end, because even if he has a reason to look elsewhere, he has no impulse to act on this reason.

    Arbitrary values are what cause (or may cause) irrational decision-making in computers, and emotions are what cause (or may cause) irrational decision-making in humans

    Here you're saying that the computer makes irrational decisions because it has a 'wrong' set of values. Who decides what the right values are, and how is it justified?

    You've tried to root the imperatives behind a choice about toothpaste brand in successively more fundamental imperatives, finishing in the imperative to avoid oblivion. Which remains unjustified.

    This is the major problem with a trait of western philosophy called 'foundationalism' - trying to find unquestionable grounds from which to build up. Your ethical foundationalism has shifted the question so far back it probably can't go any further, but it finishes with an imperative that's not self-evident, nor even free of provisos.

    Karl Popper wrote about this. He said the notion of innate belief was preposterous, but a newborn baby might have innate "expectations" - warmth, food, even company. If one were to construct a 'rational' morality, I think it'd have to be rooted in meeting basic human subsistence needs, with the acknowledgement that there's no antecedent reason why they should be met.

    The founding principle would be something like "minimise the total amount of pain in the world". Because, unless we're on morphine, pain has a negative emotional component.

  17. the medical use of morphine - it doesn't lessen the painful sensations, but it seems to stop the patient minding that it hurts.

    This is based on an unjustifiably reductive model of pain – but it remains a separate discussion. I’ll gladly have it with you if you like, but I’ve cut it from this comment because it took me over the (curiously precise) 4,096 character limit, and it seems like a side-issue to me.

    Here you're saying that the computer makes irrational decisions because it has a 'wrong' set of values.

    No, I'm not - and, what's more, you even directly quote me not saying that. Arbitrary values are not wrong values, they are arbitrary values - hence why I called them arbitrary values, not wrong values. Labelling a value as wrong implies judgement and raises, as you point out, the question of who or what makes the judgement. Which is why I very specifically and deliberately described them as arbitrary - meaning they are selected indiscriminately, which is to say without judgement.

    Did you genuinely not notice or appreciate the difference between arbitrary and wrong, Kapitano? Or were you trying to slip in a straw man in the hope no-one would notice? ;o)

    What is rational about avoiding death? If death equals oblivion, but I have no fear of oblivion, what cause do I have to try to avoid or postpone it?

    One does not have to fear something to wish for it to not come to pass. The emotion can - and does - produce the wish, but this does not mean the wish can only be produced by the emotion. (Is it just me, or has almost the whole of our discussion consisted of this - you making a statement, and me qualifying it by saying that it's sometimes true, but not always true?) Anyway, and as I said last time - because I had anticipated this objection, and had hoped to head it off at the pass - oblivion equals the end of thought. One does not have to fear oblivion to wish for thought to continue. Not because of how it makes the thinker feel, but because the world is quantifiably a better place for thought having taken place.

    This is the major problem of a trait of western philosophy called 'foundationalism' - trying to find unquestionable grounds from which to build up. Your ethical foundationalism [...]

    My ideas are no more and no less 'foundational' than your own, Kapitano. I contend that decisions are made on a rational basis by the faculty of reason, you that they are made on an impulsive basis by the emotions. In both cases we posit a 'foundational' basis to decision-making, whether that foundation be emotion or reason. The 'non-foundational' argument would be that decisions are taken arbitrarily - which is to say, for no reason at all.

    The disdain for 'foundationalism' is part of the postmodern turn in the humanities more generally - the idea that everything is arbitrary and quite meaningless, including the idea of meaning itself. This seemed, briefly, like an interesting idea, but it led ultimately to an intellectual cul-de-sac: if there is no possibility of meaning, there is no possibility of thought - including the thought that produces the conclusion that there is no possibility of meaning.

    Philosophers - all thinkers - are axiomatically engaged in a foundational pursuit, which is to say the search for the inner nature and causes of things. Anyone who doesn't subscribe to the notion of foundationalism has no business doing philosophy - or anything other than staring at a bare wall and keeping their mind quite blank of any thought whatsoever.

    Or, perhaps, watching Made in Chelsea, which would come to much the same thing...

  18. We seem to be arguing in a circle, but consistently missing each other. I've expressed myself as clearly as I could, but evidently not clearly enough. I'm therefore finishing my comments with this one here, for you to add to if you wish.

    One does not have to fear something to wish for it to not come to pass. The emotion can - and does - produce the wish, but this does not mean the wish can only be produced by the emotion.

    If you're saying that emotions other than fear can produce a wish, that's obviously true. If you're saying that the wish could be produced solely by the intellect, without any emotions being involved at all, that's false, not least because the wish itself has an inextricable emotional component.

    In short:
    * One can't want something without feeling want.
    * Values can only be grounded in other values, though there has to be something empirical for there to be anything to have values about.

    One does not have to fear oblivion to wish for thought to continue. Not because of how it makes the thinker feel, but because the world is quantifiably a better place for thought having taken place.

    It depends on the quantification. And the notion of 'better'.

    You seem to view the analytical intellect as a part of the mind which is in principle capable of acting without the emotional part being involved at all. Though where it would get its impetus or its premises is a mystery.

    As for your comments on 'postmodernism', I spent half a decade reading, and fundamentally disagreeing with, most of the writers dubbed 'postmodernist'. You've misunderstood 'foundationalism', and I never said all human decisions were 'impulsive'.

  19. the wish itself has an inextricable emotional component.

    You've asserted this a great many times, Kapitano, but you've not at any stage brought forward a successful argument in support of it. Neither have you at any stage brought forward a successful argument against my counter-suggestion that it is conceptually possible to propose that outcomes can be assessed, ranked, selected and pursued by a faculty of pure reason unaffected by emotion. This counter-suggestion, if it were true, would disprove your assertion. Your inability to prove your own assertion, coupled with your inability to disprove my argument, strongly suggests your case is disproven - or, at least, that you are incapable of proving it.

    Though where it would get its impetus or its premises is a mystery.

    I understand you disagree with my suggestions, but I hardly think you can fairly argue that I have left them mysterious. As I have clearly stated a number of times, my suggestions - which you are apparently either unable or unwilling to refute - are that it gets its impetus from rational assessment and derives its premises from the concept of mortality. Those suggestions, like all the rest of my suggestions, may well be completely, utterly, embarrassingly wrong. If they are, I'd love for someone to prove it to me - and, if it looked like they had, I don’t believe I would respond by insisting they hadn’t, then announcing that I was abandoning the discussion for supposedly unrelated reasons.

    You've misunderstood 'foundationalism',

    No, I haven't. I've disagreed with its unspoken premise (that only other people's ideas are 'foundational', while one's own are natural, obvious, indisputable), and argued against its conclusions - not the same thing as failing to understand it, as I'm sure, on reflection, you can appreciate.

    I never said all human decisions were impulsive

    Kapitano, you've just spent the last few days maintaining that it is impossible to conceive of a decision being taken by any entity anywhere in the multiverse that is not predicated on an emotional impulse. It appears you may be trying, in your closing words, to distance yourself from that position; if so, I’m happy to acknowledge the shift. But this doesn’t change the fact that your previous position is made abundantly clear by the record of your contributions to this discussion.

    As for your comments on 'postmodernism', I spent half a decade reading, and fundamentally disagreeing with, most of the writers dubbed 'postmodernist'.

    Well, if we're going to go down the road of arguing “my reading’s bigger than yours” - not something I would have chosen to do myself - then I've been reading from and about postmodernist thinkers for what'll be 20 years come this autumn. Personally, I’m not sure this means much – surely it’s our understanding of the ideas that matters, not how long we’ve spent reading about them? – but I guess 20 is better than 5, if this is a meaningful comparison for you.

    One final and more general point, Kapitano. I’ve noticed in this and other discussions I’ve had with you over the years that you have a habit of getting belligerent and aggressive when ideas you propose are subjected to criticism by others – almost as though you identify with them as articles of faith, and consequently regard interrogation of the concept as a personal attack. I worry this tendency of yours makes these kinds of discussion a more uncomfortable experience for you than they are for me. I also worry that I don’t do enough to take this into account when I discuss things with you. So I'm sorry if that’s the case, and I have upset or angered you personally, when I was only trying to challenge the ideas you proposed.

    Anyway, thanks for the discussion, Kapitano. It was fun, for the most part. And congratulations, again, on producing such an interesting and thought-provoking blogpost – just one in a much longer line of them, I hope.
