Free will might be an illusion created by our brains, scientists claim.

Comments

  • belly button Posts: 17,026
    Forum Member
    ✭✭
    MrQuike wrote: »
    No because it doesn't perceive any fault - it's not a conscious entity.

    What is this perceive, touch, sound, smell?

    MrQuike wrote: »
    True.... but Self reflection can become Self referral. Then you can get expert opinion. It's all about choices

    You mean like an IT guy?

    Choices are about knowledge. If you don't know there is an expert you can't self-refer. Same for a computer. Difference is that humans can act in a non-optimal way because we have a lot more variables to consider and a lot more 'faults' in our motherboards ;-)
  • Andrue Posts: 23,364
    Forum Member
    ✭✭✭
    bollywood wrote: »
    Computers only have words.
    First, define 'emotion'.

    If you define it purely in terms of unusual behaviour in response to stimuli then a computer can do that. Anyone who has worked with them knows they don't always do what you expect. And those that are controlling real-world systems often have specific programming for specific events. Aircraft control systems will shut down certain functions and prioritise others if they detect the flight envelope being exceeded. We could label that as 'fear'. If you try and make too tight a turn the fly-by-wire systems will resist it. You could say they have a love for gentle flying.

    If you define it in terms of internal feelings then you're back with the same problem we had with consciousness. Are emotions real or an illusion? They can be influenced by chemicals so that suggests a physical component. They can be triggered by stimulating the brain electrically.

    "A comprehensive review of EBS research compiled a list of many different acute impacts of stimulation depending on the brain region targeted. Following are some examples of the effects documented
    ...
    * Emotional: Anxiety, mirth, feeling of unreality, fear, happiness, anger, sadness, transient acute depression, hypomania, etc."

    If your emotions can be switched on and off with an electrical current, just how real are they?
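    The flight-envelope behaviour described above can be sketched in a few lines. This is a toy illustration only: the 67-degree limit, the class name and the method are invented for the example, not taken from any real fly-by-wire system.

```java
// Toy "flight envelope protection": pilot input is obeyed up to a limit,
// then resisted. All names and values here are invented for the sketch.
public class EnvelopeDemo {
    static final double MAX_BANK_DEGREES = 67.0; // hypothetical limit

    // Returns the bank angle the system will actually command: the
    // requested angle if within limits, otherwise the clamped limit
    // with the same sign as the request.
    public static double commandBank(double requestedDegrees) {
        if (Math.abs(requestedDegrees) <= MAX_BANK_DEGREES) {
            return requestedDegrees;
        }
        return Math.copySign(MAX_BANK_DEGREES, requestedDegrees);
    }

    public static void main(String[] args) {
        System.out.println(commandBank(30.0)); // within limits: obeyed
        System.out.println(commandBank(80.0)); // beyond limits: resisted
    }
}
```

    Whether you call the resistance 'fear' or just a clamp is exactly the labelling question the post raises.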
  • MrQuike Posts: 18,175
    Forum Member
    ✭✭
    belly button wrote: »
    What is this perceive, touch, sound, smell?

    You mean like an IT guy?

    Choices are about knowledge. If you don't know there is an expert you can't self-refer. Same for a computer. Difference is that humans can act in a non-optimal way because we have a lot more variables to consider and a lot more 'faults' in our motherboards ;-)

    "An analogy is a comparison in which an idea or a thing is compared to another thing that is quite different from it. It aims at explaining that idea or thing by comparing it to something that is familiar." Yours falls short because people are conscious entities and computers are not. The awareness for computers is with the humans. No consciousness, no humans, no computers, no emergent properties ;-)

    Self referral is an innate "internal" experience of psyche. There are no motherboards and we are not hard wired. Computers do not perceive because they do not become aware or conscious of (something) in and of themselves. They do not come to realize or understand. That's what humans and conscious entities do.
  • MrQuike Posts: 18,175
    Forum Member
    ✭✭
    Andrue wrote: »
    If your emotions can be switched on and off with an electrical current, just how real are they?


    Just how real is an electric toaster?
  • belly button Posts: 17,026
    Forum Member
    ✭✭
    MrQuike wrote: »
    "An analogy is a comparison in which an idea or a thing is compared to another thing that is quite different from it. It aims at explaining that idea or thing by comparing it to something that is familiar." Yours falls short because people are conscious entities and computers are not. The awareness for computers is with the humans. No consciousness, no humans, no computers, no emergent properties ;-)

    Self referral is an innate "internal" experience of psyche. There are no motherboards and we are not hard wired. Computers do not perceive because they do not become aware or conscious of (something) in and of themselves. They do not come to realize or understand. That's what humans and conscious entities do.

    http://www.techinsider.io/this-robot-passed-a-self-awareness-test-that-only-humans-could-handle-until-now-2015-7

    What do you think of that ?
  • bollywood Posts: 67,769
    Forum Member
    ✭✭
    Andrue wrote: »
    First, define 'emotion'.

    If you define it purely in terms of unusual behaviour in response to stimuli then a computer can do that. Anyone who has worked with them knows they don't always do what you expect. And those that are controlling real-world systems often have specific programming for specific events. Aircraft control systems will shut down certain functions and prioritise others if they detect the flight envelope being exceeded. We could label that as 'fear'. If you try and make too tight a turn the fly-by-wire systems will resist it. You could say they have a love for gentle flying.

    If you define it in terms of internal feelings then you're back with the same problem we had with consciousness. Are emotions real or an illusion? They can be influenced by chemicals so that suggests a physical component. They can be triggered by stimulating the brain electrically.

    "A comprehensive review of EBS research compiled a list of many different acute impacts of stimulation depending on the brain region targeted. Following are some examples of the effects documented
    ...
    * Emotional: Anxiety, mirth, feeling of unreality, fear, happiness, anger, sadness, transient acute depression, hypomania, etc."

    If your emotions can be switched on and off with an electrical current, just how real are they?

    The emotions are real enough at baseline (before being altered), so I'm not seeing how that would be the same thing as the emotions not being real.

    We often 'feel' emotions physically and a computer can't do that.

    A computer can only reflect on its condition to the extent the programmer allowed it. It cannot spontaneously reflect on its condition or make reflections not in its program.
  • Andrue Posts: 23,364
    Forum Member
    ✭✭✭
    bollywood wrote: »
    A computer can only reflect on its condition to the extent the programmer allowed it. It cannot spontaneously reflect on its condition or make reflections not in its program.
    Not true. There are many ways to divert a computer away from its original programming. To inject new instructions or to reinterpret existing ones (hackers do this all too often). Computers can generate new code or (with things like Inversion of Control) rearrange their code. And with languages like C# and Java the computer can read its own instructions and its own state. Reflection is something C# and Java programmers use frequently.

    I've written programs that are effectively controlled by their data rather than the other way around. They can extend their capabilities based on the metadata describing their data. You only need to define the data in terms of actions and constraints and provide the concepts 'is like' or 'extends'. My code could determine some of those itself, and I remember my surprise at defining a new relationship only to be 'told' when I ran the program that it was superfluous because it had worked that out for itself.

    The only reason that computer programs don't generally act the way you describe is that until recently there was no value in it and it was too resource intensive. Computing power and storage have now reached the point where things like Reflection are taken for granted. IoC is common practice in Java (and becoming common in C#). We are fast approaching the point where programming will just become linking lots of blocks of code together and that is going to be automated.

    The behaviour you describe has always been possible and is pretty much here.
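    For readers unfamiliar with Reflection: a minimal Java sketch of a program examining an object's fields and invoking a method purely by name at run time, with no compile-time knowledge of the object's class. The Sensor class and its values are invented for the illustration.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

// A trivial object for the program to examine at run time.
class Sensor {
    public int reading = 42;
    public String describe() { return "reading=" + reading; }
}

public class ReflectDemo {
    // Reads every declared field and calls describe() by name, using
    // only the java.lang.reflect API - the program "reads its own state".
    public static String inspect(Object obj) {
        try {
            Class<?> c = obj.getClass();
            StringBuilder sb = new StringBuilder(c.getSimpleName());
            for (Field f : c.getDeclaredFields()) {
                sb.append(" ").append(f.getName()).append("=").append(f.get(obj));
            }
            Method m = c.getMethod("describe");
            sb.append(" -> ").append(m.invoke(obj));
            return sb.toString();
        } catch (ReflectiveOperationException e) {
            return "error: " + e;
        }
    }

    public static void main(String[] args) {
        System.out.println(inspect(new Sensor()));
    }
}
```

    This is self-inspection of code and state, not consciousness; whether the two differ in kind is the question the thread is arguing about.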
  • MrQuike Posts: 18,175
    Forum Member
    ✭✭

    one brain to another - great simulation.

    mind to mind and...really though......what robots?....:p
  • bollywood Posts: 67,769
    Forum Member
    ✭✭
    Andrue wrote: »
    Not true. There are many ways to divert a computer away from its original programming. To inject new instructions or to reinterpret existing ones (hackers do this all too often). Computers can generate new code or (with things like Inversion of Control) rearrange their code. And with languages like C# and Java the computer can read its own instructions and its own state. Reflection is something C# and Java programmers use frequently.

    I've written programs that are effectively controlled by their data rather than the other way around. They can extend their capabilities based on the metadata describing their data. You only need to define the data in terms of actions and constraints and provide the concepts 'is like' or 'extends'. My code could determine some of those itself and I remember the surprise of me defining a new relationship only to be 'told' when I ran the program that it was superfluous because it had worked that out for itself.

    The only reasons that computer programs don't generally act the way you describe is that until recently there was no value in it and it was too resource intensive. Computing power and storage has now reached the point where things like Reflection are taken for granted. IoC is common practice in Java (and becoming common in C#). We are fast approaching the point where programming will just become linking lots of blocks of code together and that is going to be automated.

    The behaviour you describe has always been possible and is pretty much here.

    Yet you say that you wrote the program. How do you know you've included all the possibilities of the human mind? What if the human mind contains, unconsciously, memories of ancestors?

    In addition, I'm not sure how data being in control is the same as the human, in whom there is interaction between the mind and the body. The body state affects the mind to a level of detail we can't yet understand, and the mind affects the body to a level we don't yet understand. So if we don't understand it, how can we replicate it?

    Again, we tend to judge the sincerity of human emotions by tone, affect and physiological reactions. A human who just produces the right answer with no affect and no physiological reaction is called a psychopath, or an 'as if' personality. Is a computer that simulates emotion like a psychopath?

    I can only understand this concept in light of saying something like, everything is information. Then maybe.
  • Andrue Posts: 23,364
    Forum Member
    ✭✭✭
    bollywood wrote: »
    So if we don't understand it, how can we replicate it?
    Computer AI is not trying to replicate us. It's creating something that responds like us.

    Humans have an innate tendency to label themselves and others as 'conscious' which stems from our own internal assumption/point of view that we are conscious. It is circular reasoning: 'I think you are conscious because I am conscious, and I think I am conscious because I'm conscious.' That proves nothing whatsoever. The human brain lies to itself all the time.

    And that's the premise of the Turing test. Prevent the subject knowing whether or not they are talking to a human. Disable the automatic labelling mechanism and force them to assess sentience based on the responses only.

    Until/unless medical science finds out exactly what human consciousness is and provides the means to detect it, you have no way of saying that a device that passes the Turing test doesn't have it. And even if that happens - I'd still argue that it's speciesist to deny my device that label. I think it's arrogant to suggest that only humans can be conscious. Animals can be too. Aliens can be. Computers can be.

    Different types and styles of consciousness but all deserving of respect.
  • bollywood Posts: 67,769
    Forum Member
    ✭✭
    Andrue wrote: »
    Computer AI is not trying to replicate us. It's creating something that responds like us.

    Humans have an innate tendency to label themselves and others as 'conscious' which stems from our own internal assumption/point of view that we are conscious. It is circular reasoning: 'I think you are conscious because I am conscious, and I think I am conscious because I'm conscious.' That proves nothing whatsoever. The human brain lies to itself all the time.

    And that's the premise of the Turing test. Prevent the subject knowing whether or not they are talking to a human. Disable the automatic labelling mechanism and force them to assess sentience based on the responses only.

    Until/unless medical science finds out exactly what human consciousness is and provides the means to detect it, you have no way of saying that a device that passes the Turing test doesn't have it. And even if that happens - I'd still argue that it's speciesist to deny my device that label. I think it's arrogant to suggest that only humans can be conscious. Animals can be too. Aliens can be. Computers can be.

    Different types and styles of consciousness but all deserving of respect.

    Yes, but what I'm saying is it's not just about talking, and not just about the conscious mind but the unconscious and the physical. Humans have physiological reactions. The body changes the mind. The mind includes the unconscious and repressed memories.

    How does the computer experience pain and remorse and show remorse, other than as information? Does a computer mourn if a fellow computer crashes?

    The Turing test only judges thinking, and then only with certain interrogators. And 'thinking' more along the lines that we will accept what seems an appropriate response as thinking.
  • Andrue Posts: 23,364
    Forum Member
    ✭✭✭
    bollywood wrote: »
    How does the computer experience pain and remorse and show remorse, other than as information? Does a computer mourn if a fellow computer crashes?
    The same way you do (or don't do). And yes computers 'mourn' all the time. In a client/server relationship the clients will definitely exhibit negative behaviour if they lose contact with the server.

    In the absence of knowing what powers us it is fallacious to say that computers can't do it. What you're basically arguing is that because we don't know how humans do it, a computer can't do it because we do know how it would. That makes no sense. Occam's razor cuts that idea to shreds. You shouldn't go making things up in order to prove a theory, and the idea that consciousness is a 'thing' rather than a collection of behaviours is unproven.
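    The client-side 'negative behaviour' mentioned above often takes the concrete form of retry-with-backoff: the client keeps reaching for the server it has lost, waiting longer between each attempt. A minimal sketch, with the Connector interface and delay values invented for the example (a real client would actually sleep between attempts):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a client reacting to a lost server: repeated reconnection
// attempts with exponentially growing delays. Names are illustrative.
public class RetryDemo {
    interface Connector { boolean tryConnect(); }

    // Attempts to connect up to maxAttempts times, doubling the wait
    // after each failure. Returns the backoff delays (ms) scheduled.
    public static List<Long> reconnect(Connector c, int maxAttempts) {
        List<Long> delays = new ArrayList<>();
        long delay = 100; // initial backoff, arbitrary for the sketch
        for (int i = 0; i < maxAttempts; i++) {
            if (c.tryConnect()) break;
            delays.add(delay);
            delay *= 2;
        }
        return delays;
    }

    public static void main(String[] args) {
        // A connector that only succeeds on its third attempt.
        int[] calls = {0};
        Connector flaky = () -> ++calls[0] >= 3;
        System.out.println(reconnect(flaky, 5)); // [100, 200]
    }
}
```

    Whether that loop counts as 'mourning' or is merely behaviour that resembles it is, again, the point in dispute.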
  • bollywood Posts: 67,769
    Forum Member
    ✭✭
    Andrue wrote: »
    The same way you do (or don't do). And yes computers 'mourn' all the time. In a client/server relationship the clients will definitely exhibit negative behaviour if they lose contact with the server.

    In the absence of knowing what powers us it is fallacious to say that computers can't do it. What you're basically arguing is that because we don't know how humans do it a computer can't do it because we know how it would. That makes no sense. Occam's razor cuts that idea to shreds. You shouldn't go making things up in order to prove a theory and the idea that consciousness is a 'thing' rather than a collection of behaviours is unproven.

    Is negative behavior the same as the feeling state of mourning? Does the computer reflect on being cut off from the server? Behavior doesn't always reflect thought. Going back to psychopaths, they could perform an action that appears friendly but has no friendly intent.

    I haven't said anything about consciousness being a thing. Maybe another poster has. I was referring to feeling states, physical states, and yes, essentially saying that if we don't ourselves fully understand what it is to be human, we can't know whether the computer is, or is merely being mistaken for, human.

    The Turing test has a limited function in that many will accept a correct response as human thought.
  • spiney2 Posts: 27,058
    Forum Member
    ✭✭✭
    The point about the Turing test is, none of us can possibly experience noumena, only phenomena! So, that's all we can know ........

    http://www.friesian.com/kant.htm
  • spiney2 Posts: 27,058
    Forum Member
    ✭✭✭
    Andrue wrote: »
    Computer AI is not trying to replicate us. It's creating something that responds like us.

    Humans have an innate tendency to label themselves and others as 'conscious' which stems from our own internal assumption/point of view that we are conscious. It is circular reasoning: 'I think you are conscious because I am conscious, and I think I am conscious because I'm conscious.' That proves nothing whatsoever. The human brain lies to itself all the time.

    And that's the premise of the Turing test. Prevent the subject knowing whether or not they are talking to a human. Disable the automatic labelling mechanism and force them to assess sentience based on the responses only.

    Until/unless medical science finds out exactly what human consciousness is and provides the means to detect it, you have no way of saying that a device that passes the Turing test doesn't have it. And even if that happens - I'd still argue that it's speciesist to deny my device that label. I think it's arrogant to suggest that only humans can be conscious. Animals can be too. Aliens can be. Computers can be.

    Different types and styles of consciousness but all deserving of respect.

    indeed, many Expert Systems are much better than the human mind, which is why they are used .......

    But, science can't "find out what consciousness is" .....
  • spiney2 Posts: 27,058
    Forum Member
    ✭✭✭
    Andrue wrote: »
    The same way you do (or don't do). And yes computers 'mourn' all the time. In a client/server relationship the clients will definitely exhibit negative behaviour if they lose contact with the server.

    In the absence of knowing what powers us it is fallacious to say that computers can't do it. What you're basically arguing is that because we don't know how humans do it a computer can't do it because we know how it would. That makes no sense. Occam's razor cuts that idea to shreds. You shouldn't go making things up in order to prove a theory and the idea that consciousness is a 'thing' rather than a collection of behaviours is unproven.

    ...... this reminds me of Prof Igor Aleksander appearing on Newsnight, and announcing to a bemused Paxo that his 286 laptop computer was conscious .......

    https://en.wikipedia.org/wiki/Artificial_consciousness
  • belly button Posts: 17,026
    Forum Member
    ✭✭
    MrQuike wrote: »
    one brain to another - great simulation.

    mind to mind and...really though......what robots?....:p

    The ability of humans to anthropomorphise is amazing ..... We have done it to ourselves, haven't we :p

    I think I'll struggle a bit with the whole thing as I have to make sure Santa isn't left on his own in the attic and wrap him up next to his elf for company. Put that in Jung's pipe and let him smoke it.

    Actually the clip of the robot is a bit strange. How is it that the correct one stood up? I'll delve into that one a bit, I think.
  • bollywood Posts: 67,769
    Forum Member
    ✭✭
    spiney2 wrote: »
    indeed, many Expert Systems are much better than the human mind, which is why they are used .......

    But, science can't "find out what consciousness is" .....

    Only at thinking or simulated thinking. Not at experiencing.
  • belly button Posts: 17,026
    Forum Member
    ✭✭
    bollywood wrote: »
    Only at thinking or simulated thinking. Not at experiencing.

    I thought we were sticking to awareness of self as a definition?
  • bollywood Posts: 67,769
    Forum Member
    ✭✭
    belly button wrote: »
    I thought we were sticking to awareness of self as a definition?

    I would say that experiencing is part of self-awareness. If I do something wrong and I blush or turn red, I experience the awareness that what I did isn't a good reflection of me, of who I am. It's not just information.
  • belly button Posts: 17,026
    Forum Member
    ✭✭
    bollywood wrote: »
    I would say that experiencing is part of self-awareness. If I do something wrong and I blush or turn red, I experience the awareness that what I did isn't a good reflection of me, of who I am. It's not just information.

    No I can't agree. A psychopath doesn't blush but is still conscious. Lots of people have experiential deficits but are still conscious.
    Once you start expanding, the whole thing falls apart as far as definition goes.
  • bollywood Posts: 67,769
    Forum Member
    ✭✭
    belly button wrote: »
    No I can't agree. A psychopath doesn't blush but is still conscious. Lots of people have experiential deficits but are still conscious.
    Once you start expanding, the whole thing falls apart as far as definition goes.

    A psychopath fools people by pretending to have an experience he doesn't have. He is also, at some level, conscious of the fact that he is being deceitful.

    Is a computer aware of being deceitful? I doubt that.

    So I would say a computer isn't even up to the psychopath level.
  • Ben_Copland Posts: 4,602
    Forum Member
    ✭✭✭
    bollywood wrote: »
    A psychopath fools people by pretending to have an experience he doesn't have. He is also, at some level, conscious of the fact that he is being deceitful.

    Is a computer aware of being deceitful? I doubt that.

    So I would say a computer isn't even up to the psychopath level.

    Computers are just zeros and ones and can't do anything without human input; they're dumb.
  • belly button Posts: 17,026
    Forum Member
    ✭✭
    bollywood wrote: »
    A psychopath fools people by pretending to have an experience he doesn't have. He is also, at some level, conscious of the fact that he is being deceitful.

    Is a computer aware of being deceitful? I doubt that.

    So I would say a computer isn't even up to the psychopath level.

    I'm certainly not going to say a computer is the same as us nor argue about levels of consciousness, but if we don't stick to basics we'll get nowhere. I'm going with self-awareness as far as the free will discussion is concerned.

    I'm interested in the process of how decisions are made, really, and think some time we will break them down into complex processes which eventually will be replicated in a non-human.
  • bollywood Posts: 67,769
    Forum Member
    ✭✭
    belly button wrote: »
    I'm certainly not going to say a computer is the same as us nor argue about levels of consciousness, but if we don't stick to basics we'll get nowhere. I'm going with self-awareness as far as the free will discussion is concerned.

    I'm interested in the process of how decisions are made, really, and think some time we will break them down into complex processes which eventually will be replicated in a non-human.

    Isn't that what the Turing test is though, not being able to tell a computer from a human? So that is occurring on a very superficial level.

    I agree the processes are very complex and hard to break down, when we don't know what they all are.