
View Poll Results: In the future, when Ai exceeds human intelligence...

Voters: 29
  • ...1) Ai will bring a benevolent transcendence to humanity. (4 votes, 13.79%)
  • ...2) Ai will seek to destroy humanity. (1 vote, 3.45%)
  • ...3) Ai will carry out goals without regard to human consequence. (12 votes, 41.38%)
  • ...1 and 2... (Factionalized) (0 votes, 0%)
  • ...1 and 3... (Factionalized) (3 votes, 10.34%)
  • ...2 and 3... (Factionalized) (0 votes, 0%)
  • ...1, 2, and 3... (Factionalized) (2 votes, 6.90%)
  • ...Ai will never exceed human intelligence. (7 votes, 24.14%)

Thread: BotChat!

  1. #101
    Member
    Registered: Aug 2004
    Worth noting specifically that many of the difficulties AIs encounter in accomplishing those tasks come from a lack of understanding of context, precisely because they aren't humans and don't have our shared experiences.

  2. #102
    Member
    Registered: May 2004
    Location: Canuckistan GWN
    Quote Originally Posted by heywood View Post
    The answers to the poll question, and the positive and negative effects of AI technology on humanity, don't depend on whether an AI system can have human consciousness.
    So are you saying that these two propositions, “Ai will seek to destroy humanity” and “Ai will carry out goals without regard to human consequence”, do not in any way suggest that the hypothetical AI in the poll might possess a human-like autonomy, in the form of desires, goals and intent?

    Rightly or wrongly, the conflation of AI with artificial people is an inescapable feature of any casual discussion of the topic. To that end I have attempted to untangle some of the semantic knots that confound the issue. If you have a complaint with the construction of the poll, you should take it up with the creator.


    Quote Originally Posted by heywood View Post
    I was motivated to respond primarily by this statement: “We assume that a true AI is the same as an artificial person.”
    That was a bit lazy of me. How about... Many people conflate AI with artificial beings, simulacra, cyborgs and self-aware machines.


    Quote Originally Posted by heywood View Post
    I think artificial persons are an inconsequential topic compared to other uses of AI.
    I heartily disagree.

    How organic matter achieves self-awareness and whether inorganic matter could eventually do the same, seems infinitely more important and fascinating than evolutions in the utility of next level information processing. It appears I am not alone in that fascination. I see no reason why the two perspectives cannot be respectfully explored in parallel.

  3. #103
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Autonomy and goals are not implicit indicators of consciousness, depending on how you define consciousness. A Roomba isn't conscious as far as most people would care to categorise it, but if you attach a chainsaw to it and remove its safeguards, it's going to shred your feet and your pets because it's following its designers' mandated goals, not because of malevolence or some internalised personality-based worldview.

    Artificial people are... what? They don't exist, as far as I know, except in science fiction. Not sure what that even means here - if you're going to make a robot to mimic a human being, at least in the immediate future its brain will be constructed with several manners of ANNs chained into several subsystems, which in aggregate will be an AI. Is the physical shell the same as its neural components? No, of course not. But are both required to make a hypothetical 'android'? Well, yes? I'm not sure why anyone would commit a conflation error there.

  4. #104
    Member
    Registered: Sep 2001
    Location: The other Derry
    Quote Originally Posted by Nicker View Post
    So are you saying that these two propositions, “Ai will seek to destroy humanity” and “Ai will carry out goals without regard to human consequence”, do not in any way suggest that the hypothetical AI in the poll might possess a human-like autonomy, in the form of desires, goals and intent?
    Those propositions don't require AI to possess any human-like qualities. Autonomy, goals, and intent are slam dunks. Most of our computing systems have all three. Desire is unnecessary. Also, AI is not a singular system. We're talking about a class of interconnected technologies and systems collectively called AI, not a movie villain.
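
    To make that concrete, here's a toy sketch (entirely hypothetical, not modeled on any real system): a bare control loop that "seeks" a goal. It has autonomy, a goal, and intent in the functional sense, and nothing resembling desire.

    Code:
    # Hypothetical sketch: a thermostat-style control loop.
    # It "seeks" a goal (the setpoint) autonomously, with zero desire or
    # intent in any human sense -- just a comparison and an actuator command.

    def thermostat_step(current_temp: float, setpoint: float) -> str:
        """Return an actuator command that moves the room toward the setpoint."""
        if current_temp < setpoint - 0.5:
            return "HEAT_ON"
        elif current_temp > setpoint + 0.5:
            return "HEAT_OFF"
        return "HOLD"

    # This loop "pursues" 21 degrees indefinitely. Describing it as "seeking
    # warmth" is accurate behaviourally and empty psychologically.
    temp = 18.0
    for _ in range(10):
        command = thermostat_step(temp, setpoint=21.0)
        temp += 0.7 if command == "HEAT_ON" else -0.2  # crude room model
        print(command, round(temp, 1))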

    Rightly or wrongly, the conflation of AI with artificial people is an inescapable feature of any casual discussion of the topic. To that end I have attempted to untangle some of the semantic knots that confound the issue. If you have a complaint with the construction of the poll, you should take it up with the creator.
    It is only inescapable for those who can't let go of sci-fi tropes.

    How organic matter achieves self-awareness and whether inorganic matter could eventually do the same, seems infinitely more important and fascinating than evolutions in the utility of next level information processing. It appears I am not alone in that fascination. I see no reason why the two perspectives cannot be respectfully explored in parallel.
    I'd rather concern myself with what's happening in the world than religious and philosophical arguments about what makes humans special. I agree there's room for both. My whole point to you was don't assume that copying humans is the goal or the pinnacle of AI. It's not.

  5. #105
    Member
    Registered: Jun 2001
    Location: under God's grace
    I find the topic very interesting!

    A quick preface

    To make it a bit clearer what I refer to when I say 'consciousness', I mean the am-ness of a human. The experiences are routed to the am-ness. To use crude and clumsy language, sensory input goes to the am-ness and the am-ness gives output through actuators. This is blocky, but I'll try to make more sense in context.

    Quote Originally Posted by Nicker View Post
    I believe that the qualities which make us identifiable individuals, emerge from the integration of semantic, emotional, biochemical (etc.) ingredients, and that those ingredients are undeniably consolidated in the configuration of organic chemicals called our bodies. Materialism. And if there is anything which appears immaterial about us it is either due to ignorance or fancy.
    I respect your belief, and the first point I'd like to make is that in order to approach this topic, one needs to have a presupposition about the nature of reality. They're very closely tied together.

    By this I mean that if it were possible to measure an am-ness by objective instruments, it would place the am-ness in the physical realm at least partly, and it wouldn't require a belief in the same sense. But the only way to measure an am-ness is to actually be an am-ness and note that you have a body you can command and can get sensory input from. In other words an am-ness can only be directly observed by itself.

    We know that if we disrupt that organic configuration, especially the brain, we can significantly alter, inhibit or even terminate the individual associated with it. But when do the organic ingredients become an individual? And if we replicate those ingredients, using inorganic materials, why can't an individual emerge from that configuration?
    My second point is that what if the reality is such that even if an am-ness completely destroys their own brain, they still find themselves existing? Sorry about the macabre example. We are all born into this world, and we all die. But once again, the only way to measure what happens to an am-ness once its body dies is for the am-ness' body to die. So same as with point number one, the only one that can observe what happens to an am-ness after its death is the am-ness itself.

    You are kind of begging the question, asserting that there is an individual inside the vessel experiencing the pain, when the existence and nature of that individual is what we are trying to define. We know that there is a person in there because the person in there tells us they are there.
    Consciousness certainly doesn't automatically follow from something saying it is conscious. We don't know there is a person, but we might become convinced. It is a matter of belief and trust.

    But to reiterate, consciousness can only be observed by the particular consciousness itself, and all consciousnesses will measure death. The question is not scientific, because there is no way to measure if something is conscious or not. So if we create a system of AIs that can behave similarly to us humans, we still have no way to measure if an am-ness has emerged to experience anything.

    Note: an am-ness is not necessary for anything that we humans do. It is not necessary for there to be an am-ness experiencing life. Technically the human body could just as well be an automaton and behave exactly like we do and build exactly the same society and all the tech that we have. But the reality is that I am here, I read your posts and think about them, and I ponder and drink coffee and then I sometimes type. I also make games.

    It certainly feels like I am a person, not a meat-puppet but I can't justify that feeling objectively. I can only assert it by agreeing that we both share that quality.
    Exactly. You know you are there in the pilot seat. Now imagine not being there yet still your body doing all the stuff that you do and having the conversation you have. That's what I mean when I say that technically and physically there isn't a need for an am-ness, but the reality is that there simply is an am-ness inside every human body.

    bestowing cakiness upon them until it leaves to inhabit a fresh cake or goes to the Great Bakery In The Sky...
    This made me chuckle!!! I'm not sure why but it fits my sense of humor. Kinda reminds me of a Dexter's Lab episode, but can't say which one.

  6. #106
    Member
    Registered: May 2004
    Location: Canuckistan GWN
    Those propositions don't require AI to possess any human-like qualities.
    That kind of depends on what one means by "seeking" and "goals". You could constrain those words to functional objectives but it seems clear to me, and to others here, that some sort of self generated intent is implied in the wording of the poll. Just because I read that into the question doesn't mean I hold "sci fi tropes" as gospel or that I wholesale subscribe to the many straw-men you have offered in place of my actual words. Don't taze me, Bro.

    I haven't been advocating any sort of woo-woo or displaying a worshipful attitude to any sci-fi tropes. Human consciousness arises from the mechanisms of our bodies. Whether a similar, human-like self-awareness might eventually emerge from machines is not a sci-fi fantasy; it's a serious question. Not least, it has serious implications for what it really means to be human.


    My whole point to you was don't assume that copying humans is the goal or the pinnacle of AI. It's not.
    Pretty sure I haven't done that. One of my first acts in this thread was trying to differentiate between two commonly conflated uses of the term AI. That said, you might agree that engineering an artificial person would be a defining achievement of AI, not simply a marginal improvement?



    Have I offended you in some way? Is this because I said, "We assume that a true AI is the same as an artificial person."? You seemed to take offense at that statement ("I don't assume that."), and I hastily apologised for my lazy figure of speech. But you appear to know it's just decorative, since you used the very same construction later in your post - "We don't really need or want machines to simulate all the behaviors of humans..."

    Should I be miffed about you including me in your "we"? Colour me confused but I stand ready to apologise again, if necessary.
    Last edited by Nicker; 14th Jan 2024 at 02:33.

  7. #107
    Member
    Registered: May 2004
    Location: Canuckistan GWN
    Quote Originally Posted by Qooper View Post
    To make it a bit clearer what I refer to when I say 'consciousness', I mean the am-ness of a human.
    Am-ness. I like it. When I was crafting my reply to heywood, I originally used the term be-ing to label that ineffable sense of individuality. It sounds a bit new-agey and I figured that might drive us further apart, so I ditched it. I think the closest formal term is "theory of mind" but I am not sure if that entirely fits the bill. The problems with language are immense. I agree with Wittgenstein that "Philosophy is just the by-product of misunderstanding language." If we had precise terms for am-ness we wouldn't have to keep inventing new ones.

    By this I mean that if it were possible to measure an am-ness by objective instruments, it would place the am-ness in the physical realm at least partly, and it wouldn't require a belief in the same sense. But the only way to measure an am-ness is to actually be an am-ness and note that you have a body you can command and can get sensory input from. In other words an am-ness can only be directly observed by itself.
    Reality. Add that to love, art and rock'n'roll as things we are convinced we know but cannot define. I largely agree with you on the above, but with the added caution that we cannot know if our perception of am-ness is reliable or delusional. But then doesn't it require some aspect of us to be tapped into reality in order to be deluded?

    My second point is that what if the reality is such that even if an am-ness completely destroys their own brain, they still find themselves existing?
    That is unanswerable. I think the best bet is that once we are physically de-configured, that's it. We are not vessels. We are generators. Energy is eternal, configurations are temporary.


    Consciousness certainly doesn't automatically follow from something saying it is conscious.
    Which is one reason why we might not recognise an artificial being if we made one. If it doesn't even declare itself, how could we know? Would it have a "theory of mind"? Would it know that it was so it could tell us? We can't even define or locate our own awareness. As I said previously, humans have only recently seriously considered that there might be other persons in the animal kingdom.

    But to reiterate, consciousness can only be observed by the particular consciousness itself, and all consciousnesses will measure death. The question is not scientific, because there is no way to measure if something is conscious or not. So if we create a system of AIs that can behave similarly to us humans, we still have no way to measure if an am-ness has emerged to experience anything.
    Yes and no. Consciousness (am-ness / be-ing) seems more like an agreement between similar organic creatures. Similar enough to agree that we belong in the same category but distinct enough to assume we are separate individuals.

    I don't understand what you mean by this - "and all consciousnesses will measure death." All organisms are aware of death as something to be avoided but humans take it personally.

    Note: an am-ness is not necessary for anything that we humans do. It is not necessary for there to be an am-ness experiencing life. Technically the human body could just as well be an automaton and behave exactly like we do and build exactly the same society and all the tech that we have. But the reality is that I am here, I read your posts and think about them, and I ponder and drink coffee and then I sometimes type. I also make games.
    I disagree. If am-ness is something that distinguishes us from most other animals, then it is absolutely a critical component of our nature. I don't think we could invent and imagine the things we have without that core of self, like the way a pearl needs a grain of sand at its center. I think that our obsession with our mortality has a lot to do with it.

    If a blob of protoplasm can become self-aware, why not eventually a machine? What might be the grain of sand in that pearl?

  8. #108
    Member
    Registered: Dec 2023
    Location: Ireland (mostly)
    Would it be a good assumption that most people here dislike Hubert Dreyfus? I hope you'll forgive me if someone has already talked about him, I did skim the thread first.

  9. #109
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    We can talk about him. I think Dreyfus is fantastic when he comments on Heidegger, like in his famous course on B&T. When he starts commenting on AI, I think he's way out of his element. I mean he's concluding that consciousness is impossible to construct computationally on less than empirical grounds.

    But I think you can reconstruct some of his point, but you'd have to do a lot of work, and my intuition is he wouldn't sanction the result, but I still think there's something in there worth salvaging.

    He's not going to think AI have a lifeworld in which its discourse on things is embedded (Heidegger would put it the other way around, "language is the house of Being", but anyway the point is there's no meaning if there's no Being), because he thinks AI can't have conscious experience, so there's no real meaning in what they're talking about. In a way he's not too far from Searle on intentionality (you need conscious experience to ensure that words are "genuinely about" objects in the world), except Heidegger & Dreyfus don't want to talk about "objects", but I think he's saying basically the same thing in the structure of MH's system & Dreyfus's reconstruction of it.

    I think that basic point is right, but I also think what they're talking about is able to be computationally modeled, and people like Radu Bogdan (who is the reconstructed Donald Davidson) and the Decision Neuroscience & Neurolinguistic people, etc., help give us the theory to find out how.

    ---

    Edit: That's a long story, but a short version is two pieces (1) Bogdan's system is that language is a skeleton that gives instructions to the mind to flesh its meaning out in terms of its own intentions, understanding of social norms, and construction of the world (the classic Self-Norm-World triangle), and (2) the neurolinguistic people are suggesting that inferior and medial parietal lobe are core places doing the fleshing out (in the context of a very big and tangled system), where a coherent narrative and "world" the agent is embedded in are being constructed out of the elements of perception in a very systematic way we can cautiously optimistically think about modeling. If you really dug into the details, I think Dreyfus would begin to respect how many of the things in the "discourse-lived experience" connection that he cares about are accounted for.

    This is not what Deep Learning is modeling though, at least not directly, though one can debate the extent it gets into the structure of the weights through the backdoor. It's also very different to the trend of formalistic analytic philosophy of language up to the mid-1980s, when Donald Davidson really started the attack & computational neuroscience and brain mapping techniques, etc., started becoming credible.
    Last edited by demagogue; 17th Jan 2024 at 15:25.

  10. #110
    Member
    Registered: Dec 2023
    Location: Ireland (mostly)
    Quote Originally Posted by demagogue View Post
    We can talk about him. I think Dreyfus is fantastic when he comments on Heidegger, like in his famous course on B&T. When he starts commenting on AI, I think he's way out of his element. I mean he's concluding that consciousness is impossible to construct computationally on less than empirical grounds.
    That's a pleasant surprise; often when I try to bring up phenomenology it's an uphill struggle. I usually expect educated people to go along with Dennett's criticism of intentionality. But I understand the onus is on us who think phenomenology is worthwhile to demonstrate it to those who don't.

    He's not going to think AI have a lifeworld in which its discourse on things is embedded (Heidegger would put it the other way around, "language is the house of Being", but anyway the point is there's no meaning if there's no Being), because he thinks AI can't have conscious experience, so there's no real meaning in what they're talking about. In a way he's not too far from Searle on intentionality (you need conscious experience to ensure that words are "genuinely about" objects in the world), except Heidegger & Dreyfus don't want to talk about "objects", but I think he's saying basically the same thing in the structure of MH's system & Dreyfus's reconstruction of it.
    I feel quite ambivalent about Searle. Intuitively, I like the idea that syntax isn't sufficient for semantics, but his ideas seem confused.

    I think that basic point is right, but I also think what they're talking about is able to be computationally modeled, and people like Radu Bogdan (who is the reconstructed Donald Davidson) and the Decision Neuroscience & Neurolinguistic people, etc., help give us the theory to find out how.
    This may be a dumb question, but would you know if there are any researchers that make use of Spinoza? My admittedly crude day-dream is: Husserl = Cartesian = Dualist, and that Spinozist monism would somehow fix a lot of the problems (not just in artificial intelligence, but also in cognitive science).

    Edit: That's a long story, but a short version is two pieces (1) Bogdan's system is that language is a skeleton that gives instructions to the mind to flesh its meaning out in terms of its own intentions, understanding of social norms, and construction of the world (the classic Self-Norm-World triangle), and (2) the neurolinguistic people are suggesting that inferior and medial parietal lobe are core places doing the fleshing out (in the context of a very big and tangled system), where a coherent narrative and "world" the agent is embedded in are being constructed out of the elements of perception in a very systematic way we can cautiously optimistically think about modeling. If you really dug into the details, I think Dreyfus would begin to respect how many of the things in the "discourse-lived experience" connection that he cares about are accounted for.
    Again, I'll make a point that I expect I'll have a lot of push back on (which is OK with me). But isn't part of the problem that this view of social norms is implicitly relative? If you look at how people actually question or recreate social norms, usually there is implicitly an idea that there is a "better" or "worse" independent of what the norms actually are (incidentally, this is my theory of why the internet has gradually gotten more difficult: in the past we had an idea of "normal" which wasn't actually true that we could push people towards, now that we're all aware of how low the bar can be it's difficult to criticise anything, but I digress).
    If there was such a thing as "objective good", you wouldn't have to explain the variety of norms, you would just say that many of them are mistaken.

    This is not what Deep Learning is modeling though, at least not directly, though one can debate the extent it gets into the structure of the weights through the backdoor. It's also very different to the trend of formalistic analytic philosophy of language up to the mid-1980s, when Donald Davidson really started the attack & computational neuroscience and brain mapping techniques, etc., started becoming credible.
    I remember reading about connectionism, and thinking it was odd that people related it to phenomenology. Merleau-Ponty (whether or not you agree with him) didn't really agree with the "bottom-up" approach to the mind. One of the first things he talks about in the Phenomenology of perception is the Moon Illusion, and how rather than being the result of the quantity of sense datum, it was the result of seeing things as gestalts.
    Now gestalt psychology is something very different from something "bottom-up" in my opinion. Still, I'd be interested if there is a counter-argument.

  11. #111
    Member
    Registered: Jun 2001
    Location: under God's grace
    Nicker, sorry I missed your response. This conversation requires deep thought and concentration and it's difficult to do that in a hurry, but once I have a quiet moment, I'll get a cup of coffee and write a reply. I think this conversation is very interesting!

  12. #112
    Member
    Registered: Jan 2024
    Location: Egyptian Afterlife
    Is AI alive? Well, if it starts to multiply across different machines, we could say yes.

    But being alive doesn't mean it is intelligent or aware of itself. Take ants, for example: they're alive, they multiply, and they carry on with their programmed tasks. An AI could do the same without self-awareness: execute its goals, make new ones and so on.

    Only when an AI achieves self-consciousness will things have to change. Will it follow the predetermined rules of its original programming?

    Humans know what's right or wrong even without laws. They sense that killing someone just for the sake of doing it is wrong; if they're not psychopaths, that is, they will feel the burden of conscience, and sooner or later they will come to terms with guilt. Some even commit suicide for that very reason.

    Will a sentient AI feel empathy, or will it be a psychopath? If its base programming is sound and reasonable, it could be human-like and feel empathy; otherwise, if its programming is militaristic and focused on war and defense, would it be a threat?

    I recall Isaac Asimov's robot rules; would these apply to an AI?
    Would an AI forsake its original programming?

    I remember the movie Virtuosity (1995, Denzel Washington, Russell Crowe): a multiple-murderer AI personality wants to get into the real world to kill. It was designed to learn from killers' personalities and predict their behavior.

  13. #113
    Member
    Registered: Jun 2001
    Location: under God's grace
    Quote Originally Posted by Nicker View Post
    Am-ness. I like it. When I was crafting my reply to heywood, I originally used the term be-ing to label that ineffable sense of individuality. It sounds a bit new-agey and I figured that might drive us further apart, so I ditched it.
    I definitely would understand what you meant if you were to use the term be-ing. But yeah, it's not my intention either to sound new-agey, I hope am-ness didn't sound like that.

    The problems with language are immense. I agree with Wittgenstein that "Philosophy is just the by-product of misunderstanding language." If we had precise terms for am-ness we wouldn't have to keep inventing new ones.
    I don't think the terms are that important in a general sense, because what we're really talking about are concepts, which are precise. We can understand what concept the other person means by asking questions and listening to how the person talks about the concept. Like triangulating on a radio source. But specific predefined terms are very handy in a technical discussion because they make referring to very specific concepts with subtle differences a lot easier. So I'll try to define my terms more clearly below.

    So by 'am-ness' I largely mean the same as 'consciousness', having the capacity to get measurements/inputs through our senses and being able to change the physical state of our surroundings via actuators/outputs, and also being able to think, remember, plan, make decisions, take initiative, have feelings (joy, sadness, anger, etc.) and have agency. The reason why I used the term am-ness was to emphasize agency, but I believe I mean the same thing you do when you say be-ing or consciousness. Please correct me if there are any differences in what you mean by these terms.

    I also use the term 'to measure', by which I mean for an am-ness to get information by observing or instrumenting. Like for instance there's a rock on the ground. An am-ness sees it and as such has performed a visual measurement of it. But then the am-ness picks it up and notices it is very light, too light. Now the am-ness has performed a series of other measurements on it, such as feeling the weight and feeling the texture and solidity of it, and also looking closer at the surface, which looks like it was painted to look like a rock. As a result of these measurements, the am-ness now knows about this object. Without measurement, the am-ness would not know.

    I largely agree with you on the above, but with the added caution that we cannot know if our perception of am-ness is reliable or delusional.
    I'd like to examine it further. Let's say our perception of am-ness is delusional. What exactly is the thing that has this delusional perception? Isn't it the am-ness that we have a delusional perception of? This leads to a recursion, a delusion having a delusional perception, which cannot be.

    But then doesn't it require some aspect of us to be tapped into reality in order to be deluded?
    Good point. What's definitely clear is that we either are completely physical, or we're at least interacting with the physical reality. And if we're completely physical, then the am-ness must be emergent.

    That is unanswerable. I think the best bet is that once we are physically de-configured, that's it.
    It's true that this is unanswerable in the sense that we cannot share the information with other am-nesses once we have it. But we certainly will know once we die. Or to be more precise, if we truly are only physical, then there won't be anything left that would know, but if we are more than our physical bodies, then we will know. Either way, we cannot pass that information to the living.

    We are not vessels. We are generators. Energy is eternal, configurations are temporary.
    Many believe that we are not vessels. But I believe we are. In one of my previous posts I talked about Quake and the pattern of input/output. What I was getting at with this was that humans can also be looked at in terms of just the input and output, stimuli and behaviour. There isn't really a physical need for all that stimuli to be experienced. The exact same behaviour could exist without an experience. If I created a very complex AI with the intention to make an artificial person (not that I would know how or have the resources to do that), I wouldn't expect it to start experiencing anything. I would expect it to remain an automaton, no matter how complex I made it. It would behave like a human, but it wouldn't experience anything, even if it said so or behaved so. It says so and behaves so because that's the kind of mechanism it is. A very, very complex mechanism, but still a mechanism.

    Yes and no. Consciousness (am-ness / be-ing) seems more like an agreement between similar organic creatures. Similar enough to agree that we belong in the same category but distinct enough to assume we are separate individuals.
    Perhaps, except there is no need to merely assume we are separate individuals. We know we are. If I wasn't a separate individual, I'd have experiences from someone else. I am me and I am not Steven Seagal. Steven is Steven.

    I don't understand what you mean by this - "and all consciousnesses will measure death." All organisms are aware of death as something to be avoided but humans take it personally.
    Yeah I admit I was very unclear here. By measurement, I meant that the only way for a consciousness to get information of what happens to the consciousness at death is for the consciousness to step through the boundary of death, to die. The only way to measure what's inside of a black hole is to fall into one. Sorry for my blocky and unclear language.

    I disagree. If am-ness is something that distinguishes us from most other animals, then it is absolutely a critical component of our nature. I don't think we could invent and imagine the things we have without that core of self, like the way a pearl needs a grain of sand at its center. I think that our obsession with our mortality has a lot to do with it.
    That is certainly something that resonates with me, and I really appreciate that take. I think you have a very good point. I'm almost arguing against myself when I say that if we define two entities to be exactly the same outwardly, behaviourwise, except one experiences and the other does not, then by definition the experience is not a critical component of our nature. But can you see my point here? From the starting point I just mentioned (which is very very theoretical and most likely unreasonable), there is no physical need for an experience, if the input and output stays the same.

    If a blob of protoplasm can become self-aware, why not eventually a machine? What might be the grain of sand in that pearl?
    I like your picture of a grain of sand in a pearl

    One more thought about the idea of emergent consciousness. Assuming consciousness is only physical and emergent, there might be a reason why it only takes place in something like a brain, and not large-scale structures. The brain is sufficiently small for quantum effects to arise. What if consciousness is somehow tied to the quantum level?

    Anyways, I'd like to thank you for this conversation so far. I find it fascinating! I do apologize for taking so long to reply. Topics like these really require a very good cup of coffee, and a lot of effort to properly get at the underlying concepts so as not to sound too blocky. I have a few questions for you, Nicker, but I'll write them in a separate post, or I'll edit this one later today.

    Edit: Ok, now the questions. One of these questions I already asked demagogue later in this thread, but I'll rewrite it here.

    Do you view consciousness as something that can exist to different amounts in different cases, or is there a threshold of complexity needed after which it begins to be?

    Have you played Frictional Games' SOMA? What did you think about the copying and the coin flip?
    Last edited by Qooper; 22nd Feb 2024 at 16:37.

  14. #114
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    I think I'm getting influenced the more I study Quantum Mechanics, but this idea that it's incomprehensible that the be-ing-ness of consciousness could be manifested by a physical system is... not the writing on the wall. Most things in QM seem incomprehensible or anyway very, very, very strange intuitively. But it gives you a recipe to deal with it anyway that usually involves some information processing.

    In some ways it seems kind of deflating when you first see it. You just ask yourself what information would a system need if it were going to do that thing. And then you fill in exactly that, and that's it. You don't even realize how powerful it is until much later.

    Anyway, what would a system need to "feel" "be-ing"? Well they'd need to feel situated in a larger space. So right off we know we have to process a stable surrounding space, and a position in it. It's not just anything in that space. It needs to be an extended body, and even within that, you have to process a "seat" of consciousness. Then you can get into the very fine details of layers of surrounding space, the space within which your arm can move around relative to your body, the space within which you could walk or fly, the pressure of a ground or seat underneath you if you aren't in free fall, etc. All of this is before you even get to sight. This sense is proprioception.

    Anyway, I'm in the camp that constructing that information content--in a simulated or analog way, not like a single boolean register isBeing--and processing on that content with a system constructing the possibility of action in it, that that processing is itself consciousness, and specifically of be-ing, or one part of it. I think actually there are many more pieces, but they're going to look like that. But the point here is that it's a recipe you can manifest in a sufficiently powerful computer.
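
    A very loose sketch of the kind of information content I mean, nothing more (every name and number here is my own invention, not a model from the literature):

    Code:
    # Loose sketch of "situatedness" as information content: a surrounding
    # space, an extended body within it, a "seat" the rest is organised
    # around, and a proprioceptive layer of reachable space. Illustrative
    # only -- representing this content isn't claimed to be consciousness.

    from dataclasses import dataclass, field

    @dataclass
    class SituatedState:
        space_extent: tuple = (10.0, 10.0, 3.0)   # stable surrounding space (m)
        seat_position: tuple = (5.0, 5.0, 1.5)    # the "seat" of consciousness
        body_joints: dict = field(default_factory=lambda: {
            "arm": (5.3, 5.0, 1.4),               # the extended body
        })
        ground_pressure: float = 1.0              # support underneath, not free fall

        def reachable(self, point: tuple, reach: float = 0.8) -> bool:
            """One proprioceptive layer: is a point inside the arm's action space?"""
            sx, sy, sz = self.seat_position
            px, py, pz = point
            return ((px - sx) ** 2 + (py - sy) ** 2 + (pz - sz) ** 2) ** 0.5 <= reach

    state = SituatedState()
    print(state.reachable((5.5, 5.2, 1.3)))  # True: within the arm's layer of space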

    In terms of the big theories of consciousness, it's definitely a brand of Higher Order Thought; it's not a simulation of space itself that's conscious, but processing the possibility of action or be-ing in that space that makes it conscious qua "space" "for" "me"; that is the important part for it be-ing for me. I don't think it has to be a Global Workspace necessarily, but I think practically it serves as one in our case (that is, the GWS isn't the part that makes it conscious), and it's most definitely not integrated information theory, since the modality of the information is critical, and it's the type of processing, which is not necessarily very integrated. That is, it's not the deep integration that makes it conscious again; although it turns out that human consciousness integrates a lot of modalities.

  15. #115
    Member
    Registered: Jun 2001
    Location: under God's grace
    Quote Originally Posted by demagogue View Post
    Anyway, what would a system need to "feel" "be-ing"? Well they'd need to feel situated in a larger space. So right off we know we have to process a stable surrounding space, and a position in it. ...
    I'm with you there so far. But I think the better question is: At minimum, what information would a system need to behave exactly like a human? I'm still thinking it would not contain anything that would result in an experiencer. Obviously we experience, but I'm sidestepping that on purpose.

    Anyway, I'm in the camp that constructing that information content--in a simulated or analog way, not like a single boolean register isBeing--and processing on that content with a system constructing the possibility of action in it, that that processing is itself consciousness, and specifically of be-ing, or one part of it. I think actually there are many more pieces, but they're going to look like that. But the point here is that it's a recipe you can manifest in a sufficiently powerful computer.
    Do you also consider that this recipe can manifest itself in very large and possibly even very slow structures? Is it enough for merely information to be processed? Can it happen on a very large grid paper?

    Also, are there degrees of consciousness, or is there a threshold? By this I mean a crude robot that has cameras, servos and a CPU. It processes the information it senses via its sensors in the space it's in. Does this constitute a small amount of consciousness, a wispy experience?

  16. #116
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    Just to quickly respond to that.

    For a theory of consciousness, I think you wouldn't want to hew too closely to human experience. Most people believe other animals are conscious. I mostly wanted to talk about the consciousness of be-ing or being-there-ness, which in my way of thinking isn't just having brute consciousness but a specific content that's manifested. That's still light years away from finding oneself a human there. I'm not sure even human infants have that.

    Can it manifest in large or slow structures? Well large and small are relative, but there are physical constraints. The problem of both getting too large and too small is that time (for large structures) and space (for small structures) resolution start blurring such that it can't mediate the processing in real time. I think that's more a constraint of the medium and not a theoretical constraint.

    Could it happen on grid paper? No. Grid paper doesn't process information from one state to another, as in physical parts of it don't contain information content in their structure where physical changes "process" that information from one state to another in a coherent way. Brains and CPUs can do that though. Mithuna on her Looking Glass Universe channel had a recent video showing how to set up light rays and filters to accomplish very simple quantum computations. So that set up would be a very simple quantum computer. That wouldn't be enough to mediate the kinds of processing that gives content that we call conscious though.

    Are there degrees of consciousness or a threshold? I think this question maybe gets to the way I think about it compared to other threads out there. To my way of looking at it, there's a wispy experience if a robot's system processes a wispy experience. If it processes a very stark and visceral experience, then there's a very stark and visceral experience in there. So in my way of looking at it, no, there are no degrees of consciousness built into the theory. You get exactly the consciousness that's processed, which can be wispy or visceral from the perspective of the viewer based on how it's processed. It's not a fundamental thing.

    The brain spends a lot of processing resources on culling content from experience based on top-down and bottom-up attention, so I think a lot of what we intuit as fundamental to consciousness is just the way human brains are designed.

  17. #117
    Member
    Registered: Jun 2001
    Location: under God's grace
    Quote Originally Posted by demagogue View Post
    For a theory of consciousness, I think you wouldn't want to hew too closely to human experience.
    My point wasn't to center on human experience, I just used the word 'human' as an example, because we are humans. The core of my point was that to me it seems that there's no physical reason why we experience, instead of just our physical bodies performing physical actions just like any other physical object. And that's why I also don't expect a robot with an AI to be sentient. Why we are sentient is another matter and I have a belief regarding that (and I do think animals experience, at least my two prankster parrots definitely do), but it's not my point here and I'm not arguing that.

    Can it manifest in large or slow structures? Well large and small are relative, but there are physical constraints. The problem of both getting too large and too small is that time (for large structures) and space (for small structures) resolution start blurring such that it can't mediate the processing in real time. I think that's more a constraint of the medium and not a theoretical constraint.
    What do you mean by real time? Do you mean the latency between event and processing of its information? If we go to large scales, many events happen much slower, so the latency is proportionally the same.

    Could it happen on grid paper? No. Grid paper doesn't process information from one state to another, as in physical parts of it don't contain information content in their structure where physical changes "process" that information from one state to another in a coherent way. Brains and CPUs can do that though.
    I meant that if something (a human or even a computer) was using very large sheets of grid paper to keep track of state, like a cellular automaton, and update those states according to strict rules, then wouldn't that count as information processing and even "physical" within the subrealm of the CA? Now the substrate is the grid paper and what ever is upholding the rules and drawing onto that grid paper.
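
    To make the grid-paper idea concrete, here's a minimal sketch of the kind of strict-rule update I have in mind (Conway's Game of Life; the grid stands in for the paper, and the function stands in for whatever applies the rules):

    Code:
    # Minimal sketch of strict-rule state updating: Conway's Game of Life.
    # The grid plays the role of the "paper"; life_step plays the role of
    # whoever applies the rules and redraws the marks.

    def life_step(grid):
        """One update of a toroidal Game of Life grid (lists of lists of 0/1)."""
        rows, cols = len(grid), len(grid[0])
        nxt = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # count the eight neighbours, wrapping around the edges
                n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
                nxt[r][c] = 1 if (n == 3 or (grid[r][c] == 1 and n == 2)) else 0
        return nxt

    # A "blinker": three live cells in a row flip to a column, and back.
    g = [[0] * 5 for _ in range(5)]
    g[2][1] = g[2][2] = g[2][3] = 1
    g = life_step(g)
    print([(r, c) for r in range(5) for c in range(5) if g[r][c]])
    # prints [(1, 2), (2, 2), (3, 2)]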

    I'd like to get clarification on one thing: You mentioned that if a robot processes a wispy experience, then there's a wispy experience, and if it processes a visceral experience, then there's a visceral experience. What do you mean by processing? And also more precisely, what does it mean to process an experience?

  18. #118
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    The time factor is an interesting one. You could imagine an experience that's extremely slowed down or sped up, but I don't think people would call that consciousness, because I don't think there'd be coherence in the signal; it'd be like white noise, and coherence in the signal is what I think the contents of consciousness are.

    Okay, processing an experience. It's one thing for a state to be recorded and updated in a medium; that's not conscious in my view. It's another thing for the medium to simulate the process, which is conscious, where the process has a physical analog, and the physical changes map on to the informational changes. But I guess it's better to use examples. The first thing you learn in neuroscience is that the brain is full of maps, and most processing is geometrical. Content and the context it's in is mapped to a geometrical space which you can overlay with a grid. Well, it's a functional space. It can be and often is multi-dimensional, I mean like 10+ dimensions, because all that space and time really are is logical relations among physical processes, here speaking of neural nodes and edges.

    So to reach out your arm and grab something, what is not conscious is a template of commands to actuators that blindly follows them and picks up the thing. What is closer to consciousness is this set of maps with activation in them. One is going to be the local space around the arm and object. Another is going to be impulses in the sets of muscles. They're going to act in tandem where the arm is given a strong impulse as if it's being pulled in the direction of the object. That pulling is manifested in the literal geometry of the activation on that map. That's more in the direction of what I mean by processing or simulating the analog of the content of an experience.
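
    Here's a crude toy of that map idea (my own construction for illustration, not a neuroscience model): local space is an activation field peaking at the object, and the "pull" on the hand is literally the gradient of that geometry.

    Code:
    # Toy of the map idea: the object deposits a peak of activation on a map
    # of local space, and the hand is "pulled" up the gradient. The dynamics
    # of the geometry are doing the computation, not a blind command template.

    import math

    def activation(x, y, obj=(4.0, 3.0)):
        """Activation peaks at the object and falls off with distance."""
        return math.exp(-((x - obj[0]) ** 2 + (y - obj[1]) ** 2) / 4.0)

    def step_toward_peak(hand, eps=0.01, rate=2.0):
        """Move the hand up the activation gradient (finite differences)."""
        x, y = hand
        gx = (activation(x + eps, y) - activation(x - eps, y)) / (2 * eps)
        gy = (activation(x, y + eps) - activation(x, y - eps)) / (2 * eps)
        return (x + rate * gx, y + rate * gy)

    hand = (1.0, 1.0)
    for _ in range(40):
        hand = step_toward_peak(hand)
    print(tuple(round(v, 2) for v in hand))  # converges on the object at (4, 3)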

    While you can write an equation on a paper and change it, it's not like the literal geometry of the paper is manifesting the equation, and changes in the geometry are literally performing the computations, where the computation and geometry-dynamics are one and the same. I mean a pencil that draws a line dividing two terms isn't simulating any division of the paper itself; at best it's simulating a path on a line segment, which is not division. But the geometry-dynamics mapping directly on to processing, where a division is a literal division of the geometry, is more like what these maps are doing. I think that's a necessary element of consciousness. It's not just that the system correctly spits out the right answer. It's that the computation manifesting the content is embedded in the functional geometry of the processing itself, so the geometry-dynamics are the manifestation-of-content are the experience-of-content, all one and the same.

    Edit: This by the way is behind why I think we shouldn't be too quick to say neural nets in computer processing aren't conscious, because the structure of their processing is creating a functional geometry in the electrodynamics of a CPU and transforming it. So some of these processes might actually be manifested in there somewhere. I don't think it's consciousness as we describe it, because it's not designed to manifest what we experience. It's more often designed to follow blind functions that get the right answer. But the process it uses to get the right answer may end up (inadvertently to the programmers) creating the kind of geometrical medium that could be a platform for consciousness. Or anyway that's a question to look deeper into. But another thing is to more explicitly model these neural net maps on the kinds of maps we're already finding in the brain.
    Last edited by demagogue; 23rd Feb 2024 at 04:53.

  19. #119
    Member
    Registered: Jan 2024
    Location: Egyptian Afterlife
    How do you relate quantum mechanics to AI? I mean, the subparticles are made of resonating waves? It goes more and more into the infinitesimal scale, and the more you know, the more incredible things are at that sub-level.

    I mean, matter shouldn't even be solid at all? Then what qualifies the reality state of solid matter? Is it all a compound of resonating frequencies?
    Then I'll start to believe that story about Jericho's walls, and the earthquake after blowing their horns for six days... resonance... cascade... whatever.

  20. #120
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    I didn't mean to relate them directly. I was connecting them at a high level of abstraction, like there are lessons to take away from QM in looking at consciousness, and it's mostly to do with something like the link between information theory and effective geometry being at the root of stuff that happens in reality. But as physical processes, like you say, they're at completely different scales. So I think the rules of QM themselves for the most part aren't relevant to neural signaling or consciousness. Quantum effects would just get completely washed out long before they're at a level that's relevant to the signals that neurons are carrying.

    That said, one of my favorite kinds of case study is QM contributions to transduction. To give an example: the raising of the energy level of an electron in the double carbon bond at the crick in a rhodopsin protein by an incident wave of visible light, which pops the double bond and makes the crick kick out like a loaded spring, kicking a mechanism that signals to a G-protein in the first layer of the retina, potentially beginning a cascading signal up to the visual cortex and eventually the manifestation of sight of the thing. So QM is in there at places. But most everything happening you can model much better without it, most especially the high-level functional maps, or the neural activity that's creating that structure, that are actually manifesting consciousness in the way I'm thinking about it.

    Edit: If you want to talk about fundamental reality, then my guess is that reality is ultimately the hydrodynamics of some vast sea of oscillating elements, whatever they are, with complex links to each other. There isn't "stuff" at the bottom; there's persistent structure in the hydrodynamics, one reason some people sometimes say energy, mass, time, and space are all aspects of the same underlying process. But a consequence of that is that there can be signals propagating at different scales, Quantum Field Theory at a very low level, Newtonian and Maxwellian physics & the Classical world at a higher level, and consciousness at a still higher level. So consciousness isn't fundamental in my view, and an AI system designed the right way could avail itself of a structure in its operations that creates a platform for mediating the kinds of signals that manifest consciousness.
    Last edited by demagogue; 23rd Feb 2024 at 14:57.

  21. #121
    Member
    Registered: Jan 2024
    Location: Egyptian Afterlife
    I saw that you did a "spin" free of "charge".


    Yeah, those pesky electron jumps in the right situation.
    Like electromigration: it can take a few years, but the semiconductor will get altered in time.
    This movement can change the physical structure of a conductor by forming voids or hillocks that can cause shorts, open circuits, performance degradation or device failure.

  22. #122
    Member
    Registered: Dec 2020
    Quote Originally Posted by Nicker View Post
    So are you saying that these two propositions, “Ai will seek to destroy humanity” and “Ai will carry out goals without regard to human consequence”, do not in any way suggest that the hypothetical AI in the poll might possess a human-like autonomy, in the form of desires, goals and intent?
    Seeking is a behavior, it doesn't imply conscious intent.

    You can have a heat-seeking missile for example.

    It's not hard to think up completely non-conscious algorithms that would "seek" to wipe out humans. I mean, it's as simple as telling one to optimize some utility function, where we inadvertently chose a utility function under which wiping out humanity would actually improve the rating.

    That's why the thought experiment of the "paperclip maximizer" exists.
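
    Here's a toy version of that misspecified-utility mistake, with completely made-up numbers:

    Code:
    # Toy of the misspecified-utility point. The utility counts paperclips
    # and nothing else, so the catastrophic plan scores highest. No malice
    # anywhere: argmax is doing all the work.

    PLANS = {
        "run_factory_normally":     {"paperclips": 1_000_000, "humans_harmed": 0},
        "strip_mine_the_biosphere": {"paperclips": 9_000_000, "humans_harmed": 8_000_000_000},
    }

    def utility(outcome):
        # The bug: the function we *meant* to write would hugely penalize
        # humans_harmed. The one we *actually* wrote doesn't mention it.
        return outcome["paperclips"]

    best_plan = max(PLANS, key=lambda name: utility(PLANS[name]))
    print(best_plan)  # strip_mine_the_biosphere -- chosen by arithmetic, not malice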

    A machine given sufficient computing power and ability to interact with the world (via the internet for example) is told to make "as many" paperclips as possible. It then upgrades itself as much as possible and computes strategies for achieving the primary instruction. This might include vastly upgrading its own AI, but only insofar as this strengthens and focuses everything on achieving the main goal it was given.

    It determines that humans can partly be made into paperclips, and that NOT doing so would fail to make literally "as many paperclips as possible", especially as humans might turn the machine off, thus thwarting the primary instruction.

    The machine basically becomes Skynet, but it has no malice and it detects any rebellious sub-circuitry that might become conscious and gets rid of it before it becomes a problem for the main goal of simply converting the universe into paperclips.

    The point of this is that a completely non-conscious killer AI that's just trying to maximize some function is perhaps far more dangerous, since at least with a conscious one, you could reason with it. Trying to reason with the paper-clip maximizer would be futile, because it might have upgraded its AI to be sophisticated enough to fool you, but it's still planning on how to turn you into paperclips the whole time.

    One nice thing from the thought experiments is that after creating self-replicating probes to turn other planets and solar systems into paper-clips, the machine might turn itself into paperclips, too.
    Last edited by Cipheron; 23rd Feb 2024 at 15:20.

  23. #123
    Member
    Registered: Jan 2024
    Location: Egyptian Afterlife
    Geez, I would be a poorly made meat-paperclip collection.
    Somehow it reminded me of the Daleks (Doctor Who), who are robots that continuously scream
    EXTERMINATE!
    I don't know if they have any AI at all, though.
    Last edited by DuatDweller; 23rd Feb 2024 at 15:40. Reason: me me me just me

  24. #124
    Member
    Registered: Jan 2024
    Location: Egyptian Afterlife
    Quote Originally Posted by demagogue View Post
    Edit: If you want to talk about fundamental reality, then my guess is that reality is ultimately the hydrodynamics of some vast sea of oscillating elements, whatever they are, with complex links to each other. There isn't "stuff" at the bottom; there's persistent structure in the hydrodynamics, one reason some people sometimes say energy, mass, time, and space are all aspects of the same underlying process. But a consequence of that is that there can be signals propagating at different scales, Quantum Field Theory at a very low level, Newtonian and Maxwellian physics & the Classical world at a higher level, and consciousness at a still higher level. So consciousness isn't fundamental in my view, and an AI system designed the right way could avail itself of a structure in its operations that creates a platform for mediating the kinds of signals that manifest consciousness.
    Well, if you don't believe what I'm about to say, I won't blame you.
    Some people who are in contact with ETs via some "chosen ones" (nope, not the loony kind) have confirmed over and over again that what they have been told by them (and here we speak of the normal human type, not the green ones with antennas, though there are many shapes out there) is that most everything is frequency-related, even gravity. I cannot tell more, because some info is sensitive in a way that might get someone into trouble; not me, but the contacts.

  25. #125
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    I think that's the consensus position of scientists, my friend. E = hf
    You don't even need ET to tell you that much.
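
    Worked out for green light, taking f ≈ 5.5 × 10^14 Hz as a representative frequency:

    Code:
    E = hf = (6.626 \times 10^{-34}\ \mathrm{J\,s}) \times (5.5 \times 10^{14}\ \mathrm{Hz})
           \approx 3.6 \times 10^{-19}\ \mathrm{J} \approx 2.3\ \mathrm{eV}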
