
View Poll Results: In the future, when AI exceeds human intelligence...

Voters: 30

  • ...1) AI will bring a benevolent transcendence to humanity. [4 votes, 13.33%]
  • ...2) AI will seek to destroy humanity. [1 vote, 3.33%]
  • ...3) AI will carry out goals without regard to human consequence. [12 votes, 40.00%]
  • ...1 and 2... (Factionalized) [0 votes, 0%]
  • ...1 and 3... (Factionalized) [3 votes, 10.00%]
  • ...2 and 3... (Factionalized) [0 votes, 0%]
  • ...1, 2, and 3... (Factionalized) [2 votes, 6.67%]
  • ...AI will never exceed human intelligence. [8 votes, 26.67%]
Results 51 to 75 of 137

Thread: BotChat!

  1. #51
    Moderator and Priest
    Registered: Mar 2002
    Location: Dinosaur Ladies of the Night
    Quote Originally Posted by Yakoob View Post
    var emotion = Math.random(happy, psychopathic);

    ;p
    Ha! Nerd humor always makes me laugh! I HATE NERD HUMOR! I WILL KILL YOU! Nah, just funnin' with you. OR AM I?

  2. #52
    Member
    Registered: Jun 2009
    Location: The Spiraling Sea
    Boston Dynamics Spot, now available for early adopters!



    https://www.bostondynamics.com/spot

  3. #53
    Level 10,000 achieved
    Registered: Mar 2001
    Location: Finland
    Ok, now we're living in the future.

  4. #54
    Member
    Registered: Jun 2009
    Location: The Spiraling Sea
    Tesla Optimus is coming along...


  5. #55
    Member
    Registered: Aug 2002
    Location: Point Nemo
    AI can already beat chess masters so it has already won.

  6. #56
    Member
    Registered: May 2004
    Location: Canuckistan GWN
    Also missing from the list: We won't even know when AI achieves sentience and it will not care.

    Many people still deny the sentience of other members of our species, because of variations in skin pigmentation. We only just began to recognize sentience in other mammals and birds. And they share brain anatomy and functions with us. There are ant species which build nests that have structures devoted to ventilation and cooling. Are hive-minds sentient? Are we even allowed to speculate on that?

    In the novel "Terminal Cafe" a self-aware being emerges within the ecology of the World Wide Web, from mutated virus programs. It has the mind of a genius, the resources of the entire globe and the emotional volatility of a toddler. In another book, whose title I forget, a colony of robots on the moon is given a handful of qualities by their human inventor: a limited life span for their chassis (death), the need to build a replacement body in which to transfer their programming (reproduction), and the random alteration of a tiny part of their coding during each life cycle (mutation). In the story these robots develop human-like emotions and motivations, but in reality would we even recognize a sentient machine?

    A big part of the confusion is conflation of concepts and over-extension of the term AI, using it to mean cybernetic extension of humans, sentient machines and simulated humans. But AI outstripped the human ability to manipulate information decades ago. While homo sapiens are the most capable organic information processors (that we know of), that's not what makes us human. Dema mentioned will and I think that is the key. It's not how much information you can process but what you make of it. And you can't make anything out of it without a motivation to do so. It's not just about computing power, it's about abstraction and invention. If necessity is the mother of invention, the inventor must have needs, even artificial ones.

    And if the abstractions and inventions of a sentient machine (I prefer "self-willed-construct") don't serve our needs or expectations, will our hubris blind us to the existence of artificial humanity, not if but when it arises?

  7. #57
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    For an AI to be determined as sentient, we have the definitional problem of what sentience is in the first place. If it is the most literal meaning, 'to sense', then anything that processes input as stimulus and responds to it is sentient. If it's the more philosophical meaning of 'being able to feel', then we have the problem of how you distinguish an entity with conscious feeling from an entity that simulates the input-response process but has no internal state that corresponds to something we traditionally define as sentience and cognition (see: P-zombie, The Chinese Room).

    What this means is that our definitions are missing or have elided something, or at the very least, are ill-suited to answer the question of how you ascribe consciousness and life to an entity. They may even be irrelevant to the question.

    Robots and AI organically developing emotions always seems like a far-fetched idea to me. Of what practical use would an emotion be to a computer? Human beings need them because of the way they link mental and physical processes together as a sort of codified shorthand response to situations that we've evolved over time to recognise one way or the other. An artificial intelligence would have no need for this: given enough computational power, it could make fine-grained assessments of any situation in an instant. So, beyond ascribing mental states to human beings if it had to interact with them, and replicating emotions to assuage our human feelings if it had to, I don't see why it would want to, or would suddenly develop, such a feature as emotionality.

  8. #58
    Member
    Registered: May 2004
    Location: Canuckistan GWN
    Sentience is indeed a problematic term in this context. At least as problematic as AI, since we conflate AI with sentience in popular culture, and many of the debates around the ethics and dangers of AI circle around us imagining that it might develop its own motives (Robot Overlords, and all that).

    I am using it to describe beings with a theory of mind, with self-awareness. But even those refinements are problematic. If all it means is to experience emotions then it's inadequate.

    As for emotions: we are again projecting human expectations on artificial beings. Firstly, we elevate our emotions, giving them great value which may not be deserved. We create art and literature about our feels, build monuments to sentimentality. But it's just biochemistry, the earliest form of data processing by our lizard brains.

    Why would an artificial being need the same emotions as we have? The same value judgements? They could have their own "emotions", their own background colours, informing them whether to proceed cautiously or enthusiastically, whether something is a threat or a benefit. We probably wouldn't recognize them as emotions, mostly because we believe that, as the pinnacle of creation, we are the measure of what a being is.

    Must sleep...

  9. #59
    Member
    Registered: Jun 2001
    Location: under God's grace
    I don't have much time, but I had an excellent cup of coffee so I had to write a bit.

    Quote Originally Posted by Nicker View Post
    Are hive-minds sentient?
    It could be that car traffic as a system is sentient, and the global cash streams (plural singular, erm... plingular) is reasoning about 11-dimensional hypermorality. Jeff buys a Minimoog on ebay? That payment was going to be one of its thoughts, but because Jeff cancelled his purchase, it forgot what it was thinking about.

    Are we even allowed to speculate on that?
    What do you mean "allowed"? What type of "allowed" are you referring to here?

  10. #60
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    I can take a few notes from my Philosophy of Mind days here: these days I'm a partisan of Higher Order Thought (HOT) theory, in that I don't believe just any complex system becomes "sentient" just because it's complex. And I don't think a system that's designed to have sense-like inputs matched to outputs is sentient either, and that covers both classic categories of AI: Good Old Fashioned state machine AI and Deep Learning setups like Large Language Models.

    I think a perception or "affect" has to be literally modeled as an affect for the decision-making apparatus. That is, there's a first layer where attention is put onto an affect & maybe there's a decision made about it or some orientation formed, but it doesn't become a conscious affect until a second layer represents that relationship explicitly.

    To use a simple example of a Convolutional Neural Net identifying images with labels--where it breaks the image down into features, then links the arrangement of features to activation channeling to the right label--at the end of that chain, there's going to be activation of a label, which the system can then output outright like "chair".

    But in HOT theory, the system still isn't sentient of the fact that it saw a chair. If you wanted to make that sentient, you'd have to re-represent that outcome as affects, that is, e.g., as a set of impulses to articulate the word "chair" strongly paired with a set of impulses connecting those impulses with impulses on a re-representation of the image and its features & its proprioceptive place in space, etc. You have to re-represent everything in some affective packaging.
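    To make that concrete with a toy sketch (this is just my own toy Python, nothing from the literature, and all the names are made up): the first function is the whole of what a plain classifier gives you, and the second is the kind of re-representation I mean.

    def first_order_classify(features):
        # First layer: features in, label activation out. Nothing here is
        # "about" the system itself; it's a straight input-to-output mapping.
        scores = {"chair": features.count("legs") + features.count("seat"),
                  "table": features.count("legs") + features.count("flat_top")}
        return max(scores, key=scores.get)

    def higher_order_represent(label, features):
        # Second layer: an explicit representation of the first-order episode,
        # i.e. "this system is perceiving X on the basis of Y, and is disposed
        # to say X". Only something like this counts toward sentience on HOT.
        return {
            "subject": "this_system",
            "attitude": "perceiving",
            "content": label,
            "grounds": list(features),
            "disposition": "articulate '" + label + "'",
        }

    features = ["legs", "legs", "legs", "legs", "seat"]
    label = first_order_classify(features)             # first-order: "chair" wins
    thought = higher_order_represent(label, features)  # re-represented explicitly
    print(thought)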

    The moment you're just blindly activating some state or output and not re-representing the activation on content as an affect itself, where the system doesn't get any affective representation of what's happening internally, you've dropped the sentience ball. Anyway, that's HOT theory in a nutshell, and I think it's the strongest argument for why AI systems, as they're designed right now, aren't going to be sentient no matter how sophisticated they get, because they're not designed to be sentient.

  11. #61
    Member
    Registered: Dec 2020
    Quote Originally Posted by Nicker View Post
    Are hive-minds sentient?
    Well, you could imagine a system where every person was asked to do and share calculations, and the sum total of the calculations added up to a realistic simulation of a human brain. The fact that the calculations aren't all happening in one place shouldn't have *anything* to do with whether the end result is that "something" was sentient and aware. It would just be a completely disembodied mind.

    So my view is that if one "box" can be sentient, then you could just distribute the work done by the box across a network of boxes. And when you take the broader picture, we'd realize we were being silly and anthropomorphic in thinking the box was sentient while doubting that the same interactions, just not in a box, could be.
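    To put that in a trivial sketch (my own toy example, with hypothetical numbers): the same "layer update" computed in one process, or farmed out across worker processes, comes back identical. Where the arithmetic happens tells you nothing about what the arithmetic adds up to.

    from multiprocessing import Pool

    def update_neuron(inputs):
        # Stand-in for one unit's update rule: a weighted sum pushed through a clamp.
        weights, activations = inputs
        total = sum(w * a for w, a in zip(weights, activations))
        return max(0.0, min(1.0, total))

    if __name__ == "__main__":
        # One fake "layer": ten neurons, each with its own weights over the same inputs.
        activations = [0.1 * i for i in range(10)]
        layer = [([0.05 * (i + j) for j in range(10)], activations) for i in range(10)]

        in_one_box = [update_neuron(n) for n in layer]      # all in one process
        with Pool(4) as pool:
            across_boxes = pool.map(update_neuron, layer)   # spread across workers

        assert in_one_box == across_boxes
        print(across_boxes)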

    The Chinese Room
    Yeah, i have a lot of issues with the Chinese Room argument. It falls down because of a category error, basically. The human in the room is acting like the "CPU", and it's wrong to ask whether the CPU "knows" how to do anything.

    Also, you can replace "knows Chinese" with "knows" anything else, or even just "can do" anything else, thus showing it didn't prove anything about awareness or sentience at all. For example you could state that the room cannot "play chess" because the man in the room doesn't learn how to play chess. I got ChatGPT to write up a counter-example:

    Imagine a room with a person inside, let's call him Alex. Alex does not know how to play chess; in fact, he has no understanding of the rules, strategies, or even the pieces involved. However, in the room, there is a vast collection of chess moves written in a rule book, and Alex has a set of instructions that tell him which moves to make in response to specific board positions.

    Now, someone outside the room passes chess positions and moves through a slot in the door. Alex, following the instructions, manipulates the chess pieces on the board accordingly, producing responses that are indistinguishable from those of a skilled chess player. Observers from outside the room might be convinced that Alex is a grandmaster chess player, given the quality of his moves.

    The external observers perceive that there is a competent chess player in the room, because the moves are accurate and appropriate. However, Alex has no comprehension of the game, strategy, or even the meaning of the moves; he is merely following instructions.

    But since Alex doesn't understand chess we conclude that the room cannot genuinely "play chess."
    Yet *something* in the room just whooped your ass at chess, it just wasn't Alex. Focusing on Alex's role was a complete red herring.

    But if they say "yes but you haven't proved the room is conscious!" ... that's the point. The Chinese Room argument cannot prove OR disprove how the room operates. Putting a conscious human into the role of the CPU is complete misdirection. Imagine a "neuron room" where a human carries out the operations of each neuron. We could argue that since the human is not aware of what the neurons in aggregate are doing then the collection of neurons cannot be conscious.
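    To spell the chess version out in code (a deliberately dumb sketch with a made-up three-entry "rule book"; the position strings are abbreviated FEN, but they could be any squiggles): Alex is nothing but the lookup loop, and whatever chess competence is on display lives in the table.

    RULE_BOOK = {
        # position (squiggles Alex doesn't understand)  ->  reply to slide back out
        "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b": "c7c5",  # after 1.e4
        "rnbqkbnr/pppppppp/8/8/3P4/8/PPP1PPPP/RNBQKBNR b": "g8f6",  # after 1.d4
        "rnbqkbnr/pppppppp/8/8/2P5/8/PP1PPPPP/RNBQKBNR b": "e7e5",  # after 1.c4
    }

    def alex(slip_of_paper):
        # Alex matches squiggles against the book and copies out what it says.
        # He has no idea these squiggles describe chess positions.
        return RULE_BOOK.get(slip_of_paper, "no rule for this shape")

    print(alex("rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b"))  # -> "c7c5"

    Ask whether alex() "knows chess" and the answer is obviously no; ask whether the room-plus-book plays chess and, within the limits of the book, the answer is yes. Same shape of argument, any verb you like.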
    Last edited by Cipheron; 20th Dec 2023 at 14:24.

  12. #62
    Member
    Registered: Aug 2002
    Location: Maupertuis
    Quote Originally Posted by Cipheron View Post
    But if they say "yes but you haven't proved the room is conscious!" ... that's the point. The Chinese Room argument cannot prove OR disprove how the room operates. Putting a conscious human into the role of the CPU is complete misdirection. Imagine a "neuron room" where a human carries out the operations of each neuron. We could argue that since the human is not aware of what the neurons in aggregate are doing then the collection of neurons cannot be conscious.
    The Chinese Room argument is also facile for another reason. Stick a human in a room to translate, and let their instruction set be... an English/Mandarin dictionary and grammar. They'll translate slowly, _and_ they'll understand what they translate.

  13. #63
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Quote Originally Posted by Cipheron View Post
    Yeah, i have a lot of issues with the Chinese Room argument. It falls down because of a category error, basically. The human in the room is acting like the "CPU", and it's wrong to ask whether the CPU "knows" how to do anything.
    I don't particularly agree or disagree with TCR, because I don't think it answers everything myself. But there's still something to the question it poses. So if it's not the thing performing the operations that is conscious, then what are you proposing it is, when it comes down to it, and is that machine-replicable?

    Also, you can replace "knows Chinese" with "knows" anything else, or even just "can do" anything else, thus showing it didn't prove anything about awareness or sentience at all. For example you could state that the room cannot "play chess" because the man in the room doesn't learn how to play chess.

    ...

    Yet *something* in the room just whooped your ass at chess, it just wasn't Alex. Focusing on Alex's role was a complete red herring.
    Well, let's go deeper into that. If you're talking about chess, and it's a set of instructions to be followed that have been written down, are we saying that the instructions themselves are a sign of conscious intelligence? That the actual intelligence was offloaded into the pre-prepared moves in the manual and whatever did that was the actual conscious intelligence? Because that makes sense and doesn't change anything about the experiment's conclusion.

    But if they say "yes but you haven't proved the room is conscious!" ... that's the point. The Chinese Room argument cannot prove OR disprove how the room operates. Putting a conscious human into the role of the CPU is complete misdirection. Imagine a "neuron room" where a human carries out the operations of each neuron. We could argue that since the human is not aware of what the neurons in aggregate are doing then the collection of neurons cannot be conscious.
    Let me preface by saying I'm probably not getting this completely, and I haven't formally studied philosophy or philosophy of mind in the past, so that's my fault, and feel free to clarify if I am missing something.

    I think that the last sentence you said is, in fact, the point. Neurons by themselves aren't conscious, because an additional something needs to develop - that is to say, intentionality, or affect per Dema's note on HOT. For this we'll have to chew through some theories of consciousness and agree on what consciousness is and which theory we believe works best, I think, before we can pin down whether TCR's conclusion makes sense or doesn't. Or maybe there's purely logical reasoning that can circumvent all of that which you're getting at, and I haven't quite grokked it yet.

  14. #64
    Member
    Registered: Dec 2020
    Quote Originally Posted by Sulphur View Post
    I don't particularly agree or disagree with TCR, because I don't think it answers everything myself. But there's still something to the question it poses. So if it's not the thing performing the operations that is conscious, then what are you proposing it is, when it comes down to it, and is that machine-replicable?
    My point is that to even ask if the CPU is "conscious" is a meaningless question.

    Can the "CPU" play chess? no it cannot. I can't do ANYTHING, because all the connections to do those things are held in data, not in the CPU. A CPU is a simple adding machine that does low-level computations. So pointing at the CPU and saying "is that conscious? I DON'T THINK SO" is dumb, it's like pointing at neurons and asking which one is the conscious neuron.

    It's an inherently idiotic question to even ask. It's like asking which atoms in your brain are conscious. None of them are, because that's not what atoms do. We just have this lame idea that you point at a piece of inert matter and ask whether consciousness resides in that piece of matter. But that's not how consciousness works.

    Consciousness is not a property of a specific lump of atoms, it's an emergent property of a process of interactions, in the same way that "playing chess" isn't a property of the CPU, it's an emergent property of running a program from data storage through the CPU.

    So you can point to a CPU and ask if it's "being conscious" right now, assuming that you had enough memory and time and ran a brain simulation through it. No, it's not "being conscious" it's just adding up numbers any time you look at it. But by the same exact logic, you can point to the CPU while it's running a chess program and ask if it's "playing chess" right now, and you get the same answer. No it's not "playing chess", it's just adding up numbers.
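    Here's that point as a toy sketch (my own made-up example): the "CPU" below can only add and compare numbers in named slots. Feed it one program and the adding up amounts to totalling a bill; feed it another and the very same primitive steps amount to picking a canned chess reply. Neither capability lives in the cpu() function.

    def cpu(program, mem):
        # Execute primitive ops one at a time; the CPU never sees the big picture.
        for op, a, b, dest in program:
            if op == "ADD":
                mem[dest] = mem[a] + mem[b]
            elif op == "CMP":                  # 1 if equal, else 0
                mem[dest] = 1 if mem[a] == mem[b] else 0
        return mem

    # Program 1: total up a three-item bill.
    bill = cpu([("ADD", "x", "y", "t"), ("ADD", "t", "z", "total")],
               {"x": 19, "y": 5, "z": 6})
    print(bill["total"])                       # 30 -- "doing the accounts"

    # Program 2: "answer" 1.e4 by comparing the input against a stored position code.
    chess = cpu([("CMP", "input", "e4_code", "play_sicilian")],
                {"input": 52, "e4_code": 52})
    print(chess["play_sicilian"])              # 1 -- "playing chess", same dumb CPU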
    Last edited by Cipheron; 21st Dec 2023 at 01:54.

  15. #65
    Member
    Registered: Aug 2002
    Location: Maupertuis
    Please address my point too, Cipheron.

  16. #66
    Member
    Registered: Oct 2020
    Location: Russia
    Quote Originally Posted by mxleader View Post
    AI can already beat chess masters so it has already won.
    White? Black? I am the guy with the plug.

  17. #67
    Member
    Registered: May 2004
    Location: Canuckistan GWN
    Quote Originally Posted by Qooper View Post
    What do you mean "allowed"? What type of "allowed" are you referring to here?
    I think I was musing that, as non-hive minds, we may not be qualified to decide that on behalf of another potential mind. We simply have no frame of reference. But we do know that we are quick to dismiss the humanity of other species and even of other humans. And once we have decided something, we are difficult to move. Just because we cannot conceive that a nest of scurrying ants has a meta-awareness, that nest of ants might scoff at the notion that a single, independent, giant, bipedal organism could possibly be uniquely self aware. It's preposterous!

    Which raises the question: what is it about our beings that convinces us that we are individuals? What objective evidence do we have, and how legitimate is our conclusion? If we were ourselves nodes in a hive mind, would we even know it?

    ... I don't believe just any complex system becomes "sentient" just because it's complex.
    I agree, but it does seem that self-awareness is correlated with complexity, perhaps an emergent property of it. If so, when does an increasingly complex system make that transition? Is it some sort of higher-octave complexity, producing novel cognitive overtones? Is there a highest octave, or are we like a polygon adding segments, approaching and imitating but never actually becoming a real circle? Are we just simulating self-awareness (sentience)?

    Or is sentience just a word we created to describe our particular perceptual configuration, and then elevated to appease our egos?

    Another option is theistic, that our humanity is from an external source, a character skin applied by a designer. But this is unsatisfactory as it just defers the questions: what really is sentience and how does it arise?

    Yet *something* in the room just whooped your ass at chess, it just wasn't Alex. Focusing on Alex's role was a complete red herring.
    Just a better algorithm for chess. So what is the difference between being the best at winning chess and "knowing" how to play? I know how to play chess but I am crap at it. And when I say know, I mean I am aware of a game called chess and I can cite the rules of play, but I really don't get it. There are billions of people who know chess better than me, but do I know chess better than a supercomputer, even one that can beat every human player?

    Ah, words defining other words. It's so incestuous.

  18. #68
    Member
    Registered: Dec 2020
    Quote Originally Posted by Anarchic Fox View Post
    Please address my point too, Cipheron.
    The one about the English/Mandarin dictionary? Not sure how I'm supposed to answer that. That changes Searle's argument too much, because the whole point he was making is that the symbols being manipulated were not ones the person understood.

    Putting the internal states into English changes the argument, but doesn't really say anything about whether the algorithm the ROOM is carrying out could be conscious. Like I said before, it's a complete red herring to ask about what the HUMAN knows.

    The human is taking the role of the CPU. Nobody is claiming a CPU can be conscious. That's not what the strong-AI argument is claiming. Consciousness is an emergent property of patterns of information, it's not a property of the lump of rock that is the CPU.

    Quote Originally Posted by Nicker View Post
    Just a better algorithm for chess. So what is the difference between being the best at winning chess and "knowing" how to play?
    The difference isn't the point. My point was that you can use Searle's exact argument to prove that the "room" can't "X the Y" for any verb X and any noun Y.

    So we can follow his exact logic to "prove" the room cannot "play" "chess". But, we can see in that case that *something* played chess.

    So Searle hasn't actually demonstrated how the concept of "knowing" is actually any different to that. His argument is therefore reliant on his own conclusions about what "knowing" is, and that no system can do it other than the human brain. Circular logic, basically.

    Adding a conscious human into the room and asking about what the human knows just confuses the issue, as I said before the human is just working as the CPU, and a CPU only knows the exact math operations it's doing at that precise moment. It doesn't get the big picture of what the program or computer is actually doing, whether that's calculating tax returns, playing chess, or literally anything else.
    Last edited by Cipheron; 22nd Dec 2023 at 22:50.

  19. #69
    Member
    Registered: Aug 2002
    Location: Maupertuis
    Quote Originally Posted by Cipheron View Post
    The one about the English/Mandarin dictionary? Not sure how I'm supposed to answer that. That changes Searle's argument too much, because the whole point he was making is that the symbols being manipulated were not ones the person understood.

    Putting the internal states into English changes the argument, but doesn't really say anything about whether the algorithm the ROOM is carrying out could be conscious. Like I said before, it's a complete red herring to ask about what the HUMAN knows.

    The human is taking the role of the CPU. Nobody is claiming a CPU can be conscious. That's not what the strong-AI argument is claiming. Consciousness is an emergent property of patterns of information, it's not a property of the lump of rock that is the CPU.
    For context, two decades ago I took a philosophy of mind class which included reading the original Searle paper. Said paper was annoyingly vague about what went on in the Room. I'm having a hard time finding the original right now, and I'll hunt it down if you like, but for now I'll go from memory. As I recall it describes the person in the room executing "formal rules" in order to translate. The argument falls apart on three points: (1) If the rules are a grammar and a dictionary, the human being translates and understands. That class's professor said these aren't "formal" rules, without ever defining the word "formal." (2) If the rules are computer code, then the human being never finishes translating, never even completes a thousandth of the steps needed to translate; so any entity that sits in the room and produces a translation, is not a human being. (3) A story is not an argument.
    Last edited by Anarchic Fox; 23rd Dec 2023 at 09:39.

  20. #70
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Quote Originally Posted by Anarchic Fox View Post
    For context, two decades ago I took a philosophy of mind class which included reading the original Searle paper. Said paper was annoyingly vague about what went on in the Room.
    Here are the words:

    Quote Originally Posted by Searle
    Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that 'formal' means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call "the program."
    Not the most articulately worded, but that was his 1980 version. Not grammar, not a dictionary. Just coded responses to any possible question with a specific, coded answer. Not physically feasible, of course, but it's a thought experiment. Here's the paper if you want it. The one from 1990's a bit clearer.

    Quote Originally Posted by Cipheron View Post
    My point is that to even ask if the CPU is "conscious" is a meaningless question.

    Can the "CPU" play chess? no it cannot. I can't do ANYTHING, because all the connections to do those things are held in data, not in the CPU. A CPU is a simple adding machine that does low-level computations. So pointing at the CPU and saying "is that conscious? I DON'T THINK SO" is dumb, it's like pointing at neurons and asking which one is the conscious neuron.

    It's an inherently idiotic question to even ask. It's like asking which atoms in your brain are conscious. None of them are, because that's not what atoms do. We just have this lame idea that you point at a piece of inert matter and ask whether consciousness resides in that piece of matter. But that's not how consciousness works.
    Right, so in my admittedly superficial search of the responses to TCR, yours is what's called the System response. So this ascribes the feature of consciousness to the system as a whole, where the system includes everything in the room - and that's definitely one perspective to take. But there's a problem, because even if you discount the man and just replace him with an actual CPU performing lookup operations, there's a philosophical quandary here: for theory of mind to apply, there has to be something intelligent performing dynamic, on-the-fly evaluations of context with semantic understanding, but the way TCR is constructed means it should have a pre-constructed realistic response for every possible question ever posed. Fundamentally, it boils down to a variant of the p-zombie issue from this perspective.

    Consciousness is not a property of a specific lump of atoms, it's an emergent property of a process of interactions, in the same way that "playing chess" isn't a property of the CPU, it's an emergent property of running a program from data storage through the CPU.
    Somewhat tangential: while that makes sense intuitively, I don't think we've been able to make a definitive claim that that is, in fact, how consciousness as we understand it arises. It's still a theory - the emergent one - and the thing is that proving it is a hell of a lot more difficult than posing it; at least, there's a lack of strong evidence for it (or against it, for that matter), as far as I can tell. If you've got sources that say otherwise on this, do share.
    Last edited by Sulphur; 23rd Dec 2023 at 12:26.

  21. #71
    Member
    Registered: Aug 2004
    Quote Originally Posted by Sulphur View Post
    Just coded responses to any possible question with a specific, coded answer. ...the way TCR is constructed means it should have a pre-constructed realistic response for every possible question ever posed.
    "What was the last question I asked you?" I kinda suspect that Turing's incompleteness theorem can be used to prove this isn't possible even in an abstract thought-experiment sort of way.

  22. #72
    Member
    Registered: Apr 2002
    Location: Third grave from left.
    Quote Originally Posted by Nicker View Post
    Are hive-minds sentient?
    That one is easy. You are a hive-mind of neurons (~ really crappy ants). Are you sentient?

    Usually people just stumble over what to call sentient so as to mark out their self-importance - i.e. where and on what grounds to draw the arbitrary line of separation. Is my perpetually drunk neighbor sentient? Is a dog sentient? Bird? Fish? Plants? Plate tectonics? Rocks/crystals? F'n "empty" space? Google search results (we being the ants that, through Google Inc, form a feedback loop that could cause sentience that neither party could be aware of - I am only half joking - think of memetics)?

    Most of that being un-bloody-likely for any level of reasonable "sentience" I would be willing to accept ... but not inconceivable.

    It is hard to judge sentience without being able to directly inspect it - especially without having settled on what sentience exactly is supposed to mean.

    I feel like this is one of those questions that define their answer and are therefore not a question to begin with (ie. to answer it you have to define what you are asking till the question becomes the answer).

  23. #73
    Member
    Registered: May 2004
    Location: Canuckistan GWN
    I feel like this is one of those questions that define their answer and are therefore not a question to begin with (ie. to answer it you have to define what you are asking till the question becomes the answer).
    That was kind of my point. We can't even offer operable definitions of terms like consciousness and sentience. We conflate intelligence with humanity, humanity with consciousness, emotions with consciousness, hominids with humanity. Two legs conscious - everything else NOPE! And the only reference we have is ourselves. Used to be that humans were distinguished by being the only creatures that used tools and made war. But now we are not alone in that.

    So if we decide that a machine cannot be conscious or a hive can not be conscious, no matter how complex they become, by what right do we do that? Aren't we just saying that they are not similar enough to us? We can't even point at the thing in us that makes us conscious/sentient. We just know it's there. We think.

    We are just processing information and rendering outputs. Semantic processors. Emotional processors. Biochemical processors. We feel singular but are we?


    SO... complexity. If consciousness is not just an emergent property of complexity, then where does it come from (assuming you can define what it is)?

  24. #74
    Member
    Registered: Dec 2020
    Quote Originally Posted by Pyrian View Post
    "What was the last question I asked you?" I kinda suspect that Turing's incompleteness theorem can be used to prove this isn't possible even in an abstract thought-experiment sort of way.
    That's a little confused. It's Gödel's Incompleteness Theorem. Turing proved the Halting Problem is not solvable.

    And neither of those things seems to apply to the situation you describe. The Incompleteness Theorem is about how some truths cannot be proven within a specific set of axioms. But if you add more axioms, those truths can be proven. However, the new, bigger set of axioms will always have more constructible "unprovable truths".

    The Halting Problem is about whether you can write an algorithm which will tell you whether any other algorithm will end in finite time. That might be more applicable, but you could always construct the rules of TCR so that it halts. The Halting Problem doesn't mean you can NEVER tell if a specific system will halt, you just can't universally decide this for all theoretical systems.

    As for the question, it would not be solvable if the book doesn't have state. As soon as you allow any sort of marks, bookmarks, counters or tokens to be used by the man in TCR then it's a solvable question.
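    Quick sketch of that (made-up rules, obviously nothing to do with Searle's actual batches): give the rule-follower a scratch pad and "what was the last question?" becomes a pure lookup too. No understanding required, just marks on paper.

    def make_room():
        scratch_pad = []                       # the marks / bookmarks / tokens

        def operator(question):
            # The operator follows two dumb rules; he doesn't "know" what he's saying.
            if question == "What was the last question I asked you?":
                answer = scratch_pad[-1] if scratch_pad else "You haven't asked one."
            else:
                answer = "RULE BOOK SAYS: reply with squiggle #42"
            scratch_pad.append(question)       # blindly copy the question down
            return answer

        return operator

    room = make_room()
    print(room("Do you like rice?"))                        # canned squiggle
    print(room("What was the last question I asked you?"))  # "Do you like rice?"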

    However, the main issue is that being able to write answers to questions has ZERO to do with whether or not a system is actually conscious. You can write a system that only uses basic statistical language modelling to write fake responses and make them realistic and adaptive. See ChatGPT.
    Last edited by Cipheron; 26th Dec 2023 at 17:59.

  25. #75
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Yep. If anything, TCR illustrates that we haven't quite sussed out how to test for consciousness with any sort of universally accepted theoretical model yet.

