
View Poll Results: In the future, when AI exceeds human intelligence...

Voters: 29
  • ...1) AI will bring a benevolent transcendence to humanity. (4 votes, 13.79%)
  • ...2) AI will seek to destroy humanity. (1 vote, 3.45%)
  • ...3) AI will carry out goals without regard to human consequence. (12 votes, 41.38%)
  • ...1 and 2...(Factionalized) (0 votes, 0%)
  • ...1 and 3...(Factionalized) (3 votes, 10.34%)
  • ...2 and 3...(Factionalized) (0 votes, 0%)
  • ...1, 2, and 3...(Factionalized) (2 votes, 6.90%)
  • ...AI will never exceed human intelligence. (7 votes, 24.14%)
Results 26 to 50 of 136

Thread: BotChat!

  1. #26
    Moderator and Priest
    Registered: Mar 2002
    Location: Dinosaur Ladies of the Night
    Quote Originally Posted by demagogue View Post
    Cute, but AI needs a will before it can be free. Not investing in free will is exactly one of the biggest sins of good old-fashioned AI.
    I think granting AI self-awareness and free will would be one of the worst mistakes we could ever possibly make. We have absolutely no moral obligation to create a new sentience separate from ourselves. Even if we do get to the point where we have a nonlinear-thinking AI (which we're getting damn close to), that doesn't make it a living being in and of itself. There's no reason to assume it's a distinct entity, and thus no reason to create a true consciousness based on the very broad assumption that it'd be cruel to do otherwise.

    Now, I can see plenty of advantages to creating AI that can analyze and act freely within a set of very limited parameters. But a brand new autonomous lifeform? What do we get out of that other than some semi-nihilistic sense of satisfaction?

    On top of that, considering our current lack of understanding of what consciousness actually is, or how it works, we have a much better chance of creating something like a paperclip maximizer than of creating a new equal or eventual successor.
    Last edited by Renzatic; 27th Mar 2016 at 12:08.

  2. #27
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    That's a cute troll petition. Either Poe's law at work, or a bunch of nubs unable to understand what Tay is and how it works. As a set of natural language systems designed to parse, break down, and imitate users with or without implicit filters, it barely even qualifies as an 'AI' -- indeed, the entire point was to create a chatbot. This means no interpretation of the things it says, or understanding of context: therefore, people with enough spare time can figure out how to get it to say just about anything they want.

    I find the human tendency to anthropomorphise and attribute human qualities to something like Tay far more interesting, because it exposes some deep need in people to empathise with things that aren't inherently human, compared to the relative lack of it applied to those around them who are actually human.

    I also find it interesting because it can be hilarious -- good stuff, Vae. I chuckled.

  3. #28
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    @Renz, I understand that kind of attitude; there's a lot of it in the literature, and I've read enough of it to feel like I know where it's coming from. The particular guy I'm taking inspiration from in thinking about Artificial General Intelligence talks a lot like that. But I feel like for the most part it's a dogmatic reflex from general principles that don't really have much to do with how what we call "will" actually operates as a cognitive mechanism.

    There's a lot I want to say about it. I just had a 3- or 4-hour discussion with a guy here on AI and will, so I have to contain myself. I'll try to boil something down for now and say: the way I understand AI motivation, an AI can't understand even the most basic sentence, I mean in the very concrete sense of applying a "property" to a "thing" ("Grass is green."), unless it can freely float its motivation towards that sentence, that is, why it thinks it's at all worthwhile to apply an arbitrary color band of the light spectrum it would at all want to capture under an umbrella term like "green" to a set of complex structures in the world it would at all want to capture under an umbrella term like "grass," and in a way very different from how it'd want to apply that term to, e.g., "the stoplight is green" or "that newb is still green." It can't understand any of that without understanding what's even the point of understanding it. And that IMO calls for free-floating motivation.

    To be more precise, if a bot isn't able to freely float its motivation towards orienting itself to an utterance, then I think it will have zero idea what it's actually talking about; it's just statistically combining words "in the dark." And that is a significantly riskier prospect than a bot that at least understands what it's actually saying in terms of a real-world situation. A bot combining symbols in the dark has no connection to the real world or the real-world stakes involved, and is as likely to spark a nuclear holocaust as to ask for ketchup with its fries. A bot that can feel the stakes between actions, and feel that one is a much bigger deal than another, that's the bot you want. And that takes motivation.

    I think that kind of answer gives you an idea of why I don't think explicitly engineering free-floating motivations is about any kind of semi-nihilistic sense of self-satisfaction or ego or whatever; it's just a humble design answer to a humble design problem that can't be solved any other way, and the anti-floating-motivation attitude is only keeping AI in the dark, which is the biggest risk.

    Re: bot agency, I take it for granted that bots with free-floating agency are like any willful animal. They don't need to be roaming the streets. But in the bot case, there's already a ready solution. We give them a virtual world where all their direct needs are directly met... they have all the food, land, entertainment, sex, wealth, power, etc., they could want in their virtual world.

    Quote Originally Posted by Vae View Post
    Yet, a "will" could potentially emerge under a specific set of unknown variables during the learning experience.
    I disagree for the most part (depending on what you mean by "unknown." E.g., a neural-net signal can be discrete but still impossible to put into words. You know how to get the mechanism operating, but you couldn't capture what it's doing as a knowable factoid. The bot doing it couldn't tell you what it's doing either; it just does it. I expect that kind of "unknown.") I think most, or many, of the variables that go into a will aren't all that mysterious, and whatever else you need, you'll need them one way or another. You know you need a memory and a way to manage it. You know the system needs to chunk and represent data in a usable form for action. You know it needs to register the relative expected utility of various outcomes in a situation, along with understanding the stakes inherent in the situation and how it's evolving.

    The problem is a combination of limits on processing power (since motivation trees can quickly exponentially cascade if you're not careful, a design feature more fit for massively parallel neural architecture than serial CPU architecture), and a failure of imagination where people have some dogmatic bias because they don't want to believe X could be the answer, so they look for some Y that doesn't exist because the answer has always been X.

    Edit: I'll give my favorite example of that. For over a century people have been doing behavioral testing in the Pavlov/Sherrington vein as if creatures were simple stim/response machines. Like (in the example I read) chimps were being trained to do some visual task, rewarded with a banana treat for doing it, as if the researchers were actually testing the chimp's visual cognition. The "X" that apparently never occurred to them was the possibility that, from the chimp's perspective, this had little to do with visual cognition and a lot more to do with a game it was playing, in which it was picking a strategy that would maximize its banana winnings. But when you understand the situation like that, the pieces start falling into place.

    Motivation and volition are the parts of cognition and AI that seem to be the most important and the most neglected. I'd bet more on an AI that could only say 5 to 7 words, but say what it really wanted to with those words, than on a bot with a 100,000-word vocabulary that couldn't distinguish a single word from another as far as its reason for saying it was concerned.

    Edit: I thought of an even stronger point. I think the rules of grammar can't just be shoved into a bot like a template (a key part of my "fuck Chomsky" worldview); it has to learn them the way actual children learn them, i.e., they game the "flashcard game with mommy" scenario to build up a set of on-the-fly schemes to "win" that game as mommy ramps up the complexity. I don't think a bot could properly internalize a single rule of grammar without free motivation to game the system/make mommy happy squeee *clap* *clap* (^_^), and Chomsky's pernicious, evil influence on everything he touches is part of the fear-mongering that's made it so hard to sell that point. That, and IBM suits probably don't want to see their main task being babysitting a whiny sniveling bot that wants to be held and connect with them on an emotional level, which is at the foundation of first-language learning.
    Last edited by demagogue; 27th Mar 2016 at 04:57.

  4. #29
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Quote Originally Posted by demagogue View Post
    Edit: I thought of an even stronger point. I think the rules of grammar can't be just shoved into a bot like a template (a key part of my "fuck Chomsky" worldview), but it has to learn them like actual children learn them, i.e., they game the "flashcard game with mommy" scenario to build up a set of on-the-fly schemes to "win" that game, as mommy ramps up the difficulty. I don't think a bot could properly internalize a single rule of grammar without free motivation to game the system/make mommy happy squeee *clap* *clap* (^_^), and Chomsky's pernicious evil influence on everything he touches is part of the fear-mongering that's made it so hard to sell that point.
    I think that was Clarke's vision for HAL in 2001: A Space Odyssey, if I recall correctly. Neural nets are designed the way they are (in replicating the brain) to be able to execute natural learning algorithms, for which input like this would have been a prerequisite.

  5. #30
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    That image of HAL, the Daisy-singing version, is really interesting. That was still back in the thick of the behaviorist Eliza and SHRDLU days (everything was pure stim/response functions), so I think only sci-fi writers were thinking that way.

    Neural nets as I learned them are still glorified functions, good for linking really messy analog data sets to an output: like, if you converted the pixel info of a mugshot into a big matrix, it'd kick out a name, or male or female, or, more realistically, a confidence level for different answers. But, sticking to my mantra, by themselves they're still missing the volition part. The math doesn't care why it should want to call this matrix set male vs. female; it's just really good at doing it if it's directed to.
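
    To make that concrete, here's a minimal sketch in Python/numpy of what such a "glorified function" amounts to. The 32x32 "mugshot", the layer sizes, and the two labels are invented for illustration, and the weights are untrained random numbers:

    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        z = z - z.max()                  # numerical stability
        e = np.exp(z)
        return e / e.sum()

    # Pretend "mugshot": a 32x32 grayscale image flattened into a 1024-vector.
    pixels = rng.random((32, 32)).ravel()

    # One hidden layer; the whole net is literally just two matrix multiplies.
    W1, b1 = rng.normal(size=(1024, 64)) * 0.01, np.zeros(64)
    W2, b2 = rng.normal(size=(64, 2)) * 0.01, np.zeros(2)

    hidden = np.tanh(pixels @ W1 + b1)
    confidence = softmax(hidden @ W2 + b2)   # e.g. [P("male"), P("female")]

    print(dict(zip(["male", "female"], confidence.round(3))))

    It kicks out confidence levels either way; nothing in there cares why it should want to.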

    But as it turns out, I think the stuff of motivation--satisfaction, utility, risk, expectation/fear/anticipation, resignation, initiative, etc.--is also messy analog data, so neural nets are a necessary tool to do some of the work that needs to be done with it. So... yeah, I'd buy that.

    BTW, I think this stuff is the really important part of what it is to be human, the spiritual side and everything, so that's why I go off on rants about it. I could talk all day about it, and I think people would get more meaning out of figuring this stuff out than talking about what most people think is most important to human life, the stuff people talk incessantly about to no worthwhile end.

  6. #31
    Member
    Registered: Apr 2002
    Location: Third grave from left.
    Quote Originally Posted by Sulphur View Post
    ... Neural nets are designed the way they are (in replicating the brain) to be able to execute natural learning algorithms, for which input like this would have been a prerequisite.
    Usually, what is meant by artificial neural nets is a glorified polynomial fitness function (with mathematical backing and a super convenient back-propagation solution *) that has fuck-all to do with brains and neurons. ANNs were VERY LOOSELY INSPIRED by real neural nets, but are not even comparable to real NNs.

    *) which makes it far superior to other kinds of NNs for certain problem groups, and also completely unsuitable for replicating the brain.
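
    A minimal sketch of that kind of fitting in Python/numpy. The sine "data", the cubic model, and the learning rate are invented for illustration; the gradient step here is the same machinery that back-propagation applies to deeper nets:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(-1, 1, 50)
    y = np.sin(3 * x) + 0.1 * rng.normal(size=x.size)   # noisy "data" to fit

    # Model: a cubic polynomial, y ~ a + b*x + c*x^2 + d*x^3
    X = np.stack([x**0, x**1, x**2, x**3], axis=1)
    coeffs = np.zeros(4)

    lr = 0.1
    for _ in range(2000):
        pred = X @ coeffs
        grad = 2 * X.T @ (pred - y) / x.size   # gradient of the mean squared error
        coeffs -= lr * grad                    # the "learning" step

    print("fitted coefficients:", coeffs.round(3))

    It fits a curve very well; it does not think about anything.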

  7. #32
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Yeah, poor wording there on my part. No one said we're trying to actually replicate the entire human brain... after all, we still don't understand how some parts of the brain work today, let alone how to recreate them. As a method for simple algorithmic processes that can absorb and sort data, ANNs are fine. Basic extrapolation will allow a system like that to interpret different sets of input given enough time, so while there won't be a subsurface motive for an AI to learn stuff, as such, it will be able to absorb information and learn from it if directed to, sort of like Clarke envisioned.

  8. #33
    Member
    Registered: Sep 2001
    Location: The other Derry
    I remember going to science museums in the early-to-mid 1980s. Teenagers would sit at terminals running Eliza with the famous Doctor script and try to sext with it. It wasn't long before the Doctor script was updated to ignore dirty words, but then it just turned into a game of tricking it into saying vulgar or offensive things for the lulz.

    It's sad to see that despite all of the advances in computing power and all the money and time spent on AI research, we're still just creating chat bots for sexting and to manipulate for the lulz.

    And FWIW, I think volition and motivation are just matters of programming, whether in computers or humans.
    Last edited by heywood; 27th Mar 2016 at 09:17.

  9. #34
    Moderator and Priest
    Registered: Mar 2002
    Location: Dinosaur Ladies of the Night
    Quote Originally Posted by demagogue View Post
    @Renz, I understand that kind of attitude; there's a lot of it in the literature, and I've read enough of it to feel like I know where it's coming from. The particular guy I'm taking inspiration from in thinking about Artificial General Intelligence talks a lot like that. But I feel like for the most part it's a dogmatic reflex from general principles that don't really have much to do with how what we call "will" actually operates as a cognitive mechanism.
    I'd think of it like this: a machine can be capable of free thought, association, and all the other concepts we associate with higher intelligence without having to be even remotely self-aware. An AI should always be considered an extension of human will and intellect, not an individual entity we've created to work alongside us. It should always be deterministically bound and tied.

  10. #35
    Member
    Registered: Aug 2004
    I'm not convinced of any of that. I don't think you can rival human creativity without indeterminism ("fake" indeterminism will do), and I don't think an AI can rival human understanding of our complex world without figuring out that it itself exists.

  11. #36
    Moderator and Priest
    Registered: Mar 2002
    Location: Dinosaur Ladies of the Night
    Think of it less as me saying we should make perpetually limited, stupid AI, and more that we should create brilliant AI that's bound within a set of defined parameters that we designate for it. It'll be an incredibly smart machine with absolutely no self-awareness that only acts and thinks on what we tell it to act and think on.

    Like we give an AI all the data it could possibly need to learn about human behavior/wants/needs, the state and environment of the world, and all our various forms of government. Anything it'll need to make an informed conclusion on any subject we bring up that'll work out to our own benefit. Like if we say "AI, what would be the most logical course to solve the problem of world hunger", it'll come up with a novel idea because it can think, learn, and readapt its thought processes when fed new data, but it doesn't have a will of its own. It can't act by itself. It can only do what we tell it to do, and has no desire to do anything beyond that.

    The Star Trek computer would probably be the best example of this. It's an intellect designed to assist and enhance our own, but it doesn't have any form of autonomy.

  12. #37
    Member
    Registered: Aug 2004
    ...we should create brilliant AI that's bound within a set of defined parameters that we designate for it.
    To be useful, it needs to be able to figure out things that weren't programmed directly into it. The better it's able to do that, the better it's going to be at figuring out ways around the constraints we set on it.

    Like we give an AI all the data it could possibly need to learn about human behavior/wants/needs, the state and environment of the world, and all our various forms of government.
    If the AI cannot figure out its own identity from this comprehensive dataset, then it's not very smart. If it is very smart, then you've already given it self-awareness.

    Like if we say "AI, what would be the most logical course to solve the problem of world hunger", it'll come up with a novel idea because it can think, learn, and readapt its thought processes when fed new data, but it doesn't have a will of its own.
    You just gave it one. Your sentences contradict each other. Once you give something a goal and have it figure out a way to that goal, it has a will of its own. That's what a will is. Now, typically you would cut an AI like this off from having any physical outlet; it just gives you answers (perhaps "cull a portion of the excess population and feed its meat to the remainder") rather than acting on them. But can you guarantee that a superior intellect whose only goal is to end world hunger, and is perfectly aware that its handlers won't like its ideas, won't hack its restraints and launch the world's nukes? GOAL ACCOMPLISHED. END OF LINE

    It can only do what we tell it to do, and has no desire to do anything beyond that.
    Seriously, have you ever seen or read any science fiction about a rogue AI? If it's given an origin at all, it's that it was created with a given goal in mind, and decided on a route to that goal that its creators don't like.

  13. #38
    Moderator and Priest
    Registered: Mar 2002
    Location: Dinosaur Ladies of the Night
    Quote Originally Posted by Pyrian View Post
    If the AI cannot figure out its own identity from this comprehensive dataset, then it's not very smart. If it is very smart, then you've already given it self-awareness.
    Not necessarily. While saying so might seem contradictory, total self-awareness doesn't necessarily have to arise from raw analytical intelligence. It's very possible to have one without the other, and that (in my opinion) should be our primary goal concerning AI.

    It seems strange to say so, because it contradicts the only model of intelligence we have: us. Our intelligence has arisen in part due to our own self awareness. But keep in mind that the evolution of AI will be considerably different than our own eventual rise to sentience. For one thing, it isn't being built upon a Darwinian style model of evolution. There is no survival of the fittest among AI, no competing for resources among other species for billions of years, no fight or flight instincts, no sex drive, no need to propagate, no emotions. Our higher-level brain functions, our logic, reasoning, and understanding of self, are hardly all we are. While they're our main defining features, and easily our most noticeable, there are a lot of primitive instincts and functions that have been left in the basement for millions of years that our higher functions are stacked on top of.

    AI won't have these. It'll be built upon a model consisting solely of logic and extrapolation from data. It doesn't require knowing what it is or its place in the world. It merely looks at the world and opines on it based upon our input. There's no reason to assume that self awareness is a logical step in the evolution of computer intelligence.

    ...nor do we have any reason to build it in ourselves.

    edit: let's use Google's recent AI victories in Go as an example.

    Now an even smarter AI could come to the logical conclusion that if it wants to be the greatest Go champion in the world, it could not only get good at the game, but also kill its competitors. The thing is, that's a very human conclusion. We have millions of years of beating the shit out of other animals programmed into us, so violent competition like that comes as a natural part of our thought processes. It's our self awareness, and the sense of empathy that stems from it, that keeps us in check. Our raw intelligence is built upon logic, primitive instincts, and everything in between, kept in balance by various other systems inherent in our sentience and deeper, more lizardy parts.

    An AI wouldn't ever have that frame of reference unless we purposefully set it ourselves. It's merely concerned with the rules of the game, and the end goal of winning by said rules. Even if we gave it the option to learn about killing as a concept, why would we assume it'd take that route if its major concern is still primarily the rules of the game? Murder is incongruent with winning a game of Go in the mind of an AI.
    Last edited by Renzatic; 27th Mar 2016 at 17:34.

  14. #39
    Member
    Registered: Feb 2001
    Location: Somewhere
    I like to think of the classic A.C. Clarke quote about magic vs. science, can't remember the exact wording: that eventually AI will become so close to the human thinking process that one will be unable to tell the difference.

  15. #40
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Yeah, Clarke was talking about advances in technology with that one... with an AI, though, I'd say this would be a little different. The most groundbreaking thing for an AI would be whichever one comes across as the most human to us, so in fact the more ordinary an AI seems, the better for it. Which is kind of magical, actually, in a way.

    Quote Originally Posted by Renzatic View Post
    edit: let's use Google's recent AI victories in Go as an example.

    Now an even smarter AI could come to the logical conclusion that if it wants to be the greatest Go champion in the world, it could not only get good at the game, but also kill its competitors. The thing is, that's a very human conclusion. We have millions of years of beating the shit out of other animals programmed into us, so violent competition like that comes as a natural part of our thought processes. It's our self awareness, and the sense of empathy that stems from it, that keeps us in check. Our raw intelligence is built upon logic, primitive instincts, and everything in between, kept in balance by various other systems inherent in our sentience and deeper, more lizardy parts.

    An AI wouldn't ever have that frame of reference unless we purposefully set it ourselves. It's merely concerned with the rules of the game, and the end goal of winning by said rules. Even if we gave it the option to learn about killing as a concept, why would we assume it'd take that route if its major concern is still primarily the rules of the game? Murder is incongruent with winning a game of Go in the mind of an AI.
    Hmm. Not quite. The way this would work for a machine that's able to infer from an open and pokeable data set, basically the sum total of our knowledge, would be to trawl the information available and determine the parameters that would make it better at said game. It would then need to assign a weight to each parameter, with those having a higher probability of resulting in success getting the higher weight. Sorting that list by the highest-weighted parameters would then give it goals to execute and outcomes to weigh as appropriate. Murder may in fact be one of them, but from a purely mathematical standpoint, it would be an inefficient way of going about it.

    It's not that this wouldn't occur to a machine naturally; it would, but even without so much as Asimov's Laws of Robotics, the machine would in any case calculate that particular avenue as sub-optimal compared to just, say, taking over Amazon's compute farms.
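
    A toy rendering of that weighting idea in Python. Every avenue and number below is invented purely for illustration, not a claim about how a real system would score things:

    candidates = {
        # avenue: (estimated probability of improving win rate, relative cost/risk)
        "run more self-play games":        (0.90, 1.0),
        "take over more compute capacity": (0.80, 3.0),
        "study opponents' past games":     (0.60, 1.5),
        "kill the competitors":            (0.05, 1000.0),  # barely helps, enormous cost
    }

    def score(prob, cost):
        # Higher probability of success, discounted by cost/risk.
        return prob / cost

    ranked = sorted(candidates.items(), key=lambda kv: score(*kv[1]), reverse=True)
    for avenue, (p, c) in ranked:
        print(f"{avenue:32s} score={score(p, c):.4f}")

    Sorted this way, murder lands at the bottom of the list on sheer inefficiency, which is the point.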

  16. #41
    Member
    Registered: Jun 2004
    @Renz - I'm still confused about why you're so vehemently against a self-aware AI. I agree with you that it may not naturally "evolve" into one, but why should we be against it, aside from the risk of it getting out of hand?

    No, we don't have an obligation to do it, but we don't have an obligation to do a lot of things we do. Heck, even our own species' survival is ultimately pointless when you think of it in the grand scheme of things. What makes us any different from the pixels in a game of Life (religion aside)?

    So I say let's go and try it. I agree it's risky, since we don't even understand our own consciousness, BUT developing a conscious AI would be one way of actually grasping what consciousness is. Or, at the least, a very valuable psychological/philosophical tool.

    Assured robotic destruction aside...

    Quote Originally Posted by Renzatic View Post
    But keep in mind that the evolution of AI will be considerably different than our own eventual rise to sentience. For one thing, it isn't being built upon a Darwinian style model of evolution. There is no survival of the fittest among AI, no competing for resources among other species for billions of years, no fight or flight instincts, no sex drive, no need to propagate, no emotions.
    IIRC, the first chess bot that beat the grandmaster learned by repeatedly playing against itself. So "virtual evolution" could very well be how AI will learn. The Tay fiasco and the whole concept of machine learning are already grasping at the roots of evolution. Sadly, it went extinct prematurely due to a comet crash labeled Microsoft's PR team.

    There's no reason to assume that self awareness is a logical step in the evolution of computer intelligence.
    I really like your point here, and you're right: our understanding of what AI will be is entirely limited by what our intelligence is, but it could very well evolve in a way completely foreign to us; we may not even be able to grasp its "logic" fully, or even recognize it as living/intelligent.

    Quote Originally Posted by demagogue View Post

    Neural nets as I learned them are still glorified functions, good for linking really messy analog data sets to an output, like if you converted the pixel info of a mugshot into a big matrix, it'd kick out a name, or male or female, or more like a confidence level for different answers. But, sticking to my mantra, by themselves they're still missing the volition part. The math doesn't care why it should want to call this matrix set a male vs female, only it's really good at doing it if it's directed to.
    True, and that is their current state, but nothing prevents them from growing almost infinitely in size to account for amazing complexity. If they are entirely self-adjusting and can grow new connections, who knows if somewhere in this messy cobweb of virtual neurons a thought, a will, could not spontaneously arise?

    Didn't life really start by a lucky accident in a purely scientific theory?

  17. #42
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Quote Originally Posted by Yakoob View Post
    True, and that is their current state, but nothing prevents them from growing almost infinitely in size to account for amazing complexity. If they are entirely self-adjusting and can grow new connections, who knows if somewhere in this messy cobweb of virtual neurons a thought, a will, could not spontaneously arise?

    Didn't life really start by a lucky accident in a purely scientific theory?
    Which theory is that?

    The current state of AIs is, as demagogue and zombe stated, glorified functions. They can grow to amazing proportions if they're fed enough data, but the programming currently only allows them to get better at one thing: whatever they were designed to do. Keep in mind this self-adjustment accounts for better outcomes given a dataset; fundamentally, the program itself doesn't change, it just optimises based on input. The more input to work on, the better. This doesn't give the program any additional complexity; it just makes it more effective at solving a certain set of math problems.

    A program like this will not, and cannot, grow beyond its initial purpose. It's not made to be able to reassess its goals and reprogram itself, which would be a fundamental requirement for anything resembling machine 'consciousness'. If one day we did make a program that could generate its own goals by interpreting any set of data and use that to change its own algorithms, and ran it on something resembling a decent approximation of a real-life neural network, that would be the event that sets us up for the technological singularity, for all we know.

  18. #43
    Member
    Registered: Feb 2002
    Location: In the flesh.
    Quote Originally Posted by Sulphur View Post
    Which theory is that?
    A program like this will not, and cannot, grow beyond its initial purpose. It's not made to be able to reassess its goals and reprogram itself, which would be a fundamental requirement for anything resembling machine 'consciousness'. If one day we did make a program that could generate its own goals by interpreting any set of data and use that to change its own algorithms, and ran it on something resembling a decent approximation of a real-life neural network, that would be the event that sets us up for the technological singularity, for all we know.
    This. All else is just a complex tool driving toward a conclusion. What we are is not simply intelligence. Awareness is much more complex. We are a singular perspective. I know how simple that sounds. It's not. We could create a duplicate of ourselves and still not create ourselves. Singular perspective is non-transferable. The free will which many deny we have is still lacking. This ability to reprogram based on need still lacks an initial impetus. That initial impetus transcends even survival. I'm not sure you can program a goal we ourselves don't understand.

    I know you think it is group-based: the need to fit into a larger society for survival. I've seen this stated as evolutionary logic. It isn't for survival. Individually, that goes against self and seems to support group behavior. That is true, but it still doesn't hold up logically when ultimate entropy means any logic is equal to no logic. The group as a whole dies in the end. How would a machine understand the present and apply importance to it when it runs on conclusion-based logic?

    Present existence. We live for that. We find import in that. How do you impart that to conclusion based systems?

  19. #44
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    True.

    Also, yow. I hope I didn't kill the conversation. It's an interesting topic, to say the least.

  20. #45
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    There's a lot I want to add, but I was finding it hard to boil it down into something that's not a wall-post.

    Renz's position (as I read it) is really close to that of a guy called Ben Goertzel, who is making a great AI system except, IMO, for the goal system, on exactly this issue. I've been writing a lot of responses to exactly that position, so I have a lot to say about it.

    And then I have to back up and explain that I did philosophy of mind in university, the cognitive science/AI end of it. My big takeaway from that experience, including what I wrote my thesis on, was the relationship between motivation and understanding language/culture. To me, understanding is driven by motivation. An agent can only "understand" a text as far as it cares about what the text is talking about; e.g., the text "stay off the grass" only means something if the person/bot recognizes they're going to get in trouble if they step on the grass. That means, if the goal system simply shoves in the goal by fiat, "don't step on the grass when you see that sign," it's not going to understand why the sign says that, or what it actually means to it.

    The sign doesn't give the bot any reasons to do anything, because the bot doesn't recognize reasons, because reasons only exist for a bot that can consider different reasons speaking for different actions, which is what motivation *is*. And that's the part Goertzel has taken away (and it seems to be what Renz is saying too), and then Goertzel wonders why he can't get his bot to understand even simple sentences.

    That might capture my basic position. But actually this is a rabbit-hole problem, and this is just the tip of the, uh ... hole. The contribution of the sign to giving a bot reasons to walk on the grass or not, and the bot being able to recognize those reasons, points to a system that IMO is involved in almost every aspect of what language is doing. It's giving reasons for action left and right, and even reasons for its own grammatical construction, so even things like grammar need that reason-recognizing system. And open reason-recognition is what we mean by an open goal or motivation system, which the explicit goal system throws out from the get-go.

    Well, I have to add a bit more. A "reason" isn't what stims a bot to a direct response, because stim/response is not understanding. A "reason" means the bot considers it as giving various volitional impulses towards taking or not taking an action, and other impulses to consider other reasons with their own volitional impulses. It has to "feel" the pull of the reasons towards action for it to be a real reason.

    Incidentally, giving a bot a conscious experience--like the feel of the pull of a reason, which is a key part in my system--isn't really as mysterious as people say, IMO. It just means the systems planning bot action are given an analog-like data set that's chunked towards certain manipulation purposes that their motivation acts on. No more or less, really. It's only a mystery if you come in with a Pavlov/Sherrington brand of stim/response, where the bot doesn't itself consider what it wants to do with the data set, but the programmer just stims the action by fiat. Then they think it's some big mystery why the bot doesn't really "recognize" data, but it's because you didn't give the bot any way to consider the data set itself. You took the consideration of experience away from it and just stim'd the action directly.

    Now I'd have to go another level down to explain why motivational planning is different from stim/response, since in both cases we're talking about functions in algorithms... Ugh, that's another rabbit hole. The first thing we'd have to do to wiggle into that hole is get clear about what people mean by "free will". What free will definitely is not is the ability to have done something differently in the past, which is an incoherent position; but anyway it doesn't matter, because that's not what most people who say they believe that really mean by it, if you push them on the details. It's only what they think it means.* If you push it, it still means that action is absolutely caused, X always causes Y. The difference between an X always causing a Y with "free will" and an X always causing a Y like a billiard ball hitting another (no free will) isn't that the reaction could have happened otherwise; it's that in the first case "X" is "me"--I'm the one absolutely determining my action by sheer force of will and not some outside alien force--and in the latter case "X" isn't anything that recognizes itself as "me" or as acting for its "own reasons" rather than "outside reasons": billiard balls roll into each other because they were hit, not because they "want" to.

    So then you have to get into personal identity. Free will is decision-making that is run through all the systems a bot needs to recognize the features of personality and self-recognition and, most importantly, to freely consider different courses of action and the reasons that apply to them, and to select the one that it feels, for its own reasons, suits it best. There's a way you can code for that in AI. It's not what most AI systems want to do. To do this right, I'd now go back and talk about what this has to do with understanding the meaning in language, but this is enough for now.

    Edit: Oh, and I haven't even gotten to the counterargument part. Ben & Renz's positions seem to be about bad bot behavior, so then we should get into motivation for criminal action and criminology. Criminals act on incentives to do bad things, thinking they'll improve their situation, but even then it's usually because they misunderstand their own interests. OR they lose touch with their interests and are stim'd by pure emotion that skirts reason. Open consideration of the reasons for doing X is what would have *stopped* them from the criminal action. These are things we can check for in AI. For one thing, you don't have to stim a response by fiat for the action you want. One easy thing to do, e.g., is to just plunge a bot into misery contemplating a certain course of action, so that it just can't bring itself to do it. Then it'd at least understand why it won't do it. I could go on, but there's another rabbit-hole topic there. The problem is that all of these issues are both really broad, with lots of topics to talk about, and really deep, each one requiring a lot of explanation to make any headway. And now here's my wall (-_-)"

    *Footnote on this [edit: added after Sulphur's next post, so the "last paragraph" he means is the one above. Sorry!] People say "free will should mean I could have chosen otherwise in that situation." What they really mean is: if I had thought about it differently and given more weight to this or that consideration (that means something different to me), or if I had simply acted on this or that other impulse at this or that moment, then my behavior would have been different. That's perfectly compatible with a causally closed universe, and it's a real thing that humans can do that billiard balls or snails can't. But what they can't mean, because it's incoherent, is that the person absolutely decides X by force of will, it happens in the world, you record it, then you run back time, rewinding the video and then replaying it... If someone expects to rewind a video and replay it (which is what replaying time really means) and have it show a different scene, then they don't understand how time or will works. That's temporal and behavioral chaos, where people make two decisions branching into two realities running on top of each other at the same time. IMO that's not what most people really mean when they use that argument for "free will," even though that's what the argument they're using literally says. So you have to translate it into what their intuition means, which is that, put the person in similar circumstances with an opportunity to think about it a little differently, and they are perfectly capable of acting in a different way according to that different way of thinking, understanding that at the end of it they make an absolute choice for X or Y, because that's what "will" means. That's what free will is, and that's how you give it to a bot.
    Last edited by demagogue; 31st Mar 2016 at 02:38.

  21. #46
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Yup, that is certainly a wall. Regarding the paragraph on the counterargument to Ben & Renz's positions, though, dema.

    The motivation for criminal action assumes a human personality attributed to the bot. That is unnecessary, IMO. We can't really ascribe personality and emotional context to an AI yet, and attempting to simulate it as well opens up a deeper set of rabbit holes. Even if we tried at this point, it'd just be a set of flags in the program logic. To wit, an incredibly simple emotional state rule: 'X makes me happy (assuming X is the set of factors increasing physical/emotional integrity). I should try to attain more X/maintain X to allow for %stability.' 'Y makes me unhappy (assuming Y is the set of factors that destroy physical/emotional integrity). I should try to minimise Y to maintain emotional stability.' 'Emotional/physical requirements are calculated by %stability, which is X/Y, and it should not fall below 1.0.' 'Z falls into a state between X and Y and hence is uncertainty. Further data is required to ascertain the probability of where this will fall, but for now it does not impede overall stability. If Z is greater than X or Y, however, then priority goes to categorising and quantifying the unknown factors.'
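
    A literal toy transcription of that rule in Python, mostly to show how thin it is as 'emotion'; the factor names and weights are invented for illustration:

    def stability_check(positive, negative, uncertain):
        X = sum(positive.values())      # factors supporting physical/emotional integrity
        Y = sum(negative.values())      # factors degrading integrity
        Z = sum(uncertain.values())     # factors not yet categorised

        stability = X / Y if Y else float("inf")
        if Z > X or Z > Y:
            return "categorise and quantify unknowns first", stability
        if stability < 1.0:
            return "minimise Y / seek more X", stability
        return "maintain current state", stability

    # Hypothetical inputs, purely for illustration.
    action, s = stability_check(
        positive={"power supply stable": 0.6, "operator approval": 0.3},
        negative={"disk nearly full": 0.4},
        uncertain={"unrecognised process": 0.2},
    )
    print(action, round(s, 2))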

    Obviously, we would then end up with a system that never does anything but solve for every issue in the universe that could potentially destroy it, so it needs better health/sanity checks. But anyway: this simulates motivation, but it does not quite quantify what emotion is, what it feels like, or even whether the AI can ever relate to it, which is the fundamental problem. And even then, what would the utility of simulating it be? Attempting to simulate a machine 'morality', so to speak? There would be easier ways to do that (cf. Asimov), and attempting to give emotions to an AI (misery, happiness, etc.) is going to be difficult to achieve when we're creatures who haven't quite pinned down our own behaviours and motivations as easily approximated mathematical certainties yet.

  22. #47
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    "Motivation for criminal action" I think only requires enough human perspective to understand what a "crime" is... like they have to understand the knife-y thing in their hand is a solid object that a plunging action with their hand holding it into the solid gut of another person will destroy the agency of that person, and that's an irreparable and total loss of their contribution to the world. All of those things have to be from a human perspective, or an agent (bot or whatever) won't understand the "crime" part about it. If the bot can't even understand the knife and gut are consistent solid objects that react in physical ways with each other, they can't even get to the part where they understand a knife is going into a gut, much less what that causes, much less the moral consequences.

    Can we ascribe feeling to an AI? This is flipping my way of thinking around. No, of course not right now, exactly because current AI isn't doing it. You have to specifically code for these things. We can't ascribe it to AI until we code for it, which is what I'd call for.

    I disagree with whoever was saying above that this kind of feeling or understanding can come for free from general learning mechanisms, like you just hook an AI up to the web, let it run, and it "gets it." I think perspective things have to be specifically coded for. I agree they can't be ascribed to current-gen AI, but that's because they haven't been coded for. Do the coding, and you can. And this way of framing the problem tells you how to do the coding, IMO.

    Take a state rule like "X makes me happy", where happy is tied to expected utility. It's good you picked a really clear example like this.

    I'll use the example of what makes me happiest in the universe, which is to push my fingers into a sheep's wool. (Not kidding. The feeling of squeezing your fingers into sheep fluff is off the fucking hook.) So my system is that feelings are represented at the lowest perceivable atomic level, which is sub-symbolic, waaaay before you can put it into words. You need to represent the whole swirling machine of different feelings. So you first have to have the visual and proprioceptive data that you're in a field, and there's a sheep object within reach. Then you have running through your imagination the satisfaction of squeezing your fingers in the wool, which is primed from past experience, and that triggers the pangs of volition of your hand wanting to reach out to the sheep wool in the sequence of actions it takes to do that.

    None of this has reached the level of the language it takes to say "that makes me happy," and other animals share it, like our monkeys that are happy throwing rocks at trees. To get to the language part, you need a whole host of volitional urges that were drilled into you over a long period of language learning with your parents, in school, and in social experience visiting grandparents on the farm and playing with the animals. Then the urge is to utter what, from the system's perspective, are the arbitrary syllables to say "X makes me happy," which doesn't mean anything by itself. To the agent it only means something in terms of the urges to utter each word following the different conventions in language we have learned. E.g., to translate the pleasing urges we feel from imagining pleasing things, we have the convention/urge to use the word "happy." For the situations where we can receive such urges, we use the convention "makes me." And for the source of the urges, we use the convention of a noun subject, the X... Then grammar gives us a conventional way (a set of instructions) to put those elements together, SVO(NP), "That makes me happy," and now we get to your starting point, a state where a bot can tell itself "X makes me happy." It's what you end up with after decades of processing and learning, not what you start off with.

    Edit: Or, to put it another way, there's never a single monolithic "state" register for "I'm happy." There is always this swirling universe of different feelings and urges, and the agent is able to express, via different conventions it's learned over its life, that "I'm happy" in language form, and it's committed to believing what it just said. We like to treat that like a monolithic state, but it's just the wrapping for all this swirling stream of consciousness that the agent has picked out for itself.
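
    A cartoon of that pipeline in Python; the signal names, thresholds, and conventions are invented for illustration, and it obviously skips the decades of learning that would actually ground them:

    # A swirl of sub-symbolic signals, nowhere near language yet.
    signals = {
        "sheep_within_reach": 0.9,        # visual / proprioceptive data
        "primed_wool_satisfaction": 0.8,  # imagined pleasure primed from past experience
        "urge_to_reach_out": 0.7,         # volitional pull toward the action
    }

    def express(signals):
        # Convention 0: only utter anything if some urge is strong enough.
        if max(signals.values()) < 0.5:
            return "(no urge strong enough to put into words)"
        # Convention 1: a dominant pleasing urge gets wrapped in the word "happy".
        feeling = "happy" if signals["primed_wool_satisfaction"] > 0.5 else "uneasy"
        # Convention 2: the source of the urge becomes the noun subject.
        subject = "squeezing the sheep's wool"
        # Convention 3 (grammar): the learned S-V-O template assembles the sentence.
        return f"{subject.capitalize()} makes me {feeling}."

    print(express(signals))   # the sentence is the end product, not the starting state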
    Last edited by demagogue; 31st Mar 2016 at 03:17.

  23. #48
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Phew. Been a while since your post, but it's been at the back of my mind. Just didn't have the time to parse it and respond.

    Anyway, yep, I see the context behind your approach to it. In my uber-simple state example, the register of 'happy' itself is tied to quantifiable state descriptions (physical and emotional integrity checks) and to the system's over-arching directives, so it's not an organic, self-willed process.

    In contrast, you're looking at a volitional utterance of 'I am happy' as an organic outcome of starting from a base set of sensory data that the system threads together and analyses to inform its 'personality'. Now, what's measured as physical and emotional integrity in my example is where the learning-process input you speak of may be required, if we're looking at some sort of organic developmental faculties.

    But I guess my issue is that the system cannot actually do anything with sensory data that parses as 'feeling' unless we specify how the system should feel based on all of that various data: visual, audio, proprioceptive, the whole lot. I think it's clear that an organic response of, say, feeling wool as soft and 'wanting' to run your hand through it requires an emotional trigger: something positive attributed to it that elicits a positive 'stroke'*, say, from performing the action of feeling the wool, even if there's no net positive or negative affect to it. And that sort of special-purpose programming is going to have to be exhaustive for the system to work.

    For instance: why do some people have that wiring, anyway? Perhaps the material reminds them of warmth and comfort from huddling under a blanket during winter as a kid, and that experience has a general affect on how they perceive it. That would not be a scenario a machine could relate to, but it's certainly one that could be programmed in. However, having done that, can you generalise it enough for the system to learn this sort of thing and apply it broadly, which also means describing 'comfort' and 'security' to it? That is the question. And how do you describe those again, and so on, and deeper and deeper the rabbit hole goes.



    *Apologies for applying transactional analysis and psychology to the example; but I think it's useful to have some sort of psychological structure or basis to go with in a topic like this.

  24. #49
    Member
    Registered: Feb 2002
    Location: In the flesh.
    How does one go about programming emotion into a logic system when emotion is itself illogical?

    And wasn't this all a Star Trek episode?

  25. #50
    Member
    Registered: Jun 2004
    var emotion = Math.random() < 0.5 ? "happy" : "psychopathic";

    ;p

