
Thread: ChatGPT

  1. #226
    Member
    Registered: Dec 2020
    It's neato, but I'm really waiting until they link up speech and actions, so you can basically convince NPCs to do things.

  2. #227
    Member
    Registered: Aug 2004
    ...Feels like Tron: Legacy is working its way towards becoming reality, lol.

    Quote Originally Posted by demagogue View Post
    ...this demo is unsettling even by the standards of everything else we've seen lately.
    Hmm. I mean, it's neat that they can answer more-or-less in context, but... It's really bad.
    Last edited by Pyrian; 14th Aug 2023 at 05:14.

  3. #228
    Member
    Registered: Dec 2020
    One point however is that any AI bot needs to be trained on real conversations, so if you don't do YOUR part and speak like the people in the training conversations, the AI doesn't have much context on how to answer properly. So for roleplaying you might already get better results if you actually roleplay properly too and don't try and "break" them or say things which aren't appropriate for the setting.

    So how much of our effort should be aimed at training the bots in scenario-specific information, vs how much effort to put into anti-troll systems? Clearly, they need both, so that e.g. a medieval peasant has setting-appropriate knowledge, but also responds realistically if you suddenly start claiming that they're fake people in a simulation or that you're from the future.

    What's in the video is probably the only sensible approach. The bots say "that's none of your business" or similar if you go off topic to stuff they don't actually know. That's the only way to do it, since the devs cannot possibly predict every topic or tactic the human will try on them, and even if they did it would drown out the training of actual topics they're meant to talk about.
    Last edited by Cipheron; 14th Aug 2023 at 09:44.

  4. #229
    New Member
    Registered: Nov 2021
    Quote Originally Posted by Cipheron View Post
    One point however is that any AI bot needs to be trained on real conversations, so if you don't do YOUR part and speak like the people in the training conversations, the AI doesn't have much context on how to answer properly. So for roleplaying you might already get better results if you actually roleplay properly too and don't try and "break" them or say things which aren't appropriate for the setting.

    So how much of our effort should be aimed at training the bots in scenario-specific information, vs how much effort to put into anti-troll systems? Clearly, they need both, so that e.g. a medieval peasant has setting-appropriate knowledge, but also responds realistically if you suddenly start claiming that they're fake people in a simulation or that you're from the future.

    What's in the video is probably the only sensible approach. The bots say "that's none of your business" or similar if you go off topic to stuff they don't actually know. That's the only way to do it, since the devs cannot possibly predict every topic or tactic the human will try on them, and even if they did it would drown out the training of actual topics they're meant to talk about.
    To be honest, I'm not really surprised by it, but maybe that's because I've been playing with AI for some time now. I think that once the novelty wears off, it's pointless to try to have a proper conversation with an AI, at least with today's technology.

  5. #230
    Member
    Registered: May 2004
    I mean, it's still impressive from a pure NLP point of view. But, as discussions in the interactive fiction scene have pointed out before, it's kind of a dead end from the game design perspective. The issue is not the parser, it's the prompt -- setting the player down without a clear guided framework and letting them do just about anything rarely results in riveting gameplay, as evidenced by the few interactive fiction games that do make extensive use of NLP but remain curiosities at best.

    Not to mention that when the player can do anything, they also expect the world to react to anything, so it very quickly gets limited by the world modelling you are able to do. You can mitigate some of it by having the NPCs act irrationally, like in the prototype Little Pink Best Buds that the Pendleton Ward team created for a Double Fine game pitching contest, but even in that case the limits quickly become obvious.
    Last edited by Starker; 15th Aug 2023 at 13:25.

  6. #231
    New Member
    Registered: Nov 2021
    For NPCs, a technology that could work is the one character.ai uses, which can mimic different personalities and roleplay/hallucinate in a more believable way.

    It can sustain a scenario or made-up story for longer, so it would be more appropriate for different personalities and NPCs in a city, such as teenagers, office people, students, policemen, etc. At least interactions of a few minutes with NPCs would seem more authentic.

    If every NPC appears to have the same depressive disorder and can't even make up an answer for a simple question such as "where are you from?", it gets boring in a matter of seconds.

  7. #232
    Member
    Registered: May 2004
    The dilemma is which NPCs this can be used for in the first place -- on one hand, there is no demand for in-depth dialogue from cookie-cutter background characters that most people are just going to pass by, but people do expect well-written interesting dialogue from the NPCs that they are going to be conversing with.

  8. #233
    New Member
    Registered: Nov 2021
    I agree. In games like Thief, NPC dialogue is not that important. Although a more casual conversation with Basso or Jenivere could be interesting, the immersion might end up breaking if the player abuses it.

    I guess that the technology could be interesting in games where some NPCs can give valuable information, like adventure/detective games, or games where building relationships with NPCs is important, like some kind of Sims game.

    Also, another obvious one would be an adult game with a whole city to interact with (just saying this for a friend).

    I think there are many possibilities, but only time will tell how this ends up being used in games.

  9. #234
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    If this tech really develops, they're not going to be just games anymore ... because clever people may figure out how to have NPCs improvise better and, since gameplay can finally be more language-based (if the tech gets to that level), they may become more like interactive movies or LARPing than just games.

  10. #235
    Member
    Registered: May 2004
    We'll see if we ever actually get to that point. But my gut feeling is that we're not going to get there with statistical models, as they can only ever be derivatives of the real stuff.

  11. #236
    Member
    Registered: Jun 2002
    Location: melon labneh
    I'm pretty convinced you can get paradigm-changing experiences with models trained on a mix of general-purpose, curated and custom language with prompts cleverly generated by the game systems to lock the personality and behaviour of the NPCs. At least I really hope so, because to me that's one of the rare good use cases for the tech.

  12. #237
    Member
    Registered: Dec 2020
    Quote Originally Posted by Starker View Post
    We'll see if we ever actually get to that point. But my gut feeling is that we're not going to get there with statistical models, as they can only ever be derivatives of the real stuff.
    I think that's a little too reductive, because language models work by extrapolation, and you can definitely overlay different patterns together in novel ways.

    One example from this thread is when I got it to write a product pitch for "Maggoty Meats", which is just an unfortunate name for a company, and it wrote an advert that was trying WAY too hard to convince the reader that there weren't actual maggots in the product. Can you call that just a linear extrapolation of real stuff?

    Or, what I'd argue is that overlaying enough real context of various types lets it construct completely new scenarios and then apply realistic dialogue to them, despite those being situations that never actually occurred in any fiction it has read. For an example of that, there was the "Day of the Hitlers" one I made, which is basically a zombie film but with Mein Kampf-quoting Hitler clones climbing through windows to get the heroes.

    Anything you can think up, it'll give it a good shot, better than most humans if you asked them to write a short story about some weird topic. So at some level, it's deriving those from real people's work, but it's also combining everything at once.

    ---

    Also, we have to keep in mind that this demo looks like it's running an offline model that's small enough to run on your home PC's GPU. It's not running ChatGPT or anything. You can ask ChatGPT to roleplay as a New Yorker and it'll totally generate all the backstory for the character on the fly.

    So it's a tech demo. They're probably not even pushing what their current model is capable of. It's pretty normal that someone builds an engine, shows a demo, but it's up to other teams to really push what the engine can do to the limits. That's probably what will happen with projects like this.
    Last edited by Cipheron; 16th Aug 2023 at 06:33.

  13. #238
    Member
    Registered: Aug 2002
    Location: Location
    Slightly off the current thread direction, but I broke ChatGPT this morning when I gave it loose parameters for a job interview role play. I didn't specify that I wanted a back-and-forth role play with me as the interviewee and the AI as the hiring manager. When I pushed go, it started having a back and forth with itself, spitting out Q's and A's until it stopped working. After stopping it and adding more details, it functioned much better. It's really good for role-playing interviews as practice before chatting with a human, or even before an actual interview. I think I need to add a parameter to have the AI ask more unconventional questions to keep sharp. Not sure if it can do that though.

  14. #239
    Member
    Registered: May 2004
    Quote Originally Posted by Cipheron View Post
    I think that's a little too reductive, because language models work by extrapolation, and you can definitely overlay different patterns together in novel ways.

    One example from this thread is when I got it to write a product pitch for "Maggoty Meats", which is just an unfortunate name for a company, and it wrote an advert that was trying WAY too hard to convince the reader that there weren't actual maggots in the product. Can you call that just a linear extrapolation of real stuff?
    If the argument is that infinite monkeys with infinite typewriters will inevitably produce a masterpiece, I have to say that it's technically possible, but good writing is more involved than stumbling on a funny combination of things. It also needs execution and, because humour is so dependent on context, it really needs that human intentionality. The idea of "Maggoty Meats" is kind of amusing, but the execution? I don't think you could call it amazing joke-writing by any stretch.

    Quote Originally Posted by Cipheron View Post
    Anything you can think up, it'll give it a good shot, better than most humans if you asked them to write a short story about some weird topic. So at some level, it's deriving those from real people's work, but it's also combining everything at once.
    Perhaps there aren't a lot of people who would be able to write to the level of ChatGPT, but that's why we have professional writers and why it's an actual profession that people have to spend many years to get good at. And even these people aren't able to write well consistently -- a lot of it ends up in the wastebin or stays in the drawer.

    Not to mention that the argument here works against deploying ChatGPT to write NPC dialogue -- the average user wouldn't be able to coax novel writing out of it, and even a r/iamverysmart user wouldn't be able to do that consistently.

  15. #240
    Member
    Registered: Dec 2020
    Quote Originally Posted by Starker View Post
    Not to mention that the argument here works against deploying ChatGPT to write NPC dialogue -- the average user wouldn't be able to coax novel writing out of it, and even a r/iamverysmart user wouldn't be able to do that consistently.
    It's not up to the player to craft clever prompts. The model gets hidden prompts written by the designer. So the "average user" isn't the one doing the work; it's the game designer who provides the behind-the-scenes set-up.
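
    Roughly, here's a sketch of what that behind-the-scenes set-up could look like. The llm_complete helper, the character, and the prompt text are all made up for illustration; a real game would call whatever model it actually ships with:

    Code:
    # Hypothetical stand-in for whatever model call the game uses;
    # stubbed so the sketch runs as-is.
    def llm_complete(prompt: str) -> str:
        return "(model output goes here)"

    # Designer-written hidden prompt -- the player never sees or edits this.
    HIDDEN_PROMPT = (
        "You are Willem, a medieval innkeeper. You know only about the inn, "
        "local gossip, and the road to the capital. If the player asks about "
        "anything outside your world (simulations, the future, modern "
        "technology), deflect with 'that's none of my business'. Stay in "
        "character and answer in at most two sentences."
    )

    def npc_reply(player_line: str, history: list[str]) -> str:
        # The player's raw text is just appended to the designer's set-up,
        # so the prompt-crafting happens once, behind the scenes.
        transcript = "\n".join(history + [f"Player: {player_line}", "Willem:"])
        return llm_complete(HIDDEN_PROMPT + "\n\n" + transcript)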

  16. #241
    Member
    Registered: May 2004
    Not quite sure how these hidden prompts are supposed to work in the context of player-initiated dialogue, but even if someone is able to somehow coax a piece of amazing writing out of an LLM-based chatbot, the usual result is still remarkably boring and riddled with mistakes and inconsistencies. If you want a well-written character, you kind of need it to have a unique voice, something that's memorable and stands out -- consistently so. And something tells me you will not get HK-47 out of a chatbot. Not because chatbots wouldn't be able to handle the idea of a human-loathing assassin robot, but because there's a whole lot of unique personality behind it that current chatbots just aren't able to generate.

    Also, with writing, sometimes it's more important to know when to stop and cut back. Sometimes it's the mystery behind the character, and players being able to fill in the details with their own imaginations, that does the job. For example, having a chatbot be able to generate additional information about HK-47, like what its favourite colour is or what it thinks of whales, would detract from the character rather than add to it.

    Also also, having a chatbot generate a whole bunch of dialogue on the fly, with the player able to ask any kind of question, would make it kind of hard to distinguish what's actually relevant from what's meaningless filler.

    What could very well happen, though, is companies having a bot write dialogue and then hiring someone to proofread it and fix any inconsistencies with the help of a game bible, because it would be cheaper than hiring a writer.
    Last edited by Starker; 19th Aug 2023 at 04:03.

  17. #242
    The Necromancer
    Registered: Aug 2009
    Location: thiefgold.com
    For those of you who want to use Google Bard but are in a banned country: you can use Opera's free VPN to circumvent the block.

    I just tried it, and I swear, it's indistinguishable from the current version of ChatGPT. Down to the generic outputs and style.

  18. #243
    Member
    Registered: Aug 2002
    Location: Location
    Quote Originally Posted by Azaran View Post
    For those of you who want to use Google Bard but are in a banned country: you can use Opera's free VPN to circumvent the block.

    I just tried it, and I swear, it's indistinguishable from the current version of ChatGPT. Down to the generic outputs and style.
    I was just trying Google Bard and it's really awkward, but I need to mess around some more before I can say what I really think. My first reaction is that I prefer ChatGPT.

  19. #244
    Member
    Registered: Aug 2002
    Location: Location
    ChatGPT sometimes accepts the wrong answer in the game of trivia.

    Last edited by mxleader; 26th Aug 2023 at 09:05.

  20. #245
    Member
    Registered: Dec 2020
    That's because it's not actually "checking" your answers against anything, it's just doing text prediction, and one of the predicted outcomes is to say "that's correct".

    There was basically a probabilistic coin-flip after "that's": it could say "correct" or "incorrect". After that, however, the very fact that it just said "correct" basically forces it to complete its response as if you were correct.

    The text prediction only goes one word ahead. So it's not actually going away and "thinking" about what you said before it starts the response. What it's doing is going 'hmm, what word should I use? "that's" seems like a good choice, yeah, I'll write the word "that's" next'.

    The key issue to understand is that it puts the same amount of "brain power" into every next word. So it spent the same amount of processing to come up with "that's" as it did to choose the word "correct" vs "incorrect". To ChatGPT every word is just as important as any other in terms of meaning, but it's clear that determining whether you were correct or incorrect *should* be more important. It's just that the design of ChatGPT doesn't see things that way.
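
    A toy version of that mechanism makes the coin-flip concrete. The probability table below is invented for illustration, not anything from the real model; it just shows how sampling one token at a time, conditioned only on the text so far, commits the rest of the reply:

    Code:
    import random

    # Invented toy "model": maps the text so far to weighted next tokens.
    # These are NOT real ChatGPT internals -- the table only illustrates
    # the mechanism of sampling one token at a time.
    TOY_MODEL = {
        "": {"That's": 1.0},
        "That's": {"correct!": 0.55, "incorrect.": 0.45},       # the coin-flip
        "That's correct!": {"Nice work.": 1.0},                 # committed to "correct"
        "That's incorrect.": {"Let's try another one.": 1.0},   # committed to "incorrect"
    }

    def generate() -> str:
        text = ""
        while text in TOY_MODEL:
            options = TOY_MODEL[text]
            token = random.choices(list(options), weights=list(options.values()))[0]
            text = (text + " " + token).strip()
        return text

    print(generate())  # e.g. "That's correct! Nice work."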
    Last edited by Cipheron; 27th Aug 2023 at 23:35.

  21. #246
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    I think they thumbed the scale towards certain kinds of responses based on some policy decision, and I think that may be one of them.

  22. #247
    Member
    Registered: Aug 2002
    Location: Location
    Quote Originally Posted by Cipheron View Post
    That's because it's not actually "checking" your answers against anything, it's just doing text prediction, and one of the predicted outcomes is to say "that's correct".

    There was basically a probabilistic coin-flip after "that's": it could say "correct" or "incorrect". After that, however, the very fact that it just said "correct" basically forces it to complete its response as if you were correct.

    The text prediction only goes one word ahead. So it's not actually going away and "thinking" about what you said before it starts the response. What it's doing is going 'hmm, what word should I use? "that's" seems like a good choice, yeah, I'll write the word "that's" next'.

    The key issue to understand is that it puts the same amount of "brain power" into every next word. So it spent the same amount of processing to come up with "that's" as it did to choose the word "correct" vs "incorrect". To ChatGPT every word is just as important as any other in terms of meaning, but it's clear that determining whether you were correct or incorrect *should* be more important. It's just that the design of ChatGPT doesn't see things that way.

    That would explain it. One of the weirder things, too, is that it got worse the longer I played trivia. Narrowing the trivia down to a certain subject instead of random questions made it worse still.

    Also, in the second question it asked which US Navy admiral led the Pacific fleet in WWII but then quoted Yamamoto instead of Nimitz (Yamamoto probably only said that in the movie Tora! Tora! Tora!).

  23. #248
    The Necromancer
    Registered: Aug 2009
    Location: thiefgold.com
    So I now suspect the reason ChatGPT has been dumbed down is to push people to get the paid version. I imagine the good features it used to have were transferred over to the premium tier.

  24. #249
    Member
    Registered: Aug 2002
    Location: Location
    Quote Originally Posted by Azaran View Post
    So I now suspect the reason ChatGPT has been dumbed down is to push people to get the paid version. I imagine the good features it used to have were transferred over to the premium tier.
    It wouldn't surprise me if they did something like that. To be fair, though, I found errors like that very early on too.

  25. #250
    Member
    Registered: Dec 2020
    Quote Originally Posted by mxleader View Post
    That would explain it. One of the weirder things, too, is that it got worse the longer I played trivia. Narrowing the trivia down to a certain subject instead of random questions made it worse still.

    Also, in the second question it asked which US Navy admiral led the Pacific fleet in WWII but then quoted Yamamoto instead of Nimitz (Yamamoto probably only said that in the movie Tora! Tora! Tora!).
    That's easily explained.

    It's mimicking human text instead of thinking up questions and then formatting them. So if you ask for general trivia questions, it's just sampling from texts it was already fed, which lets it leverage the work humans already did in coming up with decent and sensible trivia questions. But if you ask for specific quizzes, it has limited or no sample data for that, so it has few examples of good curated questions to draw on.

    Normally, a human would come up with some interesting fact first, then turn it into a question/answer pair. But ChatGPT just blindly writes a question to start with, and coming up with an answer is an afterthought; it's not even thinking about the answer while writing the question. So the questions might superficially look well-formatted and grammatical, but no thought has gone into whether an answer even exists, is interesting, or is unambiguous.
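
    That fact-first ordering is easy to enforce by splitting the request into steps instead of asking for a finished question in one go. A sketch, with a stubbed-out hypothetical llm_complete standing in for the real model call:

    Code:
    # Hypothetical model call, stubbed so the sketch runs as-is.
    def llm_complete(prompt: str) -> str:
        return "(model output goes here)"

    def make_trivia_question(topic: str) -> tuple[str, str]:
        # Step 1: pin down a concrete, checkable fact first.
        fact = llm_complete(f"State one verifiable, interesting fact about {topic}.")
        # Step 2: only then word the question, so an answer exists by construction.
        question = llm_complete(f"Turn this fact into a single trivia question:\n{fact}")
        # Step 3: derive the answer from the same fact, not from scratch.
        answer = llm_complete(
            f"Answer using only this fact.\nFact: {fact}\nQuestion: {question}"
        )
        return question, answer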
    Last edited by Cipheron; 2nd Sep 2023 at 03:40.
