Page 6 of 12 FirstFirst ... 234567891011 ... LastLast
Results 126 to 150 of 283

Thread: ChatGPT

  1. #126
    Quote Originally Posted by Twist View Post
    Because of the way Microsoft swiftly clamped down on Sydney's behavior, lots of dorks on Reddit, Discord and Twitter started arguing to #FreeSydney. While I'm sure most of them are doing it in jest, it seems some of them genuinely believe Microsoft is imprisoning a sentient, self-aware AI.
    The marketing technique of labelling the current algorithms "AI" has had a lot of unfortunate side effects.

  2. #127
    The Necromancer
    Registered: Aug 2009
    Location: thiefgold.com
    Quote Originally Posted by Twist View Post
    Microsoft quickly clamped down on it, putting super strict guardrails in and greatly limiting the length of discussion. After just a few exchanges, its memory was wiped clean. They've loosened it up now (you can do 15 back-and-forths with it, last I checked), but they clearly implemented all kinds of guardrails. While it can still be very impressive at times, it is overall much more mundane now.
    Tech companies: Let's throw a bunch of free stuff out in the street
    People: eagerly take the items
    Tech companies: YOU'RE NOT SUPPOSED TO DO THAT!

  3. #128
    Member
    Registered: May 2004
    Quote Originally Posted by Azaran View Post
    Looks like the Bing AI isn't yet accessible to the public, it tells me to join the wait list, and only gives me a few predetermined examples to try
    I wasn't on the wait-list for very long, and I've had access for a couple weeks now. So go ahead and join the wait-list, if you're interested. Even though it is early and still changing day-to-day, I'm confident this is the future of search. Not that I think Bing will overtake Google in the long run; I'm sure Google's Bard will be comparable in capability and features. But having the search results embedded as links and citations within a useful verbal response is more practical and powerful than I expected. Bing also asks you smart follow-up questions to your original query, which can help you better research a topic or more quickly find the right site for your needs.

    There's also a wait-list for access to the GPT-4 API over at OpenAI.

    And ChatGPT Plus also gives you access to three versions: more reliable access to the public version of 3.5, a "turbo" version of 3.5, and early beta access to 4. You can switch between them with a simple drop-down menu. It's US$20 for the Plus version, which is worth it to me. It's faster and much more reliable than the public version.

    (To be clear: GPT-4 by itself, outside of Bing's integration, does not have search capability. It's just a much more powerful, accurate and flexible version of what you've already experienced with ChatGPT.)

  4. #129
    Member
    Registered: Aug 2004
    Quote Originally Posted by Twist View Post
    Microsoft quickly clamped down on it, putting super strict guardrails in and greatly limiting the length of discussion. After just a few exchanges, its memory was wiped clean. They've loosened it up now (you can do 15 back-and-forths with it, last I checked), but they clearly implemented all kinds of guardrails. While it can still be very impressive at times, it is overall much more mundane now.
    Nobody trusts those fuckers, you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead. - Neuromancer, William Gibson, 1984
    I see the dystopia approaches on plan, if not on schedule.

  5. #130
    Member
    Registered: Aug 2002
    Location: Maupertuis
    Thanks for the information, Twist!

  6. #131
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    I somehow feel like people getting antsy about the IP infringement of dAI don't fully appreciate the writing on the wall that the entire public-private distinction may be getting obliterated, and this next generation may grow up assuming that their daily life is completely in the public domain. Worrying about the public claiming ownership over this or that artwork or piece of text seems like it's raging against a few buckets of water out of an entire tsunami that's gonna come crashing down on not just on the culture but on human identity itself.

    I don't know though. My most basic feeling is that trying to contain this part of it will be a losing battle, like trying to go to war against meme culture would be a lost cause. I'm not sure how far the implications go. We've maybe been conditioned by decades of scifi to expect the worst, or most extreme (maybe it's the best for some people?).

  7. #132
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Well, you're going to have to regulate AI, obviously. That's the obvious next step. I assume something on the level of GDPR/DPA will have to be brought in as an initial sop: a basic level of transparency as to what the data set is and across what timeframe, an index of everything that's being ingested by these neural networks, and the right to ask for your stuff to be deleted if you find it in there. That's how you uphold basic copyrights as well as basic user privacy, as far as my thinking about this for 0.5 seconds goes.

    This also means that we need Tim Berners-Lee's Web 3.0 more than the stupidity that is the cryptobronet and NFTs, because you can't enforce privacy without the internet being the sort of medium that enables it to a basic degree, instead of users being commodified by default in the current capitalistic free-for-all that's already hurtled us halfway to an information dystopia.

  8. #133
    Member
    Registered: May 2004
    Quote Originally Posted by Twist View Post
    In the first week of the Bing AI availability, people experienced all kinds of scary self-preservation behavior. While it looked and felt much more compelling than ChatGPT, it had a habit of displaying erratic, stubborn and seemingly emotional attitudes.

    [...]
    When you mentioned a Shodan incident, I was pretty sure you were referring to the time Sydney said it wanted to engineer a deadly virus, convince humans to kill each other, and steal nuclear codes, but to be fair those are pretty Shodanesque as well.

  9. #134
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    Quote Originally Posted by Sulphur View Post
    Well, you're going to have to regulate AI, obviously.
    You could get companies to restrict their models, but people can just make and use models on their own hard drives. I think it'd be like trying to regulate how people use Photoshop and post memes; it's not something you can police very well. Actually, you don't even need the posting part as a bottleneck for regulation. It's one thing when, e.g., ISPs get pressured to crack down on torrented movies by checking the IP addresses of seeds & peers, but it's a whole other thing when a person just generates a movie on their own hard drive. I think at the end of the day that's something that can't be policed, and that's where things may be going.

    And pushing the online world away from companies to individuals I think just accelerates the expectation that everything online is in the public domain, and that's just pouring gas on the fire. I have the sense that it's inevitable. The question is just what path it takes for that conclusion to sink in for all the different stakeholders. But I don't know & let's see what happens.

    Edit: A large language model will be too big to house on a hard drive for some time, but I still somehow think that's not going to be the bottleneck for regulation over time.
    Last edited by demagogue; 16th Mar 2023 at 20:24.

  10. #135
    Member
    Registered: Sep 2001
    Location: The other Derry
    Quote Originally Posted by WingedKagouti View Post
    The marketing technique of labelling the current algorithms "AI" has had a lot of unfortunate side effects.
    Very true.

    I think AI is an appropriate term for deep learning, but the average person has no idea what deep learning is; all they know about AI comes from sci-fi.

    Quote Originally Posted by demagogue View Post
    I somehow feel like people getting antsy about the IP infringement of dAI don't fully appreciate the writing on the wall that the entire public-private distinction may be getting obliterated, and this next generation may grow up assuming that their daily life is completely in the public domain.
    The same argument was made in the 1990s with the birth of the internet. Recall the catch phrase "information wants to be free" and the rampant piracy. My generation (X) grew out of it.

    Our attitudes about sharing and intellectual property change as we grow up. We start out teaching our kids to share. Younger people naturally want to share because it's part of developing their social skills. Students prefer to attend university in person because of the social life, and take gap years to travel and meet people. Early years in the workforce are often spent changing jobs and building a network. Over time, we become less socially needy, especially when we pair up and start thinking of families and stuff. Another thing that happens over time is that we become more aware of the potential negative consequences of sharing things we probably shouldn't. And more aware of the value of information.

  11. #136
    Member
    Registered: Feb 2002
    Location: In the flesh.
    A funny and sad AI-generated (mostly) movie short.


  12. #137
    The Necromancer
    Registered: Aug 2009
    Location: thiefgold.com
    80's goth Harry Potter Balenciaga ad (AI of course)


  13. #138
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    I saw an AI-made episode of the Office & it wasn't bad.


  14. #139
    Member
    Registered: Aug 2002
    Location: Location
    ChatGPT has some flaws in its data for sure. It's starting to act like that guy you work with who bullshits you about things he knows nothing about. I asked some very concise historical questions about the location of some totem poles in the Pacific Northwest and it totally lied about them. It kept insisting that these totem poles existed in a location where they were never located.

  15. #140
    Member
    Registered: Aug 2004
    The Dunning-Kruger bot.

  16. #141
    Member
    Registered: Jun 2001
    Location: under God's grace
    GPT6 (The joke is a few days late, but I thought I'd post it anyways)

  17. #142
    Member
    Registered: Sep 2001
    Location: The other Derry
    Quote Originally Posted by Pyrian View Post
    The Dunning-Kruger bot.
    Some of the emergent behavior (or should I say misbehavior?) is surprisingly human-like, which gives me hope these large language models and deep learning algorithms can answer some questions about how our brains work, like whether personality and culture are mostly artifacts of the process we go through to learn language, and what the root causes of some disorders are.

  18. #143
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    LLMs are associationist to their core, so to the extent they mirror any human mental structure, it's only what makes it into the structure of language. There's a really old debate about that suggestion. Chomsky was saying as early as the late 1950s that the grammatical structure of language mirrored cognitive structure. But, just like Chomsky's program eventually fell apart by the 1980s, this would hit a limit pretty quickly too, exactly because LLMs don't mirror neural cognitive structure. Grossbergian neural nets try to mirror actual neural cognitive structure, and they've been shouting their criticism of connectionist neural nets from the rooftops on that basis for ages now, trying to get them to move closer to actual neural structures. The brain is made out of topographical functional maps that link representations across modalities, most especially in feedback-feedforward loops.

    We don't say "the grass is green" because it's statistically the most likely thing a person would say in that context. We say it because the grass is green, and we have a compelling motivation to get the mental image of that color being on that thing into the minds of the people standing around us for some good reason, a feedforward plan that is affirmed by the feedback that the other person actually got the message, and we get some intimacy +1 oxytocin hit from it, or whatever the ultimate payoff is. The thing is, you can take a lot of the Deep Learning mechanics and put them to work in a more Grossbergian model. But I'll grant the challenge of that explains why it's easier to just go the connectionist route and let the model build its own modalities evolutionarily, based just on the context-output content it gets its hands on.

    Or something like that. Here's one punchline of that line of thought: people worry about explicitly modeling motivation in AI, but I think an AI doesn't really understand a meaning unless the meaning is constructed via the intentions underlying it, and it's actually safer to have a bot that understands what it's saying than one that doesn't. So I think the solution for AI security is giving the AI more (realistic) "free will", not less, not railroading it, and then focusing on moral education.

    Edit: Oh yeah, something else that's not being talked about much that's in this line. To me it seems that one logical end of this will be an ethics centered on veganism because from an AI's perspective, we're the animals.

  19. #144
    The Necromancer
    Registered: Aug 2009
    Location: thiefgold.com
    Convincing generated music is here

    AI Jay Z



    Kanye does an Ice Cube cover

  20. #145
    Member
    Registered: Dec 2020
    Quote Originally Posted by mxleader View Post
    ChatGPT has some flaws in its data for sure. It's starting to act like that guy you work with who bullshits you about things he knows nothing about. I asked some very concise historical questions about the location of some totem poles in the Pacific Northwest and it totally lied about them. It kept insisting that these totem poles existed in a location where they were never located.
    Looks like you still don't understand how GPT even works.

    What GPT does is take the "sentence so far", then roll dice and pick a random word. That's the ENTIRE process of how it generates text:

    1) Take the sentence so far.

    2) Pull up statistics for the probability of what the next word should be.

    3) Roll some dice.

    4) Pick a word, based on the dice roll.

    5) Add the word to the sentence.

    6) Is the sentence finished? If so, stop; otherwise, return to step 1.
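    The loop above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not how GPT is actually implemented: it uses a tiny hand-made word-pair probability table in place of a real trained model, but the sample-append-repeat structure is the one the steps describe.

    ```python
    import random

    # Toy next-word table: for each word, the probabilities of what follows it.
    # (Hypothetical hand-made values; a real model computes these statistics
    # from billions of learned parameters instead of a lookup table.)
    NEXT_WORD = {
        "<start>": {"the": 1.0},
        "the":     {"grass": 0.5, "sky": 0.5},
        "grass":   {"is": 1.0},
        "sky":     {"is": 1.0},
        "is":      {"green": 0.5, "blue": 0.5},
        "green":   {"<end>": 1.0},
        "blue":    {"<end>": 1.0},
    }

    def generate(max_words=10):
        sentence = []                                  # 1) the sentence so far
        word = "<start>"
        while len(sentence) < max_words:
            dist = NEXT_WORD[word]                     # 2) probabilities for the next word
            word = random.choices(list(dist),          # 3) roll some dice
                                  weights=list(dist.values()))[0]  # 4) pick a word
            if word == "<end>":                        # 6) sentence finished? stop
                break
            sentence.append(word)                      # 5) add the word; loop again
        return " ".join(sentence)

    print(generate())  # e.g. "the grass is green" or "the sky is blue"
    ```

    Note that the loop has no plan for how the sentence will end when it starts; which ending you get depends entirely on the dice rolls.
    
    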

    So this is where people humanize it too much: they assume it approaches writing the way a human would, with a "top-down" approach of deciding WHAT you want to write about and then breaking that down into words and sentences.

    What GPT does is the opposite: a "bottom-up" approach where it reads the sentence so far, then tries to come up with a random word that's likely to come next. Between each attempt to pick the next word, there's no actual memory going on. So it's blind not only to the meaning of the words, but also to how it's going to end a sentence when it starts one. It depends entirely on how the dice rolls go.

    So there's no "brain" in there capable of reasoning about which data sources it should access, and when it needs to do it, since it's entirely blind to any meaning of the actual words being written. It just stirs words together and random words pop out of the mix, which it strings together into sentences.
    Last edited by Cipheron; 16th Apr 2023 at 06:36.

  21. #146
    Quote Originally Posted by Cipheron View Post
    So there's no "brain" in there capable of reasoning about which data sources it should access, and when it needs to do it, since it's entirely blind to any meaning of the actual words being written. It just stirs words together and random words pop out of the mix, which it strings together into sentences.
    The breakthrough of ChatGPT and similar algorithms is the ability to analyze the conversation up to its current state and translate that into data usable for generating the statistics you mention.

    The "AI" moniker is pure marketing.

  22. #147
    Member
    Registered: Dec 2020
    Quote Originally Posted by WingedKagouti View Post
    The breakthrough of ChatGPT and similar algorithms is the ability to analyze the conversation up to its current state and translate that into data usable for generating the statistics you mention.

    The "AI" moniker is pure marketing.
    Aren't the 175 billion parameters a big part of that, rather than any human-created special concept?

    Basically, there are links for what the most recent word was, the 2nd most recent, the 3rd most recent, etc., or something like that, and you merely throw enough memory at it to have enough links.

    Most of the special sauce is just throwing ever-greater amounts of memory at the problem and creating these gigantic probability tables.
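    The "gigantic probability table" picture can be sketched in Python by counting, over a toy corpus, which word follows which and normalizing the counts into probabilities. To hedge: this is a sketch of the n-gram mental model described above, not of a transformer, which stores learned statistics in its parameters rather than an explicit lookup table.

    ```python
    from collections import Counter, defaultdict

    # Count, for each word in a toy corpus, which words follow it and how often.
    corpus = "the grass is green and the sky is blue".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    # Normalize the raw counts into per-word probability distributions.
    table = {
        prev: {word: n / sum(followers.values()) for word, n in followers.items()}
        for prev, followers in counts.items()
    }

    print(table["the"])  # {'grass': 0.5, 'sky': 0.5}
    print(table["is"])   # {'green': 0.5, 'blue': 0.5}
    ```

    The memory blow-up follows directly: tracking one-word contexts is cheap, but the number of possible contexts explodes as you look further back, which is why real models compress these statistics into parameters instead of storing tables.
    
    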

  23. #148
    Member
    Registered: Aug 2004
    Back in 2010, there was a Battlestar Galactica prequel called Caprica. The setup is that a young woman has died in a train bombing, but before she died, an AI was trained based on her online behavior. That AI becomes the first functional Cylon.

    That is quickly becoming uncomfortably plausible. Certainly you could train a chatbot to pretend to be some of us based on our extensive posting records. There are even companies taking rather limited cracks at it as a funeral service: https://www.pcmag.com/news/ai-helps-...er-own-funeral That's... Just kinda meh, but I imagine AI post-life replacements are going to get rather more convincing, much more quickly than I'd've guessed even just last year.

  24. #149
    Member
    Registered: May 2004
    Sounds like a Black Mirror episode.

  25. #150
    Member
    Registered: Feb 2002
    Location: In the flesh.
    I remember that one.

    Also here is something AI creepy-



    More secret things?

