Results 201 to 225 of 279

Thread: ChatGPT

  1. #201
    Member
    Registered: Dec 2020
Had some fun recently testing ChatGPT on logic puzzles. Notably, it can produce correct solutions (most of the time) for river-crossing puzzles of the Fox, Geese, Grain type.

However, some people claim that ChatGPT is "solving" the problem using logical deduction, while others argue that it has merely read a lot of solutions to similar problems and is fitting them to a pattern, even if you substitute different entities for the fox, geese, and grain.

So I decided to look for less common variants to see if it could logically deduce the constraints. Here's the one I decided on:

A man and a woman of equal weight, together with two children, each of half their weight, wish to cross a river using a boat which can only carry the weight of one adult.
This particular problem is really no harder than the "Fox, Geese, Grain" puzzle, but ChatGPT consistently fails to come up with anything logically sound, repeatedly overloading the boat (either putting both adults in it, or a child and an adult at the same time).
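    The puzzle's solvability can be checked mechanically. Here's a minimal breadth-first search over boat-crossing states as a sanity check (the names, unit weights, and capacity encoding are my own choices, not part of the original puzzle text):

```python
# Adults weigh 2 units, children 1 unit, boat capacity 2 units
# (i.e. the weight of one adult). BFS finds a shortest crossing plan.
from collections import deque

PEOPLE = ("man", "woman", "child1", "child2")
WEIGHT = {"man": 2, "woman": 2, "child1": 1, "child2": 1}
CAPACITY = 2

def solve():
    # State: (frozenset of people still on the start bank, boat side 0/1)
    start = (frozenset(PEOPLE), 0)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (bank, side), path = queue.popleft()
        if not bank and side == 1:  # everyone across, boat on far side
            return path
        here = bank if side == 0 else frozenset(PEOPLE) - bank
        # Every non-empty group of one or two people on the boat's side
        groups = [frozenset([p]) for p in here]
        groups += [frozenset([a, b]) for a in here for b in here if a < b]
        for g in groups:
            if sum(WEIGHT[p] for p in g) > CAPACITY:
                continue  # this trip would overload the boat
            new_bank = bank - g if side == 0 else bank | g
            state = (new_bank, 1 - side)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [sorted(g)]))
    return None
```

    solve() returns the trips as lists of passengers; the shortest solution takes nine crossings, shuttling the two children back and forth so that someone is always available to row the boat back.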

    Here are a couple of attempts to get something out of it. On the first one it doesn't get the idea at all:

    https://chat.openai.com/share/2637d6...4-1b3e1d6d36ac

    To solve this logic puzzle, where a man and a woman of equal weight, along with two children, each half the weight of an adult, need to cross a river using a boat that can only carry the weight of one adult, follow these steps:

    Step 1: The man and woman cross together

    The man and woman get on the boat and cross to the other side of the river.
On the second one I really hammered home the issue of the weight limit on the boat, and while some steps were better, it got other things wrong and completely hallucinated a requirement that neither adult be left alone with the children. Then it violated that made-up rule anyway.

    https://chat.openai.com/share/06d0b2...8-eafca4842cd3

  2. #202
    Member
    Registered: Sep 2001
    Location: The other Derry
    Quote Originally Posted by Starker View Post
Funny you should say that. I just today listened to a podcast where they explored this idea at some length, more specifically the motivation to kill all humans. It's an episode of Decoding the Gurus, where one of the hosts happens to be a former AI researcher and psychologist, and this time they were talking about Eliezer Yudkowsky, a somewhat prominent Chicken Little of AI development who has suddenly found a larger audience with the emergence of ChatGPT: https://decoding-the-gurus.captivate...to-kill-us-all
    I saw it was 3 hours long and just tried to sample here and there to get a feel for where they were going, which I'm not sure I got. It seemed like another philosophical conversation rooted in anthropomorphizing AI, which everybody seems to be doing these days.

    I think it's all hot air essentially. Machine learning programs are not people or an animal species. ML programs don't have our programming (DNA). They don't have our parents, upbringing, schooling, friends, and other things that fill our early, empty heads with how to be a human. We have millions of years of accumulated instincts and innate responses, all developed to enhance our survival and reproduction in a competitive natural environment in which we evolved from prey to apex predator. We carry that baggage, but programs don't.

    I'm a lot more concerned about nefarious human motivations in applying the technology. AI viruses, autonomous weapons, propaganda and disinformation campaigns, etc.

  3. #203
    Member
    Registered: Dec 2020
I listened to the first 30 minutes to get an idea, and this podcast is not about that. Decoding the Gurus is critiquing the same thing you're critiquing: they're poking fun at doomsayers such as Eliezer Yudkowsky, especially his more outlandish claims.

Around the 30-minute mark they play an Eliezer Yudkowsky clip saying that the transformer architecture is so dangerous, and so poised to "exceed human intelligence" any day now, that we basically need to ban it and have WWIII if necessary, because WWIII would be less damaging than allowing AI to exist.

  4. #204
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Wait, what's so specifically dangerous about the transformer architecture? From what I understand, it's the bit that gives an LLM the ability to infer context and have a 'memory' by categorising the relevance of each word to each other. Giving a neural network semantic processing is a heck of a thing to argue for WW3 over.
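    For what it's worth, that "categorising the relevance of each word to each other" is, roughly, scaled dot-product attention. A toy sketch, with made-up 2-d vectors standing in for real learned embeddings:

```python
# Toy scaled dot-product attention: each query word scores its relevance
# against every key word, and the softmaxed scores weight a blend of the
# value vectors. Real models use hundreds of dimensions, not 2.
import math

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    d = len(keys[0])  # dimension, used to scale the dot products
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # relevance of each word to this one
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

    The output for each word is a weighted mix of all the other words' vectors, weighted by relevance — that's the "memory" part. Nothing in this mechanism itself looks more apocalyptic than a weighted average.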

  5. #205
    Member
    Registered: Dec 2020
    Well I'm still listening to it, Eliezer Yudkowsky is saying AI will decide to do some real Philip K Dick level shit like working out how to set the atmosphere on fire or invent synthetic viruses that reprogram humans to be drones, among other less-believable stuff, like inventing entirely new synthetic biology that trumps real biology. So yeah, if you believe that stuff is just 100% gonna happen "unless we stop it" within ~ 2 generations of architecture tech (i.e. GPT 6) then of course, you'd say that another world war would be worth it to shut that shit down.

Though I feel like the link to transformer tech has less to do with any specifics of that tech than with the fact that this guy has been preaching this stuff for 20 years, and it's just the latest wagon to hitch his doomsday horses to. He appears to have previously ridiculed NN technology; he's now backtracking while trying to twist it around to "well, I didn't completely rule it out".

    EDIT: what I think is going on is that Yudkowsky always predicts the same outcomes, but for wildly different AI architectures.

So back when he dismissed NNs as a sham, the story was about AI using rigorous logic to decide to kill all humans. That then seems to have shifted to evolutionary algorithms: AI would naturally 'evolve' to kill all humans. And now the rhetoric has shifted to NNs/transformers: through the training process, AI will, for some unspecified reason, just hit on the 'kill all humans' thing as a byproduct.
    Last edited by Cipheron; 21st Jun 2023 at 04:49.

  6. #206
    Member
    Registered: May 2004
    Quote Originally Posted by Sulphur View Post
    Wait, what's so specifically dangerous about the transformer architecture? From what I understand, it's the bit that gives an LLM the ability to infer context and have a 'memory' by categorising the relevance of each word to each other. Giving a neural network semantic processing is a heck of a thing to argue for WW3 over.
    Basically, the idea is that because nobody understands what a transformer does, and we just keep tinkering with them and stuff like deep learning neural networks, we will one day, somehow, end up with something vastly more powerful than us that will kill us before we even have the chance to say, "No disassemble!" Therefore, we should start bombing data centers now before it ever becomes a problem.

    The podcast is essentially taking the everloving piss out of some of the ideas of AI doomerism by means of some gentle mockery, a heaping of sarcasm, and actually having some idea how science/society/concepts etc work.

  7. #207
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Ah, so a standard luddite in the form of an AI researcher. Interesting combination, and while there are good reasons to be concerned about AI, it seems he's got a talent for sensationalising, eh. Colour me intrigued, I don't usually do podcasts, but this sounds like a decent listen.

  8. #208
    Member
    Registered: Dec 2020
As a note, I ran it through Audacity's Truncate Silence filter and removed any silence > 0.5 seconds. That knocked a full 25 minutes off the total mp3. I ended up playing it back at 150% speed too (recoded with FFmpeg, because it's faster than Audacity for that). But for people who can't stand fast playback, removing the silence alone is worthwhile.
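    The whole clean-up can be done in one FFmpeg pass instead of going through Audacity first. A sketch of the command, using FFmpeg's silenceremove and atempo audio filters — the threshold and duration values here are illustrative guesses, not tuned settings:

```python
# Build an ffmpeg command that strips long silences and speeds playback
# up (atempo changes tempo without raising the pitch).
def ffmpeg_cmd(src, dst, min_silence=0.5, speed=1.5):
    filters = (
        # stop_periods=-1 removes every silent stretch longer than
        # min_silence seconds, judged against a -40 dB threshold
        f"silenceremove=stop_periods=-1"
        f":stop_duration={min_silence}:stop_threshold=-40dB,"
        f"atempo={speed}"
    )
    return ["ffmpeg", "-i", src, "-af", filters, dst]
```

    e.g. running `ffmpeg_cmd("podcast.mp3", "podcast_fast.mp3")` through subprocess.run() would produce the trimmed, sped-up file in one step.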

  9. #209
    Member
    Registered: Sep 2001
    Location: The other Derry
    I appreciate the correction. I guess I skimmed too lightly. If this is just people debunking an obvious crank, I'll skip it, because what's the point?

  10. #210
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Well-articulated critique's always worth some time. 3 hours and change is a bit much for me personally, but I'll give it a fair shake.

    Barely related, but I can't think of anywhere else to stick this: a lovely article on a piece of procedural text-generating interactive fiction about scents, you'll be well-served if you like words from people who can arrange them well. There's also a bit in there about Chomsky's context-free grammars so it's not so dangerously tenuous a link for this topic. (And the rest of Reed's articles are great reads too if you were into IF or text adventures or whatever you like to call them. There's a book, even.)

  11. #211
    Member
    Registered: May 2004
    It's less debunking than presenting and examining various guru-like people, especially galaxy brains with revolutionary ideas, and discussing the guru-ness of their personas and the validity of these ideas. Or sometimes it's just famous people that are only kind of guru-y, like Sam Harris or Oprah Winfrey. The decoding part is more about figuring out what the people and the ideas actually are, but the gentle mockery that goes along with it is I guess a feature of the hosts being Irish and Australian, respectively.

    I personally enjoy long form content and listen to it in parts, if I'm busy, but I can definitely see how it would be daunting to even give it a try.

  12. #212
    Member
    Registered: Aug 2002
    Location: Location
I've noticed lately that some of my coworkers' emails have gotten excessively polite and very sterile. My suspicion is that they have been running their responses through ChatGPT. I've gotten a bit bored with it, but maybe I'll run my emails through the system and see if they start to resemble all my coworkers' emails. I think it's noticeable because ChatGPT doesn't really change its style or voice unless you instruct it to.

  13. #213
    Member
    Registered: May 2004
    Location: Canuckistan GWN
    Have your Bot call my Bot and they can do lunch.


One of the issues around AI is that we confuse and conflate the term with simulated humans, cybernetics, cyborgs, and autonomous artificial beings.

    The Phil Dickian question isn't which nightmare/utopian scenario plays out, it is what really makes us human and whether we will recognize a sentient, self-willed construct when it arises.

    Perhaps the real Turing test isn't whether we will mistake a computer for a human being, but whether an artificial being can recognize itself as a machine.

  14. #214
    Member
    Registered: Aug 2004
    Quote Originally Posted by Nicker View Post
    ...whether we will recognize a sentient, self-willed construct when it arises.
    Pfff, humans ascribe sentience to their pet rocks.

  15. #215
    Member
    Registered: Aug 2002
    Location: Location
My biggest issue with the current AI, or any AI, is that it has so many flaws that it cannot be trusted all that much. When you get down to very detailed questions about things like grammar, history, and social matters, ChatGPT struggles. In fact, ChatGPT will outright lie to you because it is a people pleaser. That said, I used it the other day to help write a dating app profile for myself, keeping it within the 500 character limit. So far no results, but the dating scene at 51 is not great to begin with.

  16. #216
    The Necromancer
    Registered: Aug 2009
    Location: thiefgold.com
They've dumbed down ChatGPT. I used to provide it with example texts and ask it to write something in the same style but on a different topic, which it would do flawlessly. Now it just gives me something bland and generic. I imagine it's some kind of copyright measure?

It even ignores certain requests. I asked it to write a non-rhyming poem, and it spat out something with rhymes.

    I wanted to try Google Bard as an alternative, but the government banned it here temporarily, and my VPN can't circumvent it.

Someone needs to come out with a completely unfiltered, uncensored version of these chatbots.

  17. #217
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
These LLMs keep changing as their weights and parameters are retrained and fine-tuned over time. Stuff isn't going to be consistently great in the early days - and make no mistake, it's still early days.

    If you want what you're asking for, which is unfiltered access to the abhorrent edgy bullshit that they've crawled the internet for and biased the model towards, you just need to look properly.

  18. #218
    The Necromancer
    Registered: Aug 2009
    Location: thiefgold.com
    Thanks, installing WizardLM 30B GGML now.

    Quote Originally Posted by Sulphur View Post
    LLMs are always changing based on how their weights and parameters are being reinforced through every interaction. Stuff isn't going to be consistently great in the early days - and make no mistake, it's still early days.

    If you want what you're asking for, which is unfiltered access to the abhorrent edgy bullshit that they've crawled the internet for and biased the model towards, you just need to look properly.
Thing is, it was better in the very beginning, especially at emulating texts that you fed it. It's not even about getting it to create edgy content. If I ask it to create a Shakespearean play about social media (and feed it some original Shakespeare), that's exactly what it would do before.

  19. #219
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    I'm not sure that's correct, at least right now. I just got it to do exactly that, confusion over Henry IV or V being the source text aside. Though obviously the metre is variable (not that I've really checked the scansion).

    Last edited by Sulphur; 27th Jul 2023 at 00:07.

  20. #220
    Member
    Registered: Aug 2002
    Location: Location
There's definitely room for improvement in the world of AI chat.

  21. #221
    Member
    Registered: Dec 2020
Pretty good ABC Australia article about the problem of ChatGPT "making things up". It does a good job of breaking the issue down for the layman while also confirming how I understand the problem:

    https://www.abc.net.au/news/2023-08-...blem/102678968

    "I don't think that there's any model today that doesn't suffer from some hallucination," said co-founder and president of Anthropic, Daniela Amodei.

    "They're really just sort of designed to predict the next word, and so there will be some rate at which the model does that inaccurately."

    ...

    "This isn't fixable," said Emily Bender, a linguistics professor and director of the University of Washington's Computational Linguistics Laboratory.

    "It's inherent in the mismatch between the technology and the proposed use cases."

    ...

    When used to generate text, language models "are designed to make things up. That's all they do," Ms Bender said. They are good at mimicking forms of writing, such as legal contracts, television scripts or sonnets.

    "But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance," Ms Bender said.
So that's one issue. "Rightness" and "wrongness" are values that we ourselves assign to the output, after the fact. They are not values inherent in the process itself.

Basically, when the algorithm is "right", it is "right for the wrong reasons". So the underlying reasons it does what it does don't actually align with our idea of what it means to be "right".

    When the linguistics professor says "this isn't fixable", she's right.

    Things like ChatGPT use "bottom up" text creation - just churning out new words one at a time in the blind hope that at some point in the future, a coherent sentence will emerge.

That is not the same as "top down" text creation, which would be to decide what to write about, then break it into logical sections, then work out what you're going to say in each section, and only then choose the specific words you're going to use.
    Last edited by Cipheron; 4th Aug 2023 at 05:53.
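    The "bottom up" process described above can be caricatured in a few lines. This is a toy bigram model — nothing like a real transformer, but the same one-word-at-a-time spirit: each next word is picked only from what followed the current word in the training text, with no plan for the sentence as a whole.

```python
# Toy next-word generator: learn which word follows which, then emit
# words one at a time with no notion of overall meaning or structure.
import random
from collections import defaultdict

def train(text):
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(nxt))  # pick purely from local statistics
    return " ".join(out)
```

    Everything it produces is locally plausible (every word pair occurred in the training text) while carrying no intent at all, which is the distinction Bender is making, just at a vastly smaller scale.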

  22. #222
    Member
    Registered: Aug 2002
    Location: Location
Despite ChatGPT's shortcomings, I just had it create a Latin phrase for "spaghetti monster": Monstrum Vermiculorum.

Also, wasn't there an AI art thread in Com-chat at one point that this thread spun off from?

  23. #223
    The Necromancer
    Registered: Aug 2009
    Location: thiefgold.com
    Quote Originally Posted by mxleader View Post
    Also, wasn't there a AI art thread in Com-chat at one point that this thread spun off from?
    https://www.ttlg.com/forums/showthread.php?t=151557

  24. #224
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    While this thread is up, this demo is unsettling even by the standards of everything else we've seen lately. It's a free demo you can download and play for yourself too.

I mean, I think most people's first reaction would be to think about the possibilities for NPCs in games, especially if devs figure out how to build gameplay usefulness into their speech (like interrogating NPCs to make progress), but it doesn't take long before the deeper implications start sinking in and the gameplay side kind of falls flat by comparison.




  25. #225
    Member
    Registered: Aug 2002
    Location: Location
    @Azaran - Thanks!

