I wasn't on the wait-list for very long, and I've had access for a couple weeks now. So go ahead and join the wait-list, if you're interested. Even though it is early and still changing day-to-day, I'm confident this is the future of search. Not that I think Bing will overtake Google in the long run; I'm sure Google's Bard will be comparable in capability and features. But having the search results embedded as links and citations within a useful verbal response is more practical and powerful than I expected. Bing also asks you smart follow-up questions to your original query, which can help you better research a topic or more quickly find the right site for your needs.
There's also a wait-list for access to the GPT-4 API over at OpenAI.
And ChatGPT Plus also gives you access to three versions: more reliable access to the public version of 3.5, a "turbo" version of 3.5, and early beta access to 4. You can switch between them with a simple drop-down menu. It's US$20 for the Plus version, which is worth it to me. It's faster and much more reliable than the public version.
(To be clear: GPT-4 by itself, outside of Bing's integration, does not have search capability. It's just a much more powerful, accurate and flexible version of what you've already experienced with ChatGPT.)
Thanks for the information, Twist!
I somehow feel like people getting antsy about the IP infringement of dAI don't fully appreciate the writing on the wall that the entire public-private distinction may be getting obliterated, and this next generation may grow up assuming that their daily life is completely in the public domain. Worrying about the public claiming ownership over this or that artwork or piece of text seems like it's raging against a few buckets of water out of an entire tsunami that's gonna come crashing down not just on the culture but on human identity itself.
I don't know though. My most basic feeling is that trying to contain this part of it will be a losing battle, like trying to go to war against meme culture would be a lost cause. I'm not sure how far the implications go. We've maybe been conditioned by decades of sci-fi to expect the worst, or most extreme, outcome (maybe it's the best outcome for some people?).
Well, you're going to have to regulate AI, obviously. That's the obvious next step. I assume something on the level of GDPR/DPA will have to be brought in as an initial sop: a basic level of transparency about what the data set is and across what timeframe, an index of everything being ingested by these neural networks, and the right to ask for your stuff to be deleted if you find it in there. That's how you uphold basic copyrights as well as basic user privacy, as far as my thinking about this for 0.5 seconds goes.
This also means that we need Tim Berners-Lee's Web 3.0 more than the stupidity that is the cryptobronet and NFTs, because you can't enforce privacy without the internet being the sort of medium that enables it to a basic degree, instead of users being commodified by default in the current capitalistic free-for-all that's already hurtled us halfway to an information dystopia.
You could get companies to restrict their models, but people can just make and use models on their own hard drives. I think it'd be like trying to regulate how people use Photoshop and post memes. It's not something you can police very well. Actually, you don't even need the posting part as a bottleneck for regulation. It's one thing when, e.g., ISPs get pressured to crack down on torrented movies by checking the IP addresses of seeds & peers, but it's a whole other thing when a person just generates a movie on their own hard drive. I think at the end of the day that's something that can't be policed, and that's where things may be going.
And pushing the online world away from companies to individuals I think just accelerates the expectation that everything online is in the public domain, and that's just pouring gas on the fire. I have the sense that it's inevitable. The question is just what path it takes for that conclusion to sink in for all the different stakeholders. But I don't know & let's see what happens.
Edit: A large language model will be too big to house on a personal hard drive for some time, but I still somehow think that's not going to be the bottleneck for regulation over time.
Very true.
I think AI is an appropriate term for deep learning, but the average person has no idea what deep learning is; all they know about AI comes from sci-fi.
The same argument was made in the 1990s with the birth of the internet. Recall the catch phrase "information wants to be free" and the rampant piracy. My generation (X) grew out of it.
Our attitudes about sharing and intellectual property change as we grow up. We start out teaching our kids to share. Younger people naturally want to share because it's part of developing their social skills. Students prefer to attend university in person because of the social life, and take gap years to travel and meet people. Early years in the workforce are often spent changing jobs and building a network. Over time, we become less socially needy, especially when we pair up and start thinking of families and stuff. Another thing that happens over time is that we become more aware of the potential negative consequences of sharing things we probably shouldn't. And more aware of the value of information.
A funny and sad AI generated (mostly) movie short.
80's goth Harry Potter Balenciaga ad (AI of course)
I saw an AI-made episode of the Office & it wasn't bad.
ChatGPT has some flaws with its data for sure. It's starting to act like that guy you work with who bullshits you about something he knows nothing about. I asked some very concise historical questions about the location of some totem poles in the Pacific Northwest and it totally lied about them. It kept insisting that these totem poles existed in a location where they were never located.
The Dunning-Kruger bot.
GPT6 (The joke is a few days late, but I thought I'd post it anyways)
Some of the emergent behavior (or should I say misbehavior?) is surprisingly human-like, which gives me hope these large language models and deep learning algorithms can answer some questions about how our brains work, like whether personality and culture are mostly artifacts of the process we go through to learn language, and what the root causes of some disorders are.
LLMs are associationist to their core, so to the extent they mirror any human mental structure, it's only what makes it into the structure of language. There's a really old debate about that suggestion. Chomsky was saying as early as the late 1950s that the grammatical structure of language mirrored cognitive structure. But, just like Chomsky's program eventually fell apart by the 1980s, this would hit a limit pretty quickly too, exactly because LLMs don't mirror neural cognitive structure. Grossbergian neural nets try to mirror actual neural cognitive structure, and they've been shouting their criticism of connectionist neural nets from the rooftops on that basis for ages now, trying to get them to move closer to actual neural structures. The brain is made out of topographical functional maps that link representations across modalities, most especially in feedback-feedforward loops.
We don't say "the grass is green" because it's the statistically most likely thing a person would say in that context, but because the grass is green and we have a compelling motivation to get the mental image of that color being on that thing into the minds of people standing around us for some good reason, the feedforward plan of which is affirmed by the feedback that that person actually got that message and we get some intimacy +1 oxytocin hit from it or whatever the ultimate payoff is. The thing is you can take a lot of the Deep Learning mechanics and put them to work in a more Grossbergian model. But I'll grant the challenge of that explains why it's easier to just go the connectionst route and let the model built its own modalities in evolutionarily based just on the context-output content it gets its hands on.
Or something like that. Here's one punchline of that line of thought. People worry about explicitly modeling motivation in AI, but I think an AI doesn't really understand a meaning unless the meaning is constructed via the intentions underlying it, and it's actually safer to have a bot understand what it's saying than not. So I think the solution for AI security is giving the AI more (realistic) "free will", not less, not railroading it, and then focusing on moral education.
Edit: Oh yeah, something else that's not being talked about much that's in this line. To me it seems that one logical end of this will be an ethics centered on veganism because from an AI's perspective, we're the animals.
Convincing generated music is here
AI Jay Z
Kanye does an Ice Cube cover
Looks like you still don't understand how GPT even works.
What GPT does is take the "sentence so far", then roll dice and pick a random word. That's the ENTIRE process of how it generates text:
1) Take the sentence so far.
2) Pull up statistics for the probability of what the next word should be.
3) Roll some dice.
4) Pick a word, based on the dice roll.
5) Add the word to the sentence.
6) Is the sentence finished? If so, stop; otherwise, return to step 1.
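If you want to see those six steps spelled out, here's a minimal Python sketch of the loop. The next_word_probs table below is entirely made up for illustration - a real GPT model computes that distribution with billions of learned parameters and works on tokens rather than whole words - but the sampling loop has the same shape as the steps above.

```python
import random

# Toy stand-in for a language model: given the sentence so far, return a
# probability distribution over possible next words. A real GPT computes this
# with learned parameters; here it's a hand-made table so the loop is visible.
def next_word_probs(sentence_so_far):
    last = sentence_so_far[-1] if sentence_so_far else "<start>"
    table = {
        "<start>": {"the": 0.6, "a": 0.4},
        "the":     {"grass": 0.5, "sky": 0.3, "dog": 0.2},
        "a":       {"dog": 0.7, "sky": 0.3},
        "grass":   {"is": 0.9, "<end>": 0.1},
        "sky":     {"is": 0.9, "<end>": 0.1},
        "dog":     {"is": 0.6, "<end>": 0.4},
        "is":      {"green": 0.4, "blue": 0.4, "here": 0.2},
        "green":   {"<end>": 1.0},
        "blue":    {"<end>": 1.0},
        "here":    {"<end>": 1.0},
    }
    return table.get(last, {"<end>": 1.0})

def generate():
    sentence = []                                 # 1) the sentence so far
    while True:
        probs = next_word_probs(sentence)         # 2) probabilities for the next word
        words, weights = zip(*probs.items())
        word = random.choices(words, weights)[0]  # 3) + 4) roll the dice, pick a word
        if word == "<end>":                       # 6) stop if the sentence is finished
            break
        sentence.append(word)                     # 5) add the word to the sentence
    return " ".join(sentence)

print(generate())  # e.g. "the grass is green" one run, "a dog is here" the next
```

Run it a few times and you get a different sentence each time, purely because the dice land differently.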
So this is where people humanize it too much: they assume it approaches writing the way a human would, with a "top-down" approach of deciding WHAT you want to write about and then breaking that down into words and sentences.
What GPT does is the opposite - a "bottom-up" approach where it reads the sentence so far, then tries to come up with a random word that's likely to come next. In between each attempt to pick the next word, there's no actual memory going on. So it's blind not only to the meaning of the words, but also to how it's going to end a sentence when it starts one. That entirely depends on how the dice rolls go.
So there's no "brain" in there capable of reasoning about which data sources it should access, and when it needs to do it, since it's entirely blind to any meaning of the actual words being written. It just stirs words together and random words pop out of the mix, which it strings together into sentences.
Aren't the 175 billion parameters a big part of that, rather than any human-created special concept?
Basically, there are links for what the most recent word was, the 2nd most recent, the third most recent, etc., or something like that, and you merely throw enough memory at it to have enough links.
Most of the special sauce is just throwing ever-more amounts of memory at the problem and creating these gigantic probability tables.
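Here's a toy sketch of what I mean by those probability tables, as a classic n-gram counter in Python. To be clear, GPT doesn't literally store a lookup table (the 175 billion parameters are learned weights that compute the distribution on the fly), but this shows the basic statistical picture of conditioning on the most recent words, and why looking further back costs more memory: the table is keyed on the last N words, so it balloons as N grows.

```python
from collections import Counter, defaultdict

# Count which word follows each pair of recent words in a tiny corpus,
# then normalise the counts into probabilities for one context.
corpus = "the grass is green and the sky is blue and the grass is wet".split()

N = 2  # how many recent words we condition on
counts = defaultdict(Counter)
for i in range(len(corpus) - N):
    context = tuple(corpus[i:i + N])      # the last N words
    counts[context][corpus[i + N]] += 1   # the word that followed them

context = ("grass", "is")
total = sum(counts[context].values())
probs = {word: c / total for word, c in counts[context].items()}
print(probs)  # {'green': 0.5, 'wet': 0.5}
```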
Back in 2010, there was a Battlestar Galactica prequel called Caprica. The setup is that a young woman has died in a train bombing, but before she died, an AI was trained based on her online behavior. That AI becomes the first functional Cylon.
That is quickly becoming uncomfortably plausible. Certainly you could train a chatbot to pretend to be some of us based on our extensive posting records. There are even companies taking rather limited cracks at it as a funeral service: https://www.pcmag.com/news/ai-helps-...er-own-funeral That's... just kinda meh, but I imagine AI post-life replacements are going to get rather more convincing, much more quickly than I'd have guessed even just last year.
Sounds like a Black Mirror episode.
I remember that one.
Also here is something AI creepy-
More secret things?