I wasn't on the wait-list for very long, and I've had access for a couple weeks now. So go ahead and join the wait-list, if you're interested. Even though it is early and still changing day-to-day, I'm confident this is the future of search. Not that I think Bing will overtake Google in the long run; I'm sure Google's Bard will be comparable in capability and features. But having the search results embedded as links and citations within a useful verbal response is more practical and powerful than I expected. Bing also asks you smart follow-up questions to your original query, which can help you better research a topic or more quickly find the right site for your needs.
There's also a wait-list for access to the GPT-4 API over at OpenAI.
And ChatGPT Plus also gives you access to three versions: more reliable access to the public version of 3.5, a "turbo" version of 3.5, and early beta access to 4. You can switch between them with a simple drop-down menu. It's US$20 for the Plus version, which is worth it to me. It's faster and much more reliable than the public version.
(To be clear: GPT-4 by itself, outside of Bing's integration, does not have search capability. It's just a much more powerful, accurate and flexible version of what you've already experienced with ChatGPT.)
Thanks for the information, Twist!
I somehow feel like people getting antsy about the IP infringement of dAI don't fully appreciate the writing on the wall: the entire public-private distinction may be getting obliterated, and this next generation may grow up assuming that their daily life is completely in the public domain. Worrying about the public claiming ownership over this or that artwork or piece of text seems like raging against a few buckets of water out of an entire tsunami that's going to come crashing down not just on the culture but on human identity itself.
I don't know though. My most basic feeling is that trying to contain this part of it will be a losing battle, in the same way that trying to go to war against meme culture would be a lost cause. I'm not sure how far the implications go. We've maybe been conditioned by decades of sci-fi to expect the worst, or the most extreme (maybe it's the best for some people?).
Well, you're going to have to regulate AI, obviously. That's the obvious next step. I assume something on the level of GDPR/DPA will have to be brought in as an initial sop: a basic level of transparency about what the data set is and what timeframe it covers, an index of everything being ingested by these neural networks, and the right to ask for your data to be deleted if you find your stuff in there. That's how you uphold basic copyrights as well as basic user privacy, as far as my thinking about this for 0.5 seconds goes.
This also means that we need Tim Berners-Lee's Web 3.0 more than the stupidity that is the cryptobronet and NFTs, because you can't enforce privacy without the internet being the sort of medium that enables it to a basic degree, instead of users being commodified by default in the current capitalistic free-for-all that's already hurtled us halfway to an information dystopia.
You could get companies to restrict their models, but people can just make and use models on their own hard drives. I think it'd be like trying to regulate how people use Photoshop and post memes. It's not something you can police very well. Actually, you don't even need the posting part as a bottleneck for regulation. It's one thing when, e.g., ISPs get pressured to crack down on torrented movies by checking the IP addresses of seeds & peers, but it's a whole other thing when a person just generates a movie on their own hard drive. I think at the end of the day that's something that can't be policed, and that's where things may be going.
And pushing the online world away from companies toward individuals, I think, just accelerates the expectation that everything online is in the public domain, and that's just pouring gas on the fire. I have the sense that it's inevitable. The question is just what path it takes for that conclusion to sink in for all the different stakeholders. But I don't know & let's see what happens.
Edit: A large language model will be too big to house on a hard drive for some time, but I still somehow think that's not going to be the bottleneck for regulation over time.
Last edited by demagogue; 16th Mar 2023 at 19:24.
Very true.
I think AI is an appropriate term for deep learning, but the average person has no idea what deep learning is; all they know about AI comes from sci-fi.
The same argument was made in the 1990s with the birth of the internet. Recall the catchphrase "information wants to be free" and the rampant piracy. My generation (X) grew out of it.
Our attitudes about sharing and intellectual property change as we grow up. We start out teaching our kids to share. Younger people naturally want to share because it's part of developing their social skills. Students prefer to attend university in person because of the social life, and take gap years to travel and meet people. Early years in the workforce are often spent changing jobs and building a network. Over time, we become less socially needy, especially when we pair up and start thinking of families and such. Another thing that happens over time is that we become more aware of the potential negative consequences of sharing things we probably shouldn't, and more aware of the value of information.
A funny and sad AI generated (mostly) movie short.
80's goth Harry Potter Balenciaga ad (AI of course)
I saw an AI-made episode of the Office & it wasn't bad.