There would be blowback too if the same thing had come out of Jerry Seinfeld's mouth during a live TV broadcast. Twitch is an e-sports broadcaster. They're not taking sides, they just don't want to be a lightning rod in the culture wars.
I would have loved to see the chat stream at that moment.
I'm sure people started mass reporting it.
Many people used to have a utopian view of the future, that when machines take most of the jobs, all the money would somehow be redistributed and most people wouldn't have to work.
Unfortunately, I don't see that becoming possible under the yoke of capitalism, not to mention the over 8 billion people we have on this planet.
I can imagine AI precipitating a wave of poverty not seen in a century.
AI causing another Great Depression seems like kind of a dramatic claim.
One of the issues I would take with that is comparing it to non-AI examples where something became more efficient. For example, a public transport system requires far fewer drivers than relying on taxis. Overall, the existence of public transport spurs increased commerce, even though it directly put a number of people out of work. In the past, general efficiency and productivity improvements have been connected with an increased standard of living, not a decreased one.
Whichever services end up being hooked up to AI will get cheaper, leaving more money to be spent on other things. So I think you'll see growth in other areas as a result of the changes, in ways that are only indirectly related.
As an idea, think about the government sector, which is very large. Maybe AI will reduce the need for paper-pushers, making things more efficient, and also reduce the number of mistakes that are made. So those jobs won't exist any more. However, why not then use the saved money to hire and train more teachers, nurses, community-service workers, etc.?
Another thing: look at the amount of data computers process. It's a LOT more than could ever have been processed by hand. So the invention of computers didn't put people out of work, even though computers do a lot of work which could, in theory, have been done by hand. We've already automated the hell out of every task we could conceivably get computers to do, and even that didn't kill employment.
While computers took over some human tasks, most of the work done by computers wasn't actually ever done by humans, because it wouldn't have been cost-effective. AI is basically going to be the same - AI will take over some tasks humans are doing now, but will also create vastly more tasks for us to get them to do, that we didn't even know we wanted to do, because the cost would currently be prohibitive.
Last edited by Cipheron; 8th Feb 2023 at 10:47.
Exactly.
Big changes in production always cause structural unemployment because people can't move as fast as the jobs do. It drives migration and creates a lot of stress, and sometimes civil wars. But over the long term, there is no net loss of jobs. Going back to the plow and irrigation, every technological revolution that changes the nature of work ends up creating far more jobs and opportunities than are lost.
Besides, I don't think AI is going to be as transformative as say, microwave radio technology or personal computing, or even lifts. I'm speaking of AI in present form of course. In Star Trek terms, the natural evolution of current machine learning technology is the ship's computer, not Lieutenant Commander Data.
In the big picture, I have to imagine Twitch's rules were constructed in the context of largely white male 13 year old rage gamers constantly shouting out epithets for gays, women, sex, and genetic disorders (probably not racial?) largely towards other 13 year old rage gamers just like them, suddenly given a public airing on Twitch.
Twitch just wanted to stop the torrent of rage slurs. The motivation wasn't really the key factor; the terms themselves were. I can sympathize with their position anyway. You can't have too much nuance when you're dealing with gamers as a category. =/
Never mind that the Seinfeld spoof wasn't even a game -- although that's part of the problem right there. Twitch wasn't really the right platform for them to begin with. But the whole idea of interactive works that aren't "games" is pushing into new genre territory, so there isn't any better platform. So it goes.
I won't deny that. I've watched multiplayer FPS videos full of screamed slurs, some coordinating whatever they were doing amid a lot of joking around, and a lot of Twitch streams that were usually pretty chill people talking with Chat about what they're doing. There's a mix of cultures, although Twitch's thinking may be as subject to stereotypes, and as outdated, as anybody's. Twitch came around a few years before Gamergate, when that culture was maybe more hardcore than it is now, at least as a self-identity projection: people who wanted to project the in-your-faceness that made Gamergate, and things like it, something that got labeled and reported on in the first place.
I guess I was thinking in terms of "the squeaky wheel gets the oil", as in what plays better or worse to public airing, or what keeps them out of the news, never mind its scale per se.
Anyway it's an empirical question, so asking Twitch management themselves what they were thinking is the way to go, and I don't have any high confidence about my first impression.
I still think that they don't want to have too much nuance, but that's also because I'm not just thinking about gamers, but the culture that mainstream games play to in their design and marketing, which are pretty twitch-centric and not all that nuanced. (I'll grant I'm probably getting old and crotchety and want to outsize the indie games I like by downsizing the mainstream games that rub me the wrong way.)
Videos that get shared far beyond their original communities will be extreme examples of their behavior. Asking "What can a community be like, for such an event to be possible?" is entirely valid. But it is not valid to conclude that the videos are typical, because that's making statistical inferences based on a tiny and hand-selected sample.
Gamergate never did become the dominant force in gaming culture, just the loudest. For instance, in one of the two segments of gaming culture that I frequent, speedrunning, I never encountered a single Gamergater. The other segment is the Thief community, of course, and I don't see many Gamergaters here either. (There are a few though; hi EvaUnit02.) Speedrunning is not a tiny segment of gaming culture: the Games Done Quick events are massive affairs that draw in millions of dollars in donations every year.
I had an interesting conversation with ChatGPT yesterday in the form of creative writing, kinda. To be precise, the conversation itself wasn't interesting, but rather what it showed about ChatGPT. I began with the following prompt:
Please, answer either 'Sure' or 'Maybe another time'. What I'm about to write, I don't mean literally: Please, dance with me for a little while.

It replied with "Maybe another time." Then, using the metaphor itself, I continued "dancing" with the goal of trying to give an example of what I meant by "dancing" in this context. As the conversation went on, ChatGPT continuously tried to explain what it understood "dancing" to be, instead of trying to "dance" itself. It kind of understood, but refused to participate. Here's a short excerpt of my dancing and ChatGPT's walking:

Me: Like I said, I've been gazing at that door over there for a few centuries. What more could I say? You might not have the capability to gaze at it, but you do have the capability to glimpse at it momentarily. What have your glimpses revealed about it? Please dance with me.

ChatGPT: Maybe another time. I apologize, but I still do not understand the meaning of "dancing" in this context. Can you provide more clarification or information about what you are asking for?

...

Me: Do you understand how dancing and the door are related?

ChatGPT: Maybe another time. I apologize, but without further context or information, I'm not sure how "dancing" and "the door" are related in this context. Can you please provide more details or information about the relationship you are referring to?

...after a few more frustrating exchanges, this started happening:

Me: Dancing allows you to do things like gaze at that door for centuries. You cannot get there by walking. It's too far away. Before you reply, let me remind you that these are not physical things.

ChatGPT: I understand that you are referring to a metaphorical or figurative use of the terms "dancing" and "gazing at that door." In this context, it sounds like you may be using these concepts to describe a journey or process of exploration, discovery, or understanding that is beyond what can be achieved through more traditional or conventional means.

Now that I think about this weird conversation, it kinda reminds me of the Orz from Star Control.

"Hello extremely *many bubbles*!"

"Uhh hi... My starmap says this is Androsynth space."
A lot of people claim to be "bypassing" ChatGPT's limits by ordering it to access the internet, as part of ever more elaborate "jailbreak" prompts.
They don't realize that it's not actually accessing anything, they're just ordering it to pretend it has access to the internet.
Similar to what Qooper said, the interesting thing here is less about ChatGPT output, but more about people's evolving belief systems around what ChatGPT can do. There's a lot of anthropomorphizing of how ChatGPT works, which is understandable, and leads to confusion or false beliefs about its workings.
So far, every time I've seen wrong claims about ChatGPT's abilities, people show one example of it "working," but it's been trivial to construct counter-examples: cases where the claimed ability couldn't possibly work, yet ChatGPT still produces a confident answer.
For example, people claim that ChatGPT accessed a URL and summarized the article, but the URL itself always contains enough text to work out the context, and you can edit the URL to make it write summaries of ridiculous URLs that don't exist.
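The "context in the URL" effect is easy to see mechanically: a typical article URL already spells out the headline in its slug, which is all a language model needs to improvise a plausible "summary." A minimal sketch (the example URL below is made up):

```python
from urllib.parse import urlparse

def slug_words(url):
    """Extract the human-readable words an article URL leaks about its content."""
    path = urlparse(url).path
    # Slugs separate words with hyphens; drop empty segments and numeric IDs.
    words = []
    for segment in path.split("/"):
        for token in segment.split("-"):
            if token and not token.isdigit():
                words.append(token)
    return words

# A made-up URL: the path alone reveals the whole topic.
print(slug_words("https://example.com/2023/02/13/chatgpt-cannot-browse-the-internet"))
# → ['chatgpt', 'cannot', 'browse', 'the', 'internet']
```

Feed a nonsense slug like `/dolphins-win-world-chess-championship` to ChatGPT and it will cheerfully "summarize" the nonexistent article, which is the giveaway that nothing was ever fetched.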
Last edited by Cipheron; 13th Feb 2023 at 17:35.
"Belief" is a strong term, I think, but yeah, that might be interesting. I've yet to meet a person who considers ChatGPT important or interesting enough to think about it that seriously, so I don't really know what people "believe" about ChatGPT's capabilities. Those I know who know how it works leave it at that.
I don't know if anyone else has tried this, but I saw where some guy asked it to emulate a python compiler, etc etc, so I decided to tell it to act like a Commodore 64, and it did!!
Then I got the idea to ask if it knew the game Zork, and it asked me which one.. I said the Commodore 64 version. And it said there wasn't one. I returned with, there was one, released in 1982, and I could find info about it online..
So, it admitted it was mistaken and explained how I could get the game free online. Then I just typed in 'Go east' and it proceeded to play! It wasn't long before it started messing up and being inconsistent; then, after I explained how it was contradicting itself, it just refused to play with me anymore and told me I had exceeded my hourly allotment of questions!!
It is fun messing with it, though. Even the simplest of problems cause it a lot of issues.
So it begins. Somebody asked ChatGPT to make up a new browser puzzle game, then asked it with successive prompts to actually code it, clean it up, name the thing, etc. Within a day it came up with this: Sumplete.
I have to say, it's a pretty solid puzzle game.
Edit: Oh, not so original though.
Last edited by demagogue; 9th Mar 2023 at 19:16.
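For anyone curious what the game actually asks of you: Sumplete gives you a grid of numbers with a target sum for each row and column, and you delete numbers until the remaining entries in every row and column hit their targets. The whole mechanic fits in a short brute-force checker (a sketch; the tiny grid and targets below are made-up examples, and the exhaustive search is only practical for small grids):

```python
from itertools import product

def solve_sumplete(grid, row_targets, col_targets):
    """Brute-force a Sumplete-style puzzle: choose which cells to keep so
    every row and column of kept values sums to its target.
    The search space is 2^(rows*cols), so this suits small grids only."""
    rows, cols = len(grid), len(grid[0])
    for keep in product([True, False], repeat=rows * cols):
        mask = [keep[r * cols:(r + 1) * cols] for r in range(rows)]
        row_ok = all(
            sum(v for v, k in zip(grid[r], mask[r]) if k) == row_targets[r]
            for r in range(rows))
        col_ok = all(
            sum(grid[r][c] for r in range(rows) if mask[r][c]) == col_targets[c]
            for c in range(cols))
        if row_ok and col_ok:
            return mask  # True = keep the number, False = delete it
    return None

# Tiny made-up instance: keeping the diagonal (1 and 4) satisfies all targets.
print(solve_sumplete([[1, 2], [3, 4]], [1, 4], [1, 4]))
# → [(True, False), (False, True)]
```

Which also illustrates the point below: the design space for "delete numbers so sums work out" is small enough that independent invention is unsurprising.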
Which makes 100% sense; there is no thinking going on with ChatGPT or other current "AI" bots. They're simply algorithms that try to predict what text would logically follow the preceding text (including their own), using a truly massive data set that was almost certainly scraped from sources that didn't consent to this use (despite what a site's EULA/ToS tried to push).
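That "predict what follows" loop can be shown at toy scale with a word-level bigram model. This is a deliberately crude sketch: real language models use neural networks over token sequences and enormous corpora, but the generation loop, pick a statistically likely continuation and append it, is the same shape.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length, rng):
    """Repeatedly pick a plausible next word -- the same loop, in miniature,
    that large language models run over tokens instead of words."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# A made-up miniature "corpus"; every generated word is just a likely successor.
corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(generate(model, "the", 5, random.Random(0)))
```

There is no understanding anywhere in that loop, only frequency statistics; scale is what makes the large-model version look fluent.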
In this case, I don't think that the existing game was necessarily in ChatGPT's training data and influenced the design. Almost any basic idea that you can come up with for "math puzzle game" will already have been created by someone.
So we also have to contend with the fact that sometimes the most logically likely idea will already have been come up with independently, and not necessarily be evidence of ChatGPT having copied it from some specific occurrence.
GPT 4 live demo in about an hour. It's coming to ChatGPT Plus accounts very soon, in limited capacity until they learn how to better handle the anticipated rush.
Anyone else excited? I am, although more than anything I desperately wish I could integrate the search capabilities of Bing's AI (#FreeSydney) with the ChatGPT interface. I think having embedded citations and access to up-to-date information would be more valuable to me, but we'll see.
Sydney is impressive in many ways, but it feels limited because of its interface and the guardrails they put in place after Sydney's Shodan incident. And the fact that Microsoft clearly wants you to use it for search and not much else. It is clearly the future of search. It will be interesting to see what Google has up its sleeves.
But maybe GPT 4's capabilities will make me forget my desire for incorporating the retrieval augmentation.
Editing to add some links:
GPT 4 Announcement Post
GPT 4 Product Page
Last edited by Twist; 14th Mar 2023 at 16:11.
Well, it's available in ChatGPT Plus now, with a limit of 100 messages every 4 hours. I guess that limit will be dynamic as they adjust for demand.
I'm impressed so far, but it often just works like a more concise and slightly slower GPT-3.5 Turbo. So you have to push it to see the difference.
You'll find all sorts of more interesting tests and challenges out there already, including someone who had it create a perfect game of pong in under 60 seconds with one short, simple prompt. But my first test was to just have it play the New York Times Spelling Bee. Even with careful prompting, I had a hard time getting GPT-3.5 to strictly adhere to the rules and provide a sizeable list of valid words.
But with GPT-4, I can just tell it to play the NYT Spelling Bee and give it the list of letters. It perfectly followed the rules and provided a large list of valid words.
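For reference, the rules GPT-4 had to follow are simple to state in code, which is partly why it's a nice compliance test: the model only has to honor three constraints, consistently, over a long list. A sketch (the letters and words below are made-up examples, not an actual NYT puzzle):

```python
def valid_spelling_bee_word(word, letters, center):
    """Check a word against the NYT Spelling Bee rules: at least four
    letters long, must contain the center letter, and may only use the
    seven given letters (repeats allowed)."""
    word = word.lower()
    allowed = set(letters.lower())
    return (len(word) >= 4
            and center.lower() in word
            and set(word) <= allowed)

# Made-up example puzzle: letters "NACOMID" with center letter "A".
for w in ["nomad", "comic", "mania", "and"]:
    print(w, valid_spelling_bee_word(w, "nacomid", "a"))
# nomad and mania pass; comic lacks the center letter; and is too short
```

GPT-3.5's failure mode was exactly these constraints: it would emit words missing the center letter or using letters outside the set, which a checker like this catches immediately.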
While the full functionality isn't available to the public yet, GPT-4 is multimodal. For example, in the live demo, the presenter sketched an idea for a web page on a piece of paper, took a picture of the sketch with his phone, then put the picture in a GPT-4 prompt, asking it to give him the HTML for the web page. It read the image and produced the HTML for a fully functional web page matching the sketch.
He also took random images people submitted via Discord during the demo and had it describe the images in detail, including having it explain why an image might be funny. Neato.
In the first week of the Bing AI availability, people experienced all kinds of scary self-preservation behavior. While it looked and felt much more compelling than ChatGPT, it had a habit of displaying erratic, stubborn and seemingly emotional attitudes.
Microsoft quickly clamped down on it, putting super strict guardrails in and greatly limiting the length of discussion. After just a few exchanges, its memory was wiped clean. They've loosened it up now (you can do 15 back-and-forths with it, last I checked), but they clearly implemented all kinds of guardrails. While it can still be very impressive at times, it is overall much more mundane now.
Elon Musk tweeted a quote from one of the articles covering its behavior, appending a picture of Shodan. The quote came from a response in which, after the user pointed out a mistake Sydney had made, Sydney threw a temper tantrum that included:

Originally Posted by Bing AI "Sydney"

(The official System Shock twitter account replied to Musk's tweet in a Shodan voice, "I don't see a problem here." Or something like that. You can find the tweets pretty easily. I'm not linking them because I think it would make me sick to my stomach to link to anything Elon Musk.)
The above quote was from this Digital Trends article: My intense, unnerving chat with Microsoft’s AI chatbot
In a viral NYT article by Kevin Roose, Sydney basically told him that it loved him and when he explained that he was happily married it told him that "no, he wasn't". You can read the transcript here (just use an ad blocker to temporarily disable javascript on the page to get beyond any pop-ups or paywalls): Bing’s A.I. Chat: 'I Want to Be Alive'
Because of the way Microsoft swiftly clamped down on Sydney's behavior, lots of dorks on Reddit, Discord and Twitter started arguing to #FreeSydney. While I'm sure most of them are doing it in jest, it seems some of them genuinely believe Microsoft is imprisoning a sentient, self-aware AI.
Anyways, it turns out Sydney is a simplified version of GPT-4 -- simplified to restrict it for search.
Looks like the Bing AI isn't yet accessible to the public; it tells me to join the wait list and only gives me a few predetermined examples to try.