I logged onto ChatGPT a couple of days ago and it's an interesting, but limited, tool for doing all sorts of things. I suppose it's good that it's limited in what it can do. After much exploration I've found it to be a conversational encyclopedia, which makes it more interesting in a lot of ways. When you chat about a subject like coal mining, one of my favorite subjects, it presents a lot of data in a conversational way that makes it more easily digestible.

It does sometimes make incorrect statements, and you can challenge them. I was chatting about the Peanuts comic strips and talking about Snoopy's siblings, and it claimed that his siblings weren't in the strip, but I argued my point and it admitted being wrong. The program does have a few disclaimers, one of them being that it will occasionally make mistakes. I don't know if that is on purpose or a product of the way I was making statements or asking questions.

The biggest disappointment was that when I asked it to write a backhanded compliment, it refused. It was also interesting to chat with the AI about the differences between it, Siri, Alexa, and HAL 9000. What are your thoughts if you've tried it?
There's like entire posts about ChatGPT in the AI thread below this.
Crap.
I have actually found the "Improve Text" parameter very helpful in multiple contexts, FWIW. I would love to hear how others are using the December update.
Well a thread for ChatGPT specifically isn't a bad idea. The idea of AI art and ChatGPT output are in fact not quite the same, and maybe we can put the Chat output here.
Let me start with a fresh prompt to show what ChatGPT can do
I wrote:

write Python code for a simple game, where you create an RPG character from a few prompts (very simplified rules and class selection) then it rolls random monsters for you to fight until you die.

It then just spat this complete program out:
https://pastebin.com/cix5GYpv
So how far can I actually push this game in Python without writing a line of code myself? Let's see.
You can run this code and it does indeed prompt you to type a name for your character, then type in your class name, then it automatically runs battles against randomly generated creatures. But there's only one creature, the "orc", and it doesn't pause between combat rounds.
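For anyone who doesn't want to click through, here's a minimal sketch of the kind of structure that v1 code has (the pastebin has ChatGPT's actual output; all function names and stat values below are my own illustrative guesses, not taken from it):

```python
import random

def make_character(name, cls):
    # Very simplified: every class gets the same base stats in this sketch
    return {"name": name, "class": cls, "hp": 20, "attack": 4}

def make_monster():
    # As noted above, v1 only ever rolled one creature type: the "orc"
    return {"name": "orc", "hp": random.randint(8, 14), "attack": random.randint(2, 4)}

def battle(character, monster):
    """Fight to the death; returns True if the character survives."""
    while character["hp"] > 0 and monster["hp"] > 0:
        monster["hp"] -= random.randint(1, character["attack"])
        if monster["hp"] > 0:
            character["hp"] -= random.randint(1, monster["attack"])
    return character["hp"] > 0

def main():
    # Interactive loop: prompts for name and class, then fights until death.
    # Note there is no pause between rounds, which was the complaint above.
    character = make_character(input("Character name: "), input("Class: "))
    wins = 0
    while character["hp"] > 0:
        if battle(character, make_monster()):
            wins += 1
            print(f"You defeated an orc! ({wins} wins)")
    print(f"You died after {wins} victories.")

# Call main() to play interactively.
```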
After a few goes and adding to the prompt:
Ok, firstly, please combine the "Character" and "Monster" objects so they're the same class. That will simplify the code. The game should pause after each combat round, so the player can read the text. Also, instead of typing in a class name, have a data-structure with class choices in it, with each class having pros and cons. Monsters should have a similar table of types that gets randomly selected from. Also, make sure the data tables are made as compact as possible by omitting type names in favor of packing values in arrays. Plus, make sure that the character stats and monster stats are displayed at the start of each round of combat.

Version 2 of the game:
https://pastebin.com/RGqaTzPP
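The data-table idea from that prompt (packing stats into arrays and omitting field names to keep the tables compact) might look roughly like this; the class and monster values are made up for illustration, not taken from the generated code:

```python
import random

# Packed stat arrays: [hp, attack, defense]. Omitting per-field names
# keeps the tables compact, as the prompt requested. Values are illustrative.
CLASSES = {
    "warrior": [30, 6, 3],
    "rogue":   [22, 8, 1],
    "mage":    [18, 10, 0],
}

MONSTERS = {
    "orc":      [14, 4, 1],
    "goblin":   [8, 3, 0],
    "skeleton": [12, 5, 2],
}

def make_combatant(name, stats):
    # One constructor serves both characters and monsters, since the
    # prompt asked for the two objects to be combined into one class.
    hp, attack, defense = stats
    return {"name": name, "hp": hp, "attack": attack, "defense": defense}

def random_monster():
    name = random.choice(list(MONSTERS))
    return make_combatant(name, MONSTERS[name])
```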
Well, what's next? Let's see how GPT copes with this next bit. There needs to be healing and treasure, and a win condition. Here's a more complex prompt with a lot of stuff that would in fact be time-consuming and bug-prone for a human to program in.
ok add healing potions to the game. the character will start with 3 healing potions, and after winning each battle they should find a random amount of gold, plus a 25% chance of finding another healing potion. Before each round of fighting, the player should have options. Include Fight, Heal, Run. Run should be 50/50 to get away. Heal is only shown if they have healing potions. Potions heal 50% of total health. The automatic rest after a battle heals a random amount from 25%-50% of health. Make sure to pause when needed to let the player read the text. monsters should also have a small chance to run away if badly hurt. they should still drop treasure if they run away. Also, the game should have a win condition. Start by describing the character trapped in a dungeon, and have some random flavor text between each fight. if they win 5 battles, they should emerge to the sunlight, write flavor text as needed.

That got most of it, and the rest were tweaks for the rules and some edge cases, which got this mostly finished game:
https://pastebin.com/uSFCZk4J
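A few of those v3 rules (the conditional Heal option, potions healing 50% of total health, the 50/50 run chance) can be sketched as small functions. This is my own illustrative sketch of the rules as stated in the prompt, not the pastebin code:

```python
import random

def player_options(potions):
    """Menu shown before each round: Heal only appears if potions remain."""
    options = ["Fight", "Run"]
    if potions > 0:
        options.insert(1, "Heal")
    return options

def drink_potion(hp, max_hp):
    """Potions heal 50% of total health, capped at the maximum."""
    return min(max_hp, hp + max_hp // 2)

def try_run():
    """Running away is a 50/50 chance, per the prompt."""
    return random.random() < 0.5
```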
Some of the stuff in there is stuff I specifically asked it to do, but a lot of it is just stuff that GPT inferred it should do based on context, like detailing your treasure and potions left when you leave the dungeon, printing a message when you find a healing potion, and the flavor text when you rest after a battle. I didn't tell it to do any of those.
However, this was a quick and dirty one, and the code grew bigger than the free ChatGPT allows for output in one response, so I had to ask it to 'continue' to get the whole program each time. You get better results by asking ChatGPT to break things down into functions; then you can focus it on improving just one function at a time.
ChatGPT should be its own thread, because I think this is going to become much, much bigger than AI-generated art. This tech is going to replace current search engines as the gateways to the internet, which also means that it's going to be huge business. Anybody ready for a ChatGPT-powered Bing? Google is in deep shit.
It's worth noting that ChatGPT is getting banned from some educational institutions, with the most publicized story being all public schools in NYC banning use of the bot. The main concern is the potential to plug in a homework assignment and get back an answer that doesn't need editing.
There are several concerns from scientific circles as well, given that the replies ChatGPT comes up with can be nonsensical, based on misunderstandings, or even on material that has been proven false. And there are obviously several people who will try to pass off a ChatGPT article as their own work for a variety of reasons, like passing a written exam they didn't study for or trying to expose a lack of peer review.
Non-paywalled version of that
https://web.archive.org/web/20230106...s-ban-chatgpt/
That made some interesting points, similar to a post I just wrote elsewhere in response to a comment that AI art could "stifle human creativity". I'm using pastebin to avoid a wall of text, but I'll include one excerpt:
https://pastebin.com/SF44rG4j
What I wrote gels with the idea from the article that graphing calculators were initially viewed as cheating, but now using them is seen as just another skill set to learn. Similarly, using the AIs will be a skill set that can be mastered, and the output itself usually needs to be "transformed", as in my post where I describe my own experiences using ChatGPT to write song lyrics, which I then write the music for and sing/perform on guitar.

A lot of people probably think the process is that you just type in random prompts and keep pushing the "generate" button until, purely at random, you get a "good output" and you're completely done. They think this because they've only dabbled in the process, and that's how *beginners* do it, so they assume that's how everyone else does it.
Yeah, those are definitely issues, and part of it is because ChatGPT has boilerplate to sound "authoritative" even if it doesn't know what it's talking about. As for general AI articles passed off as people's work, that's not really an issue if the article is good or entertaining. So for journalism, I have no problem with them using AI tools. Maybe they'll have fewer grammar mistakes.
The thing about student assessment is a valid concern. But I think the idea that "someone will pass their history exam without studying!" is verging on the moral panic side of the debate, similar to how people were concerned that just being able to "Google the answer" would do the same thing. There are good students and bad students, and the bad students funnily enough are also bad at using tools like Google effectively. The same will probably be true in the AI era.
It will be an interesting test of that prediction if the students in areas banned from using ChatGPT end up having *less* modern media literacy than students in areas where it wasn't regulated. Some people will still clearly excel given the available tools of the day, and assessments will tend to reflect that; however, the means of assessment, and our thinking about what it is we're really trying to teach, will evolve.
Last edited by Cipheron; 7th Jan 2023 at 19:18.
I spent the day drawing. With a pencil. Soooooo sort of the opposite of this thread. I might even write someone a letter. With a pen. On paper. From thoughts in my own brain.
Does anyone even remember the joy of getting a hand written letter in the mail? You are looking through all the bills when out of nowhere there is a letter from someone you know. Whoa. Crazy shit. Crazy delightful shit.
I cannot remember the last time I got a handwritten letter. It was probably from my grandmother, at least 20 years ago. She was a big letter writer, with some pretty awful handwriting, and I think all those years deciphering her letters helped me enormously with reading doctors' handwriting. I write cards occasionally, but can't remember the last time I got one of those either.
I do miss it though.
It's just another tool in the toolbox and not a replacement for human creativity. I find it useful to bounce ideas off of and to help make lists to break that writer's block. It definitely can't replace human creativity at this point, but I can see people being lazy and trying to use it to replace their own work and creativity.
I should mention that I sent out actual Christmas cards this year and I think that it surprised a lot of people.
Last edited by mxleader; 8th Jan 2023 at 01:16.
I used to string Christmas cards together in a long line as a decoration during the holidays. I also used to go out and chop down a tree. Now I just pull the little fiber optic tree from a box and plug it in. It's not the same as when there were kids at home. Then you were making memories. Now it's hoping nobody dies and fucks up the holiday. If I die at Christmas then just freeze dry my ass and bury me come April.
But yeah, Kevin is going to get a surprise. Not freeze dried me but a letter like I used to do. Sure we can talk any time we like. It used to cost dearly and now it's just another call with no long distance bill. But back in the day I wrote the hell out of some letters. And one out of nowhere? GTFO. I might even send a homemade CD like I used to. Sentimental shit is great. If you still have a grandparent I bet they would love this stuff. I know I would.
The main problem I see here is that ChatGPT is reasonably adept at figuring out what a multi-stage task is and providing a reasonably close answer, whereas with Google you need to do several searches and then filter the various possible answers yourself.
How each method will turn out will obviously depend on how the original question is posed and what is expected as demonstrated by your earlier game example in this thread.
Great 15-minute video for an inkling of how this stuff works:
https://www.youtube.com/watch?v=gQddtTdmG_8
As for being wrong, that's necessary for it to also have creative capacity. An AI built to never be wrong would also not be capable of generating original statements, since for any original statement there's always a non-zero chance that it's actually wrong.
Basically, it's using inference and interpolation to create all the statements it gives you, and the process by which GPT creates novel correct statements is in fact identical to the process by which it creates novel incorrect statements; it's just that you only notice the process when it gives a "wrong" answer. So there's no easy fix for something like that.
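To illustrate the point about correct and incorrect statements coming from the same process, here's a toy sketch of the sampling step at the heart of a language model: the next token is drawn from a probability distribution, and nothing in that step distinguishes a true continuation from a false one. The logit values and candidate tokens below are entirely made up:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Draw one token from a toy next-token distribution via temperature-scaled softmax.

    The same draw produces both plausible-and-true and plausible-but-false
    continuations; the model has no separate truth check in this step.
    """
    scaled = [v / temperature for v in logits.values()]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # numerically stable softmax
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(list(logits), weights=probs, k=1)[0]

# Toy distribution for: "Snoopy's siblings appeared in the ___"
logits = {"comic strip": 2.0, "TV specials": 1.5, "movies": 0.5}
```

Lowering the temperature sharpens the distribution toward the most likely token; raising it makes rarer (and more often wrong) continuations more likely, which is one way "creativity" and "errors" trade off.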
--
It's the same technology used to create "new" faces. When a face generator interpolates between "real" faces to make a new face, that is in fact an "untrue" face, since it doesn't correspond to any person who actually exists.
So the sentences about Snoopy are like those fake faces. They're sentences that *could* be true in some reality, just as the faces it generated are people who *could* exist.
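The interpolation idea can be shown in miniature: blend two latent vectors and you get a point that corresponds to neither original. The tiny three-element vectors here are illustrative stand-ins for the high-dimensional latent codes real face generators use:

```python
def interpolate(a, b, t):
    """Linear interpolation between two latent vectors: t=0 gives a, t=1 gives b."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

face_a = [0.2, -1.1, 0.7]   # latent code for one "real" face (made-up values)
face_b = [-0.5, 0.3, 1.4]   # latent code for another
new_face = interpolate(face_a, face_b, 0.5)  # a face that belongs to no one
```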
---
One problem is that if anything is "too obvious" we don't generally write that down. So something like GPT merely never gets those facts as input.
One possibility is that there were in fact some cartoon-only characters, and because that was a *notable fact*, it was written down somewhere and ended up being included in GPT's data. Then GPT has just inferred that it can combine the tokens "The siblings were in Snoopy" with the statements it's seen for some other characters which say "Character X was in Snoopy but not in the comic book". It's not dealing with facts, but abstract word-tokens, and in a lot of similar cases the same inference rules will allow it to generate CORRECT inferences. Truth isn't encoded in the system; that's an external quality you assess by comparing the output to the "real world".
So there wasn't anywhere that anyone expressly told GPT that, unless specified otherwise, Snoopy characters are by default in the comic books. This may seem obvious if you know the context of Snoopy, but to a machine whose sum total of context is the statements we give it about Snoopy, it's not actually obvious at all, and if the only time we reference appearances in the comic books is to note the EXCEPTIONS, then we're in fact giving it bad data.
Last edited by Cipheron; 9th Jan 2023 at 08:35.
Another thing to keep in mind is that GIGO (garbage in, garbage out) applies. It's pretty evident that one thing that sets ChatGPT apart from previous efforts is the colossal size and breadth of its training data. It looks like they threw everything they could get their hands on into it. Regardless of how good the algorithms are, its answers will only be as correct as the sources it's pulling from.
It's the same risk with web search. But at least with search, the results are displayed in context, which helps us figure out what weight to give to the information. Before ChatGPT can be used for academic or research purposes, it will need to be able to cite the sources it used when compiling its answer. I expect there will be copyright challenges to overcome as well.
I don't get why people are saying this will kill Google. ChatGPT is not a search engine. It can't pull up images, you can't enter quoted text and ask it to give you a link to the source, it can't pull up websites, etc. Sure, Google will pick answers from random websites about general questions and put them at the top, but otherwise it's apples and oranges.
There's already a parse engine underneath Google, and ChatGPT's is better, so I agree it would make for a better search-engine parse engine once the rest of the infrastructure is built around it. But I don't think Google is going to be overthrown by it, just because I can foresee Google making the equivalent tech in-house.
But I think the writing on the wall is you're going to have one master chat engine that handles everything, search engine, running your apps & games, browsing videos and shows, linked up with your house and cars, taking care of little handyman tasks and chores, everything. Whoever gets on the ground level of that tech is going to be the tech behemoth for the next generation.
We have much of that already, and people don't want it. They use it to play songs and not much else, and the systems are losing money.
This might offend some people, but the bottom line is we have too much, and with time we'll have less and less appreciation for simpler things. AI just took that to a new level.
Too much info, too much art, too much music, too many different movies, games and shows.
I remember when innovative things had a huge impact on the world. Now it's news for a day, and the next we move on to the next trend.
We invented a fount of knowledge (composed of the internet, portable devices, social media, and now AI), and instead of sipping from it, we dove in and willingly drowned.
https://www.geekwire.com/2023/micros...on-could-work/
When Microsoft was working on Cortana, they were also working on something they called the Bing Concierge Bot. It was supposed to be a productivity agent that would communicate with the user via text over a conversation/messaging platform, and try to do what a human assistant would do, using the web of course. It went dormant, then in 2019 Microsoft quietly made a $1B investment in OpenAI. The leak last week said that Microsoft has already been working on integrating ChatGPT into the Bing platform.
ChatGPT is a search engine, and a natural language processor, but also a lot more. Google search can give you answers to basic questions, especially ones that are commonly asked. But anything more than a quick question generally requires a human to review the resulting hits for relevance, and compile information from multiple sources to complete a task. The big deal with ChatGPT is not just the breadth of its training data set, but the way it compiles, merges, and adapts the information it was trained on to provide relatively complete answers.
I'll give you an example: recipes. I like to cook and I like variety. I don't like to make the same things the same way over and over. I often find myself with a rough idea of what I want and then hit Google search. I get back pages and pages of recipes, most of them redundant. I have to spend time skimming each recipe looking for what's unique about it (if anything) and how well it suits my intentions. While looking them over, I'll rough out a recipe with pen & paper, taking the relevant bits I like from recipes I find interesting. For a complicated or large meal, I could be at it for an hour. With ChatGPT, I can explain my recipe goals and constraints, and skip all that scrolling and skimming and note taking. It doesn't always give me a usable answer, because it doesn't understand flavor balance, but I've cooked two of its answers so far with only minor modifications.
Keep in mind too that ChatGPT is just a public test, and it's been trained on text only; it doesn't have live access to the web. I assume that's happening with the Microsoft project.
Their intention is to go beyond just providing information, including taking action for the user. Suppose I need to make a business trip to the LA area. Right now, I'd start by pulling up Google Maps to see where the work location is and which airport is going to get me there the quickest, avoiding traffic jams if possible. Then I'd hit the travel search engines looking for flights, a hotel, a car. Before picking a hotel, I'd go back to Google to see what restaurants are in and/or around it. I'm always looking for good SE Asian (especially Thai) and Southern Italian restaurants. And then I'd book it all, and monitor my email for confirmations. Where ChatGPT is going, all of that will be done for me by my agent in a few minutes of messaging back and forth while I'm multitasking.
I'd love to have my virtual agent for organizing big meetings too. The reason why major companies across all industries are making investments in machine learning is not to produce individualized creative works for consumers. It's for the productivity improvement. The last time we saw a rapid exponential increase in non-farm labor productivity was during the first 10 years of the web. Integrating AI/ML into ordinary business is going to be the next big one, and each employee having their own virtual admin is just one of the ways.
Last edited by heywood; 9th Jan 2023 at 18:40.
ROFLMAO
Welp, it was bound to come to this sooner or later.
ChatGPT is the future. It is the best AI tool. I have used multiple AI tools, but the content it generates is amazing. Long story short, it dresses up the content so it reads like human-written content.
Someone got ChatGPT to write Steve Jobs selling a regular stick as if it's an Apple product. That was ok, but I thought I could dial that up a notch.
It took a lot of cajoling of ChatGPT to get the structure of the comedy right. It keeps wanting to do stuff like explain the entire punchline too early in the piece, or deflate each feature before upselling the next, when it's in fact important for the pacing of the joke that Steve Jobs upsells everything to the max, then only grudgingly admits each feature doesn't really exist, while also downplaying it.

Originally Posted by https://pastebin.com/bKhCHkwj
Last edited by Cipheron; 21st Jan 2023 at 07:35.