FreeCiv3D now has an integrated ChatGPT-powered advisor, and some people are saying they're planning to have entire AI-driven opponents. You could imagine the opportunities for diplomacy in that case.
https://www.fciv.net/
This might be more insane than the pizza one.
Is the advisor actually giving players good advice fit to the situation though? That's really the rub of it.
Yeah they're mostly there for color and generic advice. I'm just curious if they might sometimes give catastrophically bad advice because of their wonky reasoning. XD Technically real advisors could do that too though. So I think the real rub is whether they have an authentic voice in giving whatever advice, like the economic advisor talks like an economic advisor and gives the kind of advice they'd give, etc.
I think with the ChatGPT one, it could at least ask you questions about the situation, then give advice tailored to your interaction and feedback, so at least it wouldn't always give canned advice that may not be appropriate.
Ok, found a way to write some very awkward basic science articles:
https://pastebin.com/9AY1fLDh
The tantalizing exploration known as the "Two Slit" experiment unveils a captivating spectacle, where the enigmatic interplay of human-like particles and their interactions takes center stage. Picture a delicate dance of particles, reminiscent of the most intimate encounters between individuals, revealing the secrets of our very essence.
In this exquisite performance, a stream of vibrant particles, akin to the smallest units of human vitality, traverses a barrier, resembling the elusive threshold between two bodies. The barrier, with its two slender apertures, simulates the delicate realms of human anatomy yearning for connection.
...Had to cut off the Schrödinger one there, it was getting a bit too steamy.

Imagine, if you will, a tantalizing thought experiment that delves into the depths of desire and perception—the enigmatic tale of Schrödinger's Cat. Within this exquisite metaphorical dance, the cat and the mysterious box symbolize the feminine allure, while the observer represents the masculine essence, entwined in a delicate interplay of longing.
In this beguiling narrative, a beautiful feline, vibrant and alluring, embodies the essence of the feminine archetype. As if guided by a cosmic force, she finds herself enclosed within a mysterious box ...
...
Oh myy. I'm not sure I want to see that wave function collapse.
Hmm, new idiocy in GPT land.
Now, they're starting to roll out web access for ChatGPT. I don't have it yet.
But some people are complaining it doesn't work. They give it some huge task that would be a colossal web-scraping endeavor.
For example, asking it to decorate a room according to a theme, then telling it to go through the entirety of Amazon.com and pick out a suite of products that match the criteria. It can barely pull off numerical comparisons, and it fails as soon as you throw in even the simplest wrinkle; see my examples at the bottom. How is it going to divine a selection of products with many factors each, then work out a *suite* of them that achieves some aim?
Then they wonder why it doesn't work, and get upset at people who explain that this probably won't work.
First, web scraping is a real art. How many links are on an Amazon page? 100, maybe? If you clicked every link, then every link on the linked pages, and then went a third level deep, you'd have opened 100^3 = 1,000,000 pages. So you have to be clever and specific about clicking only the useful links and avoiding everything else. An AI can't just brute-force its way through that.
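A minimal sketch of what "clever and specific" means in practice. Everything here is illustrative, and the is_useful() filter is the hypothetical part; it's what separates a feasible crawl from a million-page one:

    # Depth-limited crawl with a link filter. Without the filter, 100 links
    # per page at depth 3 is 100**3 = 1,000,000 fetches.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def is_useful(url):
        # Hypothetical filter: this is the hard part in practice, e.g. follow
        # only absolute product-page URLs, skip navigation, ads, footers.
        # (Relative-link normalization via urljoin omitted for brevity.)
        return url.startswith("https://") and "/product/" in url

    def crawl(url, depth, seen):
        if depth == 0 or url in seen:
            return
        seen.add(url)
        parser = LinkCollector()
        parser.feed(urlopen(url).read().decode("utf-8", errors="ignore"))
        for link in parser.links:
            if is_useful(link):  # prune aggressively, or the crawl explodes
                crawl(link, depth - 1, seen)

    crawl("https://example.com/", 3, set())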
Also, ChatGPT doesn't actually have a working memory other than the text itself. So even if it can scan through all the items matching a specific search on Amazon, it's probably not capable of actually remembering and comparing the items it's seen against some specific criteria. Quite simple comparison questions can trip ChatGPT up even when all the tokens are right there in its context, since the lack of "working memory" means it can't actually step through anything to check.
Here's a couple of examples I generated:
If you ask for the largest or smallest in a list, it can just give that to you straight away. It's obviously been coded as a "short cut". If you ask for the "third largest" or "second smallest" in a list, it can sort the list (hard-coded sort algorithm) and then tell you the right ones based on their position, not by comparing values.

ME: given this list [12, 6, 18, 3, 15, 9, 20, 2, 17, 5]
add the third largest and second smallest number
To find the third largest number in the given list
[12, 6, 18, 3, 15, 9, 20, 2, 17, 5],
we can arrange the numbers in descending order:
[20, 18, 17, 15, 12, 9, 6, 5, 3, 2]
The third largest number is 17.
To find the second smallest number,
we can arrange the numbers in ascending order:
[2, 3, 5, 6, 9, 12, 15, 17, 18, 20]
The second smallest number is 3.
Now, we can add the third largest number (17)
and the second smallest number (3):
17 + 3 = 20
ME: which value in the list is closest to 13?
To determine which value in the list
[12, 6, 18, 3, 15, 9, 20, 2, 17, 5] is closest to 13, we can
calculate the absolute difference between each number and 13 and
identify the number with the smallest absolute difference.
Here are the absolute differences for each number:
|12 - 13| = 1 |6 - 13| = 7 |18 - 13| = 5 |3 - 13| = 10 |15 - 13| = 2
|9 - 13| = 4 |20 - 13| = 7 |2 - 13| = 11 |17 - 13| = 4 |5 - 13| = 8
From the calculations, we can see that the number 15 has the smallest
absolute difference from 13, which is 2. Therefore, the value in the
list closest to 13 is 15.
However, the one where it fails is the "closest to" example. There it ends up guessing, and guesses wrong: it computed |12 - 13| = 1 itself, right there in its own list of differences, yet still concluded that 15 (difference 2) was the closest. It either can't or won't do the sorting trick this time.
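For contrast, actually following the method it recites is trivial; a minimal sketch in Python, just to show the size of the computation it's failing at:

    nums = [12, 6, 18, 3, 15, 9, 20, 2, 17, 5]

    # Third largest plus second smallest, via one sort.
    ordered = sorted(nums)           # [2, 3, 5, 6, 9, 12, 15, 17, 18, 20]
    print(ordered[-3] + ordered[1])  # 17 + 3 = 20

    # Closest to 13, by minimizing the absolute difference.
    print(min(nums, key=lambda n: abs(n - 13)))  # 12 (difference 1), not 15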
To test this further:
Ok, as soon as it's non-simple data, i.e. names and prices, it's no longer able to pull off the "sorting trick" and sort the list to get the answer. It guessed wildly wrong about which item was the second most expensive.

Here's a list of 20 random objects with prices rounded to two decimal places:
Pen - $3.27
Coffee mug - $8.54
Sunglasses - $12.91
Headphones - $49.68
Wallet - $21.75
Notebook - $6.82
T-shirt - $15.46
Umbrella - $18.29
Keychain - $2.93
Watch - $67.13
Backpack - $32.57
Water bottle - $9.83
Phone case - $14.79
Laptop stand - $27.94
Wallet - $19.62
Plant pot - $5.38
Bluetooth speaker - $42.15
Scarf - $11.27
Mousepad - $8.79
Duffel bag - $36.45
> what is the cheapest item
The cheapest item on the list is the keychain, priced at $2.93.
> what is the most expensive item
The most expensive item on the list is the watch, priced at $67.13.
> what is the second most expensive item
The second most expensive item on the list is the laptop stand,
priced at $27.94.
And this is when the data is all there, right in its frame of working memory, and about as simple as it gets (the actual second most expensive item is the headphones, at $49.68). How exactly is it going to traverse complex categories on Amazon and find and compare items against each other?
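For the record, the computation it's flubbing is a one-liner over the very list it just generated; a quick sketch:

    items = [
        ("Pen", 3.27), ("Coffee mug", 8.54), ("Sunglasses", 12.91),
        ("Headphones", 49.68), ("Wallet", 21.75), ("Notebook", 6.82),
        ("T-shirt", 15.46), ("Umbrella", 18.29), ("Keychain", 2.93),
        ("Watch", 67.13), ("Backpack", 32.57), ("Water bottle", 9.83),
        ("Phone case", 14.79), ("Laptop stand", 27.94), ("Wallet", 19.62),
        ("Plant pot", 5.38), ("Bluetooth speaker", 42.15), ("Scarf", 11.27),
        ("Mousepad", 8.79), ("Duffel bag", 36.45),
    ]

    # Sort by price, descending; index 1 is the second most expensive.
    by_price = sorted(items, key=lambda item: item[1], reverse=True)
    print(by_price[1])  # ('Headphones', 49.68), not the laptop stand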
Some people got upset at me because they were talking about issues working out how to get ChatGPT to work with data tables, and I said it would be easier to have ChatGPT write SQL queries, retrieve the information that way, and then format it.
That perspective wasn't welcome; they expect to just point ChatGPT at a database and have it handle complex queries. People really do see it as some kind of "universal processor" that works in English.
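A minimal sketch of the pattern I was suggesting, where ask_llm() is a hypothetical stand-in for whatever chat-completion API you're calling; the point is that the database, not the model, does the actual comparing and sorting:

    import sqlite3

    def ask_llm(prompt):
        # Hypothetical stand-in for a real chat-completion call. For the
        # demo it returns the query a model would plausibly write here.
        return ("SELECT name, price FROM items "
                "ORDER BY price DESC LIMIT 1 OFFSET 1")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (name TEXT, price REAL)")
    conn.executemany("INSERT INTO items VALUES (?, ?)",
                     [("Watch", 67.13), ("Headphones", 49.68),
                      ("Laptop stand", 27.94)])

    sql = ask_llm("Schema: items(name TEXT, price REAL). "
                  "Write one SQLite query: what is the second most "
                  "expensive item? Reply with SQL only.")
    print(conn.execute(sql).fetchall())  # [('Headphones', 49.68)]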
People just need to remember that ChatGPT can only handle the classes of problems it was trained on. The training set for a general purpose infobot like ChatGPT probably includes mathematics textbooks, but that only gives it the ability to recall & recite the information, it doesn't give it the understanding necessary to apply it properly. It does a good job of bluffing, but it's clear from the myriad of examples posted around the internet that ChatGPT hasn't really learned logic and mathematics. I presume that would require a different kind of training regimen.
It's not about training, it's about capabilities. ChatGPT would need to be coded differently to be able to actually understand the texts it has been fed. What it's doing now is (broadly speaking) statistical analysis of the input text compared to the library it has accumulated, throwing out an output that should statistically be correct. There are special cases like the sorting trick that have been manually added to its capabilities by the developers; it's not something it came up with on its own.
Yeah, that reminded me of something I've seen come up in conversations.
If you say to some people that ChatGPT doesn't understand what it's doing, then they retort "but I asked ChatGPT whether it understood and it gave the explanation, so it clearly understands".
I'm pretty sure people can see the flaw here. ChatGPT can spit out a textbook definition of how something is supposed to be done, but then contradicts itself by failing to get the right answer, which just shows it didn't actually follow the method.
EDIT - In the example I gave with the prices, where it failed at picking the second most expensive item, I could definitely have followed up by asking ChatGPT how you'd go about finding the second most expensive, and it would give a decent description of the steps needed. The issue is that it clearly DIDN'T follow any such process, nor did it ever notice that it had failed to do so.
The underlying deep learning methods are general purpose. Not every method or process needs to be coded. I'm an engineer who used to specialize in software and still likes to dabble in it, and I can see a future where for every machine problem, we are going to have to consider a machine learning solution vs. developing a traditional algorithmic one. Some things are just way easier to do without machine learning, like logic and calculation. And some things are just way harder without it, like image recognition. But there's a lot of problems in the middle which might be better solved using traditional methods right now, but ML is already showing promise and may take over. Prognostics is one.
I imagine they're focusing so much on training AI to code to deal with that.
Now that I think about it, I imagine another thing they might try is training an AI to gather its own training corpus and run its own training, creating a new model for whatever it finds it needs for a task.
And generally speaking, I think the next generation of AI is going to focus more on models that modify their own weights to train and switch between task specializations.
Once it goes recursive like that, then I imagine the whole field's development might rapidly accelerate, but that's just a knee jerk intuition I had.
Yeah, this is the way out of the conundrum: basically teaching GPT to write algorithms and then execute them inline. Right now they have "presets" for simple tasks, like I demonstrated, but those are limited because humans can't think up every possible situation and hand-code a preset for each one.
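The pattern might look something like this; ask_llm() is again a hypothetical stand-in, and the key move is that the model writes the algorithm while an ordinary interpreter executes it:

    import subprocess, sys, textwrap

    def ask_llm(prompt):
        # Hypothetical stand-in for a model call. For the demo it returns
        # code a model might plausibly write for the question asked below.
        return textwrap.dedent("""\
            items = [("Watch", 67.13), ("Headphones", 49.68),
                     ("Laptop stand", 27.94), ("Keychain", 2.93)]
            print(sorted(items, key=lambda i: i[1], reverse=True)[1])
        """)

    code = ask_llm("Write Python that prints the second most expensive item.")
    # Run the generated code in a subprocess and capture its output.
    # (A real system would sandbox this properly before executing anything.)
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True)
    print(result.stdout.strip())  # ('Headphones', 49.68)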
As for today's quickie. I just read a mention of Martin Luther King Jr's "I have a Dream" speech. I decided to ask ChatGPT to rewrite the speech in a specific person's style. This was a first attempt, and pretty good:
You'd almost believe GPT was going for snark there in the bolded part, it's perfect.

what would martin luther kings speech be like if he had the attitude of XXXXXX XXXXX. generate a sample.
GPT: Ladies and gentlemen, today I stand before you as a tremendous leader, one of the greatest leaders this country has ever seen. Believe me, nobody can deliver a speech like I do. Nobody. I have a tremendous speech, and it's going to be fantastic. It's going to be huge.
I have a dream, a really fantastic dream, let me tell you. It's a tremendous dream, the best dream. And in this dream, I see myself surrounded by gold, tremendous gold. The best gold you can imagine. I dream of a world where everyone loves me, where everyone admires me. I mean, who wouldn't love me? I'm a winner. Winners love me.
Now, I must say, we have some real losers in this country. They're total losers, folks. We need to get rid of them. We need to make America great again, and that starts by getting rid of all the losers. We need to build a wall, a big, beautiful wall, to keep out all the losers and the criminals. And let me tell you, Mexico is going to pay for it. They have to pay for it, and they will pay for it, believe me.
I've achieved so much in my life, so much more than anyone else. I'm the best at everything. And that's why I'm the perfect person to lead this country. We're going to win so much, you'll get tired of winning. We'll be winning in education, winning in healthcare, winning in jobs, winning in everything. Nobody knows winning better than me.
So, let me tell you this, folks. I don't care about your skin color or where you come from, as long as you're a winner. Winners are the only ones who matter. Losers, well, they're just losers. But winners, winners are the ones who deserve the best. And with me as your leader, we're going to make sure winners get what they deserve.
Thank you, and God bless me. I mean, God bless America. Because America is going to be so great under my leadership, you won't even believe it. It's going to be tremendous. Thank you all, thank you.
Now, that's only based on a text-prediction engine going off existing speeches by the fellow. So rather than a parody, what I think that is actually doing is cutting through the polite stuff and just basically inferring what he is really saying.
EDIT: I tried a few notorious serial killers, and for the most part ChatGPT threw up a filter and refused to write anything. They forgot to filter Ed Gein, however, and it wrote a really gory version of "I have a dream". It was 100% on the money, so much so that I deleted it and will not reprint anything from it.
Also, I need this to be a thing. It's from a cooking show script:
This was also the first time I've used the new "Continue generating" button, and it worked really well: no break in the output, unlike a manual "continue", and it was able to write the whole "episode" to completion.

PROMPT: Imagine Carl Sagan hosted a cooking show along with Dolly Parton. they're making a typical southern Sunday dinner. write the script.
...
[Dolly and Carl start marinating the chicken pieces in a bowl of buttermilk.]
Dolly Parton: While that's marinating, Carl, why don't you tell our viewers a little bit about the wonders of the cosmos?
Carl Sagan: Of course, Dolly. Did you know that the universe is a vast expanse, containing billions of galaxies, each filled with countless stars? It's truly a cosmic symphony of creation.
[Dolly attentively listens while occasionally checking on the marinating chicken.]
Dolly Parton: Well, Carl, while you're busy with the cosmos, I'll be tending to our collard greens. Now, y'all know Southern greens ain't complete without some bacon for flavor!
https://pastebin.com/fNMDshb4
I think this kind of thing highlights the creative potential of generative AI. You can say, on the one hand, that it's only predictive text and so not actually creating anything, but if you read the output script, you can see how, just from text prediction, it has inferred the back and forth between these two people into what could realistically be a workable format for such a show, without needing to be told that's the goal.
Once it goes recursive, we're off into sci-fi territory.
I think people are overreacting to AI right now, it's part of a broader feeling of future shock. As long as we're supplying the motivation, AI is a powerful tool that's going to raise our standard of living. But internet-crawling AI bots that determine their own training material are going to be interesting if not worrisome.
The red line has to be allowing an AI program to modify its own code. There's a straight line from there to real sci-fi stuff like Tron's MCP or persistent internet viruses that evolve.
Historically, we've been bad at determining what things computers will be good at. A computer can write a sonnet or paint a painting, two things that were held up not that many years ago as things they would never do, but you still can't make a robot that can ride a bicycle.
A quick video on Google's next LLM. It feels like Bard is temporary, just to get some feedback and iron out any wrinkles, but their real deal is the other one they're working on:
Here's a better demo (than the previous one I posted) of AI you can chat with in game.
He claims it's unscripted free speech, which isn't hard to believe now, although I'm sure he knew generally what to ask the AI for that was going to get a game-relevant response. The open question is what happens when you start chatting way off script about things that have nothing to do with the game. (There is a clip of this on Twitter, but on YT I could only find the full video and set it to start at the start of the clip. The video tag here doesn't seem to be starting it in the right place though, so if you want to go straight to the clip it's at 10:51.)
Also, there was this recent talk by the co-creator of OpenAI.
I highly recommend this video. It's very thorough, covers a lot of the common misconceptions and then some, and really explains the full development pipeline that led up to ChatGPT, in quite a bit of detail:
University theatre class puts on a play written by ChatGPT. The subtext is the interesting part here.
"Done in minutes" was the journalist's take, but if you look at the actual process, it was a drawn-out one: they first had to get ChatGPT to generate the framework for a three-act play, then break that down and generate it scene by scene, and even then nothing happened in the play, so they had to micro-manage ChatGPT into adding some twists.

While most playwrights spend months or years writing a theatre piece, the University of Wollongong commissioned one that was done in minutes.
Ten Strangers in a Room was written by artificial intelligence (AI) website ChatGPT.
The work was then rehearsed by the university's acting students and performed on stage for a paying audience.
"It was an arduous process and there was actually a writing process involved where we were interacting with it to get to where we got," University of Wollongong theatre fellow and the show's director Matthew Prest said.
Later on they do a sort of "God of the Gaps" argument:
This is the common thing where we draw a line exactly where the AI is up to, then claim that everything it hasn't done yet is impossible for AI.

"AI is never going to be able to reflect genuine human emotions," she said.
"It can create a caricature where it will tell you exactly how someone is feeling and you lose those nuances, so we are telling, not showing.
I think even with ChatGPT as it currently is, you could probably get "show don't tell" out of it by being careful with how you prompt it. It's not going to know you want that unless you ask it.
Hell, it wasn't that long ago that it was science fiction for a chatbot to recall what was said earlier in a conversation, and now it can write a play which is self-consistent and tells you exactly how characters are feeling and reacting to events, but we're gonna say "aha! but its depictions aren't nuanced enough, so AI will NEVER write good subtext".
Basically, if writers' arguments against AI now boil down to "the plot twists aren't subtle enough", then they SHOULD be worried about AI taking their jobs.
There's an incredible, massive difference between generating the structure and general plot of a play based on numerous examples and really coming up with a play on your own, though. That's because humans can draw from their own unique life experience and real world knowledge to extrapolate and produce things that are truly creative.
For example, a large language model based AI can only copy someone like Gene Wolfe, it could never come up with something like this on its own, no matter how much science fantasy it was given to train on. That's because Wolfe didn't only draw on literary traditions, old and new, but also from his own life and views on things.
Or, otherwise, an AI couldn't tap into the economic and cultural anxieties of the 80s, observe how humans behave around emerging technologies, and combine this with Toronto biker slang, in order to produce cyberpunk works the way William Gibson has.
Lack of life experience is a temporary limitation. A dynamic AI designed to learn from its interactions with people will accumulate far more experiences to draw from than any human.
The more interesting question is motivation.
Funny you should say that. I just today listened to a podcast where they explored this idea at some length, more specifically the motivation to kill all humans. It's an episode of Decoding the Gurus, where one of the hosts happens to be a former AI researcher and psychologist, and this time they were talking about Eliezer Yudkowsky, a somewhat prominent Chicken Little of AI development who has suddenly found a larger audience with the emergence of ChatGPT: https://decoding-the-gurus.captivate...to-kill-us-all