I expect copyright and licensing to be big and thorny issues. Just because you can find it online doesn't mean it's lawful to use it. For example, Getty Images added the following clause to their license agreement:
License restrictions are the main issue, but copyright is likely to raise its head too, as soon as some artist recognizes something in an AI-generated work that is substantially similar to their copyrighted work.
EDIT: Upon second thought, maybe we shouldn't worry about stunts like that. The blame rests with the AI artist who tried to sell the work, not with AI. The artist could just as easily paint an image of Mickey Mouse and try to sell that.
Last edited by heywood; 11th Jan 2023 at 17:34.
Well I petitioned this, so I ought to say something.
So I noticed recently that for the latest version of Stable Diffusion, I think v2.0 after v1.5, they completely stripped out any images that weren't clearly public domain or otherwise legally available. If you run the 2.0 and 1.5 models side by side, you can see pretty clearly what a difference it makes to the output. 2.0 is noticeably worse: not awful, but like what we were seeing with DALL-E 1, still pretty borked and disappointing by comparison. You wouldn't really want to use it over other high-end models.
So my thinking was that of course it's a necessary thing. Rights are rights, and they have to be enforced. But I was also thinking the major value of these systems isn't to copy artists' style per se (except for like the classics, where we're talking about works already in the public domain anyway). It's to be an image generator that can translate concepts to images, where the style should come from the prompts, and taking artists' style would even be counterproductive.
And I believe that it's possible for these systems to do that job with high quality using a data set that's clear in terms of the IP. It'd be a big challenge and probably expense to produce such a data set. But I think in the long term it's worth doing, and I think some group is going to get around to it sooner or later.
Now all that said, I think another big challenge on the horizon is that the idea of IP ownership itself (along with the idea of privacy, and the public-private distinction generally, for that matter) is going to be attacked and increasingly unappreciated in the culture over the next few decades, as in there will be an assumption that any works are in the public domain, with the creators themselves drinking that Kool-Aid and taking it for granted. I'm in favor of having legal rights respected, but that's a different problem and a bigger can of worms, and I'm not sure how it will play out. I'm unsettled by the idea, and I don't know that it will happen, but it looks like the writing on the wall to me right now. I'll be really interested to see what the culture makes of "content creation", "ownership", and "private vs. public" in the coming years.
Edit: Legally no, you can't, but you're proving my point in that last bit. The vast, vast majority of uses have gone unrestricted by the IP owners, and when you have that kind of rampant non-enforcement, it erodes respect for, or even recognition of, the law. AI art has just boosted that trend to the next level. It's a predictable development that's hard to deal with from the artists' perspective, or I guess we're calling all of them content creators now, which is another kind of troubling sign.
Also this, yes, I was going to add this above. If the data set itself is clear in terms of IP rights, the model can still output images copying the style of others, but then it's not a violation by the model; it's a violation by the person entering the prompt that makes it create a copying image, in the same way they'd commit a violation by drawing it themselves. So it could still be a violation, but by the user, not the model.
Last edited by demagogue; 12th Jan 2023 at 02:02.
*nods* And this new kind of exploitation makes them even harder to enforce. Train your AI on copyrighted material, then lie and say you used only public domain stuff. How can such a lie be exposed?
I have found myself in the strange position of wanting stronger copyright law.
You speak as though it's intentional, but perhaps it can be inadvertent. If one artist produces an outsized amount of a niche subject -- say, if they were the lead character artist on a new game -- then generating an image of that subject might also end up copying their style.
This has already started. Someone has called AI art "the democratization of art," as though artists were aristocrats. Granted, this person was an idiot amplified by the social dynamics of the hellsite Twitter, but the words are out there.
There's also a change in the implications of "public domain." Previously this meant the art was free to copy, repost, and reprint. Nobody anticipated AI training being one of the permissions granted. I expect to see some public licenses appearing that explicitly forbid AI training.
Let me relate three things that have happened that have contributed to my anger. (1) Kotaku published an article about Twitter burning, using a DALL-E image of its mascot burning. In previous eras that header would have been commissioned or licensed artwork. (2) A major figure in the Magic: The Gathering fan community launched a Kickstarter for his own card game, which will use only AI-generated images for the cards. (3) An artist was streaming herself drawing a commission. As a prank, a viewer took an in-progress screenshot, fed it into some AI art program, posted the image before the original artist, and then pretended the artist had copied them.
Last edited by Anarchic Fox; 14th Jan 2023 at 21:57.
Not exactly AI, but related
An Italian startup called Robotor has invented a machine that's nearly as good at carving masterpieces out of Carrara marble as its Renaissance-era predecessors.
As CBS News reports, Robotor founder Giacomo Massari is convinced his robot-machined marble statues are nearly as good as those made by humans. Almost.
"I think, let's say we are in 99 percent," he told CBS. "But it's still the human touch [that] makes the difference. That one percent is so important."
Massari even went a step further arguing that "robot technology doesn't steal the job of the humans, but just improves it" — a bold statement, considering the mastership that went into a form of art that has been around for thousands of years.
Robotor's latest robot sculptor, dubbed "1L," stands at 13 feet tall, a zinc alloy behemoth capable of carefully chipping away at a slab of marble day and night.
The company claims the technology is nothing short of revolutionary.
"The quarried material can now be transformed, even in extreme conditions, into complex works in a way that was once considered unimaginable," the company boasts on its website. "We are entering a new era of sculpture, which no longer consists of broken stones, chisels and dust, but of scanning, point clouds and design."
Unsurprisingly, not everybody is happy with robots taking over the craft, with critics arguing that something important could be lost in modernizing it with new technologies.
"We risk forgetting how to work with our hands," Florence Cathedral sculptor Lorenzo Calcinai told CBS. "I hope that a certain knowhow and knowledge will always remain, although the more we go forward, the harder it will be to preserve it."
Another article
https://www.cbsnews.com/news/robots-...-robotics-art/
Yeah, we're getting good at robotics (we even have high school classes about robotics in the Milan and Bergamo area).
But the problem is the same as with handwriting (and its brain-level implications), just as Calcinai says.
And so begin the lawsuits
https://www.polygon.com/23558946/ai-...art-midjourney
https://www.theverge.com/2023/1/17/2...images-lawsuit
Getty Images is suing Stability AI, creators of popular AI art tool Stable Diffusion, over alleged copyright violation.
In a press statement shared with The Verge, the stock photo company said it believes that Stability AI “unlawfully copied and processed millions of images protected by copyright” to train its software and that Getty Images has “commenced legal proceedings in the High Court of Justice in London” against the firm.
Getty Images CEO Craig Peters told The Verge in an interview that the company has issued Stability AI with a “letter before action” — a formal notification of impending litigation in the UK. (The company did not say whether legal proceedings would take place in the US, too.)
“The driver of that [letter] is Stability AI’s use of intellectual property of others — absent permission or consideration — to build a commercial offering of their own financial benefit,” said Peters. “We don’t believe this specific deployment of Stability’s commercial offering is covered by fair dealing in the UK or fair use in the US. The company made no outreach to Getty Images to utilize our or our contributors’ material so we’re taking an action to protect our and our contributors’ intellectual property rights.”
When contacted by The Verge, a press representative for Stability AI, Angela Pontarolo, said the “Stability AI team has not received information about this lawsuit, so we cannot comment.”
The lawsuit marks an escalation in the developing legal battle between AI firms and content creators for credit, profit, and the future direction of the creative industries. AI art tools like Stable Diffusion rely on human-created images for training data, which companies scrape from the web, often without their creators’ knowledge or consent. AI firms claim this practice is covered by laws like the US fair use doctrine, but many rights holders disagree and say it constitutes copyright violation. Legal experts are divided on the issue but agree that such questions will have to be decided for certain in the courts. (This past weekend, a trio of artists launched the first major lawsuit against AI firms, including Stability AI itself.)
Getty Images CEO Peters compares the current legal landscape in the generative AI scene to the early days of digital music, where companies like Napster offered popular but illegal services before new deals were struck with license holders like music labels.
“We think similarly these generative models need to address the intellectual property rights of others, that’s the crux of it,” said Peters. “And we’re taking this action to get clarity.”
Wombo just added an even more realistic mode (Realistic 2)
I've been playing around in Nightcafe AI and it's pretty interesting, but sometimes I think the art it produces is like a collage made from old magazine and newspaper clippings. Sometimes the AI does such a great job that I'm amazed, but at times it goes badly wrong, with strange results. It's almost like watching someone with a mental illness creating art.
Battle of the AI's. ChatGPT plays chess against Stockfish:
https://www.reddit.com/r/AnarchyChes...chatgpt_black/
Here's an article where some neuroscience types try to reproduce different aetiologies of visual hallucinations (aetiology = the neurological profile underlying each category: neurological conditions, visual loss, and psychedelics) through the parameters of a deep learning art/visualization model. It apparently reproduces visual content with the distinct features of (what people report as) the different kinds of hallucinations, depending on the type.
They call their method ‘computational (neuro)phenomenology'. I don't know; maybe one shouldn't make too much of it. But there's something kind of cool and unsettling about being able to dig into some of the traditionally more hidden realms of consciousness in this kind of way.
Last edited by demagogue; 18th Feb 2023 at 07:30.
I asked the Nightcafe generator to make an image of a leprechaun sitting around a large fire in a dark rainy old growth forest. It's pretty cool and weird that it put the fire inside of a tree trunk. The way it interprets language and word order can have some strange results.
Gotta say, the novelty has mostly worn off for me
@Azaran - I was getting pretty bored for a while too but started mixing up my prompts a lot more and trying out different art styles. Sometimes the results are really out there.
Here's one where I asked it to make a pic with Garrett the Thief drinking ale in an old English pub. Garrett is nowhere to be seen.
Then there was this one with Saint Nicholas and Krampus drinking ale in an old dark English Pub.
I've been experimenting with roughly the same theme and text prompts, but changing the styles and just hitting print over and over again, and there are a lot of strange and interesting results. What I can't figure out is why either Saint Nicholas or Krampus is usually triple-fisting ales when I don't use that as a text prompt, at least not the quantity. Also, in several images the finger counts are three or six, or the fingers are just weirdly shaped.
Last edited by mxleader; 30th Sep 2023 at 03:37.
More strange Saint Nick stuff:
What's up with the viking-helmet devil-dog, lol.