Page 8 of 8
Results 176 to 186 of 186

Thread: Make your own AI-generated art

  1. #176
    Member
    Registered: May 2004
    Quote Originally Posted by Cipheron View Post
    EDIT: also, messing with ChatGPT just now:

  2. #177
    Member
    Registered: Aug 2009
    Location: thiefgold.com
    Quote Originally Posted by Anarchic Fox View Post
    Okay, I've been redirected here. What do people think about the fact that recent AI art programs (DALL-E most notably) used thousands of artists' work as training data without consent?
    Well that was expected. You can already search countless copyrighted images on any search engine and use them as you wish, so it's pretty much fair game

  3. #178
    Member
    Registered: Sep 2001
    Location: The other Derry
    Quote Originally Posted by Anarchic Fox View Post
    Okay, I've been redirected here. What do people think about the fact that recent AI art programs (DALL-E most notably) used thousands of artists' work as training data without consent?
    I expect copyright and licensing to be big and thorny issues. Just because you can find it online doesn't mean it's lawful to use it. For example, Getty Images added the following clause to their license agreement:

    Quote Originally Posted by Getty Images
    k. No Machine Learning, AI, or Biometric Technology Use. Unless explicitly authorized in a Getty Images invoice, sales order confirmation or license agreement, you may not use content (including any caption information, keywords or other metadata associated with content) for any machine learning and/or artificial intelligence purposes, or for any technologies designed or intended for the identification of natural persons. Additionally, Getty Images does not represent or warrant that consent has been obtained for such uses with respect to model-released content.
    License restrictions are the main issue, but copyright is likely to raise its head too, as soon as some artist recognizes something in an AI generated work that is substantially similar to their copyrighted work.

  4. #179

  5. #180
    Member
    Registered: Sep 2001
    Location: The other Derry
    EDIT: Upon second thought, maybe we shouldn't worry about stunts like that. The blame rests with the AI artist who tried to sell the work, not with AI. The artist could just as easily paint an image of Mickey Mouse and try to sell that.
    Last edited by heywood; 11th Jan 2023 at 17:34.

  6. #181
    Moderator
    Registered: Jan 2003
    Location: NeoTokyo
    Quote Originally Posted by Anarchic Fox View Post
    Okay, I've been redirected here. What do people think about the fact that recent AI art programs (DALL-E most notably) used thousands of artists' work as training data without consent?
Well, I petitioned for this, so I ought to say something.

So I noticed recently that for the latest version of Stable Diffusion, v2.0 after v1.5, they completely gutted any images that weren't clearly public domain or otherwise legally available. If you run the 2.0 and 1.5 models side by side, you can see pretty clearly what a difference it makes to the output. 2.0 is noticeably worse: not awful, but like what we were seeing with DALL-E 1, still pretty borked and disappointing by comparison. You wouldn't really want to use it like the other high-end models.

    So my thinking was that of course it's a necessary thing. Rights are rights, and they have to be enforced. But I was also thinking the major value of these systems isn't to copy artists' style per se (except for like the classics, where we're talking about works already in the public domain anyway). It's to be an image generator that can translate concepts to images, where the style should come from the prompts, and taking artists' style would even be counterproductive.

    And I believe that it's possible for these systems to do that job with high quality using a data set that's clear in terms of the IP. It'd be a big challenge and probably expense to produce such a data set. But I think in the long term it's worth doing, and I think some group is going to get around to it sooner or later.

Now all that said, I think another big challenge on the horizon is that the idea of IP ownership itself (along with the idea of privacy, and the public-private distinction generally, for that matter) is going to be attacked and increasingly unappreciated in the culture over the next few decades ... as in, there will be an assumption that any works are in the public domain, with the creators themselves also drinking that Kool-Aid and taking it for granted. I'm in favor of having legal rights respected, but that's a different problem and a bigger can of worms, and I'm not sure how it will play out. I'm unsettled by the idea, and I don't know that it will happen, but it looks like the writing on the wall to me right now. I'll be really interested to see what the culture makes of "content creation", "ownership", and "private vs. public" in the coming years.

    Edit:
    Quote Originally Posted by Azaran View Post
    Well that was expected. You can already search countless copyrighted images on any search engine and use them as you wish, so it's pretty much fair game
Legally, no, you can't, but you're proving my point with that last bit. The vast, vast majority of uses have gone unrestricted by the IP owners, and when you have that kind of rampant non-enforcement, it has the effect of eroding respect for, or even recognition of, the law. AI art has just boosted that trend to the next level. It's something you can expect, but it's hard to deal with from the artists' perspective. Or I guess we're calling all of them content creators now, which is another kind of troubling sign.

    Quote Originally Posted by heywood View Post
    EDIT: Upon second thought, maybe we shouldn't worry about stunts like that. The blame rests with the AI artist who tried to sell the work, not with AI. The artist could just as easily paint an image of Mickey Mouse and try to sell that.
Also this, yes; I was going to add it above. Even if the data set itself is clear in terms of IP rights, the model can still output images copying the style of others. But then it's not a violation by the model; it's a violation by the person entering the prompt that makes it create a copying image, in the same way they'd violate copyright by drawing it themselves. So it could still be a violation, but of the user, not the model.
    Last edited by demagogue; 12th Jan 2023 at 02:02.

  7. #182
    Member
    Registered: Aug 2002
    Location: Maupertuis
    Quote Originally Posted by demagogue View Post
    So my thinking was that of course it's a necessary thing. Rights are rights, and they have to be enforced.
    *nods* And this new kind of exploitation makes them even harder to enforce. Train your AI on copyrighted material, then lie and say you used only public domain stuff. How can such a lie be exposed?

    I have found myself in the strange position of wanting stronger copyright law.

    But I was also thinking the major value of these systems isn't to copy artists' style per se (except for like the classics, where we're talking about works already in the public domain anyway). It's to be an image generator that can translate concepts to images, where the style should come from the prompts, and taking artists' style would even be counterproductive.
    You speak as though it's intentional, but perhaps it can be inadvertent. If one artist produces an outsized amount of a niche subject -- say, if they were the lead character artist on a new game -- then generating an image of that subject might also end up copying the style.

Now all that said, I think another big challenge on the horizon is that the idea of IP ownership itself (along with the idea of privacy, and the public-private distinction generally for that matter) is going to be attacked and increasingly unappreciated in the culture over the next few decades...
This has already started. Someone has called AI art "the democratization of art," as though artists were aristocrats. Granted, this person was an idiot amplified by the social dynamics of the hellsite Twitter, but the words are out there.

    as in there will be an assumption any works are in the public domain where the creators themselves are also drinking that koolaide and taking it for granted.
There's also a change in the implications of "public domain." Previously this meant the art was free to copy, repost, and reprint. Nobody anticipated training AI to be one of the permissions granted. I expect to see some public licenses appearing that explicitly forbid AI training.
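For what it's worth, one concrete form this was already taking around the time of this thread: DeviantArt proposed machine-readable opt-out directives that a page can serve alongside its images. A sketch of what such a page header looks like, assuming the "noai"/"noimageai" directive names that were circulating in late 2022:

```html
<!-- Sketch of a page <head> opting its images out of AI training crawls.
     "noai" / "noimageai" are the directive names DeviantArt proposed;
     honoring them is voluntary on the crawler's side. -->
<meta name="robots" content="noai, noimageai">
```

Like robots.txt, this is a request rather than an enforcement mechanism: a scraper that ignores it faces no technical barrier, which is exactly the enforcement gap being discussed here.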

Let me relate three things that have contributed to my anger. (1) Kotaku published an article about Twitter burning, using a DALL-E image of its mascot burning. In previous eras that header would have been commissioned or licensed artwork. (2) A major figure in the Magic: The Gathering fan community launched a Kickstarter for his own card game, which will use only AI-generated images for the cards. (3) An artist was streaming herself drawing a commission. As a prank, a viewer took an in-progress screenshot, fed it into an AI art program, posted the image before the original artist could, and then pretended the artist had copied them.
    Last edited by Anarchic Fox; 14th Jan 2023 at 21:57.

  8. #183
    Member
    Registered: Aug 2009
    Location: thiefgold.com
    Quote Originally Posted by Anarchic Fox View Post
    As a prank, a viewer took an in-progress screenshot, fed it into some AI art program, posted the image before the original artist, and then pretended the artist had copied them.


  9. #184
    Member
    Registered: Aug 2009
    Location: thiefgold.com
    Not exactly AI, but related

An Italian startup called Robotor has invented a machine that's nearly as good at carving masterpieces out of Carrara marble as its Renaissance-era predecessors.

    As CBS News reports, Robotor founder Giacomo Massari is convinced his robot-machined marble statues are nearly as good as those made by humans. Almost.

    "I think, let's say we are in 99 percent," he told CBS. "But it's still the human touch [that] makes the difference. That one percent is so important."

Massari even went a step further, arguing that "robot technology doesn't steal the job of the humans, but just improves it" — a bold statement, considering the mastery that went into a form of art that has been around for thousands of years.

    Robotor's latest robot sculptor, dubbed "1L," stands at 13 feet tall, a zinc alloy behemoth capable of carefully chipping away at a slab of marble day and night.

    The company claims the technology is nothing short of revolutionary.

    "The quarried material can now be transformed, even in extreme conditions, into complex works in a way that was once considered unimaginable," the company boasts on its website. "We are entering a new era of sculpture, which no longer consists of broken stones, chisels and dust, but of scanning, point clouds and design."

Unsurprisingly, not everybody is happy with robots taking over the craft; critics argue that something important could be lost in modernizing the process with new technologies.

    "We risk forgetting how to work with our hands," Florence Cathedral sculptor Lorenzo Calcinai told CBS. "I hope that a certain knowhow and knowledge will always remain, although the more we go forward, the harder it will be to preserve it."


    Another article
    https://www.cbsnews.com/news/robots-...-robotics-art/

  10. #185
    Member
    Registered: Dec 2006
    Location: Berghem Haven
Yeah, we're getting good at robotics (we even have high school classes about robotics in the Milan and Bergamo area).
But the problem is the same as with handwriting (and its brain-level implications), just as Calcinai says.

  11. #186
    Member
    Registered: Aug 2009
    Location: thiefgold.com
    And so begin the lawsuits

    https://www.polygon.com/23558946/ai-...art-midjourney

    https://www.theverge.com/2023/1/17/2...images-lawsuit

    Getty Images is suing Stability AI, creators of popular AI art tool Stable Diffusion, over alleged copyright violation.

    In a press statement shared with The Verge, the stock photo company said it believes that Stability AI “unlawfully copied and processed millions of images protected by copyright” to train its software and that Getty Images has “commenced legal proceedings in the High Court of Justice in London” against the firm.

    Getty Images CEO Craig Peters told The Verge in an interview that the company has issued Stability AI with a “letter before action” — a formal notification of impending litigation in the UK. (The company did not say whether legal proceedings would take place in the US, too.)

    “The driver of that [letter] is Stability AI’s use of intellectual property of others — absent permission or consideration — to build a commercial offering of their own financial benefit,” said Peters. “We don’t believe this specific deployment of Stability’s commercial offering is covered by fair dealing in the UK or fair use in the US. The company made no outreach to Getty Images to utilize our or our contributors’ material so we’re taking an action to protect our and our contributors’ intellectual property rights.”

    When contacted by The Verge, a press representative for Stability AI, Angela Pontarolo, said the “Stability AI team has not received information about this lawsuit, so we cannot comment.”

    The lawsuit marks an escalation in the developing legal battle between AI firms and content creators for credit, profit, and the future direction of the creative industries. AI art tools like Stable Diffusion rely on human-created images for training data, which companies scrape from the web, often without their creators’ knowledge or consent. AI firms claim this practice is covered by laws like the US fair use doctrine, but many rights holders disagree and say it constitutes copyright violation. Legal experts are divided on the issue but agree that such questions will have to be decided for certain in the courts. (This past weekend, a trio of artists launched the first major lawsuit against AI firms, including Stability AI itself.)

    Getty Images CEO Peters compares the current legal landscape in the generative AI scene to the early days of digital music, where companies like Napster offered popular but illegal services before new deals were struck with license holders like music labels.

    “We think similarly these generative models need to address the intellectual property rights of others, that’s the crux of it,” said Peters. “And we’re taking this action to get clarity.”

