
Stability AI, Stable Diffusion, and Getty Images Copyright Lawsuit

Important news just came out about copyright infringement in the training of AI models like Stable Diffusion. Getty Images has sued the creator of Stable Diffusion, Stability AI, for unlawfully using images from its platform to train the model. I go over in detail what can go wrong in the previous episode, which I'll leave in the show notes: the partnership between OpenAI and Microsoft, and some of the legal issues that might come up as a result of those AI models.

In this episode, I want to focus a little bit more on how you empower an ecosystem based on the new AI models you have created, and why this is not just a technological issue; it is a business modeling and business ecosystem issue. If you want this technology to move forward much more quickly, you need to involve the different parties in the discussion, enabling monetization and control, so that those models can work at scale with a business ecosystem around them. So let me explain a little of what's going on. Getty Images has sued Stability AI in London. Let's remember, Stability AI is the creator of Stable Diffusion, which was initially released as an open-source image generator and is now doing more and more.

Getty Images found that many of its images had been used to train Stable Diffusion. In the first release of Stable Diffusion, let's remember, there was no opt-out mechanism. Stability AI announced that in the next release an opt-out mechanism will be available for those who want to keep their data out of the model's training. But we can assume that many images for which there was no consent were used in the development of Stable Diffusion.

We can also understand what's going on here. We might assume it wasn't just naivety on the part of Stability AI; it was more a decision to move fast, accept the risk of a potential lawsuit later on, and keep up with the pace of the AI industry. For instance, Stability AI raised over $100 million in funding a few months back, which might be used to defend against this lawsuit. But beyond how the lawsuit will go, which matters because it will determine how fast things evolve in the industry, it's also important to highlight how critical it is for players creating AI products to think in terms of a business ecosystem if they want this evolution to happen.

If we make a comparison with the early 2000s, we are at the Napster moment for Stable Diffusion. Napster was killed by the fact that it was, if you want, pirating the music industry without striking any deals; the mindset there was: we're not going to strike any deals, we're just going to go our own way building a product that is cool and that many users like. Or you go the iTunes route, or if you wish, later on, the Spotify route, which is: we understand that in order to create a solid product at scale, we need to build a business ecosystem.

Building a business ecosystem means involving various kinds of parties in the process. So in this case, what are the parties? First of all, on the training side: when you pre-train those models, as explained in previous episodes, you need a huge amount of data, which in this case is represented by images, probably millions of them, to train the model. And of course, the code of Stable Diffusion is open-sourced, so it was quite easy for Getty Images to see that some of its images had been used to train the model.

If you look at the coverage from The Verge, there are generated images where the Getty Images watermark appears, meaning the training on top of those images was so evident that Stable Diffusion still reproduced the Getty Images watermark on the generated images, which is quite interesting. So the first point is about enabling an opt-in and opt-out mechanism in the training data set, both for distributors like Getty Images and for artists. Because let's remember, if you're an artist or a creator, most, or if you wish, 100%, of what you've gained as a creator is your personal style. And if a machine is replicating, copying, and pasting your personal style at scale, you lose everything you gained and sacrificed over the years.
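To make that opt-in/opt-out point concrete, here is a minimal sketch, in Python, of how a consent flag could be applied when assembling a training corpus. All the field names (`rights_holder`, `opted_out`) are hypothetical; nothing here reflects Stability AI's actual pipeline, it just illustrates the mechanism being discussed.

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    url: str
    rights_holder: str   # artist or distributor, e.g. a stock platform
    opted_out: bool      # hypothetical consent flag set by the rights holder

def filter_training_set(records: list[ImageRecord]) -> list[ImageRecord]:
    """Keep only images whose rights holders have not opted out."""
    return [r for r in records if not r.opted_out]

# Usage: a distributor-level opt-out removes all of its images at once.
corpus = [
    ImageRecord("https://example.com/a.jpg", "artist_1", opted_out=False),
    ImageRecord("https://example.com/b.jpg", "stock_platform", opted_out=True),
]
training_set = filter_training_set(corpus)  # only artist_1's image remains
```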

Now, as a content creator, I can understand that, because the main difference between a piece of art and something that can be commoditized is the unique style you achieve as an artist or creator: crafted over the years through many iterations, but also based on your own philosophy, your understanding of the world, your experience. There are several components that make up your style. And this is critical, because if your style can be copied and pasted by a machine at scale, then it's fair that you get the chance either to opt in or opt out in the first place. Second, you need the chance to monetize it, meaning that if you do decide to opt into the training of those models, whoever uses and replicates your style should pay you a royalty.

So imagine a case in which you run Stable Diffusion on a user's iPhone. The user requests the generation of images in the style of a living artist, who therefore lives off the royalties for their work. In that case, you need to give the artist the chance to monetize that piece of art, for instance by enabling the user to pay a little bit more for the generation of images in that specific style, so that this money goes back to the artist, or to the platform that has a deal with the artist and distributes the artist's work. So, number one: you need to involve artists and distributors in the opt-in/opt-out for the training. Number two: you need to involve artists when it comes to monetization, giving the user the ability to request any of the styles that have been approved by artists who were part of the training, and thereby enabling those artists to monetize their work, as in the sketch below.
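As a back-of-the-envelope illustration of that royalty flow, here is a hedged sketch of a per-generation revenue split between artist and platform. The surcharge amount and the 70/30 split are made-up numbers, not an industry standard or anything Stability AI has proposed.

```python
def royalty_split(price_per_image: float, n_images: int,
                  artist_share: float = 0.70) -> dict[str, float]:
    """Split revenue from style-licensed generations between artist and platform.

    artist_share is a hypothetical contractual rate chosen for illustration.
    """
    revenue = price_per_image * n_images
    artist_cut = round(revenue * artist_share, 2)
    platform_cut = round(revenue - artist_cut, 2)
    return {"artist": artist_cut, "platform": platform_cut}

# Example: 100 generations at a $0.05 style surcharge each.
print(royalty_split(0.05, 100))  # {'artist': 3.5, 'platform': 1.5}
```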

And then on the other side, you also need to work out deals with the distributors. If those distributors opt in, they facilitate your work as an AI company, because they enable you to have a larger data set by striking deals directly with aggregators like Getty Images, which on its side has specific deals with the creators.

Then you can use this to improve the data set behind further versions of Stable Diffusion, and also give more choices to users. The product might get even more interesting, because users can replicate many of those styles without incurring legal issues, and those styles can be, for instance, commercialized. Another key aspect, of course, is the development of a whole AI creative industry, where you need to make sure you can create a platform, strike deals, or enable other platforms to start developing around this new kind of art, music making, movie making, or whatever else. But again, this is not just a technological issue. This is a business modeling and distribution issue, and if you're an AI company like Stability AI, you need to understand that.

Of course, in order to keep releasing your AI models, either you enable others to opt out, or you understand that you're part of an ecosystem and therefore need to strike deals with those parties to make the ecosystem viable in the first place. That way you avoid the Napster moment and move toward the so-called iTunes moment.
