Vanity is pushing AI into the mainstream. But should you join the thousands of people experimenting with AI portraits? This new trend, driven by an app called Lensa AI, raises some very tough questions about craft, consent, and bias.
How does Lensa’s AI work?
Developed by Prisma Labs, the Lensa AI app (iOS/Android) was released in 2018 and offers various photo editing features. Essentially, it’s a photo beautification tool: it makes your selfies prettier by applying filters and removing “blemishes”.
But the latest version of Lensa AI includes something called “Magic Avatars”. This feature allows you to commission up to 200 AI-generated portraits in a variety of styles. All you need to do is share 10-20 pictures of your face and shell out a few dollars. The results, as you can see, are quite impressive.
Alright guys, let’s have some fun! @MKBHD, we generated AI avatars that looked too cool not to share. If you like them, feel free to use them. And guys, let us know who we should do next! pic.twitter.com/9CMtEZDqG5
— Prisma Laboratories (@PrismaAI) November 25, 2022
Now, these “Magic Avatars” are not generated by a proprietary AI. You’re actually paying Prisma Labs to generate portraits using Stable Diffusion, an open source machine learning model. The Lensa AI app is a middleman and a curator, but it’s easier than dealing with Stable Diffusion on your own.
Stable Diffusion is trained on millions of publicly available images. That’s why it can imitate dozens of artistic styles, including manga, science fiction, pop art, and traditional portraiture. (In extremely simple terms, Lensa AI combines your selfies with existing art. The reality is a bit more complicated, as hundreds of images can contribute to a single resulting portrait.)
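To make the “middleman” point concrete, here is a minimal sketch of what generating a portrait with the open source model yourself might look like, using the Hugging Face diffusers library. The checkpoint name, prompt, and hardware here are illustrative assumptions, and a real avatar pipeline would also fine-tune the model on your selfies (for example, with a DreamBooth-style step), which this sketch skips.

```python
# Minimal sketch: running Stable Diffusion yourself with the open source
# "diffusers" library. Checkpoint, prompt, and GPU use are assumptions;
# Lensa's actual prompts and fine-tuning setup are not public.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly released checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # needs a GPU with several GB of VRAM

# The prompt stands in for the kind of curated style prompts Lensa might use.
image = pipe("portrait of a person, sci-fi concept art, digital painting").images[0]
image.save("portrait.png")
```

In other words, what you’re buying from Lensa AI is convenience and curation, not exclusive technology.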
If you want to try Lensa AI’s “Magic Avatars” tool, you can install the app on iOS and Android and spend a few dollars on the feature. At a minimum, you can order 50 portraits for $2. But most people pay $4 for the maximum 200 portraits, since only a few of the images produced by this AI are actually useful or attractive.
Copyright and privacy concerns are real
The “Magic Avatars” tool generates images through Stable Diffusion, a machine learning model trained on publicly available images. These images are obtained without consent, and there is no way for a person, artist, or company to opt out of the dataset.
This obviously raises both personal privacy and copyright concerns. A person might not want their selfies included in the AI’s dataset; frankly, it’s creepy. And because Prisma Labs makes money from the feature, many artists worry that their work is padding someone else’s bank account. It doesn’t help that cheap, fast AI image generators threaten the job security of professional artists.
There are also concerns that someone could dump your selfies into Lensa AI and produce artwork of your face without your permission. This matters for a reason we’ll highlight later.
I’m cropping these for privacy reasons/because I’m not trying to call out any one individual. These are all Lensa portraits where the mangled remains of an artist’s signature are still visible. That’s the remains of the signature of one of the multiple artists it stole from.
A 🧵 https://t.co/0lS4WHmQfW pic.twitter.com/7GfDXZ22s1
— Lauryn Ipsum (@LaurynIpsum) December 6, 2022
Here’s the problem: current laws and regulations do not define how machine learning datasets should operate. We don’t know whether this material violates privacy or copyright rules, and for that reason, you would find it quite difficult to argue copyright infringement in court. Images produced by Stable Diffusion contain traces of artists’ styles, original content, and signatures (or watermarks), but they are not identical to any existing image.
On the plus side, well-known corporations are treading lightly in this area. They are openly concerned about how AI can lead to copyright infringement. For example, Getty Images refuses to touch AI until the rules are better defined, and Shutterstock is taking a unique approach to ensuring real-world artists get paid.
Dozens of companies could have invented “Magic Avatars,” but few are willing to take the risk. Even if you don’t see Lensa AI as a privacy or copyright concern, it’s clear that this issue will eventually make its way to lawmakers’ desks.
Where does your data go?
We often share selfies on Instagram or Facebook without blinking. But handing photos to an unfamiliar app like Lensa AI makes some people tense. How will this company use your photos, and could that use violate your privacy?
Well, under the terms of service, photos uploaded to Lensa AI are turned into data to train a machine learning model. The actual images are discarded, while information such as the position and orientation of facial features is preserved. Additionally, images taken with an iPhone selfie camera (which uses TrueDepth technology to map your face) can include data such as facial topology.
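To give a rough sense of what “position and orientation of facial features” data looks like (as opposed to a raw photo), here is a minimal sketch using the open source MediaPipe library to extract facial landmark coordinates from a selfie. This is purely illustrative; Prisma Labs hasn’t published its actual pipeline, and the library choice and file name here are assumptions.

```python
# Minimal sketch: extracting facial landmark coordinates from a selfie with
# MediaPipe. This illustrates the kind of geometric data described above;
# it is not Prisma Labs' actual pipeline.
import cv2
import mediapipe as mp

image = cv2.imread("selfie.jpg")  # hypothetical input file
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as face_mesh:
    results = face_mesh.process(rgb)

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    # Each landmark is a normalized (x, y, z) coordinate for one facial point.
    print(f"{len(landmarks)} landmarks; first point:",
          landmarks[0].x, landmarks[0].y, landmarks[0].z)
```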
Please note that Lensa AI is also a photo beautification app, and when reading the terms of service, it’s often unclear whether Prisma Labs is referring to its beautification features or to “Magic Avatars.” That makes the exact details of this data collection hard to pin down. (That said, the TOS explicitly states that “Magic Avatar” selfies contribute to Stable Diffusion training, and that these selfies are deleted after the AI generates your images.)
You can request that all of your personal data be deleted by sending an email to: [email protected]
3. If you upload any of the images to social networks, you have given them permission to use them in advertising. You can rescind the permission by sending an email to: [email protected]
— Chanda Prescod-Weinstein (@IBJIYONGI) December 4, 2022
Because this AI is trained on publicly available photos, user privacy may not be a big concern for some people. After all, if you upload a bunch of selfies to Facebook or Instagram, your face may already be included in the dataset. (And if you’re a fan of AI, you may be happy to contribute your facial data.)
But some people have a limited online presence. Privacy is priceless, and if you’ve done a good job of keeping your face off the internet, I’d suggest avoiding Lensa AI. After all, we don’t really know where this data will end up.
If you have tried Lensa AI and would like to remove your information from its dataset, contact [email protected]. Please note that, according to the TOS, Lensa AI is not required to delete data upon request.
Lensa AI may lean towards bias
As with all technology, Stable Diffusion and Lensa AI are vulnerable to bias. Some people interpret this to mean “the AI is a fan,” which is funny (but technically wrong). Artificial intelligence is just an algorithm, designed by humans and trained on a huge amount of data.
This becomes apparent when you scroll through Lensa AI portraits of strangers. The AI has a strange habit of sexualizing women, probably due to the images included in its dataset (softcore fanart makes up a decent part of that dataset, I’d guess). To be clear, I’m not trying to sound like a prude; this AI really has a thing for big breasts, and as TechCrunch reports, it occasionally spits out porn.
The AI also has issues with race. There are several reports of Asian women finding that the AI diminished or changed their facial features; as one user described it, the results were “skewed to be more East Asian.” Again, this is likely due to the dataset, which may include too many fanart illustrations (which tend to center on Japanese styles, ideals, and trends).
Now, this isn’t just an off-putting flaw. It’s a problem that could easily lead to abuse. What stops someone from taking your photos, uploading them to Lensa AI, and producing pornographic or racist images? And is this something we need to ask about every AI image generator?
Should you use Lensa AI?
As with all emerging technologies, AI image generation is a mixed bag. Tools like Lensa AI can generate some amazing portraits for a very low price. They’re faster, more convenient, and more accessible than any real-world artist, but that convenience can come at a cost.
Unfortunately, we’ll have to wait to see the impact of this technology. We don’t know how it will affect artists, individuals, or companies. And from a privacy standpoint, can you really anticipate every way someone could use your facial data? This lack of knowledge can be very worrying.
Using Lensa AI is a personal choice, of course. And I don’t blame anyone for testing this technology. It’s interesting, exciting, and often very flattering. But the potential downsides of this trend should not be ignored.