
Should you use Lensa AI, the “magic” portrait app?


Danny Chadwick / Lensa AI

Vanity is pushing AI into the mainstream. But should you join the thousands of people experimenting with AI portraits? This new trend, driven by an app called Lensa AI, raises some very tough questions about craft, consent, and bias.

How does Lensa’s AI work?

Developed by Prisma Labs, the Lensa AI app (iOS/Android) was released in 2018 and offers various photo editing features. Essentially, it’s a photo beautification tool: it makes your selfies prettier by applying filters and removing “blemishes”.

But the latest version of Lensa AI includes something called “Magic Avatars.” This feature lets you commission up to 200 AI-generated portraits in a variety of styles. All you need to do is share 10-20 pictures of your face and shell out a few dollars. The results are often quite impressive.

Now, these “Magic Avatars” are not generated by a proprietary AI. You’re actually paying Prisma Labs to generate portraits using Stable Diffusion, an open-source machine learning model. The Lensa AI app is a middleman and a curator, but it’s easier than dealing with Stable Diffusion on your own.

Stable Diffusion is trained on millions of publicly available images. That’s why it can imitate dozens of artistic styles, including manga, science fiction, pop art, and traditional portraiture. (In extremely simple terms, Lensa AI combines your selfies with existing art. The reality is a bit more complicated, as hundreds of images can contribute to a single resulting portrait.)
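For context, here’s roughly what “dealing with Stable Diffusion on your own” looks like, using the open-source diffusers library from Hugging Face. This is a minimal sketch for illustration only: the checkpoint, prompt, and settings are assumptions, not anything Prisma Labs has confirmed using.

```python
# A minimal sketch of generating a stylized portrait with Stable Diffusion
# yourself, via the open-source "diffusers" library. The checkpoint and
# prompt below are illustrative assumptions, not Prisma Labs' actual setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly released checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU turns minutes of CPU work into seconds

# Text-to-image: describe the style you want and let the model paint it.
image = pipe("a science fiction portrait of a person, digital painting").images[0]
image.save("portrait.png")
```

Lensa’s value-add is handling the fiddly parts for you, presumably incorporating your 10-20 selfies into the process and curating the results, which is why “middleman” is a fair description rather than an insult.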

If you want to try Lensa AI’s “Magic Avatars” tool, install the app on iOS or Android and spend a few dollars on the feature. At a minimum, you can order 50 portraits for $2, but most people pay $4 for the maximum 200 portraits, since only a handful of the images produced by this AI are actually useful or attractive.

Copyright and privacy concerns are real

An illustration of a copyright stamp on a letter.
Maxx-Studio / Shutterstock.com

The “Magic Avatars” tool generates images through Stable Diffusion, a machine learning model trained on publicly available images. These images are collected without consent, and there is no way for a person, artist, or company to opt out of the dataset.

This obviously raises both personal privacy and copyright concerns. A person might not want their selfies included in an AI dataset (it’s creepy!). And as Prisma Labs makes money, many artists worry that their work is padding someone else’s bank account. It doesn’t help that cheap, fast AI image generators threaten the job security of professional artists.

There are also concerns that someone could dump your selfies into Lensa AI to produce artwork of your face without your permission. This matters for a reason we’ll highlight later.

Here’s the problem: current laws and regulations do not define how machine learning datasets should operate. We don’t know whether this material violates privacy or copyright rules, and for that reason, you would find it quite difficult to argue copyright infringement in court. Images produced by Stable Diffusion contain traces of artists’ styles, original content, and signatures (or watermarks), but they aren’t identical to any existing image.

On the plus side, well-known corporations are treading lightly in this area. They are openly concerned about how AI can lead to copyright infringement. For example, Getty Images refuses to touch AI until the rules are better defined, and Shutterstock is taking a unique approach to ensuring real-world artists get paid.

Dozens of companies could have invented “Magic Avatars,” but few are willing to take the risk. Even if you don’t see Lensa AI as a privacy or copyright concern, it’s clear that this issue will eventually make its way to lawmakers’ desks.

Where does your data go?

A phone scanning someone's face.
Prostock-studio / Shutterstock.com

We often share selfies on Instagram or Facebook without blinking. But when it comes to a quirky app like Lensa AI, some people get nervous. How will this company use your photos, and could that use violate your privacy?

Well, under the terms of service, photos uploaded to Lensa AI are turned into data to train a machine learning model. The actual images are discarded, while information such as the position and orientation of facial features is preserved. Additionally, images taken with an iPhone selfie camera (which uses TrueDepth technology to map your face) can include data such as facial topology.
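To make “position and orientation of facial features” concrete, here’s a rough sketch of the kind of landmark data an app can extract from a selfie, using Google’s open-source MediaPipe library. It’s purely illustrative; we don’t know what tooling Prisma Labs actually uses internally.

```python
# Illustrative only: extract facial landmark positions from a selfie with
# MediaPipe Face Mesh. The output is geometry (normalized x, y, z points),
# not the photo itself -- similar in spirit to what Lensa says it keeps.
import cv2
import mediapipe as mp

image = cv2.imread("selfie.jpg")  # hypothetical local selfie
with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as face_mesh:
    # MediaPipe expects RGB; OpenCV loads images as BGR.
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    print(f"{len(landmarks)} landmarks detected")  # Face Mesh returns 468 points
    first = landmarks[0]
    print("example point:", first.x, first.y, first.z)
```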

Note that Lensa AI is also a photo beautification app, and when reading the terms of service, it’s often unclear whether Prisma Labs is referring to its beautification features or to “Magic Avatars.” Unfortunately, I can’t clear up the finer details for you. (That said, the TOS explicitly states that “Magic Avatar” selfies contribute to Stable Diffusion training, and that these selfies are deleted after the AI generates your images.)

Because this AI is trained on publicly available photos, user privacy may not be a big concern for some people. After all, if you’ve uploaded a bunch of selfies to Facebook or Instagram, your face may already be included in the dataset. (And if you’re a fan of AI, you may be happy to contribute your facial data.)

But some people have a limited online presence. Privacy is priceless, and if you’ve done a good job of keeping your face off the internet, I’d suggest avoiding Lensa AI. After all, we don’t really know where this data will end up.

If you have tried Lensa AI and would like to remove your information from its dataset, contact [email protected]. Note that, under the TOS, Lensa AI is not required to delete data upon request.

Lensa AI may lean towards bias

An android crying because it might be bigoted.
This robot is accepting your problematic opinions. Sarah Holmlund / Shutterstock.com

As with all technology, Stable Diffusion and Lensa AI are vulnerable to bias. Some people interpret this to mean “the AI is a bigot,” which is funny (but technically wrong). Artificial intelligence is just an algorithm, designed by humans and trained on a huge amount of data.

This becomes apparent when you scroll through Lensa AI portraits of strangers. The AI has a strange habit of sexualizing women, probably due to the images included in its dataset (softcore fanart makes up a decent chunk of it, I’d guess). To be clear, I’m not trying to sound like a prude; this AI really has a thing for big breasts, and as TechCrunch reports, it occasionally spits out porn.

The AI also has issues with race. There are several reports of Asian women finding that the AI diminished or changed their facial features; as described by one user, the results are “skewed to be more East Asian.” Again, this is likely due to the dataset, which may include too many fanart illustrations (which tend to follow Japanese styles, ideals, and trends).

Now, this isn’t just an unsavory quirk; it’s a problem that could easily lead to abuse. What stops someone from taking your photos, uploading them to Lensa AI, and producing pornographic or racist images? And is this something we need to ask about every AI image generator?

Should you use Lensa AI?

As with all emerging technologies, AI imaging is a mixed bag. Tools like Lensa AI can generate some amazing portraits for a very low price. It’s faster, more convenient, and more accessible than any artist in the real world, but this convenience can come at a cost.

Unfortunately, we can only wait to see the impact of this technology. We don’t know how it will affect artists, individuals, or companies. And from a privacy standpoint, well, can you really predict every way someone might use your facial data? This lack of knowledge can be very worrying.

Using Lensa AI is a personal choice, of course. And I don’t blame anyone for testing this technology. It’s interesting, exciting, and often very flattering. But the potential downsides of this trend should not be ignored.


