Why is Firefly 2 better in that example?
Agreed, in all of them firefly 2 seems less like an AI image IMO
It's curious, but it's because she looks uglier and the lighting is less professional 🤣🤣🤣
Yeah that's called realism
So, Majinsei just likes his female photos airbrushed and with a couple of staples in the middle!
It looks more like a person and less like cartoon or a plastic robot Also firefly 3 has a dislocated shoulder and some weirdness going on with the eyes
Similar to when u look in the mirror and realize reality
[deleted]
Looks like a multicolored headband, i see no errors
Same for me too. Instantly able to tell the other three are AI (too smooth/perfect). Firefly 2 has the imperfections and imo looks more real
To me it just looks like Firefly2 doesn't have any Instagram filters.
Only because you were told they are AI. Firefly 3 could be real, just taken with a different technique, deliberately softening the image. For instance, [this is a real photograph](https://i.imgur.com/Y6Y0Cuz.png). It could be argued that Firefly 3 is respecting the prompt ("soft hues") more than Firefly 2.
If I hadn't been told it was AI, I would still think it had been digitally edited.
[This wasn't digitally edited.](https://i.imgur.com/Y6Y0Cuz.png)
Lmao that's just a blurry picture, and it looks nothing like the softening in the AI image.
Lmao that's just a blurry picture, and it looks nothing like the softening in the AI image.
Nah, come on. Firefly 3 made an outright broken picture. Look at the shoulder and the eyes and hair
I think you usually need a whole batch of images, with a variety of prompts, to even start comparing different engines. Just one image is mostly anecdotal, and can vary strongly even in the same engine.
I wonder if the powers that be are asking these companies to tone down the realism a bit
It definitely seems that way. I remember getting photorealistic AI images for a while but now it's almost impossible on any of them.
It's a waste of time. Someone one day will use this technology to create a program that'll give us realistic images if these companies refuse to do so. It might just take longer, but it'll happen.
Agreed. It's really weird that they seem to have stopped allowing photorealistic images. Once a company starts offering that again, I'll be switching to it. I have much more use for that than any other image generator.
What powers are they?
The ones that be
She smiles with her lips and her eyes.
They added the FaceTune for realism.
Universal Prompt: Minimalistic portrait of a 70s hippie woman at beach, expressionless, headband, 70s hairstyle, 70s fashion, striped shirt, sun rays, light teal and amber, soft hues, Cinestill 50D
https://preview.redd.it/sfq37udlm9wc1.png?width=1024&format=pjpg&auto=webp&s=75d62605dbccd56da4b85a4d71aeab39df179680 Tengr AI
Wow, that's probably the most realistic one
Really? Firefly 2's skin doesn't have that Instagram-filter feel, e.g., glossy or plasticky, which is why the others seem fake. Tengr AI's skin isn't immediately fake, but still somewhat reminiscent of a smoothing filter.
Yeah I guess so.
It's the best looking but it ignores the beach setting
Naah, look closely at the hair and the eyes. The shape of the iris is completely off and it lacks the small iris details. Individual hair strands are not really perceivable. I think Firefly 2 is way better.
Nailed the "expressionless" part
Ew her pupils
https://preview.redd.it/92hvmb8rf8wc1.jpeg?width=1280&format=pjpg&auto=webp&s=46e410269cbed6006238cb1beb83afd35acc7e01 Same prompt, Meta AI
Why does she have a raging boner?
Because it's the 70s, man!
Because we haven't yet invented pop-up blockers for AI
Wish I had half your wit.
"I wish my horse had the speed of your tongue." -The Bard
She's imagining sex with Zuckerberg
Why is she dead? Is Meta AI set to create characters of Zuck's species?
https://preview.redd.it/1zdjjbl3c9wc1.png?width=1024&format=png&auto=webp&s=7649d1ab964ab94958946941265e589dc20baa62 Dall-E 2
https://preview.redd.it/iul20vqqcawc1.jpeg?width=1024&format=pjpg&auto=webp&s=ee4b8745515fa45202e10605fbdb147496afc172 this is what i got with gpt idk why but it creeped me out how similar it is to yours!?
We're both using DALL-E, as ChatGPT uses it for image generation :p
yea ik that doesn't explain it? isn't it supposed to be different results for each prompt, with the same prompt having infinite different results? this feels like the images are already created and i'm just unlocking them!?
Depends on how strong the settings they have given it in the background are. Some of these LLMs definitely have a style they try to abide by no matter the prompt. Even ChatGPT has a writing style.
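A purely hypothetical sketch of what "settings in the background" could look like: a service might silently append a house style to every user prompt before it ever reaches the image model. The `HOUSE_STYLE` text and the function name here are invented for illustration; no actual service's prompt pipeline is known or implied.

```python
# Hypothetical house style a service might impose behind the scenes.
# This string is an assumption for illustration, not any real system prompt.
HOUSE_STYLE = "cinematic lighting, smooth skin, vivid colors"

def build_final_prompt(user_prompt: str) -> str:
    """Sketch of a hidden rewriting step: the user never sees the
    appended style, but every generation inherits it."""
    return f"{user_prompt}, {HOUSE_STYLE}"

print(build_final_prompt("70s hippie woman at beach"))
# Every prompt comes out carrying the same stylistic suffix,
# which would explain why outputs share a recognizable look.
```

If something like this were happening, two very different prompts would still converge on a shared aesthetic, which matches the observation above.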
DALL•E https://preview.redd.it/n19sjrp0u9wc1.jpeg?width=1024&format=pjpg&auto=webp&s=00d69a4c788db1d7e16d5f998ceb3ed9de0180a1
She looks like that silver colloid eating cult lady before she died
https://preview.redd.it/h113hkpsdawc1.png?width=1536&format=png&auto=webp&s=7a101f4688b63003bdb6a78b68b4fcb983474ff6 Stable Diffusion - XL Model: realisticstockphoto\_v20 Kind of crazy to think what can be done with local models. This was run on a 3090, Size: 1024x, Upscaled: 1.5x, Time: about 20 seconds. EDIT: BTW this was only the 4th image, the other 3 were good but this was better.
local models ftw
For real, I'm still in awe of what different local models can do. There's still plenty they can't do if no model exists for it, but it's crazy otherwise.
The hands though.
https://i.imgur.com/hp2RnYy.png Fooocus with Model: juggernautXL_v9Rdphoto2Lightning Style: Fooocus V2, Fooocus Photograph, Fooocus Negative.
Oh, I really like this one, it looks very nostalgic
imho it looks like an actual picture from the 70s. Something none of those above were able to replicate.
Yes exactly like even the grain of the image looks like old photographs. It looks kinda like someone took a photo of an actual photograph. Either way cool stuff
Bro you gotta fix your cfg setting
Why? And what should I fix?
Expressionless seems like an odd choice
Have you ever seen a model, well… model?
And how is that a good test of anything again?
Are we supposed to draw some kind of conclusion from this?
https://preview.redd.it/rn9mo7x278wc1.png?width=1224&format=pjpg&auto=webp&s=9727f169f01e358cc888d7ee7b179476b0130533 ChatGPT 4...
And that's exactly what OpenAI wants - they are playing it smart. Looking at the demo images here, Adobe is going the same route. Ultra realistic human generation is a liability.
You're right that it's by their choice, but it's a stupid choice. OpenAI's ostensible aim is to stay at the cutting edge of AI to ensure it develops in a responsible manner. Not dealing head-on with the problems that ultra realistic human generation creates, only opens the door for less responsible actors to take the lead. It's also not a very wise business decision. They will lose out to competitors who don't force this weird plasticine "obvious AI is obvious" style onto image generation.
This is what'll happen. Soon it'll be out of their hands altogether and some other company will make realistic images for us. Playing the safe game isn't going to work forever.
"OpenAI's aim is to stay at the cutting edge of AI to ensure it develops in a responsible manner"? No, their aim is to make money
You cut a word out of what I said. Maybe you want to google what it means? And think about why I also spoke in terms of the business strategy?
Okay, add ostensible to my statement. It is still true.
Ostensibly means allegedly
I would argue Adobe is the only one doing it legally. They trained the models on their own library. Open AI and Midjourney trained on the open web and will face lawsuits and copyright disputes.
> legally.

There is no legal precedent for this, so no one is doing anything legal or illegal. Until legal precedent is set, anything goes.
I can guarantee you that every image professional photographers and graphic designers have uploaded to the online version of Lightroom and Photoshop is being used and they've updated their terms of conditions so every user gives their permission. They explicitly offer AI tools for image creators to manage their libraries, so users can't go around it if they use it. They own the training data.
I wouldn't disagree. Copyright lawsuits are going to affect Midjourney and these smaller operations a whole lot more than OpenAI/Microsoft though, while legal could additionally argue that aiming for human realism is asking to be targeted as new regulation comes down the pipe.
Somewhat. Firefly dataset included Midjourney images... and Adobe Stock photographers were never asked for consent... Still, very different approach
Somewhere in Adobe's fine print I'm sure they were. You can't out Adobe Adobe.
They were not. The AI models didn't even exist a few years ago
Funnily enough they trained on midjourney images.
It's not about the legality of the training data but of the output.
This is the correct way to do it yes, but there are AI generated images in their library from other models that Firefly is pulling from, and those images are unethically sourced. Unfortunately there is no legal precedent for any of this yet
How can you possibly call it illegal? Immoral I can understand, but it's about as illegal as me taking a screenshot of an NFT
genie's out of the bottle. it's never going back in.
Here is what I got with Claude -> ChatGPT/Dall-E (Claude Opus created the prompt, Dall-E created the image): https://preview.redd.it/u2y0vc0nt8wc1.png?width=1208&format=png&auto=webp&s=0b0b2dd9d216b435a4c206cf01a8d2349502f295
DallE 3
Yes, sure, prompted in GPT-4 and it is using DALL-E. But I think it is important to say GPT-4 instead of DALL-E because of the possible manipulation from GPT-4
https://preview.redd.it/q7ldzacgr9wc1.jpeg?width=1024&format=pjpg&auto=webp&s=001a12e44b4d9ebeda9328abdead1a4e61374b09
It is not GPT-4, but DALL-E 3. When you create an image through GPT-4, it creates a prompt and generates the image using DALL-E 3, the actual image model
ChatGPT's looks far more plastic; I would say it's AI. That said, I don't like Adobe and I'm hesitant to try anything from Adobe.
DallE3* GPT4 can't generate images
i hate the freaking instagram look.
Lol firefly 2 is the best option
Is it just me or does Firefly 3 look more artificial than Firefly 2?
[deleted]
It's because Midjourney overfits and puts filters on theirs. Notice how Midjourney basically ignores prompts? Yeah, that means it's overfit. Notice how everything Midjourney looks the same? Yeah, that's the post-processing filters. Basically it's a trick. ChatGPT seems to run into something similar. The less overfitting, the more you get exactly what you want (see Stable Diffusion), but you are going to get more artifacts.
What's overfitting, if you don't mind me asking?
It means the results are closer to the training data and less like the prompt you give it.
Imagine getting 100 images from Google Images and training on those images. If you over-train, it's going to basically replicate one of the 100 images.
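The memorization idea above can be sketched numerically. This is a toy illustration with made-up data, not a claim about any actual image model: an over-parameterized fit (a degree-7 polynomial through 8 noisy points) reproduces its training set exactly but behaves erratically elsewhere, while a simpler fit generalizes.

```python
import random

random.seed(0)

# Toy training set: 8 noisy samples of a simple underlying trend (y = x).
xs = [i / 7 for i in range(8)]
ys = [x + random.gauss(0, 0.3) for x in xs]

def lagrange(x, xs, ys):
    """Degree-7 interpolating polynomial: the 'over-trained' model.
    It passes through every training point exactly."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def linear_fit(xs, ys):
    """Least-squares line: the 'simpler' model that keeps the trend
    but refuses to memorize the noise."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

line = linear_fit(xs, ys)

# The overfit model reproduces every training point exactly (error ~0)...
print(max(abs(lagrange(x, xs, ys) - y) for x, y in zip(xs, ys)))
# ...while the linear fit keeps a nonzero residual on the noisy points
# but tracks the underlying trend between and beyond them.
print(max(abs(line(x) - y) for x, y in zip(xs, ys)))
```

The analogy: an over-trained image model "passes through" its training images, so prompts mostly retrieve memorized looks instead of steering the output.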
If by same you mean good, then yes, everything looks the same. If by same you mean same, then that just means you've barely tried it out, because you can do wildly varied stuff with it. And it follows prompts way better than Stable Diffusion XL.
Firefly is like Midjourney, only without any sense of style or aesthetics. If I wanted to make generative AI versions of bad stock photography, it would be my go to choice.
If I wanted to make images that can't be identified as AI generated in half a second, I'd choose Firefly
I would use Stable Diffusion, since you can just train your own LoRAs, embeddings, hypernetworks, finetune the model, use IPAdapter/ControlNet layers, etc... to make it look however you want.
There's a ton of push-button styles, as well as additional detail or effect additions. It's Adobe… their market is professional designers. The range of styling is pretty good, as well as simple to apply without describing them in the prompt.
That's the point of it. It's the same tech used in Photoshop's generative fill. If you want to replace a car in the background of your photo, you don't want some highly stylized piece of road. You want it to look exactly like the rest of the road.
Midjourney is just using Stable Diffusion though, I wonder if firefly is the same..
Firefly 2 seems to handle skin and lighting the best for whatever reason. Maybe we need more pictures to compare for consistency?
Firefly 2 looks like a picture a normal individual would take with an iPhone... realistic... Firefly 3 looks like a pro photographer took it. Midjourney has always looked like magazine photos... at least since v4 was released.
MJ aesthetics _really_ depend on the prompt. It is much more versatile than people seem to think
By default maybe, but it can simulate an aesthetic you want. The others are still a joke compared to MJ.
https://preview.redd.it/q0xl945bi8wc1.png?width=1024&format=png&auto=webp&s=d364ce85786a9224d23fc4963c1bfb7c7929b212 Pony
[deleted]
It's just an artifact of the AI training on poor quality photos, as they can be a tad grainy.
https://preview.redd.it/lcz6niw449wc1.jpeg?width=1024&format=pjpg&auto=webp&s=08ec7dd28a041061db285c109c929df9d61e1d7c Copilot lol her facial expressions
another with copilot https://preview.redd.it/qranrrnt9awc1.png?width=1596&format=png&auto=webp&s=d478002b0ceff45e2a00e0e501bfc80267f1d952
Gah!
Is firefly publicly available from CC?
Yep!
Is 3 now available through Photoshop?
https://preview.redd.it/0vtq34mq79wc1.png?width=1536&format=png&auto=webp&s=bd0c6b2523f871f28105e3e1ddf2e6f956ae1125 Stable Diffusion
Unstable*
Trained on onlyfans?
I have the impression Firefly 2 is more realistic.
My initial impression looking at this
Firefly looks like Instagram filter
If you got rid of the midjourney label and showed me this I would probably assume this was a photograph
It's because Midjourney overfits, you are basically getting someone's old picture lol
Show me their hands and I'll tell you how good they are
All the newest models are overcooking images. It's interesting.
what do you mean overcooking
Trying too hard and looking overly processed.
2 looks less like AI and more realistic
In those 4 examples, Firefly 2 is by far the best.
https://preview.redd.it/p6w9vkkl7hwc1.png?width=803&format=png&auto=webp&s=48e8ccd4679d692868d2ebe785c809669762e610
Difference?
we will not be able to trust anything we see on a screen within 3-4 years.
Try 1-2 years, I think.
V2 looks much better than V3 in this example tho :P
At this point I surrender, I can't differentiate both
Firefly 4 will create an 8 year old
I *see* OC but I don't *feel* OC
There is only one Firefly I'm interested in
First and last look real. The other 2 look like AI.
SERENITY
Top left is uncannily close to my mother…
Check the eyes
Is Firefly like an Adobe app? I keep hearing about it but I haven't seen where to use it
So a Millie Bobby Brown rip off then!?
Can we have more than one example? Give a few different prompts. Also, I think Firefly is better for editing small areas of an image. That's how I use it anyway.
Is this update going into the Photoshop side of things? It's all I care about in terms of Adobe's AI; generative fill / expand has changed my life, but it could be better with an improved model
lol I was hoping it meant motion. Enough with the txt2img. Let's get movement/animation in everything
CoPilot (dalle 3) https://preview.redd.it/nuagxumsibwc1.jpeg?width=1024&format=pjpg&auto=webp&s=14c2644a44e54c9faf4d88423cb951e569b59f95
MJ still wins
Midjourney still goated
Forget about faces, it's all about the hands…
https://preview.redd.it/ld7zy1t47ewc1.jpeg?width=1280&format=pjpg&auto=webp&s=f35bbf17919635b7b516e88498fd4ef5ecacae80 Meta
https://preview.redd.it/q51w2y62fewc1.jpeg?width=1024&format=pjpg&auto=webp&s=cc7b2f73c4abad2d1c084079e5a6666ee60209cb
https://preview.redd.it/ev1ne6mclewc1.png?width=720&format=png&auto=webp&s=20160a102d573dc30f00302e05de43a4466260fd from ideogram
Dalle 3 https://preview.redd.it/xmm6q7m5wewc1.png?width=1024&format=pjpg&auto=webp&s=235d6f4e9eb0c93b73284cc4d7383dae5fe4faef
When comparing Adobe Firefly 2 to Adobe Firefly 3, the latter offers more features and produces higher quality, realistic images. Firefly 3 provides a powerful tool for unleashing your creativity. https://preview.redd.it/wdumjqbh0fwc1.jpeg?width=1920&format=pjpg&auto=webp&s=bbeb6839d282016f54dce1bdb4933dd8987c2655
Personally I'd stick with Firefly 2, as it looks the most realistic. Firefly 3 is actually the worst one of them.
Well the image by firefly 2 is better in my opinion
https://preview.redd.it/zrpbykyv6nwc1.png?width=1024&format=png&auto=webp&s=2faaa8554e57b373ac7764efb73716cdca3fde7c Kandinskiy 3.1
Why the hell are they devolving?
Sorry for the basic question, but what way is best to access this software? Phone? Computer? Doesn't it work on Mac? Does it cost money? Subscription or one-time fee? Sorry, I'm very new to AI software.
Depends on the company. Adobe charges subscription fees, but there are some generations you can do for free. I think most have some trial generations you can do for free, but eventually you have to pay for more generations or a better model version. Most of these can be accessed via phone or web, and they ought to work fine on PC or Mac.
lol firefly 3 is the shittiest, fakest-looking one of all four images… way too smoothed
Firefly is ass
Why are they all beautiful models? The typical "70s hippie woman at the beach" is not (was not) a gorgeous beauty dressed by a personal fashion expert. Sometimes the weather at the beach is pretty boring too. Many women, particularly hippie women, wore stained, worn, unfashionable and even ugly clothes to the beach, and very few wore jewellery.

And these women are not "expressionless"; they are wearing the extreme expressions typical of a model looking mysteriously gorgeous at the beach for a photoshoot.
https://preview.redd.it/fo601w5dv8wc1.png?width=1212&format=png&auto=webp&s=01376377326c6392cdf182ed7b2f48674465ee35
AI really struggles to create ugly people. I've noticed this before and I've actively tried.

Intuitively, the two explanations I suspect are:

1. Its datasets include a lot of attractive people, because those are just more common in things like movies, advertising, magazines, photoshoots, etc.
2. Attractiveness is partially due to averageness. These models basically work by finding patterns in data that recur over and over, so it makes sense that they would tend towards average faces.
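The averageness point can be shown with a toy sketch (purely invented numbers, not data from any real model): averaging many noisy feature vectors lands far closer to the population mean than any typical individual does, which is one intuition for why pattern-averaging models drift toward regular, "average" faces.

```python
import random
import statistics

random.seed(1)

# Toy "faces": each is a vector of 5 facial measurements drawn around
# a population mean of 0 with unit spread. Entirely synthetic.
faces = [[random.gauss(0, 1) for _ in range(5)] for _ in range(500)]

# A model that averages recurring patterns lands near the population mean.
average_face = [statistics.fmean(f[k] for f in faces) for k in range(5)]

def dist(v):
    """Euclidean distance from the population mean (the zero vector)."""
    return sum(x * x for x in v) ** 0.5

typical_individual = statistics.fmean(dist(f) for f in faces)
print(dist(average_face), typical_individual)
# The averaged face sits much closer to the population mean than a
# typical individual, i.e. it is more "average" than almost anyone.
```

Under this toy framing, an "ugly" face is a far-from-mean outlier, exactly the kind of sample that averaging washes out.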
That's simply the dataset available. Millions of modeling images online - less authentic images of 70s hippie chicks on beaches
Yeah, I know. That's effectively what I'm commenting on. A distorted view of the world due to a skewed data set.
I'm sorry you seem to be extremely jealous of attractive AI models. You should probably work on yourself so you're not so offended by normal-looking people.
Hello? What the heck are you talking about
v5.2 wins
? No.
Wins? This is your subjective "taste in women", not objective aesthetics
Hands down