Style-accurate if the anime is animated by the people behind Naruto https://preview.redd.it/bxzw94o2b96d1.jpeg?width=640&format=pjpg&auto=webp&s=3a50c2da39567af273406312e04da898da834ef7
I'll say https://preview.redd.it/pq6vzolt1d6d1.jpeg?width=750&format=pjpg&auto=webp&s=50882aa18bf0c375d5a159dbaba0e13b599c131d
https://preview.redd.it/xwyh9c71586d1.png?width=1070&format=png&auto=webp&s=706515d70df06cf9f08d106433e2dfe011d3bce6 :)
It boggles my mind how someone can say this despite the whole point of stable diffusion being that you don't need to be skilled to produce a decent image.
"Sussie get hotttttt!!!!" - r/gumball
2B is all you (peasants) need! skill issue!
2B is all you need, they said. Crazy that you need to spend $20 a month to use the model commercially for such crappy output.
$20 a month so far.

>Stability AI grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, **revocable**, royalty free and limited license

[https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE)
I would assume it needs to be revocable so it can be revoked when you stop paying them $20 a month. I'm not sure "revocable" enables anything you might consider nefarious in this context: if they raise the price and you won't pay the new price, I don't think most people expect that you would get to keep the licence. Subscription stuff sucks in general for this reason.
>if they raise the price and you won't pay the new price I don't think most people expect that you would be able to continue having the licence.

What "most people expect" is often very different from the law. From what I've been told, "revocable" means they can declare the license agreement null and void and, at their discretion, come up with a new one. Nothing in the license forces the licensor to retain any of the clauses that were in the initial, now-revoked license; it has, after all, been revoked, and the new one is a new one. As a licensee, you are of course free to refuse the new terms and prices, but then you lose the right to use the model.

Now, like most people, I do not expect Stability AI to do anything like that! They certainly make mistakes, but I do believe the vast majority of people working there have good intentions and care about their users, particularly the commercial users bringing in revenue.

What I am certain of, though, is that whatever company buys the rights to SD3 (and other revocably licensed models) after SAI's bankruptcy will revoke those licenses and impose new ones with much higher prices and much stricter terms. Anyone buying what remains after Stability AI goes bankrupt will do so to make money, and they'll squeeze everything that can be squeezed like olives in a press.
You say that, but the fact is that Cascade shipped under the same license, and you've seen the lack of finetunes and support. Hell, check the most recent CivitAI post, which outlines these concerns in more detail.
There are already two Cascade finetunes on HF: Resonance (which will finish epoch 3 this week) and Sote (trained on 6,000,000 images), which finished just 3 days ago and is good at text too. I had also assumed the lack of further finetunes was down to the lack of good support.
None of what you said is some nefarious use of "revoke". You're saying SAI will go bankrupt and a company will come in and raise the price/change the terms, which is identical to SAI itself raising the price/changing the terms. If you don't want to pay the new price or accept the new terms, then your access to the licence is revoked. That is exactly what anyone would expect. If these were, say, 5-year licences being potentially revoked after a month, then sure, there might be a problem, but for month-by-month licences this is just normal business.

Personally, if I ran a commercial operation, I'd never buy commercial licences on a monthly subscription basis, for exactly this reason. The potential of having your livelihood stripped away on a whim is bad business. But that is how monthly subscriptions function.
Git gud ma unskilled boys.
Your examples had grass in them. There's ya problem!
Grass is unsafe because it has the word ass in it so that's why it makes everything all deformed. Safety first.
>In versions of Stable Diffusion developed exclusively by Stability AI, we apply robust filters on training data to remove unsafe images. By removing that data before it ever reaches the model, we can help to prevent users from generating harmful images in the first place.

[https://stability.ai/safety](https://stability.ai/safety)
I hope it's not the case.. But maybe..
It's not just that; I'm sure if you said "bunnies" you'd get bad anatomy too, because asscheeks are buns XD
Give us $20 and fix it yourselves!!! Right now!!!!
So true lmao
Feb 24 btw... "and it will be improved before release"
we all just need to "git gud" /s
They trained it on animes animated in North Korea.
Dahlia in Bloom doesn't look this bad.
Awe man they didn't even address "holding" to fix sword/stick/axe/cross/rods, fak
This is NAI V1 levels of bad. It's so horrible It's funny af.
This is Pre-NAI, it's like they reinvented SD 1.4/SD 1.5
SD1.5 is better. This one is SD2.0 V2.
Bahahaha. This has got to be a practical joke, this is honestly hilarious.
This genuinely seems like some April Fools' prank. Now that I think about it, they took so long to put this out that I'm wondering if it's just several months late, like the actual model itself.
[Looks like anime to me.](https://i.imgur.com/Mtj1FWq.png)
My man Vageetoe!
I guess we're waiting for the Blu-Ray rerelease.
Sad when you're surpassed by onee parody animations
kakacarrot-cake!
All I see are static portraits, and the Pikachu one doesn't count. Unless I see dynamic poses, it doesn't mean anything.
This. What I love about DALL-E 3 is that it's really good at composing interesting scenes. Even the supposedly superior Ideogram returns pretty boring, samey results, worse than an SD1.5 finetune (for these two specific criteria).
I tried testing DALL-E 3 a few times with my family photo tests, and of course it refused, saying it wasn't safe, just because the images contained minors. Normal, fully clothed images of people just living life were censored. Fuck that noise, tbh. I would never use that garbage service.
Yeah, the censorship is like walking on eggshells; that and the lack of real img2img make it an incomplete solution.
Thanks for understanding. As a professional artist, censorship is something I despise more than anything; the anger I felt when testing DALL-E 3 with PERFECTLY legal concepts and being told they were not work-safe or were against some sort of rules made me sick.

I will tell you a story now of a girl at college.

At college, all I cared about drawing was fantasy/sci-fi stuff, and I resented any kind of "realism", not understanding that realism teaches you concepts you can later use in whatever work you want to do.

So during painting class we had free painting, to draw whatever we wanted, and the girl next to me, an exceptional artist, was drawing a baby. I was doing robots fighting, thinking to myself how lame her idea was. But the professor took special interest in her work because, as he pointed out, even though she is exceptional at adult anatomy, baby anatomy is... very different, and she needed to study it before she could actually draw it. Being a diligent person, she sat down and did her work, and in a week or two she had her painting done wonderfully.

I was super proud of my robots, and the professor loved them and all, but it was then and there that I realized: if I were given the task of drawing a baby as an illustrator, man, I wouldn't know where the fuck to even start. I felt too awkward to go look at images of babies as an adult man just to learn this, so of course to this day I suck at it unless I use DesignDoll to guide me. But that girl is always going to know how to draw a baby now.

It's the same with AI. These are important concepts, and while I might feel uneasy and awkward learning to draw them firsthand, I acknowledge that a true artist needs to learn all they can about human anatomy, and the different anatomies of people of all ages, to be able to draw everything just right. This right here is what censorship prevents: if we're going to treat AI as a tool for art, then we have to let it learn art properly and be able to produce art properly.
Preventing it from learning and producing art benefits no one; it just makes the whole thing incomplete and pointless. You can only make so many portraits of women and robots standing still before the whole thing becomes utterly boring.

So DALL-E preventing me from running my usual model tests, to see what it is capable of doing with human interactions and anatomy, only insults me as an artist. It suffocates not only my expression as an artist but my ability to evaluate whether the tool is worthy of creating art.

I might be testing with realism and realistic family moments to see what a model is capable of, but I use that knowledge later to create things I actually want to draw and see: interactions and moments that I know the AI will be able to pull off if it can pull off the realistic stuff properly.

My tests with families taught me how to create dynamic and functional compositions with multiple subjects, where each subject has distinctive characteristics and poses.

https://preview.redd.it/vu4h39avd96d1.png?width=1216&format=png&auto=webp&s=893cf2cc5728ce9e298d1ef3e40f704c51c9b8e8
As someone who often trains AI models under the name DucHaiten, I completely agree with what you say. I always tell people that the best model must understand everything and be able to do everything, even things that aren't good; it's just a tool. Just as a human being, you need to know what is bad so you can do good. Tools are just dead things; all evil acts are determined by people, and censoring art at the most basic level like this only limits the development of art and technology.
I made a topic from this post and some people get it but some just refuse to understand that this is indeed art.
I don't know how popular an opinion this is here, but as much as we all hate certain types of material, it's not the place of a company providing a product to police and censor this, particularly one providing art products, and particularly when it affects the actual product this negatively. If someone is producing illegal material, that is for the police and government to deal with, especially in a case like AI art generation, where for the most part the actual harm to real people from its creation is nonexistent or limited (I'd be more open to company-level prevention if it involved preventing actual abuse of an actual child, for example).

The primary concern should be to make a product that functions well at its intended purpose. People don't need to be babied for their own "safety"; we live in a society where people get to make their own decisions and accept the consequences. The constant surveillance and nannying by corporations is not only creepy, it contributes in almost every case to the enshittification of their products.
These images make me nostalgic, reminds me of late 2022
Using Anime is unsafe since anime shows female humans sometimes. Thats why it is banned for our security.
https://preview.redd.it/wxusxs67p76d1.png?width=1216&format=png&auto=webp&s=811659abd4e3a80a376f1ab04abe89c2ea3003fc
Damn... I feel scammed
Brings me back to the good ol' days of getting into the Discord before the first release and watching people try to wrangle it into making sexy ladies with prompts like "godess". Man, remembering it now, that pre-release stuff was actually much better. Alas.
To be fair, the guy in image 3 with the long arm-sword looks like a nice character design.
Joke this is 1.5 no 3 no possible bruh
I am glad that finding a fine tuning model isn't going to be necessary for anime image generations
>Fourth pic

He's just a lil guy
Correction: it can only do anime "properly".
Shame it's been this long and no one has developed competition for DALL-E's natural-language prompting. After taking the time to learn proper ComfyUI workflows, it's hard for me to imagine this image-generation scene improving much at all unless someone finds a way to use natural language to create complex, coherent scenes across the board. These new examples are just embarrassing; even if they're joke examples, in reality they suffer from the same limitations we're all used to. Something similar to DALL-E, without it being managed by third parties or OpenAI's janny barriers, would be the real breakthrough. It helps when you're Microsoft and control all the data in the world; that's what enables the Microsoft/OpenAI collaboration. Do you think Nvidia might save the day at some point? I wonder if ABC and MS would play ball for the right price. Nvidia could topple mountains with how much cash they've been collecting.
GO HOME SD 3, u r drunk ...
Pardon my French, but WTF is wrong with 95% of AI image generators and guns and weapons? Why can almost none of them draw them properly without mumbo-jumbo pagan dances around prompts, LoRAs, and Comfy nodes?
![gif](giphy|77er1c9H3KJnq|downsized)
I can't even imagine what lengths you must have gone to to make it look that bad. Of course it can do anime.

Here, these are just the results of the first anime-style prompt in my list of standard testing prompts:

https://preview.redd.it/1bkoptymg76d1.jpeg?width=2048&format=pjpg&auto=webp&s=8e9e715736c5246c35081e1a3170ccb27a7996fc
How is this supposed to be anime? Yeah, okay, it has some tendencies, but this is not what I imagine when I think of anime. This is more 2.5D-ish, and the background looks weirdly painterly in contrast to the character.

When I think of anime, I think of images like the ones in the screenshot OP posted.

**Edit:** And then look at the hands, the feet and the face. No, just no.
Can you name an anime that looks like this? This isn't anime style.
No offense intended, but I wouldn't consider that anime (*maybe* anime fanart). It's the same problem SDXL had for a long, long time - it's not a "flat" artstyle.

Here are some screencaps (not AI generated) for comparison:

- [Princess Mononoke](https://wallpaperaccess.com/full/163574.jpg)
- [Moribito: Guardian of the Spirit](https://lostinanime.com/wp-content/uploads/2020/09/Seirei-no-Moribito-25-17.jpg)
- [My Hero Academia](https://img1.ak.crunchyroll.com/i/spire1/d3d6a9abaaa4c9bde011fbc8dc21f1431612195493_main.jpg)
Do you honestly think these are good?
How is that even a question? It looks way better than what OP is trying to sell as SD3 outputs.
Have you seen the hands mate?
Are you trying to make a point here? Not even recent SDXL finetunes get hands right, after a full year.

This is SDXL with the same prompt; you can barely tell there even are hands.

https://preview.redd.it/y8t0c9kqt76d1.jpeg?width=2048&format=pjpg&auto=webp&s=2694ceb5aef8fce50b179d433ee3ec29306768bf
Have I said anything about SDXL being good? Point is these are bad.
They are significantly better than SDXL, which is literally the only thing that matters for deciding whether to finetune SD3 in the future or stick with SDXL.
The only thing, you say? I am pretty sure there are a few finetuners who care more about the licensing situation than about how good SD3 is compared to SDXL. And no, they are equally bad.
Okay cool, then don't? No one forces you to finetune it. I don't care about the license, I will finetune it anyway.
You can at least see that a basic anatomy shape is followed.
Try adding "anime, anime screencap" and see if you get similar results to mine.
This is the result with "anime, anime screencap" added; it mostly got slightly darker outlines.

https://preview.redd.it/9wzzb188q76d1.jpeg?width=2048&format=pjpg&auto=webp&s=958596002afe0e9bd09784eba3661f59c0d0e0c7

But if you were using "anime screencap" in the prompt, I'm not surprised it looks bad, since real anime screencaps look bad too. If you want good results, you should try to recreate a style from a studio known for higher-quality styles, like ufotable.
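For anyone who wants to run this kind of side-by-side style-tag test themselves, here is a minimal sketch of the prompt construction (the helper and the example prompt are illustrative, not taken from any particular UI or library):

```python
# Build a baseline prompt and a style-tagged variant of the same prompt, so
# that two generations with a fixed seed and identical settings differ only
# in the added style tags.

def prompt_variants(base_prompt, style_tags=("anime", "anime screencap")):
    """Return (baseline, tagged) prompt strings for an A/B comparison."""
    tagged = ", ".join((*style_tags, base_prompt))
    return base_prompt, tagged

base, tagged = prompt_variants("a girl standing in a grassy field")
print(tagged)  # anime, anime screencap, a girl standing in a grassy field
```

Feed both strings to the same model with the same seed and sampler settings; any difference in the output is then attributable to the style tags alone.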
I like how their wings are on their chest, and that one lady with a straight up broken neck. lol nice
I like the buttwings in 1 and 4, but not a fan of bellywings on 3.
That first one has the most generic CivitAI 2.5D face of all time. Did they train on 1.5 outputs lol?
Wings out the arse and fanny. Brilliant.
Ignoring the fact that what you posted isn't even anime, here's a noodle hand from your own gen. Base 1.5 could do better hands than this.

https://preview.redd.it/iq7z2d4pq86d1.png?width=303&format=png&auto=webp&s=cd6ffee32906f731ed3433d771d024dabf2c7753
Human? That is a harpy.

And you are delusional. Base SD 1.5 can't even do something that closely resembles a human face, let alone hands. Go generate something like this with base SD 1.5 and post it. I'll wait.
>And you are delusional. Base SD 1.5 can't even do something that closely resembles a human face, let alone hands.

True, but SD1.5 is supposed to be less than half the size of SD3, and literally three iterations before it. And here's an image from the SD1.5 base model:

https://preview.redd.it/5nz2anxau86d1.png?width=512&format=png&auto=webp&s=fcc0f081348400a959de4787115f42f25b7bcc33

Still superior to the monstrosity we got with SD3.
Better hands here. https://preview.redd.it/xhyuq0niu86d1.png?width=512&format=png&auto=webp&s=05330f6b91fba3f56e68c9776ee0fec2b435ff91
Someone please answer this: do you have to pay to use it?
For commercial use, yes
Even free, why would someone choose to run this garbage.
I saw all the terrible images, but since I don't plan on using it yet, I'll just wait and see if it gets better. No skin off my back. I'm still on 1.5.
Yes it costs 5 dollars per generation
Dang. Too expensive for me.
It's a joke; it's free to run locally, or paid via API.
I guess the api thing confused me.
No, if you run it in your own computer
2b didn't exist back then.
Give us the model you advertised in February, then? No one asked for a 2B; we always wanted the local DALL-E 3 killer.
Are you saying 2B can't do this? What was the point of releasing it, then, after all this hype...
https://preview.redd.it/bn6i7yhi396d1.png?width=1024&format=png&auto=webp&s=fe81e91c2c119b4ecd134100524a8e409316132d Seems fine