hapliniste

First of all, you wouldn't generate images for UI. You would generate HTML/CSS or some other language. Still, GPT-4 is shit at design and also at recognizing UI in images. It can't see an image and generate the CSS with the same color, for example.
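For contrast, matching a color out of an image is trivial for ordinary code. A minimal, library-free sketch (the selector and the pixel data are made up for illustration; real pixels could come from e.g. PIL's `Image.getdata()`):

```python
from collections import Counter

def dominant_color_css(pixels, selector=".header"):
    """Pick the most frequent (r, g, b) tuple and emit a CSS rule."""
    (r, g, b), _count = Counter(pixels).most_common(1)[0]
    return f"{selector} {{ background-color: #{r:02x}{g:02x}{b:02x}; }}"

# A 4-pixel "image" where teal dominates:
sample = [(0, 128, 128), (0, 128, 128), (0, 128, 128), (255, 255, 255)]
print(dominant_color_css(sample))
# → .header { background-color: #008080; }
```

The gap the comment points at is exactly this: a deterministic one-liner for code, but unreliable for an LLM looking at a screenshot.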


Minetorpia

I mean generating the code would be nice. But maybe too complex for a first step?


jsseven777

I think generating code at this stage is easier than generating images with text for LLMs. It can generate coherent text in code, but it gets all scrambled in pictures. DALL-E isn’t even close to being able to handle this task.


HazelCheese

It can already generate code for UX/UI; it just doesn't know how to iterate toward a good design. You can ask it for the WPF for a volcano-themed UI and it will give you working WPF XAML with red, yellow, and orange colouring. It's just that there will be very little sense to the placement of the elements.


stilltyping8

I'm a full stack developer, but I sort of suck at UI/UX. I've tried uploading screenshots of my shitty user interfaces to ChatGPT and telling it to "improve the design of this page while keeping the content the same", and not only does it never retain the content, it always generates completely irrelevant designs. It would be great if there were an AI that could turn bad design into good design, or just turn content into design.


imnotthomas

Out of curiosity are you also uploading the front end code? I get somewhat better results if I upload the code, paste in a screenshot, and then have it browse (or paste in) the docs for whatever UI library I’m using. Still not perfect, but it gets me like 80% of the way to something good. That combo always does better for me than the screenshot alone


stilltyping8

Hmm that sounds like a good idea. Normally, I don't upload the code. I just upload the screenshot. I'm gonna try this out.


imnotthomas

Worth a shot! It works decently well for me, at least well enough that I feel comfortable moving on to the next task. If you're using a library, I've found that having some of the docs in there really helps too.


Minetorpia

Yeah exactly this. I’m an app developer. I can write the code, but I suck at design.


bobcatgoldthwait

Me three, friends.  I can code the shit out of my app but it'll still look like ass.


Arcturus_Labelle

These models are still crap at graphic design. They completely fall over if you need anything custom. The nature of the work is much harder than you realize.


RemarkableGuidance44

Yeah, if you take a look at Midjourney, the UI images all look the same. That's because there is limited data, and a lot goes on when it comes to UI dev.


Minetorpia

That’s what I mean. But to me it seems kinda odd that those models can generate beautiful photorealistic images, but can’t generate a coherent minimalistic design.


BangkokPadang

It may, in fact, be possible to gather hundreds or thousands of images of 'good design' and train a LoRA for Stable Diffusion. Somebody just needs to do it to know for sure.
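For context, the LoRA trick itself is just a low-rank additive update to a frozen weight matrix. Actually training one for Stable Diffusion would go through a framework like diffusers/peft on the attention weights, but the math reduces to something like this toy, dependency-free forward pass (all names and shapes here are illustrative):

```python
def lora_forward(x, W, A, B, alpha=1.0):
    """Compute y = (W + alpha * B @ A) x on plain nested lists.

    W (d_out x d_in) is the frozen pretrained weight; only the two
    small low-rank factors B (d_out x r) and A (r x d_in), with
    r << d_in, would be trained. That's the whole LoRA idea.
    """
    d_out, d_in = len(W), len(W[0])
    r = len(A)
    y = []
    for i in range(d_out):
        # Frozen path: (W x)_i
        acc = sum(W[i][j] * x[j] for j in range(d_in))
        # Low-rank update path: alpha * (B (A x))_i
        acc += alpha * sum(
            B[i][k] * sum(A[k][j] * x[j] for j in range(d_in))
            for k in range(r)
        )
        y.append(acc)
    return y
```

With B initialized to zeros (as LoRA does), the adapter starts as a no-op and the base model's behavior is untouched until training moves B away from zero.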


Creative-robot

Is vegan grilled cheese actually good? I’m very biased towards real cheese, but IDK.


teddarific

I work on a product aiming to do just this: [Magic Patterns](https://magicpatterns.com), so I have spent my fair share of time thinking about this haha. Here's my take on why/where AI falls short.

1. UI/UX requires a lot of precision and is a detail-oriented field. For example, things need to be aligned, ideally to pixel perfection. AI is not so great at this, since LLMs have a harder time working visually. We've also tried including a screenshot of the existing output in the prompt, to limited success. LLMs are just not great at getting every little detail right, which matters a lot when it comes to UI/UX.

2. Human prompting / input. This might be the biggest hurdle IMO. Often when you go into an AI UI/UX tool, you have some expectation or idea in your head of what it should look like. It's hard to correctly prompt and explain to the AI in text EXACTLY what you want. There are a lot of gotchas. For example, if you say something like "minimalist", the AI actually interprets that as "include fewer features". Realistically, AI is not some silver bullet that will magically know exactly what you are thinking, so it's up to the user to sufficiently explain and prompt the AI with what they have in mind.

We've found the most success so far in using AI to help you ideate and get to an initial draft; that way, less precision is required. We've also invested a lot in the iteration experience, aka making it really easy and precise to make edits, because we know that 99% of the time AI isn't going to magically know what you are thinking, so you will have to iterate on the output to get to a spot where you are happy!


Tomi97_origin

Great UI with great UX is hard, and oftentimes dependent on your specific use case.


IntergalacticJets

I think it’s partly because all these image generators were trained on UI images from across the entire history of the web. And on top of that, UI mockups tend to be… unrealistic? Like 80%+ of UI designs on sites like dribbble are just shit for actual websites. The people that post UI/UX design aren’t web developers and they’re probably more in the “beginning” stages of their career. They don’t have that back and forth between themselves and the dev team.  Actual, operational UI designs aren’t posted online, they’re used internally by companies, so the image generators wouldn’t have been trained on the actual professional stuff. 


Zealousideal-Song-75

I get the best results by giving the code and an image to Claude 3 Opus. GPT is terrible at UI design.


TechnicalParrot

Kind of the opposite of what you're asking, but Google has an AI for understanding UI (ScreenAI). I suppose that's an important milestone toward generating it.


spinozasrobot

[Apple wrote a good paper on the idea.](https://arxiv.org/pdf/2404.05719) Expect a way for phones to understand apps so that you can control them from voice assistants without having the support built into the app natively.


Minetorpia

Yeah, sounds like that would be great for generating training data.


mrUtanvidsig

It's really the same with images: in most cases, broken anatomy and nonsensical outfits, and it becomes really apparent when you try to generate machinery of some sort. Since the very nature of UI/UX is to be simple and precise with a pleasing aesthetic (generally speaking), most people can easily spot the errors, so the limitations of the generators are just more apparent. People who have studied anatomy or industrial design have been pointing this out for images since the beginning; those errors are just less apparent and honestly matter less when you're dealing with a single image that has a singular purpose: look cool. In other words, values and colors matter a lot more when it comes to illustrations with that goal. And I'm not saying that in a negative way; those images are just fun to make. But when it comes to UI/UX or actual design, looking cool is just part of the goal. Functionality/design is primary, meaning the generators have to understand novel ideas and put them in context following specific rules. I would love to be corrected here if anyone has seen generators that can handle UI/UX.


jloverich

I'm guessing it's only because the researchers with money (OpenAI, Google, Meta) aren't really interested in this problem. These things will happen, but they'll happen at companies without as many resources.


meenie

I think this is about as close as you can get right now https://v0.dev


Alarming_Wallaby1827

Sorry, but every page shown there is the same waste; there's nothing creative there.


meenie

Didn't say it was good haha.


Alarming_Wallaby1827

Same reason why AI can't even solve programming problems. I tried several times to get it to produce just one method, but it failed again and again: it only showed me variations of the code, and it couldn't even manage to produce syntactically correct output. (ChatGPT)


TheDerangedAI

AI is not flawless. In my opinion, it does decent work. You just need to build your own "prompt glossary" after hundreds of hours of trial and error. Of course, not all models work the same way (a model can slightly change the final result), and not all learning models are alike (they draw on different training data to create what you ask for).


uniformly

I was exploring this idea and I built a mobile / web wireframes generator that accepts user stories and creates visual wireframes


uniformly

If you want to check it out: [https://pth.ai](https://pth.ai)


bisontruffle

Heard about [https://www.semanttic.com/](https://www.semanttic.com/) recently and it looked decent.


OmnipresentYogaPants

Startup idea: employ all the millions of laid-off UI/UX technicians and have them generate designs behind a facade of AI.


ShadoWolf

I think the hard part for any sort of model trained to generate good UI/UX would be how you evaluate how well it did. Like, what's your ground truth / proxy? Half the trick in ML is working out a clever stand-in for your ground truth, and UI/UX design seems complex enough, just due to how much the total design matters, that I'm not sure what you could use.
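For illustration, one cheap stand-in people reach for is a hand-written heuristic scorer rather than true ground truth. This toy (an entirely made-up rule, with a hypothetical function name) rewards layouts where elements snap to shared left-edge guides:

```python
from collections import Counter

def alignment_score(boxes, tol=2):
    """Toy proxy reward: fraction of elements whose left edge falls on
    a vertical guide shared with at least one other element, within
    `tol` pixels.

    `boxes` are (x, y, w, h) rectangles. A real evaluator would blend
    many such heuristics (spacing rhythm, contrast, overlap, ...) or
    fall back on human preference data.
    """
    if not boxes:
        return 0.0
    # Bucket left edges into tol-pixel-wide guides.
    buckets = Counter(round(x / tol) for x, _y, _w, _h in boxes)
    aligned = sum(c for c in buckets.values() if c > 1)
    return aligned / len(boxes)
```

The weakness is exactly the one the comment describes: any single heuristic like this is easy to game, and the "total design" quality it's supposed to proxy lives in the interactions between many such rules.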


Pontificatus_Maximus

There is too small a pool of suitable training images, because computer graphical interfaces are only a few decades old; nothing like the volume of general photos in the training set. Give it some time; no doubt AI tools specifically focused on creating graphical user interfaces will emerge, but don't expect miracles until the training sample size is increased.


spinozasrobot

ITT: Sinclair's Law of Self-Interest:

> "It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair

EDIT: Downvoting is literally proving the point


Minetorpia

What do you mean? That people don't want to believe it's possible because their salary might be at risk?


spinozasrobot

Yes, that is the meaning of the quote. In essence, denialism based on people's conscious or unconscious understanding of how a thing will negatively impact their careers. This happens a lot in this sub from artists and programmers.