jan
1
Some time in the next few weeks, we’ll be releasing the ability to build workflows from textual prompts. It works quite well at this stage, although of course it’s not yet perfect.
This is one of the most requested features of n8n, so I wanted to give you a sneak peek and share some details about how we’re planning to roll out the new AI workflow builder.
Roll out
- In the beta phase, we'll be releasing the feature to Cloud users, since Cloud is now the most popular way to use n8n and was the fastest version to build for.
- We're still figuring out the best way to bring this to self-hosted. It's not intended to remain a cloud-only feature, but we do have some more work to do to get it into everyone's hands. So please watch this space for updates there.
To be clear: we see this feature as a way to get more people using n8n, rather than a way to make money. Our goal is to put it in as many people’s hands as possible (including on self-hosted).
Pricing
There are also real costs to this feature (they're measured in millions), so unfortunately, we will have to meter it. This means that a certain number of interactions will be included with each Cloud plan, as well as with the trial.
This pricing is only designed to cover the costs of the models behind it, since high prices run against our goal of making the feature available to more people. We will, of course, review pricing once we have more information on how it’s used.
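For illustration only, a per-plan interaction meter like the one described boils down to a small counter. A minimal sketch (the class name and the limits are invented here, not n8n's actual tiers):

```typescript
// Illustrative sketch of per-plan interaction metering.
// Plan limits below are hypothetical, not n8n's actual allowances.

class InteractionQuota {
  private used = 0;

  constructor(private readonly limit: number) {}

  // Returns true and counts the interaction if the quota allows it,
  // false once the plan's included interactions are exhausted.
  tryConsume(): boolean {
    if (this.used >= this.limit) return false;
    this.used += 1;
    return true;
  }

  remaining(): number {
    return this.limit - this.used;
  }
}

const trial = new InteractionQuota(10); // hypothetical trial allowance
trial.tryConsume();
console.log(trial.remaining()); // 9
```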
We’re really excited about this feature and its potential to completely change how and by whom n8n is used. We hope you like it!
146 Likes
Thanks @jan , that's great to hear from you. It will really be very helpful for all of us.
3 Likes
Hi @jan , a great way to side-step the costs issue is by allowing n8n Cloud and n8n self-hosted customers the ability to provide their own API credentials into the LLM provider of their choice.
That way, customers can use the feature as much (or as little) as their budgets afford… without you having to juggle these costs on behalf of customers directly.
If/when you do add that option, I’d request that self-hosted customers be allowed to provide an alternative BASE_URL so that they can proxy the traffic through a high-performance LLM proxy (like https://litellm.ai) – that way, all their requests can be load-balanced according to their individual needs.
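For what it's worth, bring-your-own-key plus a configurable base URL mostly comes down to pointing an OpenAI-compatible request at the proxy instead of the provider. A rough sketch (the config shape and values are hypothetical, not actual n8n settings; LiteLLM's proxy exposes an OpenAI-compatible `/v1/chat/completions` endpoint):

```typescript
// Sketch: route LLM traffic through a user-supplied OpenAI-compatible
// proxy such as LiteLLM. Config names here are placeholders, not n8n's.

interface LlmProxyConfig {
  baseUrl: string; // e.g. a local LiteLLM proxy: http://localhost:4000
  apiKey: string;  // the customer's own provider key
}

// Build the chat-completions request for an OpenAI-compatible endpoint.
function buildChatRequest(cfg: LlmProxyConfig, model: string, prompt: string) {
  return {
    url: `${cfg.baseUrl.replace(/\/$/, "")}/v1/chat/completions`,
    headers: {
      "Authorization": `Bearer ${cfg.apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

const req = buildChatRequest(
  { baseUrl: "http://localhost:4000/", apiKey: "sk-user-key" },
  "gpt-4o-mini",
  "Build a workflow that posts RSS items to Slack",
);
console.log(req.url); // http://localhost:4000/v1/chat/completions
```

Since the proxy speaks the same API shape as the upstream provider, swapping providers or load-balancing across them becomes purely a proxy-side concern.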
42 Likes
jan
5
@dkindlund Yes, that's one of the things we've also discussed, but it wasn't built for the beta; for now, the focus is on learning and moving fast.
11 Likes
Will this prompt logic work for edit/updating existing workflows for error handling or improvements?
4 Likes
Yes, the idea is that you can edit existing workflows and also debug executions using the feature.
14 Likes
Gotta be honest, I’m not really sure this will land for the self-hosted crowd.
I can see how it’s a fun addition for Cloud users — and I get that the aim is to bring new users into n8n — but for most of us running self-hosted (especially those using n8n as an ADM backbone), this isn’t something we’d reach for day-to-day.
In fact, features that try to “auto-build” workflows from prompts often feel a bit fragile in production-grade environments and can end up breaking more than they help. So while I understand the appeal of lowering the entry barrier for new people, as a self-hosted/ADM user I don’t personally see a use-case for it at this stage.
P.S. n8n is the best 
4 Likes
solomon
12
I really thought this was going to be exclusive to the Cloud plans, because of the IP involved in the prompts and agent orchestration. THANK YOU for releasing it also for the self-hosted version. Super excited for this one.
3 Likes
Cfomodz
13
Hey @jan I am excited to see this, and figured I should reach out. Let me know if email or DM is better. As I'm sure is no surprise, I've been building workflows with AI assistance for months. I don't want to be too presumptuous, but my original system used over 1,100% of the tokens of my final system. Based on what you've shared, that difference would represent hundreds of thousands of dollars. Again, I'm confident you have already thought of everything I did, but on the off chance one of those things hasn't come up yet, it seems like I should mention it, considering the millions in play. <3 either way
2 Likes
ibaikov
14
I'm self-hosting, and I've never used the chat-to-workflow tools that are out there, but I will probably use this from time to time. I'd like to be able to get a simple part done instead of spending time doing it myself. I'll manually review and tweak if needed, but it should speed things up: I won't need to search for nodes, drag them around, etc. Sometimes it might also help you brainstorm something complex that you can't simplify or find another approach to.
For example, right now, to make Home Assistant do things via the AI Agent node, I have to create an HA node for every service call in Home Assistant: one for light toggle, one for light brightness, one for the media player, one for button clicking. While waiting for these to be merged into a single node somehow, I could ask the AI to add these nodes for me.
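For what it's worth, Home Assistant's REST API does expose a generic endpoint, `POST /api/services/<domain>/<service>`, so the per-service nodes could in principle collapse into one parameterized call. A rough sketch (the host, entity IDs, and helper name are placeholders):

```typescript
// Sketch: one generic Home Assistant service call instead of one node per
// service, using HA's documented REST endpoint
// POST /api/services/<domain>/<service>. Host and entity IDs are examples.

function buildServiceCall(
  haUrl: string,
  domain: string,
  service: string,
  data: Record<string, unknown>,
) {
  return {
    method: "POST" as const,
    url: `${haUrl.replace(/\/$/, "")}/api/services/${domain}/${service}`,
    body: JSON.stringify(data),
  };
}

// The four separate nodes become four calls to one helper:
const calls = [
  buildServiceCall("http://ha.local:8123", "light", "toggle", { entity_id: "light.desk" }),
  buildServiceCall("http://ha.local:8123", "light", "turn_on", { entity_id: "light.desk", brightness: 128 }),
  buildServiceCall("http://ha.local:8123", "media_player", "media_play", { entity_id: "media_player.tv" }),
  buildServiceCall("http://ha.local:8123", "button", "press", { entity_id: "button.doorbell" }),
];
console.log(calls[0].url); // http://ha.local:8123/api/services/light/toggle
```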
I'd love it to work via OpenRouter, but I guess we're talking about a custom model, and I don't know if that's feasible for you folks.
4 Likes
@jan amazing! I have been seeing so many attempts (most half-assed) at solving this exact problem.
We have even been creating a version of our own…
When this comes to self-hosted, and/or the code is public on GitHub, I would love to put my own effort into helping develop it further, directly for n8n!
4 Likes
The cost of using AI can't be avoided, but this is interesting, and I'm ready to try it. If you need a tester, I'm available. This is a community that seeks value in democratizing the use of technology.
1 Like
Amazing!! My clients and students will absolutely love this. It makes n8n more approachable to a wider audience, while still letting you twist the various knobs, so to speak, to get its full power. Sign me up!
BTW, this is definitely worth something; there's real value here. But there's no need to take on the costs yourself: you could just use the LLM credentials I already have set up and let the user pay the token costs.
2 Likes
I think the release of this feature is very positive for the community, but I do have a few concerns…
For those who have been following the Replit fiasco when they moved from Agent 2 to Agent 3, these concerns are real and well documented on Reddit and in a few press articles.
If the feature is for beginners, position it and promote it as such… don't overhype it for intermediate or advanced developers. One size doesn't fit all.
Whether the feature is "expensive" is in the mind of the customer. Spending $20 to $50 per day is not for beginners but may be acceptable for larger dev shops. This likely means the feature should be an add-on, unbundled for a period of time.
Test, test, and retest against metrics. Don't assume pre-launch stats will hold post-launch; they won't, and you need to game out the consequences before launch. Badly managed, an AI implementation can be an existential threat to a company.
Don't drink your own Kool-Aid… we likely want the feature as much as you want to deliver it, but take the feedback on the chin; don't plow ahead based only on people who are already well vested.
I love the ideas already here of using your own models to control costs, timelines and quality and focusing on different features for the different audience types.
3 Likes
Thank you for sharing. I'd like to sign up for the beta; I could start using this tomorrow!
2 Likes
qa_xcn
21
Your writing was very helpful. Thank you!!
2 Likes
Will it be something like the n8nchat add-on for Chrome?
Maybe better trained, since it has access to the underlying code in some way…
And for the self-hosted option, I'm curious how it will be implemented, since it has to be "free"… right?
Cheers!
2 Likes