Activity

Memberships

AI Automation Society

Public • 3.7k • Free

AI Automation Society Plus

Private • 128 • $59/m

107 contributions to AI Automation Society
Hi Nate, hi community,
We're having a problem right now with our AI agency. We've already connected ManyChat with n8n to create a WhatsApp chatbot agent that, in short, gives out information about the products and tries to close the deal using sales strategies; when it's time to pay for the product, it hands the conversation over to a real person to confirm the payment. We want to add a manager in ManyChat who can step into the chats when the payment process starts, but we don't know how to do that at all, and we're having some trouble getting this agent working. We also have a deadline of December 30, so if anyone knows how I can solve this problem, please let me know in the comments. Thank you so much @Nate Herk
New comment 1d ago
1 like • 1d
Have you seen this? https://community.manychat.com/general-q-a-43/human-handover-4122?tid=4122&fid=43
N8N Supabase Use Case Recommendations
I followed Nate's advice in terms of setting up a Supabase RAG database, and I have a question about how to make that work in my use case. I am an executive coach with around 25 clients, each averaging two 60-90 minute meetings a month. I get transcripts for all of those calls through Zoom and also through Otter, and I'm also getting summaries and next-step items from other AI assistants. My goal is to load each individual historic and new transcript into a new Google Doc, using the title to distinguish the client. That doc then gets pushed into the Supabase RAG database. Before a meeting with a client, I ask a question in my chatbot to review the most recent challenges and actions, plus anything else I should be aware of or recall. My question related to Supabase is this: can I do what I've defined above, or should I be setting up subcategories or tags specific to each client, such that client A always goes into the client A database, client B goes into the client B database, etc.? Or is even asking this an irrelevant question at this point? I want to ensure it's not pulling in conversational data from other clients. I'm trying to make sure I have the best approach before I just start adding transcripts. I have almost 10 years of transcripts, so I want to get it right. Let me know if you have any advice. Thank you.
New comment 1d ago
0 likes • 2d
As a business strategist / coach myself, I would take the most secure route and separate the database or tables for each client. Also run a test to see whether a vector database suits transcript text better than a relational database like Supabase. Separating by table / database means the agent will never pull results from more than one client, and in your prompt you can ask it to confirm the information source. You should also be able to save as text/JSON if that saves you from needing a Google Doc. Zoom AI and Otter both seem to have an API for direct insertion via n8n (I haven't tried this yet!). Unfortunately I use fathom.video, which doesn't yet have an API.
1 like • 1d
@Tom Adams I hear you. Fathom seems to have superior context-aware transcription. Adding platforms isn't ideal, but that's part of the world we're in at the moment. I highly suggest a vector database for transcript searches, and you can also tag comprehensive metadata. We use MS SQL for structured data (e.g. financial feeds), but similarity searches are far better with a vector database. We self-host (which I like from a security point of view), and n8n helps greatly for creating backups when and where I want (I use it to back up my Docker / MS SQL instance). Self-hosting also means I use Qdrant.
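To make the per-client isolation concrete, here's a minimal sketch of a Qdrant-style search request that filters on a client metadata field, so a similarity search over one shared collection can never return another client's transcripts. The payload field name (`client`), the values, and the vector are assumptions for illustration, not a drop-in integration:

```python
# Sketch: per-client isolation via a metadata filter on a vector search.
# The "client" payload field and values are hypothetical placeholders;
# adjust to your own Qdrant collection schema.
import json

def build_search_request(query_vector, client_name, top_k=5):
    """Build a Qdrant-style search body that only matches one client's
    transcripts, so results never mix clients."""
    return {
        "vector": query_vector,
        "limit": top_k,
        "filter": {
            "must": [
                {"key": "client", "match": {"value": client_name}}
            ]
        },
        "with_payload": True,
    }

req = build_search_request([0.1, 0.2, 0.3], "client_a")
print(json.dumps(req["filter"], indent=2))
```

The alternative is one collection per client, which gives the hardest guarantee; the filter approach keeps operations simpler while still scoping every query.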
Communication Agent
Hello everybody, I'm in the process of building a Project Management multi-agent framework for a client, and part of it includes a Communication Agent that needs to communicate with the end user via email. I've been having a hard time building this, and these are the two methods I have tried: first, an agent with Send Mail and Get Mail tools, but in this scenario I cannot keep the workflow running until a response is received; so I moved to the next workflow, an agent with the Send Mail tool and a Get Mail trigger connected to the agent. Unfortunately, this also hasn't produced any fruitful results. Does anybody have experience building an agent like this? Please do share any thoughts you might have.
New comment 2d ago
1 like • 3d
Just a left-field idea for keeping the workflow running... you could write a subworkflow for Get Mail, and have it send the response via an HTTP node to a Wait node on the main workflow?
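To show the shape of that idea (assuming a subworkflow that fetches the email and then calls the main workflow's resume URL), here's a toy Python sketch where a thread and a queue stand in for the subworkflow's HTTP callback and the main workflow's Wait node; this is the control flow only, not n8n code:

```python
# Sketch of the wait/callback handshake: the main workflow pauses at a
# "Wait" step until the subworkflow posts its result back. A thread and
# a queue stand in for n8n's Wait node and HTTP callback (illustrative
# assumption, not real n8n components).
import queue
import threading

resume_channel = queue.Queue()  # stands in for the Wait node's resume URL

def subworkflow_get_email():
    # Pretend we polled the inbox and the user replied.
    reply = {"from": "user@example.com", "body": "Approved"}
    resume_channel.put(reply)   # the "HTTP POST" back to the resume URL

def main_workflow():
    threading.Thread(target=subworkflow_get_email).start()
    # Wait step: block until the subworkflow calls back.
    return resume_channel.get(timeout=5)

print(main_workflow())
```

In n8n terms, the Wait node exposes a resume URL that the subworkflow's HTTP node would call when the email arrives, which lets the main execution stay paused instead of polling.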
How can i start learning AI Automation?
I'm a Zoho CRM consultant with experience in CRM automation, but I'm interested in AI automation. Can someone help me with a roadmap, and maybe some resources where I can learn from scratch?
New comment 2d ago
3 likes • 3d
Have a look at @Nate Herk's paid community - it's got some great resources to get going and develop further...
Asynchronous processes
I have a workflow which writes out to a Google Sheet to log messages, but it waits for that node to complete before it sends the reply to the user. I'm wondering if I can have a workflow that runs nodes asynchronously, so it doesn't need to wait for that one to complete before it moves on to the next. Is that possible?
New comment 4d ago
0 likes • 6d
@Ed Dowding just an update. Today I installed Redis on Docker and set up two n8n workers. However, because n8n works in a serial manner, even if I ask it to call two separate workflows, it will wait for one to finish before moving on to the next. From what I can tell, the way around this is to set up webhooks for each workflow: have the webhooks respond immediately so the workflow continues on, and then, when I need to bring the data back together, expose another two hooks to receive the data back. I will do some testing to see whether a Merge node will wait for both workflows to respond before continuing on. Also, with all these hooks floating around, I will look to use a CLI command to activate my webhook workflows when my master workflow starts and deactivate them when it finishes; the aim here is not to load up the system with unneeded overhead. It will probably be in the new year, but I'll let you know how it goes.
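For the activate/deactivate step, the n8n CLI's `update:workflow` command can toggle a workflow's active flag. A small sketch that builds those commands (the workflow IDs are made-up placeholders, and this assumes the self-hosted CLI is on the PATH):

```python
# Sketch: toggling webhook workflows around a master run using the n8n
# CLI's `update:workflow --id=<ID> --active=<bool>` command. Workflow
# IDs below are hypothetical placeholders.
import shlex

def toggle_commands(workflow_ids, active):
    """Return one CLI command string per workflow, activating or
    deactivating it."""
    flag = "true" if active else "false"
    return [
        f"n8n update:workflow --id={wid} --active={flag}"
        for wid in workflow_ids
    ]

hooks = ["101", "102"]            # hypothetical webhook workflow IDs
for cmd in toggle_commands(hooks, active=True):
    print(shlex.split(cmd))       # ready to pass to subprocess.run
```

Running these from the master workflow (e.g. via an Execute Command node) would keep the hook workflows inactive except while they're actually needed.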
1 like • 4d
@Ed Dowding an update. I had a play with parallel execution. I tried @Rick Wong 's fan-out suggestion, but I found that the Execute Workflow node still worked sequentially (waiting for the response from the first workflow before running the second one). However, I moved to enabling a webhook trigger on the sub-workflows (set to respond immediately) and called them with an HTTP node which passed my data via POST. I then used two Wait nodes on my main workflow, and presto, they both executed within a few milliseconds of each other. The trick is to pass the dynamic $execution.resumeUrl with your output to the sub-workflows; as each webhook URL is called, it resumes the corresponding Wait node. This should also work to a degree without setting up workers, but workers will add parallel execution capacity. With cloud-hosted n8n, the level of concurrency you can achieve is apparently based on your plan.
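The fan-out pattern above can be sketched in Python, with threads standing in for the webhook-triggered sub-workflows and events standing in for the Wait nodes. This is illustrative only; in n8n itself the wiring is HTTP nodes plus `$execution.resumeUrl`:

```python
# Sketch of the fan-out: the main workflow fires two sub-workflow
# webhooks that respond immediately, then resumes once both call back.
# Threads/events are stand-ins for n8n webhooks and Wait nodes.
import threading
import time

resume_events = {"sub_a": threading.Event(), "sub_b": threading.Event()}
results = {}

def sub_workflow(name, resume_url):
    # Webhook trigger set to "respond immediately": work happens async.
    time.sleep(0.2)                    # simulated work
    results[name] = f"{name} done"
    resume_events[resume_url].set()    # call the Wait node's resume URL

start = time.time()
for name in ("sub_a", "sub_b"):
    # The HTTP node call, passing the resume URL along with the data:
    threading.Thread(target=sub_workflow, args=(name, name)).start()

for ev in resume_events.values():      # two Wait nodes on the main flow
    ev.wait(timeout=5)

elapsed = time.time() - start
print(results, f"elapsed={elapsed:.2f}s")  # ~0.2s, not 0.4s: parallel
```

Both "sub-workflows" sleep 0.2s, yet the total elapsed time stays near 0.2s rather than 0.4s, which is exactly the behaviour the webhook-plus-Wait-node setup achieves in n8n.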
James Walls
5
328 points to level up
@james-walls-8225
Company Owner & Business Strategist, using n8n & AI to provide a better experience and outcomes for my business consulting clients.

Active 16h ago
Joined Oct 30, 2024
Florida, USA