Memberships

Grow X Impact™ • Private • 130 • Free
AI Automation Agency Hub • Private • 71.1k • Free
InsightAI Academy • Private • 7k • Free
Automate What Academy • Public • 853 • Free
Institute of AI • Private • 46 • $99/m
Skool Masterclass (Free) • Private • 89.9k • Free
Content Academy • Public • 9.2k • Free
No-Code Nation • Private • 215 • Free
AI Agency Mastermind 🤖 • Private • 397 • Free

22 contributions to Content Academy
Help with the NCA toolkit to add text overlay
Hi everyone, I've been trying to add a text overlay to a video using the NCA toolkit, specifically the ffmpeg compose endpoint. I also used the NCA toolkit API GPT and it recommended a filter called "drawtext", but that isn't working: it returns an error 500 with the following messages:
- Either text, a valid file, a timecode or text source must be provided
- Error initializing filters
- Failed to set value 'drawtext' for option 'filter_complex': Invalid argument
- Error parsing global options: Invalid argument
I would really appreciate any help getting this working! A dedicated endpoint for this would be a banger, too. Thank you.
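A minimal sketch of a request that avoids that error, assuming the compose endpoint takes a JSON body with inputs/filters/outputs and an `x-api-key` header (those field names are assumptions, not a quote of the toolkit's docs; check the Postman template for the real schema). The main point is that drawtext needs its options, at minimum `text=...`, written into the filter string itself; sending the bare word "drawtext" is exactly what produces "Either text, a valid file, a timecode or text source must be provided".

```python
import requests

# Placeholders: point these at your own deployment.
BASE_URL = "https://your-nca-toolkit-host"
API_KEY = "your-api-key"

# drawtext with its arguments inline; a bare "drawtext" has no text source,
# which is what the 500 error is complaining about.
drawtext = (
    "drawtext="
    "text='Kitchen':"
    "fontcolor=white:fontsize=48:"
    "x=(w-text_w)/2:y=h-120:"
    "box=1:boxcolor=black@0.5"
)

# Assumed payload shape, not the documented one.
payload = {
    "inputs": [{"file_url": "https://example.com/house.mp4"}],
    "filters": [{"filter": drawtext}],
    "outputs": [{"options": [{"option": "-c:a", "argument": "copy"}]}],
}

resp = requests.post(
    f"{BASE_URL}/v1/ffmpeg/compose",
    json=payload,
    headers={"x-api-key": API_KEY},
    timeout=120,
)
print(resp.status_code, resp.json())
```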
3
2
New comment 1d ago
1 like • 1d
@Stephen G. Pope Early Christmas present! Haha thanks man 🔥
Stop Wasting Money! Cancel ALL Your API Subscriptions NOW!!
In this video, I'll show you exactly how to ditch costly monthly API subscriptions and replace them with one free tool: the No-Code Architects Toolkit. Replace ChatGPT Whisper, CloudConvert, Creatomate, JSON2Video, PDF.co, Placid, and OCodeKit.
Docker Image → stephengpope/no-code-architects-toolkit:latest
Postman Template → https://bit.ly/49Gkh61
NCA Toolkit API GPT → https://bit.ly/4feDDk4
Github Repository → https://bit.ly/3DhFo2A
60
14
New comment 9h ago
Stop Wasting Money! Cancel ALL Your API Subscriptions NOW!!
0 likes • 3d
Awesome!! Question: has anyone managed to add a text overlay using the ffmpeg compose function? When I try it (with the help of the NCA GPT) it doesn't work; it returns an error related to the 'drawtext' filter I'm using to add the text.
How to merge multiple videos into one, with text overlay (timed according to voice audio)
Hi everyone, I'll do my best to explain. I have a database of real estate properties; each one has videos of a house and data like the number of rooms, bathrooms, etc. I'll feed all that house data into ChatGPT so it can create a video script, and then I'll turn that script into voice with ElevenLabs. After that I want to merge all those video shots of the house into one. For example, say the scripted voice talks about the kitchen for the first 5 seconds: the kitchen clip, plus a text overlay saying "kitchen", would appear in that part. Then for the next video shot the process would repeat, and so on. The final result would be like a slideshow video showing each part of the house, with the appropriate text overlay on each part and the ElevenLabs voice on top (timed accordingly, of course). Is the No-Code Architects Toolkit capable of doing this? I really want to use it, haha. And if not, what tools would you recommend? Thank you! I would appreciate any help.
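If the compose endpoint keeps fighting you, the same result can be produced with plain ffmpeg, which is what the toolkit wraps. A minimal sketch, assuming the clips share resolution and frame rate and the per-room durations are already known from the script; file names, labels, and durations below are placeholders:

```python
import subprocess

# Hypothetical cut list derived from the script/voiceover timing.
segments = [
    ("kitchen.mp4", "Kitchen", 5),
    ("bathroom.mp4", "Bathroom", 4),
    ("bedroom.mp4", "Bedroom", 6),
]
voiceover = "voiceover.mp3"  # ElevenLabs output (placeholder name)

inputs, filters = [], []
for i, (path, label, dur) in enumerate(segments):
    inputs += ["-i", path]
    # Trim each clip to its slot, reset timestamps, and burn in the room label.
    # Some ffmpeg builds need an explicit fontfile= in drawtext.
    filters.append(
        f"[{i}:v]trim=duration={dur},setpts=PTS-STARTPTS,"
        f"drawtext=text='{label}':fontcolor=white:fontsize=48:"
        f"x=(w-text_w)/2:y=h-100[v{i}]"
    )

# Concatenate the labelled clips, then lay the voiceover on top.
concat_in = "".join(f"[v{i}]" for i in range(len(segments)))
filters.append(f"{concat_in}concat=n={len(segments)}:v=1:a=0[vout]")

cmd = (
    ["ffmpeg", "-y"]
    + inputs
    + ["-i", voiceover,
       "-filter_complex", ";".join(filters),
       "-map", "[vout]", "-map", f"{len(segments)}:a",
       "-shortest", "slideshow.mp4"]
)
subprocess.run(cmd, check=True)
```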
8
9
New comment 1d ago
2 likes • 7d
@Sriram Kota Yes, I'm planning on doing exactly that! And the NCA toolkit does have an ffmpeg function, so all good. Thanks for the help.
1 like • 6d
@Dustin Jenkins Thanks but I was looking to automate that whole process
Tool to split audio into 50-70 second audio clips
Hi everyone, I have multiple talking audio clips and I need to find a way to programmatically split them into small clips. Do you know of any tool or API that could identify gaps in the speech and produce multiple ~1 minute WAV/MP3 files? Use case: these clips are spoken by my assistant, who is not very good at English and has a strong accent. I recently found the SieveData API, where AI can help me dub the video with the same voice but better English. The problem is that when I upload clips of 8-12 minutes, the result is audio of the same length, but if the algorithm sees the speech was too fast, it tries to slow down the end of the talk. So it sounds a bit weird when the same person talks slower at the end. My idea is to split the audio into shorter clips, then process them via the API, and in this way reduce the risk that the voice will be slower or faster.
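One way to do this locally is to detect pauses and cut at them, keeping each chunk in the 50-70 second range. A minimal sketch using pydub (which shells out to ffmpeg); the file name and silence thresholds are placeholders you would tune to the recording:

```python
from pydub import AudioSegment
from pydub.silence import detect_silence

audio = AudioSegment.from_file("assistant_talk.mp3")  # placeholder file

# Silent ranges in milliseconds; threshold relative to the clip's average level.
silences = detect_silence(audio, min_silence_len=500,
                          silence_thresh=audio.dBFS - 16)

MIN_MS, MAX_MS = 50_000, 70_000
chunks, start = [], 0
for sil_start, sil_end in silences:
    cut = (sil_start + sil_end) // 2  # cut in the middle of the pause
    # No usable pause in the window: hard-cut at the max length.
    while cut - start > MAX_MS:
        chunks.append(audio[start:start + MAX_MS])
        start += MAX_MS
    if cut - start >= MIN_MS:
        chunks.append(audio[start:cut])
        start = cut
chunks.append(audio[start:])  # remainder

for i, chunk in enumerate(chunks):
    chunk.export(f"clip_{i:02d}.wav", format="wav")
```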
5
3
New comment 14d ago
0 likes • 15d
You can use the ElevenLabs dubbing API; I think it has a limit of 2.5 hours. But either way, I think you can split your videos using the No-Code Architects Toolkit API, which you can host on your own Google Cloud server (https://github.com/stephengpope/no-code-architects-toolkit), specifically using the ffmpeg compose v1 endpoint, though I think it's a bit bugged at the moment.
0 likes • 14d
@Augustas Kligys I think if you get the transcript first (again using the No-Code Architects Toolkit), you can use it somehow to control where it cuts.
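For example, if the transcript comes back as segments with start/end timestamps (the exact response shape depends on the transcription endpoint, so the structure below is an assumption), you can pick cut points on segment boundaries so each clip lands near the one-minute mark:

```python
# Assumed transcript structure: a list of segments with start/end in seconds.
segments = [
    {"start": 0.0, "end": 12.4, "text": "..."},
    {"start": 12.4, "end": 31.0, "text": "..."},
    {"start": 31.0, "end": 58.7, "text": "..."},
    # ...
]

MIN_S = 50.0  # don't cut before ~50 s of speech has accumulated
cuts, window_start = [], 0.0
for seg in segments:
    if seg["end"] - window_start >= MIN_S:
        cuts.append(seg["end"])  # cut on a sentence/segment boundary
        window_start = seg["end"]

print(cuts)  # feed these timestamps into whatever trims the audio
```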
Is the v1/ffmpeg/compose endpoint working in the nocode architects toolkit?
It gives me error 404 "The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again." The other endpoints work fine, though.
4
0
New comment 16d ago
Ricardo Taipe
4
88 points to level up
@ricardo-taipe-1725
wsg

Active 10h ago
Joined Oct 10, 2024
Lima, Peru