Which features are live today?
FFmpeg as a Service and MCP integration are fully live. Run any FFmpeg command via the REST API, or have AI agents drive video jobs through 6 MCP tools. Higher-level operations (managed watermarking, captioning, GDPR redaction, transcoding presets) are on the roadmap and accessible via raw FFmpeg in the meantime.
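To make the raw-FFmpeg model concrete, here is a minimal sketch of what a job submission could look like. The endpoint path, header, and field names below are illustrative assumptions, not Rendobar's documented schema — check the API reference for the real shapes.

```python
import json

# Hypothetical payload for a job-submission endpoint. The "command"
# field carries the same arguments you would pass to ffmpeg locally.
payload = {
    "command": "-i input.mp4 -vf scale=1280:-2 -c:v libx264 -crf 23 output.mp4",
    "inputs": ["input.mp4"],
    "outputs": ["output.mp4"],
}

body = json.dumps(payload)

# An agent calling the submit_job MCP tool would send the same data;
# over REST it might look like (hypothetical path):
#   POST /v1/jobs
#   Content-Type: application/json
print(body)
```

The MCP `submit_job` tool and the REST endpoint accept the same job description, so a payload like this works from either client.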
Why is everything FFmpeg-based under the hood?
FFmpeg is the most battle-tested media processing tool — every major video platform uses it. Building Rendobar on FFmpeg means reliable behavior, predictable output, and the same filters and codecs you'd use locally. Higher-level features will be FFmpeg pipelines too, just packaged with sensible defaults.
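A "higher-level feature" in this model is just an FFmpeg argument list with sensible defaults baked in. The sketch below shows one way a transcoding preset could wrap raw FFmpeg; the flag values are common community defaults, not Rendobar's actual presets.

```python
def h264_preset(src: str, dst: str, crf: int = 23) -> list[str]:
    """Build an FFmpeg argv for a web-friendly H.264 MP4."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",          # widely supported H.264 encoder
        "-crf", str(crf),           # constant-quality target (lower = better)
        "-preset", "medium",        # encoder speed/compression trade-off
        "-c:a", "aac",              # AAC audio
        "-movflags", "+faststart",  # move moov atom up for streaming
        dst,
    ]

cmd = h264_preset("input.mov", "output.mp4")
print(" ".join(cmd))
```

Because the preset is just an argument list, the same command runs identically on your laptop and inside the service.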
Can I request a feature or change roadmap priority?
Yes. Email [email protected] with your use case. Coming-soon features ship faster when paying customers ask for them, but a request for a planned item only moves it up the roadmap when it maps to a real workflow you'd build today.
Do features work across video, image, and audio inputs?
Yes — anything FFmpeg can read, Rendobar can process. The live FFmpeg backend handles video, audio, image, and subtitle inputs. Watermarking, QR generation, and other coming-soon features will follow the same auto-detection model; in the meantime, all of it is doable with raw FFmpeg commands.
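Mixing input types in one job is a standard FFmpeg pattern. As a sketch, this command overlays a PNG watermark onto a video — one image input, one video input, one filter graph (filenames are placeholders):

```python
def watermark_cmd(video: str, logo: str, dst: str) -> list[str]:
    """Build an FFmpeg argv that stamps a logo in the bottom-right corner."""
    return [
        "ffmpeg",
        "-i", video,  # input 0: the video
        "-i", logo,   # input 1: the watermark image
        # Place input 1 over input 0, 10px from the bottom-right edge.
        "-filter_complex", "overlay=W-w-10:H-h-10",
        "-c:a", "copy",  # pass audio through unchanged
        dst,
    ]

cmd = watermark_cmd("clip.mp4", "logo.png", "out.mp4")
print(" ".join(cmd))
```

The same command submitted as a raw FFmpeg job gives you watermarking today, ahead of the managed feature.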
Are MCP tools different from REST endpoints?
No — MCP tools wrap the same REST API. The 6 MCP tools (submit_job, get_job, list_jobs, probe_media, get_account, upload_media) are 1:1 with REST endpoints. Use whichever fits your client. AI agents prefer MCP because tool-calling is native; humans and apps use REST.
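The 1:1 mapping can be sketched as a simple table. The six tool names are from the FAQ above; the REST methods and paths are illustrative guesses, not the documented API:

```python
# MCP tool -> (HTTP method, REST path). Paths are hypothetical.
TOOL_TO_REST = {
    "submit_job":   ("POST", "/v1/jobs"),
    "get_job":      ("GET",  "/v1/jobs/{id}"),
    "list_jobs":    ("GET",  "/v1/jobs"),
    "probe_media":  ("POST", "/v1/probe"),
    "get_account":  ("GET",  "/v1/account"),
    "upload_media": ("POST", "/v1/media"),
}

for tool, (method, path) in TOOL_TO_REST.items():
    print(f"{tool:12s} -> {method} {path}")
```

Whichever side you call, the same backend handler runs — the MCP layer only adapts the tool-calling protocol onto the HTTP API.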