Has anyone explored text-to-video/image AI tools (e.g., Textideo) for scientific visuals?
Hi all,

I’m curious whether anyone here has used AI-driven text-to-video or image generation tools to create visuals for research communication, tutorials, or workflows. One example I came across is [makeshot.ai](https://makeshot.ai/), a service that generates videos and images from text prompts: you write a description of a concept, process, or workflow, and the tool produces media content reflecting that description.

Potential use cases I’m considering include:

- Generating concept visuals for tutorials or presentations
- Creating animations that illustrate data analysis pipelines or experimental workflows
- Producing quick draft media to support blog posts or documentation

I haven’t used it extensively yet, so I wanted to ask the community:

- Has anyone experimented with text-to-video/image AI tools (Textideo or similar) for science communication or documentation?
- If so, what types of prompts and outputs worked best in practice?
- Where have you found these tools useful, and where do they fall short?

Looking forward to hearing your thoughts and experiences, especially from those who integrate visuals into teaching or outreach. Thanks!
participants (1)
-
ethanparkx56@gmail.com