Text to Video Generation using Deep Learning
Published in: 2023 Eighth International Conference on Science Technology Engineering and Mathematics (ICONSTEM), pp. 1-7
Format: Conference Proceeding
Language: English
Published: IEEE, 06-04-2023
Summary: Advances in technology have produced techniques that can generate desired visual multimedia. In particular, deep learning-based image generation has been studied in depth across many disciplines. However, producing videos from text remains challenging for generative models and has received comparatively little attention. This research aims to fill that gap by training a model to generate a clip that matches a given written sentence. The field of conditional video generation is largely underdeveloped. Using a conditional generative adversarial network that generates frame by frame and ultimately produces a full-length video, the project's goal is to transform text to image to video. The initial stage focuses on creating a single high-quality video frame while learning how to connect text and visuals. As the stages progress, the model is gradually trained on an increasing number of consecutive frames. This stage-wise learning stabilizes training and makes the model easier to interpret. High-definition videos can be created from conditional text descriptions. Qualitative and quantitative experiments on various datasets are required to demonstrate the efficacy of the proposed strategy.
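The summary's stage-wise idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: a conditional generator maps a text embedding plus per-frame noise to video frames, and each successive stage produces a longer run of consecutive frames. All names, dimensions, and the toy embedding are assumptions introduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 16            # assumed text-embedding size
NOISE_DIM = 8             # assumed latent-noise size
FRAME_H, FRAME_W = 4, 4   # toy frame resolution

# Randomly initialized generator weights (a single linear layer stands in
# for the conditional GAN generator).
W = rng.normal(0, 0.1, size=(EMBED_DIM + NOISE_DIM, FRAME_H * FRAME_W))

def embed_text(sentence: str) -> np.ndarray:
    """Toy deterministic text embedding: fold characters into a fixed vector."""
    vec = np.zeros(EMBED_DIM)
    for i, ch in enumerate(sentence.lower()):
        vec[i % EMBED_DIM] += (ord(ch) % 32) / 32.0
    return vec / max(len(sentence), 1)

def generate_clip(sentence: str, n_frames: int) -> np.ndarray:
    """Generate n_frames conditioned on the sentence; noise varies per frame."""
    text = embed_text(sentence)
    frames = []
    for _ in range(n_frames):
        z = rng.normal(size=NOISE_DIM)
        x = np.concatenate([text, z]) @ W   # linear "generator" pass
        frames.append(np.tanh(x).reshape(FRAME_H, FRAME_W))
    return np.stack(frames)

# Stage-wise schedule: later stages handle more consecutive frames,
# mirroring the progressive training described in the summary.
for stage, n_frames in enumerate([1, 4, 8], start=1):
    clip = generate_clip("a dog runs on the beach", n_frames)
    print(f"stage {stage}: clip shape {clip.shape}")
```

In the paper's setting the generator and a discriminator would be trained adversarially at each stage before the frame count is increased; here only the shape of the data flow is shown.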
DOI: 10.1109/ICONSTEM56934.2023.10142725