$ researcher --field "video-generation" --mode "building"

Haoxin Chen

Video Generation · Algorithm Researcher & Engineer

Building the future of AI-generated video. Contributed to VideoCrafter, HunyuanVideo, and Seedance. Currently exploring real-time interactive multimodal generation at vivix.ai.

2,500+ total citations · Published at CVPR, ECCV, ICLR, SIGGRAPH Asia.

Currently looking for like-minded partners in real-time interaction and multimodal generation to collaborate on AI-native algorithm R&D. Get in touch if you're interested.

About

I am a video generation algorithm researcher and engineer specializing in diffusion-based generative models. I obtained my Bachelor's and Master's degrees from South China University of Technology (SCUT) (2015–2022). I have worked at Tencent (2021–2024) and ByteDance (2024–2026), participating in VideoCrafter, HunyuanVideo, Seedance, and AI-powered short drama platforms (Douyin & Hongguo). Currently, I am at vivix.ai, a startup building multimodal real-time interactive generation technology.

Video Generation Diffusion Models Real-time Multimodal Interaction AI-native Internet Computer Vision
Experience
2026 – Present

Algorithm Researcher & Engineer

vivix.ai

Multimodal real-time interactive generation R&D at an early-stage startup pushing the boundaries of AI interaction.

2024 – 2026

Algorithm Researcher & Engineer

ByteDance

Participated in Seedance video generation model R&D. Worked on algorithm systems for the Douyin and Hongguo AI short drama platforms.

2021 – 2024

Algorithm Researcher & Engineer

Tencent

Participated in the VideoCrafter series and HunyuanVideo — open-source video generation foundation models.

2015 – 2022

B.Eng & M.Eng

South China University of Technology (SCUT)

Bachelor's and Master's degrees. Research on computer vision, video understanding, and video object segmentation.

Representative Works

HunyuanVideo

Tencent

Tencent's open-source video generation foundation model.

Seedance

ByteDance

ByteDance's video generation foundation model.

VideoCrafter2

CVPR 2024 · 550+ citations

Overcoming data limitations for high-quality video diffusion models through innovative training strategies and data curation.

VideoCrafter1

arXiv 2023 · 540+ citations

Open diffusion models for high-quality video generation, establishing the foundation for the VideoCrafter series.

DynamiCrafter

ECCV 2024 · 430+ citations

Animating open-domain images with video diffusion priors, enabling high-quality image-to-video generation.

ScaleCrafter

ICLR 2024 · 100+ citations

Tuning-free higher-resolution visual generation with diffusion models, enabling resolution scaling beyond the training distribution without additional training.

Research

Video Diffusion — Data & Training Recipe

Focused on overcoming data limitations and establishing open-source foundations for high-quality video diffusion models.

Image-to-Video & Animation

Exploring how to breathe life into static images through video diffusion priors and controllable generation.

Evaluation & Benchmark

Building rigorous evaluation frameworks for video generation models.

Video Object Segmentation — Earlier Work

Master's research on few-shot video understanding with attention-based architectures.

View all publications on Google Scholar →

Contact

Open to discussions about collaboration, internships, and job opportunities. Feel free to reach out via email.

Email
Google Scholar · Haoxin Chen
GitHub · @scutpaul