I am a third-year Ph.D. candidate in Computer Engineering at Northeastern University, USA, where I am part of the SMILE Lab, fortunate to be advised by Professor Yun Raymond Fu (Member of the Academy of Europe; Fellow of NAI, AAAS, AAAI, IEEE, AIMBE, OSA, SPIE, IAPR, AAIA). My research focuses on generative models, including Visual Language Models, Video Generation, and Trajectory & Motion Synthesis/Prediction. I am particularly interested in bridging these areas to advance efficient video understanding, AI-driven content creation, and autonomous systems.
Beyond academia, I have industry experience in applied AI research. Most recently, I was an Applied Scientist Intern at Amazon AWS AI Lab, where I worked on video understanding and large language models. Previously, I was a research intern at Tencent (腾讯), focusing on generative models for images and videos.
I am actively seeking a research internship for 2025 and open to collaborations. If you are interested in working together, feel free to reach out!
My research lies in Computer Vision and Artificial Intelligence, aiming to explore the potential of generative models for AIGC and trajectory prediction.
I have worked on Diffusion Models, AIGC, VLMs, Video Synthesis/Editing, Image Editing, Multimodal Learning, Trajectory Prediction, NeRF, and GANs.
Token Dynamics as Long Video Representation for Video Understanding
(ongoing with Amazon)
Several years ago, I delved into the fascinating world of sensor modalities and signal processing, which sparked a keen interest in embedded platforms. That experience led me to explore artificial intelligence and computer vision further.
Proposed a method to detect eye-blink EMG artifacts mixed into the EEG signal: intense, intentional eye blinks control the direction of the wheelchair, while EEG analysis estimates the user's tension and relaxation level to control its speed.
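The blink-control idea can be sketched as a simple amplitude test: eye blinks produce EMG spikes far larger than background EEG, so a peak threshold on a short window flags them. This is a minimal illustration with hypothetical threshold and sample values, not the original system's detector.

```python
def detect_blink(eeg_window, threshold=150.0):
    """Flag a window as containing an eye-blink EMG artifact.

    Eye blinks produce amplitude spikes much larger than background EEG,
    so a peak-amplitude threshold separates them. The threshold value
    (in microvolts) is illustrative, not from the original system.
    """
    peak = max(abs(sample) for sample in eeg_window)
    return peak > threshold

# Example: a quiet EEG window vs. one containing a blink spike
quiet = [10.0, -8.0, 12.0, -11.0]
blink = [10.0, -8.0, 320.0, -280.0]
print(detect_blink(quiet))  # False
print(detect_blink(blink))  # True
```

A real controller would also debounce consecutive detections and map blink patterns (single vs. double blink) to direction commands.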
An affordable solution for paralyzed patients to control their wheelchairs and move independently.
Responsible for developing host-side software that received signals from the MSP430 board and filtered them in the spectral domain, and for developing an algorithm to detect abnormal ECG.
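Spectral-domain filtering of this kind can be sketched as a band-stop filter: transform the signal, zero out the unwanted frequency bins, and transform back. The sketch below uses a naive O(n²) DFT and illustrative band edges (e.g. removing 50 Hz mains interference); it is not the original MSP430 pipeline, which would use an FFT.

```python
import cmath

def bandstop_spectral(signal, fs, stop_lo, stop_hi):
    """Remove frequencies in [stop_lo, stop_hi] Hz via the spectral domain.

    Computes a naive DFT, zeroes the bins (and their negative-frequency
    mirrors) inside the stop band, then inverts the transform.
    """
    n = len(signal)
    spectrum = [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    for k in range(n):
        freq = k * fs / n
        freq = min(freq, fs - freq)  # fold negative-frequency bins
        if stop_lo <= freq <= stop_hi:
            spectrum[k] = 0
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

For example, a 1 Hz waveform contaminated by a 50 Hz tone is recovered almost exactly after band-stopping 45-55 Hz.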
Sign language recognition system using wearable bend-sensor gloves
First Prize at the Mobile Application Innovation Contest of North China
Jul. 2016
Responsible for programming the embedded microcontroller to sample the analog signals from the bend sensors on the gloves, which were used to predict sign language gestures, and for displaying the prediction results in the app.
Vision-based paper money and coin sorting machine
Summer 2015
Responsible for programming the embedded microcontrollers to control the mechanical structure and for developing host software to detect the denomination of paper money using traditional image-processing methods, then sort it accordingly.