CV

Changwang ZHANG is currently a senior researcher and a Standing Committee Member of the CCF Theoretical Computer Science Technical Committee. He received his MRes and PhD degrees from University College London in 2011 and 2015, respectively. He worked at Alibaba on LBS Data Mining from 2016 to 2017, and at Tencent on Advertising Recommendation & User Profiling from 2018 to 2022.

Changwang ZHANG’s current interests lie in the research and application of Information Retrieval (Search, Recommendation, Advertising), Generative AI (LLM, Agent, RAG), and Big Data Mining. He has served as a Senior Program Committee (SPC) Member for the AAAI Conference on Artificial Intelligence.

Changwang Zhang’s research has attracted considerable media attention and has been reported by The Guardian and the Daily Mail. He received the Tencent Gold Award for Excellence in R&D, the Tencent Operation Excellence Award, the Tencent Open Source Collaboration Award, and the Tencent Micro Innovation Award. At Tencent and Alibaba, he interviewed more than 200 candidates, from fresh graduates to experienced hires, and served as an official technical interviewer and speaker for Tencent’s on-site campus recruitment.

We propose the EulerFormer model (SIGIR’24, pdf), which significantly improves the expressive power and robustness of the Transformer through a complex attention network and adaptive rotational position encoding. EulerFormer provides a unified theoretical framework that formulates both semantic and positional information, and thus possesses a stronger expressive capacity in sequential modeling. Specifically, in EulerFormer, both the semantic difference and the positional difference among tokens can be directly modeled in a unified rotation form of complex vectors. Compared with prior methods (e.g., RoPE), EulerFormer is more robust to semantic variations and has superior theoretical properties (e.g., long-term decay).
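To make the rotation-form idea concrete, here is a minimal NumPy sketch (not the released EulerFormer code) of an attention score in which each query/key is viewed as a complex vector and the position enters as a phase rotation, so the score depends on the relative position. The toy dimensions and the fixed RoPE-style frequency schedule `alpha` are illustrative assumptions.

```python
import numpy as np

def to_complex(x):
    # Pair adjacent dimensions into complex numbers: (d,) -> (d/2,)
    return x[0::2] + 1j * x[1::2]

def rotation_form_score(q, k, pos_q, pos_k, alpha):
    """Toy attention score in rotation form: the semantic angles of q and k
    and the positional phases pos * alpha combine as complex rotations,
    so only the relative position (pos_q - pos_k) affects the score."""
    qc = to_complex(q) * np.exp(1j * pos_q * alpha)   # rotate query by its position
    kc = to_complex(k) * np.exp(1j * pos_k * alpha)   # rotate key by its position
    # Real part of the Hermitian inner product gives a dot-product-style score
    return np.real(np.vdot(kc, qc))

d = 8
rng = np.random.default_rng(0)
q, k = rng.normal(size=d), rng.normal(size=d)
alpha = 10000.0 ** (-np.arange(d // 2) / (d // 2))    # fixed frequency schedule (illustrative)
print(rotation_form_score(q, k, pos_q=5, pos_k=2, alpha=alpha))
```

In EulerFormer itself the rotation angles are adapted rather than fixed; the fixed schedule above is only for illustration of the unified rotation form.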

We present PoseCrafter (ECCV’24, pdf), a novel approach for personalized video generation with precise control over flexible poses. Leveraging the Stable Diffusion and ControlNet frameworks, we carefully devise an inference process that yields high-quality videos without relying on corresponding ground-truth frames. We employ straightforward latent editing through an affine transformation matrix built from facial and hand landmarks. Comprehensive experiments across multiple datasets demonstrate that PoseCrafter outperforms baselines pre-trained on an extensive array of videos on 8 widely used metrics. Additionally, PoseCrafter can follow poses from different individuals or artificial edits while simultaneously preserving the human identity of the open-domain training video.
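As a rough illustration of the latent-editing step, the NumPy sketch below fits a 2x3 affine matrix to corresponding facial/hand landmarks and warps a latent feature map with it. The nearest-neighbour warp, the array shapes, and the function names are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine matrix mapping source landmarks to target landmarks."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])        # (n, 3) homogeneous points
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)  # (3, 2) solution
    return M.T                                       # (2, 3) affine matrix

def warp_latent(latent, M):
    """Nearest-neighbour warp of a (C, H, W) latent using the inverse-mapped affine."""
    C, H, W = latent.shape
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])   # (3, H*W)
    # Invert the affine to pull a source coordinate for every target pixel
    M_inv = np.linalg.inv(np.vstack([M, [0, 0, 1]]))[:2]
    src = M_inv @ coords
    sx = np.clip(np.round(src[0]).astype(int), 0, W - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, H - 1)
    return latent[:, sy, sx].reshape(C, H, W)

rng = np.random.default_rng(0)
src_lm, dst_lm = rng.uniform(0, 64, (10, 2)), rng.uniform(0, 64, (10, 2))
latent = rng.normal(size=(4, 64, 64))                # e.g. a 4-channel diffusion latent
edited = warp_latent(latent, estimate_affine(src_lm, dst_lm))
```

A real implementation would operate on diffusion latents with differentiable sampling; the point here is only that a single landmark-fitted affine matrix suffices to edit the latent toward the target pose.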

Changwang Zhang proposes PromptAppGPT, a low-code, prompt-based rapid app development framework that aims to enable natural-language app development based on GPT. PromptAppGPT significantly lowers the barrier to GPT application development, allowing anyone to develop AutoGPT-like applications with a few lines of low code (a generic sketch of this pattern follows the list below). PromptAppGPT has been featured in many high-impact media outlets, including the top Chinese AI media 新智元: 新智元1, 新智元2.

You are welcome to:

  1. Join us to collaboratively develop the framework: https://github.com/mleoking/PromptAppGPT/.
  2. Visit the website to try the framework: https://promptappgpt.wangzhishi.net/.
  3. See the example apps including the 70-line low-code implementation of the AutoGPT-like AI auto-assistant: https://github.com/mleoking/PromptAppGPT/blob/main/PagApps.md#my-autogpt.
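
For readers unfamiliar with the "AutoGPT-like" pattern mentioned above, here is a generic Python sketch of the underlying prompt → plan → execute loop. This is not PromptAppGPT's low-code configuration syntax; the model name, the single `search` tool, and the loop structure are illustrative assumptions, and the OpenAI client is used only as an example LLM backend.

```python
# Generic illustration of an AutoGPT-like loop (NOT PromptAppGPT's low-code syntax).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def search(query: str) -> str:
    # Hypothetical stand-in tool; a real app would call an actual search API.
    return f"(search results for: {query})"

def auto_assistant(goal: str, max_steps: int = 3) -> str:
    notes = ""
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Plan one step at a time. Answer with either "
                            "'SEARCH: <query>' or 'FINISH: <answer>'."},
                {"role": "user", "content": f"Goal: {goal}\nNotes so far:\n{notes}"},
            ],
        ).choices[0].message.content.strip()
        if reply.startswith("FINISH:"):
            return reply[len("FINISH:"):].strip()
        if reply.startswith("SEARCH:"):
            notes += search(reply[len("SEARCH:"):].strip()) + "\n"
    return notes  # fall back to the accumulated notes if no final answer is produced

print(auto_assistant("Find recent papers on low-code LLM app frameworks"))
```

PromptAppGPT packages this kind of prompt-to-action wiring as declarative low code; see the example apps linked above for how it is actually expressed in the framework.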

Contacts: mleoking {at to remove} qq.com

Now Hiring and Funding Researchers: We are looking for outstanding, highly motivated students and researchers to work with us on the research and application of Information Retrieval (Recommendation & Search), Natural Language Processing, and Big Data Mining. Please email me with your CV and projects.

WeChat Official Account: Please follow WangKnowledge for the latest AI lectures, interviews, and R&D experience sharing in Chinese.

Changwang ZHANG's WeChat Official Account

Education

Work Experience

Services

Selected Publications

Selected Media Coverage

Selected Talks