cv

You can download a PDF version, although the CV on this website is more up to date. I intend to lay out my whole journey here, however winding and strange it may look to others. I want to build, one honest piece of work at a time, and then one item on this page. In the end, it is a life worth living.

Basics

Name Zhonghao He (何忠豪)
Label AI Alignment and Human-AI Interaction Researcher
Affiliation Leverhulme Centre for the Future of Intelligence, University of Cambridge
Email zh378@cam.ac.uk
Url https://www.linkedin.com/in/hezhonghao
Summary I build aligned and ethical LLMs for the benefit of many. I am also building my skills to work on HCI, AI for science, and interpretability.

Education

  • 2022.09 - PRESENT

    Cambridge, UK

    Master of Studies
    University of Cambridge
    AI Ethics and Society
    • Machine Learning Alignment Bootcamp
    • AI Ethics
    • AI Governance
    • History of AI
    • CS230 Deep Learning
    • Mathematics for Computer Science
    • ML Safety
    • Discrete Mathematics
    • CS234 Reinforcement Learning
    • CS109 Probability for Computer Scientists
    • CS106 Programming Methodology
    • Mechanistic Interpretability
    • Algorithms and Data Structures
  • 2019.06 - 2019.09

    Palo Alto, USA

    Summer Student
    Stanford University
    Cognitive Science & Philosophy
    • Mathematical Foundations of Computing
    • Minds and Machines
    • Introduction to Neuroscience
  • 2014.08 - 2019.06

    Shantou, China

    Bachelor of Arts
    Shantou University
    English & Global Studies
    • Machine Learning and relevant mathematics
    • Research Methodology
    • Linguistics

Projects

  • 2024.10 - Present
    LLM's influence on epistemic diversity and values
    We are concerned with LLM-incurred value lock-in, knowledge collapse, and value collapse (as probable as model collapse, since our discourse is increasingly mediated by AI systems and iterative training is becoming more prevalent), and with potentially more destructive consequences. We formed a team to build empirical demonstrations, human-subject experiments, simulations, and interventions to address this set of problems, which we call 'AI influence'.
    • Position: Project Co-founder
    • Collaborators: Max Kleiman-Weiner, Tianyi (Alex) Qiu, Tejasveer Chugh
  • 2023.12 - Present
    Multilevel analytical framework for interpretability
    Research drawing on cognitive science and neuroscience to address interpretability challenges in ML.
    • Position: Project Lead
    • Goal: Publication in Transactions on Machine Learning Research
    • Senior authors: Adrian Weller, Grace W. Lindsay
  • 2023.07 - 2023.10
    Comprehensive Survey on AI Alignment
    Survey paper on alignment research for newcomers.
    • Focus: Interpretability challenges in ML
    • Collaborators: Yaodong Yang, Jiaming Ji, Tianyi Qiu
  • 2022.12 - 2023.03
    Harms from agentic algorithmic systems
    Research on safety risks and harms from increasingly agentic algorithmic systems.
    • Highlight: Published paper cited in the GPT-4 system card and in high-profile AI safety reports
  • 2021.06 - 2022.02
    Stanford Existential Risks Initiative (SERI)
    Research on China's global approach to AI.
    • Position: Research Fellow

Skills

Mathematics: Calculus, Information Theory
ML/AI: Convolutional Neural Networks, Transformers, Autoencoders
Experimental: Data Visualization, Mechanistic Interpretability
Programming: Python (advanced), R (intermediate), web development (intermediate), MATLAB (basic), C/C++ (basic)

Languages

English: Fluent
Chinese: Native

Interests

Physical Activities: Rowing, Hiking
Other Interests: Debate