Matthieu Meeus
I am a researcher with a broad interest in how frontier technological systems impact our society. Currently, I'm wrapping up my PhD in computer science at Imperial College London as part of the AI Security and Privacy Lab, under the supervision of Yves-Alexandre de Montjoye.
My research has primarily focused on the privacy and memorization of AI, ranging from traditional machine learning models to LLMs. The goal is to better understand what models memorize from their training data, with implications for privacy and confidentiality, copyright, model utility, and evaluation.
contact
I’m always happy to chat about new research directions or other opportunities. Right now, I’m looking for full-time opportunities in industry, preferably on the East Coast 🇺🇸! Please reach out if interested.
You can find more about my work through Google Scholar, and can stay updated by following me on Twitter (X) or LinkedIn.
a short bio
Originally from Belgium 🇧🇪, I obtained my BSc in mechanical engineering from KU Leuven. Passionate about energy, and with a strong appetite to explore California, I pursued an MEng in energy technology at UC Berkeley, working on topics from vertical-axis wind turbines to EV charging strategies. Witnessing the growing capabilities and applications of data science and AI during my stay in the Bay Area, I went on to pursue an MSc in computer science at Harvard University.
I extended my stay in the US by working as a data scientist. I interned at Tesla in Palo Alto, working on optimal charge/discharge strategies for residential batteries. Next, I joined McKinsey & Company in NYC as a senior data scientist, developing NLP solutions for their internal people analytics research team.
In October 2022, I started my PhD at Imperial College to deepen my understanding of AI systems and their implications for society. I’ve been loving research ever since, especially on the memorization and privacy of LLMs. During my PhD, I interned at Microsoft Research in Cambridge (2024), and I will be joining Meta in NYC from March 2026.
For more information, see my full resume.
news
| Date | Update |
|---|---|
| Mar 01, 2026 | Joining Meta as a Research Scientist Intern in NYC for 6 months! I will be working on memorization and privacy of AI with the Central Applied Intelligence team. |
| Jul 21, 2025 | Presented three papers at ICML ’25! One at the main conference on privacy auditing of synthetic text, and two papers at the memorization workshop, both (1, 2) touching upon how near-duplicates (and beyond) contribute to LLM memorization. |
| Apr 11, 2025 | We received the best paper award at SaTML 2025 in Copenhagen! It’s a great recognition for our learnings on MIAs against LLMs over the last two years. Check out the paper here. |
| Dec 13, 2024 | Very excited to release ChocoLlama, a family of 6 open-source (3 base, 3 pretrained) Dutch LLMs based on Llama-2/3. Check out our learnings in the paper! |
| Oct 18, 2024 | We’re organizing a meetup on privacy in machine learning at Imperial College on November 7. Sign up here! |
selected publications
- *The Mosaic Memory of Large Language Models*. Nature Communications, 2026.
  Mentioned in the Financial Times.
- *ChocoLlama: Lessons Learned From Teaching Llamas Dutch*. Preprint, 2024.
  All 6 models are available on HuggingFace. Press coverage in the Flemish newspaper De Tijd.
- *Copyright Traps for Large Language Models*. In Forty-first International Conference on Machine Learning (ICML), 2024.
  Press coverage in MIT Technology Review and Nature News.