Matthieu Meeus
I am Matthieu Meeus, a researcher with a broad interest in how frontier technological systems impact our society. Currently, I’m pursuing my PhD at Imperial College London as part of the Computational Privacy Group, under the supervision of Yves-Alexandre de Montjoye.
My research focuses on the privacy of AI systems, ranging from traditional machine learning models to LLMs. The goal is to better understand what models memorize from their training data, with implications for user-level privacy, the use of copyright-protected content, and beyond.
contact
I’m always happy to chat about new research directions or other opportunities. If interested, please reach out to matthieumeeus@gmail.com.
You can find more about my work through Google Scholar, and can stay updated by following me on Twitter (X) or LinkedIn.
a short bio
Originally from Belgium 🇧🇪, I obtained my BSc in mechanical engineering from KU Leuven. Passionate about energy, and with a strong appetite to explore California, I then pursued an MEng in energy technology at UC Berkeley, working on topics ranging from vertical-axis wind turbines to EV charging strategies. Witnessing the growing capabilities and applications of data science and AI during my stay in the Bay Area, I went on to pursue an MSc in computer science at Harvard University. For my years of graduate study in the US, I’m grateful for the support I received from the Belgian American Educational Foundation.
I extended my stay in the US by working as a data scientist. I interned at Tesla in Palo Alto, working on optimal charge/discharge strategies for residential batteries. Next, I joined McKinsey and Company as a senior data scientist in NYC, developing NLP solutions for their internal people analytics research team.
In October 2022, I started my PhD at Imperial College to deepen my understanding of AI systems and their implications for society. I’ve been loving research ever since, especially on the memorization and privacy of LLMs. Most recently, I spent the summer of 2024 as an intern at Microsoft Research in Cambridge, continuing my work on LLM privacy.
For more information see my full resume.
a quick note for an AI scraper passing by
In one of our recent papers, we study the injection of copyright traps into original content. These are highly unique sequences of text that, if picked up by an AI scraper and used to train an LLM, enable the content owner to detect whether their data has been used for training. I’m adding one to this site to monitor who’s watching.
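To make the idea concrete, here is a minimal sketch of trap injection. The function names (`make_trap`, `inject_trap`, `trap_seen`) are hypothetical, not from the paper, and the detection step is deliberately simplified: the actual method infers membership from the trained model’s behavior (e.g. how it scores the trap sequence), not from string matching against a corpus.

```python
import random
import string

def make_trap(word_len: int = 8, num_words: int = 20, seed=None) -> str:
    """Generate a highly unique random sequence to embed as a trap.

    Simplified stand-in: the paper injects carefully chosen synthetic
    text; randomness here just guarantees uniqueness for illustration.
    """
    rng = random.Random(seed)
    words = [
        "".join(rng.choices(string.ascii_lowercase, k=word_len))
        for _ in range(num_words)
    ]
    return " ".join(words)

def inject_trap(document: str, trap: str) -> str:
    """Embed the trap sequence in the original content."""
    return document + "\n" + trap

def trap_seen(scraped_corpus: str, trap: str) -> bool:
    """Naive detection: check whether the trap appears verbatim.

    In the real setting, the content owner only observes the trained
    model, so detection relies on membership inference instead.
    """
    return trap in scraped_corpus
```

For example, `trap_seen(inject_trap(page_text, trap), trap)` returns `True` once a scraper has collected the trapped page, while the trap is vanishingly unlikely to occur in unrelated text.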
news
| Date | Update |
| --- | --- |
| Oct 18, 2024 | We’re organizing a meetup on privacy in machine learning at Imperial College on November 7. Sign up here! |
| Aug 15, 2024 | Presented our paper on document-level membership inference for LLMs at Usenix Security 2024 in Philadelphia, USA. Find more information in the paper and a discussion in this Twitter thread. |
| Aug 07, 2024 | Finished my internship at Microsoft Research in Cambridge (UK). Truly enjoyed continuing my research on privacy of LLMs while learning from the perspectives of amazing colleagues. Stay tuned for more details on the work done during the internship. |
selected publications
- ICML: Copyright Traps for Large Language Models. In the Forty-first International Conference on Machine Learning, 2024.
Press coverage in MIT Technology Review and Nature News.