Lucy Farnik is a PhD researcher at the University of Bristol, studying the inner workings of modern machine learning systems to improve their safety and inform effective policymaking. Supervised by Dr. Conor Houghton, Lucy works on mechanistic interpretability of large language models. As a research scholar with ML Alignment & Theory Scholars, Lucy is carrying out circuit-style analysis of SAE features under Neel Nanda. Lucy co-founded and co-leads the Bristol AI Safety Centre, which focuses on AI interpretability and regulation while supporting student development. Previous positions include research roles at Epoch and AI Safety Camp and a senior full-stack developer role at Longitude 103, where Lucy contributed to product development and software architecture. Lucy is pursuing a Doctor of Philosophy in Interactive Artificial Intelligence at the University of Bristol, having previously earned a Bachelor of Engineering in Computer Science with Innovation.