Joe Collman has diverse work experience in the fields of existential risk and AI safety. They worked as a Technical Lead at the Stanford Existential Risks Initiative, providing guidance and feedback on scholar research. They have also worked as an independent AI Safety Researcher, focusing on long-term AI safety, and as a Technical Generalist at the Berkeley Existential Risk Initiative, where their responsibilities included providing research support and designing program strategies.
Joe has also served as the Technical AI Safety cause area manager at the Stanford Existential Risks Initiative, facilitating the AI safety side of the SERI summer research fellowship. They participated in the Toronto AI Safety Camp, collaborating on clarifying and formalizing goal-directedness, and volunteered at OpenAI as a Collaborating Researcher, working specifically on AI Safety via Debate.
Earlier in their career, Joe was a self-employed Designer/Programmer focusing on game design, AI, and programming, and a Developer at Dynamo Analytics, where they worked on software development. They also worked as a self-employed Mathematics Tutor.
Overall, Joe Collman has a strong background in existential risk, AI safety, and technical research, with experience in both leadership and individual contributor roles.
Joe Collman earned a Bachelor's Degree in Mathematics from the University of Warwick, which they attended from 2000 to 2004.