The AI Governance Fellowship
ETA: January 2024
What is The AI Governance Fellowship?
The AI Governance Fellowship is a pioneering program dedicated to advancing the fairness and integrity of AI technologies in the Global South, with a focus on African nations. We are looking to collaborate with experts from organizations, government bodies, academia, and industry leaders to inform the responsible development and deployment of AI technologies (such as LLMs).
AI Fellowship
The AI Fellowship program is designed for individuals with relevant backgrounds, such as law, the social sciences, and economics, who have a strong interest in AI governance. It offers a comprehensive curriculum and hands-on experience to develop skills in AI model evaluation and ethical deployment. The program includes the following components:
- AI Governance: Developing frameworks and policies to ensure responsible and ethical use of AI technology.
- AI Alignment: Aligning AI systems with human values and goals to avoid unintended consequences.
Upon completion of the AI Fellowship, participants will gain a deep understanding of AI model evaluation methods and ethical considerations. They will be equipped with the skills to contribute to the responsible development and deployment of AI models in various contexts.
Speakers 📣
Esben Kran
Esben is the founder and co-director of Apart. His background is in brain-computer interfacing research and in startups as a game developer and data scientist. He recognizes the immense potential of artificial intelligence, as well as the significant international security risks posed by this technology.
Oreva (Orevaoghene) Ahia
PhD student in Computer Science and Engineering at the University of Washington. Previously a research engineer at Apple and InstaDeep, working on AI solutions. Her research interests include multilingual NLP, model efficiency, and model interpretability.
Dylan Hadfield-Menell
Schmidt Futures AI2050 Early Career Fellow. Dylan runs the Algorithmic Alignment Group at MIT CSAIL. His research focuses on the problem of agent alignment: the challenge of identifying behaviors that are consistent with the goals of another actor or group of actors.
Daniel Omeiza
Postdoctoral researcher at the Oxford Robotics Institute (ORI), where he researches explainability in robotic systems. He is also developing new quantitative metrics for assessing the overall health of autonomous driving agents.
Susan Otieno
Legal professional specialising in AI and data protection. Ms. Otieno has worked as a training lead at the Lawyers Hub, a think tank that works on digital policy and justice innovation.
Chinasa T. Okolo
Fellow in the Governance Studies program at the Brookings Institution. Her research focuses on AI governance in emerging markets, AI explainability, the future of data work, and leveraging AI to advance global health.
Leonard Vibbi
Leonard is an Innovation Officer at UNICEF. Previously, he was a Master's student at the MIT Media Lab and a Research Assistant with MIT GOV/LAB. His work also addresses the social and ethical responsibilities of computing.
Narmeen Oozeer
Trained in mathematics at the University of Ottawa, she researches mechanistic interpretability of neural networks using mathematical techniques from abstract algebra and representation theory.
Charlotte Siegmann
PhD student in Economics at MIT. She researches how to make the development and deployment of advanced AI systems safer and more beneficial.
Herbie Bradley
PhD student in machine learning at the University of Cambridge. He builds evaluations for frontier AI systems at the UK AI Safety Institute.
Brian Muhia
Brian Muhia is CTO of the Causality, Interpretability and Representation Learning group at the pan-African AI safety research lab Equiano Institute. He is co-founder and data maintainer of Sound of Nairobi, and co-founder of the AI startup Fahamu Inc.
Tai-Danae Bradley
Research mathematician at SandboxAQ and professor at The Master's University. Her research interests lie at the intersection of quantum physics, machine intelligence, and category theory.
Benjamin Sturgeon
Ben is a dedicated researcher and advocate in the field of AI safety. With a focus on technical alignment research, he strives to ensure that AI systems align with human values and avoid catastrophic consequences.
Organisers
Natalie Kiilu
Curriculum Developer, ILINA
Aishwarya Gurung
Fellowship Developer, FLI
Leo Hyams
Events & Operations
Jonas Kgomo
Founder, Equiano Institute
Imaan Khadir
Events Manager
Joel Christoph
Director of Effective Thesis
Co-Founder, Equiano Institute
Apply to The Fellowship
If you're interested in joining the program and contributing to the responsible development and deployment of AI models, we encourage you to apply.
@ Equiano Institute