We believe that knowledge representation and symbolic approaches are essential for making machine learning interpretable. We will pursue the following principles in our work:

Product design

We will design our products to be as interpretable as possible, both to humans and to other machines.

Experimentation

We will conduct rigorous experiments to evaluate the interpretability of our methods.

Theory

We will develop theoretical foundations for interpretable machine learning.

Representation

We will develop new knowledge representation and symbolic approaches for machine learning.

We believe that this work is essential for building trustworthy and responsible machine learning systems. We are committed to sharing our work openly and to collaborating with others in the field.

We invite you to join us in this important endeavour.