We moderate and mitigate bias.
We work to minimize bias and promote fairness in the development and use of AI. We identify and address biases as they emerge, aiming to build AI tools that treat everyone fairly, regardless of race, gender, ethnicity, age, or any other protected characteristic. We also educate our Product and Engineering teams on fairness principles so that the training data used in our models is diverse, representative, and inclusive of different demographics, backgrounds, and perspectives.