Getting AI ethics policy right is a high-stakes affair for an organization. Well-publicized instances of gender bias in hiring algorithms or job-search results can damage a company’s reputation, put it at odds with regulators, and even attract hefty government fines. Sensing these threats, organizations are increasingly creating dedicated structures and processes to embed AI ethics proactively. Some companies have moved further along this road, creating institutional frameworks for AI ethics.
How Companies Can Take a Global Approach to AI Ethics
Many efforts to build an AI ethics program miss an important fact: ethics differ from one cultural context to the next. Ideas about right and wrong in one culture may not translate to a fundamentally different context, and even where there is alignment, the underlying ethical reasoning (cultural norms, religious traditions, and so on) may differ in ways that need to be taken into account. Because AI and related data regulations are rarely uniform across geographies, compliance is also harder for companies whose operations span multiple jurisdictions. To address these problems, companies, especially those operating across several geographies, need to develop a contextual global AI ethics model: one that prioritizes collaboration with local teams and stakeholders and devolves decision-making authority to those local teams.