The application of algorithms to decision-making processes in public governance and administration, particularly in service of policy goals, is increasing. Computer algorithms and analytics play an increasingly influential role in government, business and society. They underpin popular information services and autonomous intelligent systems such as artificial intelligence applications and their subfields: machine learning, deep learning and reinforcement learning. Powered by algorithms, these smart technologies have a direct and significant impact on human lives across a wide socioeconomic and cultural spectrum. Algorithms allow the exploitation of rich and varied data sources from different spheres, including cultures, in order to support human decision-making and/or to take direct actions that serve the diverse interests of the societies in which they operate.
This has led to a standardized, speedy, efficient and comprehensive system of public decision-making and implementation. Nevertheless, there are growing concerns about the social, ethical, political and legal implications of these systems, and about whether harms such as bias and exclusion are being produced or reinforced by them.
As algorithms become gatekeepers of consequential delegated administrative decisions, including policing, concerns are rising over who is liable when decisions are made that violate cultural and societal norms in a given setting. Such cases are prompting the formation of multi-disciplinary teams, for example the IEEE P7003 Working Group on Algorithmic Bias, to rethink old ethical and moral questions, such as what constitutes cultural fairness and social equality and whether such definitions change when they intersect with algorithmic culture. Asking such questions is necessary because, depending on the context, varying degrees of bias and exclusion may emerge in the way algorithms determine knowledge and present it in forms that are digestible and consumable by users. For instance, algorithms used to categorize and present knowledge can, depending on the context, not only engender hegemonic assumptions in a given culture but also cause those assumptions to become coded as the default.
The identification of this phenomenon has led international multi-disciplinary teams to conduct research and translate their findings into evidence-based tutorials, guidelines and standards. The aim is to eliminate, or at least reduce, the probability that such algorithms and autonomous intelligent systems make decisions that are biased or insensitive to the diverse cultures they are meant to serve.
This paper proposes that achieving cultural parity in algorithm design and application could benefit from three major routes. First, through the technological route, designers can direct the outputs of algorithms and associated AI by hard-coding instructions and constraints for fairness and non-discrimination; this opens up opportunities for affirmative-action equivalents and draws attention to the 'translation' problems that arise when policy goals must be converted into computer code. Second, through the legal route, governments can institute legal frameworks, including data protection laws, capable of reining in AI's monopolistic aspects. Third, through the social route, designers can retrain systems on datasets that diversify the variables used to train algorithms or inform their decisions. This may include raising awareness, and pursuing data localization and internationalization through economic, social-policy and other stimuli.
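As a minimal illustration of the technological route, the following Python sketch (a hypothetical example, not drawn from any cited standard) hard-codes a non-discrimination constraint: automated decisions are released only if the ratio of group selection rates satisfies the widely cited four-fifths (80%) disparate-impact rule; otherwise they are flagged for human review. The function names and the review policy are illustrative assumptions.

```python
# Hypothetical sketch: gate automated decisions behind a hard-coded
# fairness constraint (the "four-fifths" disparate-impact rule).

def selection_rates(decisions):
    """decisions: mapping of group name -> list of 0/1 outcomes."""
    return {g: sum(v) / len(v) for g, v in decisions.items()}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

def release_decisions(decisions, threshold=0.8):
    """Release outcomes only if the fairness constraint holds;
    otherwise route them to a human reviewer."""
    if disparate_impact_ratio(decisions) < threshold:
        return "flagged for human review"
    return "released"

# Example: group B is selected far less often than group A,
# so the constraint blocks automatic release.
outcomes = {"group_a": [1, 1, 1, 0], "group_b": [1, 0, 0, 0]}
print(round(disparate_impact_ratio(outcomes), 3))  # 0.333
print(release_decisions(outcomes))                 # flagged for human review
```

A sketch like this also makes the 'translation' problem concrete: the legal notion of disparate impact must be reduced to a single numeric threshold, a choice the code cannot make on its own.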
Key Words: Algorithm Culture, Algorithmically Biased Decisions, Social Inclusion