Publisher's Synopsis
The U.S. Department of Defense (DoD) places a high priority on promoting diversity, equity, and inclusion at all levels of the organization. Simultaneously, it is actively supporting the development of machine learning (ML) technologies to assist in decisionmaking for personnel management. Concern about algorithmic bias has grown in many non-DoD settings, where ML-assisted decisions have been found to perpetuate or, in some cases, exacerbate inequities. This report aims to equip both policymakers and developers of ML algorithms for DoD with the tools and guidance necessary to avoid algorithmic bias when using ML to aid human decisions. The authors first provide an overview of DoD's equity priorities, which typically center on issues of representation and equal opportunity within its personnel. They then provide a framework to enable ML developers to build equitable tools. This framework emphasizes