Abstract
Fairness is a foundational pillar in the development of ethical and responsible artificial intelligence. One of the most pressing issues in this context is discrimination, which occurs when an algorithm treats data from different groups unequally without an objective justification for the disparity. Artificial intelligence is increasingly taking on decision-making roles in society at large and in engineering fields in particular. In domains such as autonomous vehicle control systems, where unbiased decision-making can affect safety and trust, and in smart grid management, where equitable energy distribution is crucial, fairness must be a primary consideration. This study introduces a novel fairness metric defined as a two-dimensional vector: Equality and Equity. When applied to benchmark datasets, the metric proved more informative than traditional measures such as Disparate Impact, effectively distinguishing equality-related issues from equity-related ones. The contributions of this work are (1) a pioneering metric for measuring equity; (2) a pure measure of fairness that takes both equity and equality into account; (3) a vector that can guide mitigation algorithms; and (4) a Fairness curve on which disparities between groups can be interpreted and explained.