REFERENCES

[1] ISO/IEC TR 24028:2020, ``Information technology -- Artificial intelligence -- Overview of trustworthiness in artificial intelligence,'' 2020.
[2] ISO/IEC TR 24027:2021, ``Information technology -- Artificial intelligence (AI) -- Bias in AI systems and AI aided decision making,'' 2021.
[3] ISO/IEC 23894:2023, ``Information technology -- Artificial intelligence -- Guidance on risk management,'' 2023.
[4] OECD, ``Artificial Intelligence in Society,'' June 2019.
[5] UNESCO, ``Recommendation on the Ethics of Artificial Intelligence,'' November 2021.
[6] National Institute of Standards and Technology, ``Artificial Intelligence Risk Management Framework (AI RMF 1.0),'' January 2023.
[7] D. Leslie, ``Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector,'' Zenodo, June 2019.
[8] European Commission, Directorate-General for Communications Networks, Content and Technology, ``Ethics guidelines for trustworthy AI,'' Publications Office, November 2019.
[9] Australian Government, ``Australia's AI Ethics Principles,'' https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles
[10] Ministry of Science and ICT, ``Strategy to realize trustworthy artificial intelligence,'' May 2021, https://www.msit.go.kr/eng/bbs/view.do?sCode=eng&mId=4&mPid=2&pageIndex=&bbsSeqNo=42&nttSeqNo=509&searchOpt=ALL&searchTxt
[11] Google, ``Objectives for AI applications,'' https://ai.google/responsibility/principles
[12] IBM, ``Our fundamental properties for trustworthy AI,'' https://www.ibm.com/artificial-intelligence/ai-ethics-focus-areas
[13] Microsoft, ``Microsoft responsible AI principles,'' https://www.microsoft.com/en-us/ai/our-approach?activetab=pivot1%3aprimaryr5
[14] J. Pesenti, ``Facebook's five pillars of Responsible AI,'' June 2021, https://ai.facebook.com/blog/facebooks-five-pillars-of-responsible-ai/
[15] Naver, ``AI Ethics Principles,'' https://www.navercorp.com/value/aiCodeEthics
[16] Kakao, ``AI Ethics,'' https://www.kakaocorp.com/page/responsible/detail/algorithm
[17] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, ``A survey on bias and fairness in machine learning,'' ACM Computing Surveys (CSUR), vol. 54, no. 6, pp. 1-35, 2021.
[18] M. Benjamin, P. Gagnon, N. Rostamzadeh, C. Pan, Y. Bengio, and A. Shee, ``Towards standardization of data licenses: The Montreal data license,'' arXiv preprint arXiv:1903.12262, 2019.
[19] T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. Daumé III, and K. Crawford, ``Datasheets for datasets,'' Communications of the ACM, vol. 64, no. 12, pp. 86-92, 2021.
[20] S. Holland, A. Hosny, S. Newman, J. Joseph, and K. Chmielinski, ``The dataset nutrition label,'' Data Protection and Privacy, vol. 12, 2020.
[21] R. A. Kievit, W. E. Frankenhuis, L. J. Waldorp, and D. Borsboom, ``Simpson's paradox in psychological science: A practical guide,'' Frontiers in Psychology, vol. 4, 513, 2013.
[22] N. Alipourfard, P. G. Fennell, and K. Lerman, ``Can you trust the trend?'' Proc. of the 11th ACM International Conference on Web Search and Data Mining, pp. 19-27, February 2018.
[23] N. Alipourfard, P. G. Fennell, and K. Lerman, ``Using Simpson's paradox to discover interesting patterns in behavioral data,'' Proc. of the 12th International AAAI Conference on Web and Social Media, June 2018.
[24] L. Zhang, Y. Wu, and X. Wu, ``Achieving non-discrimination in data release,'' Proc. of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1335-1344, August 2017.
[25] L. Zhang, Y. Wu, and X. Wu, ``Causal modeling-based discrimination discovery and removal: Criteria, bounds, and algorithms,'' IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 11, pp. 2035-2050, 2018.
[26] S. Hajian and J. Domingo-Ferrer, ``A methodology for direct and indirect discrimination prevention in data mining,'' IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 7, pp. 1445-1459, July 2013.
[27] F. Kamiran and T. Calders, ``Classifying without discriminating,'' Proc. of the 2009 2nd International Conference on Computer, Control and Communication, pp. 1-6, February 2009.
[28] F. Kamiran and T. Calders, ``Classification with no discrimination by preferential sampling,'' Proc. of the 19th Machine Learning Conference in Belgium and The Netherlands, vol. 1, no. 6, May 2010.
[29] F. Kamiran and T. Calders, ``Data preprocessing techniques for classification without discrimination,'' Knowledge and Information Systems, vol. 33, no. 1, pp. 1-33, 2012.
[30] M. Feldman, S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian, ``Certifying and removing disparate impact,'' Proc. of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 259-268, August 2015.
[31] S. Radovanović, G. Savić, B. Delibašić, and M. Suknović, ``FairDEA -- Removing disparate impact from efficiency scores,'' European Journal of Operational Research, vol. 301, no. 3, pp. 1088-1098, 2022.
[32] R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork, ``Learning fair representations,'' Proc. of the International Conference on Machine Learning, PMLR, pp. 325-333, 2013.
[33] F. P. Calmon, D. Wei, B. Vinzamuri, K. N. Ramamurthy, and K. R. Varshney, ``Optimized pre-processing for discrimination prevention,'' Proc. of the Conference on Neural Information Processing Systems, 2017.
[34] F. Kamiran and T. Calders, ``Data preprocessing techniques for classification without discrimination,'' Knowledge and Information Systems, 2012.
[35] https://github.com/fairlearn/fairlearn/blob/main/fairlearn/preprocessing/_correlation_remover.py
[36] A. K. Menon and R. C. Williamson, ``The cost of fairness in binary classification,'' Proc. of the Conference on Fairness, Accountability and Transparency, PMLR, pp. 107-118, January 2018.
[37] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma, ``Fairness-aware classifier with prejudice remover regularizer,'' Machine Learning and Knowledge Discovery in Databases, Springer, Berlin, Heidelberg, pp. 35-50, 2012.
[38] L. Oneto, M. Donini, A. Elders, and M. Pontil, ``Taking advantage of multitask learning for fair classification,'' Proc. of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Springer, Berlin, Heidelberg, pp. 35-50, January 2019.
[39] S. Jung, T. Park, S. Chun, and T. Moon, ``Re-weighting based group fairness regularization via classwise robust optimization,'' arXiv preprint arXiv:2303.00442, 2023.
[40] E. Krasanakis, E. Spyromitros-Xioufis, S. Papadopoulos, and Y. Kompatsiaris, ``Adaptive sensitive reweighting to mitigate bias in fairness-aware classification,'' Proc. of the 2018 World Wide Web Conference, pp. 853-862, April 2018.
[41] T. Calders and S. Verwer, ``Three naive Bayes approaches for discrimination-free classification,'' Data Mining and Knowledge Discovery, vol. 21, no. 2, pp. 277-292, July 2010.
[42] C. Dwork, N. Immorlica, A. T. Kalai, and M. Leiserson, ``Decoupled classifiers for group-fair and efficient machine learning,'' Proc. of the Conference on Fairness, Accountability and Transparency, PMLR, pp. 119-133, January 2018.
[43] R. Jiang, A. Pacchiano, T. Stepleton, H. Jiang, and S. Chiappa, ``Wasserstein fair classification,'' Uncertainty in Artificial Intelligence, PMLR, pp. 862-872, August 2019.
[44] F. Kamiran and T. Calders, ``Classification with no discrimination by preferential sampling,'' Proc. of the 19th Machine Learning Conference in Belgium and The Netherlands, vol. 1, no. 6, May 2010.
[45] N. Mehrabi, U. Gupta, F. Morstatter, G. V. Steeg, and A. Galstyan, ``Attributing fair decisions with attention interventions,'' arXiv preprint arXiv:2109.03952, 2021.
[46] https://github.com/Trusted-AI/AIF360/blob/master/aif360/algorithms/inprocessing/art_classifier.py
[47] L. E. Celis, L. Huang, V. Keswani, and N. K. Vishnoi, ``Classification with fairness constraints: A meta-algorithm with provable guarantees,'' Proc. of the Conference on Fairness, Accountability, and Transparency, pp. 319-328, January 2019.
[48] B. H. Zhang, B. Lemoine, and M. Mitchell, ``Mitigating unwanted biases with adversarial learning,'' Proc. of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, USA, February 2018.
[49] M. Kearns, S. Neel, A. Roth, and Z. S. Wu, ``Preventing fairness gerrymandering: Auditing and learning for subgroup fairness,'' Proc. of the International Conference on Machine Learning, PMLR, pp. 2564-2572, July 2018.
[50] A. Agarwal, A. Beygelzimer, M. Dudík, J. Langford, and H. Wallach, ``A reductions approach to fair classification,'' Proc. of the International Conference on Machine Learning, 2018.
[51] N. Mehrabi, F. Morstatter, N. Peng, and A. Galstyan, ``Debiasing community detection,'' Proc. of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ACM, August 2019.
[52] A. J. Bose and W. L. Hamilton, ``Compositional fairness constraints for graph embeddings,'' Proc. of the International Conference on Machine Learning, PMLR, pp. 715-724, May 2019.
[53] A. Backurs, P. Indyk, K. Onak, B. Schieber, A. Vakilian, and T. Wagner, ``Scalable fair clustering,'' Proc. of the International Conference on Machine Learning, PMLR, pp. 405-413, May 2019.
[54] X. Chen, B. Fain, L. Lyu, and K. Munagala, ``Proportionally fair clustering,'' Proc. of the International Conference on Machine Learning, PMLR, pp. 1032-1041, May 2019.
[55] R. Berk, H. Heidari, S. Jabbari, M. Joseph, M. Kearns, J. Morgenstern, S. Neel, and A. Roth, ``A convex framework for fair regression,'' arXiv preprint arXiv:1706.02409, 2017.
[56] A. Agarwal, M. Dudík, and Z. S. Wu, ``Fair regression: Quantitative definitions and reduction-based algorithms,'' Proc. of the International Conference on Machine Learning, PMLR, pp. 120-129, May 2019.
[57] S. Aghaei, M. J. Azizi, and P. Vayanos, ``Learning optimal and fair decision trees for non-discriminative decision-making,'' Proc. of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 1418-1426, July 2019.
[58] H. Zhao, ``Costs and benefits of fair regression,'' arXiv preprint arXiv:2106.08812, 2021.
[59] E. Chzhen, C. Denis, M. Hebiri, L. Oneto, and M. Pontil, ``Fair regression via plug-in estimator and recalibration with statistical guarantees,'' Advances in Neural Information Processing Systems, vol. 33, pp. 19137-19148, 2020.
[60] E. Chzhen, C. Denis, M. Hebiri, L. Oneto, and M. Pontil, ``Fair regression with Wasserstein barycenters,'' Advances in Neural Information Processing Systems, vol. 33, pp. 7321-7331, 2020.
[61] S. Samadi, U. Tantipongpipat, J. Morgenstern, M. Singh, and S. Vempala, ``The price of fair PCA: One extra dimension,'' Advances in Neural Information Processing Systems, vol. 31, 2018.
[62] J. Lee, G. Kim, M. Olfat, M. Hasegawa-Johnson, and C. D. Yoo, ``Fast and efficient MMD-based fair PCA via optimization over Stiefel manifold,'' Proc. of the AAAI Conference on Artificial Intelligence, vol. 36, no. 7, pp. 7363-7371, June 2022.
[63] T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma, ``Fairness-aware classifier with prejudice remover regularizer,'' Machine Learning and Knowledge Discovery in Databases, pp. 35-50, 2012.
[64] D. Kenna, ``Using adversarial debiasing to remove bias from word embeddings,'' arXiv preprint arXiv:2107.10251, 2021.
[65] S. Chiappa and W. S. Isaac, ``A causal Bayesian networks viewpoint on fairness,'' Privacy and Identity Management: Fairness, Accountability, and Transparency in the Age of Big Data, Springer International Publishing, pp. 3-20, 2019.
[66] J. R. Loftus, C. Russell, M. J. Kusner, and R. Silva, ``Causal reasoning for algorithmic fairness,'' arXiv preprint arXiv:1805.05859, 2018.
[67] L. Zhang, Y. Wu, and X. Wu, ``A causal framework for discovering and removing direct and indirect discrimination,'' arXiv preprint arXiv:1611.07509, 2016.
[68] C. Louizos, K. Swersky, Y. Li, M. Welling, and R. Zemel, ``The variational fair autoencoder,'' arXiv preprint arXiv:1511.00830, 2015.
[69] D. Moyer, S. Gao, R. Brekelmans, G. V. Steeg, and A. Galstyan, ``Invariant representations without adversarial training,'' Advances in Neural Information Processing Systems, vol. 31, 2018.
[70] A. Amini, A. P. Soleimany, W. Schwarting, S. N. Bhatia, and D. Rus, ``Uncovering and mitigating algorithmic bias through learned latent structure,'' Proc. of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, ACM, January 2019.
[71] B. H. Zhang, B. Lemoine, and M. Mitchell, ``Mitigating unwanted biases with adversarial learning,'' Proc. of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, ACM, December 2018.
[72] D. Xu, S. Yuan, L. Zhang, and X. Wu, ``FairGAN: Fairness-aware generative adversarial networks,'' Proc. of the 2018 IEEE International Conference on Big Data (Big Data), IEEE, December 2018.
[73] T. Bolukbasi, K.-W. Chang, J. Zou, V. Saligrama, and A. Kalai, ``Man is to computer programmer as woman is to homemaker? Debiasing word embeddings,'' Advances in Neural Information Processing Systems, vol. 29, 2016.
[74] J. Zhao, T. Wang, M. Yatskar, R. Cotterell, V. Ordonez, and K.-W. Chang, ``Gender bias in contextualized word embeddings,'' arXiv preprint arXiv:1904.03310, 2019.
[75] J. Chen, Y. Wang, and T. Lan, ``Bringing fairness to actor-critic reinforcement learning for network utility optimization,'' Proc. of IEEE INFOCOM 2021 - IEEE Conference on Computer Communications, IEEE, May 2021.
[76] M. Hardt, E. Price, and N. Srebro, ``Equality of opportunity in supervised learning,'' Proc. of the Conference on Neural Information Processing Systems, 2016.
[77] G. Pleiss, M. Raghavan, F. Wu, J. Kleinberg, and K. Q. Weinberger, ``On fairness and calibration,'' Proc. of the Conference on Neural Information Processing Systems, 2017.
[78] F. Kamiran, A. Karim, and X. Zhang, ``Decision theory for discrimination-aware classification,'' Proc. of the IEEE International Conference on Data Mining, 2012.
[79] A. Agarwal, A. Beygelzimer, M. Dudík, J. Langford, and H. M. Wallach, ``A reductions approach to fair classification,'' Proc. of Machine Learning Research, PMLR, vol. 80, pp. 60-69, 2018.
[80] A. Agarwal, M. Dudík, and Z. S. Wu, ``Fair regression: Quantitative definitions and reduction-based algorithms,'' Proc. of Machine Learning Research, PMLR, vol. 97, pp. 120-129, 2019.
[81] IBM Trusted-AI, ``AI Fairness 360 (AIF360),'' https://github.com/Trusted-AI/AIF360
[82] R. K. E. Bellamy, K. Dey, M. Hind, et al., ``AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias,'' arXiv preprint arXiv:1810.01943, 2018.
[83] S. Bird, M. Dudík, R. Edgar, B. Horn, R. Lutz, V. Milan, M. Sameki, H. Wallach, and K. Walker, ``Fairlearn: A toolkit for assessing and improving fairness in AI,'' Microsoft, Tech. Rep. MSR-TR-2020-32, May 2020.
[84] Microsoft, ``Fairlearn,'' https://github.com/fairlearn/fairlearn
[85] https://fairlearn.org/
[86] Google TensorFlow, ``Fairness Indicators,'' https://github.com/tensorflow/fairness-indicators
[87] http://www.aitimes.com/news/articleView.html?idxno=143283
[88] P. Saleiro, B. Kuester, A. Stevens, A. Anisfeld, L. Hinkson, J. London, and R. Ghani, ``Aequitas: A bias and fairness audit toolkit,'' arXiv preprint arXiv:1811.05577, 2018.
[89] Data Science for Social Good, ``Aequitas,'' https://github.com/dssg/aequitas
[90] S. Verma and J. Rubin, ``Fairness definitions explained,'' Proc. of the International Workshop on Software Fairness, 2018.
[91] S. Barocas, M. Hardt, and A. Narayanan, Fairness and Machine Learning: Limitations and Opportunities, MIT Press, 2023.