The problem of explanation in Artificial Intelligence: considerations from semiotics

Authors

  • Joel Carbonera, Federal University of Rio Grande do Sul, Computer Science Institute, Post-Graduate Program in Computing, Porto Alegre, Rio Grande do Sul, Brazil.
  • Bernardo Gonçalves, University of São Paulo, Polytechnic School, Philosophy Department, São Paulo, São Paulo, Brazil.
  • Clarisse de Souza, Pontifical Catholic University of Rio de Janeiro, Computer Science Department, Rio de Janeiro, Rio de Janeiro, Brazil.

DOI:

https://doi.org/10.23925/1984-3585.2018i17p59-75

Keywords:

Artificial intelligence, Explainability, Semiotics and Pragmatism, Semiotic Engineering

Abstract

Since the expert systems of the 1980s and 1990s, Artificial Intelligence (AI) researchers have tried to solve the problem of explanation: given an inference made by the system, how to identify the steps or mechanisms that led to the conclusion. With the recent success of AI systems, especially those based on deep learning, this problem has returned to the fore more forcefully, since these systems are opaque with respect to their inferences, in contrast to expert systems, which were based on logical rules. In this text, we present the problem of explanation, including highlights from the most recent AI literature. Next, we point out gaps in past and recent approaches, and then offer considerations from Peirce's semiotics which, as we argue, could contribute to a balanced management of this technology in society.

Author Biographies

Joel Carbonera, Federal University of Rio Grande do Sul, Computer Science Institute, Post-Graduate Program in Computing, Porto Alegre, Rio Grande do Sul, Brazil.

PhD in Computer Science from the Federal University of Rio Grande do Sul, member of the BDI (Intelligent Databases) group at UFRGS and of the IEEE RAS-funded working group Standard for Ontologies for Robotics and Automation (IEEE RAS WG ORA), and coordinator of standardization for the Robotics and Automation field in the IEEE South Brazil Robotics & Automation Society chapter.

Bernardo Gonçalves, University of São Paulo, Polytechnic School, Philosophy Department, São Paulo, São Paulo, Brazil.

Postdoctoral fellow at the University of Michigan-Ann Arbor, PhD in Computational Modeling with a focus on Data Science from the National Laboratory for Scientific Computing (LNCC), doctoral candidate in Philosophy of Science at USP, and member of the Scientiae Studia Professional Association and the Association for the Philosophy and History of Science of the Southern Cone.

Clarisse de Souza, Pontifical Catholic University of Rio de Janeiro, Computer Science Department, Rio de Janeiro, Rio de Janeiro, Brazil.

Full Professor at the Department of Computer Science of PUC-Rio, PhD in Applied Linguistics (with a focus on human-computer interaction), and creator of Semiotic Engineering. In 2010 she was awarded the ACM SIGDOC Rigo Award, and in 2013 she became a member of the ACM SIGCHI CHI Academy. In 2014 she was awarded the title of HCI Pioneer by the IFIP Technical Committee on Human-Computer Interaction (TC13), and in the same year she was selected as one of 52 women researchers featured in the first edition of the CRA-W/Anita Borg Institute Notable Women in Computing card deck. In 2016 she received the SBC Scientific Merit Award, and in 2017 the Outstanding Career in HCI award, granted by the SBC's Special Committee on Human-Computer Interaction. She is currently on sabbatical leave from PUC-Rio, working as a Senior Researcher at IBM Research Brazil.

References

AIZENBERG, I.; AIZENBERG, N.; VANDEWALLE, J. Multi-valued and universal binary neurons: theory, learning and applications. Dordrecht: Springer, 2000.

BIRAN, O.; COTTON, C. Explanation and justification in machine learning: a survey. In: Proceedings of the IJCAI-17 Workshop on Explainable AI (XAI), 2017.

CADWALLADR, C. Google, democracy and the truth about internet search. The Guardian, 4 Dec. 2016. Available at: http://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook. Accessed: 26 May 2018.

CLANCEY, W. J.; SHORTLIFFE, E. (Eds.). Readings in medical artificial intelligence: the first decade. Reading, MA: Addison-Wesley, 1984.

DE SOUZA, C. S. The semiotic engineering of human-computer interaction. Cambridge, MA: MIT Press, 2005.

DECHTER, R. Learning while searching in constraint-satisfaction problems. Proceedings of the 5th National Conference on Artificial Intelligence. Philadelphia, PA, August 11-15. Vol. 1, p. 178-183, 1986.

DENG, Jia; DONG, Wei; SOCHER, Richard; LI, Li-Jia; LI, Kai; FEI-FEI, Li. ImageNet: a large-scale hierarchical image database. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), p. 248-255, 2009.

DHURANDHAR, Amit; CHEN, Pin-Yu; LUSS, Ronny; TU, Chun-Chen; TING, Paishun; SHANMUGAM, Karthikeyan; DAS, Payel. Explanations based on the missing: towards contrastive explanations with pertinent negatives, 2018. Available at: http://arxiv.org/abs/1802.07623. Accessed: 26 May 2018.

DORAN, Derek; SCHULZ, Sarah; BESOLD, Tarek. What does explainable AI really mean? A new conceptualization of perspectives. In: Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML, arXiv:1710.00794, 2017.

ECO, U. On truth: a fiction. In: ECO, U.; SANTAMBROGIO, M.; VIOLI, P. (Eds.). Meaning and mental representations. Bloomington, IN: Indiana University Press, p. 41-59, 1988.

GOODMAN, B.; FLAXMAN, S. European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine, vol. 38, no. 3, 2017. Available at: http://doi.org/10.1609/aimag.v38i3.2741. Accessed: 26 May 2018.

HE, Kaiming; ZHANG, Xiangyu; REN, Shaoqing; SUN, Jian. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), p. 1026-1034, 2015.

HERLOCKER, J.; KONSTAN, J.; RIEDL, J. Explaining collaborative filtering recommendations. In: Proceedings of the Third Conference on Computer Supported Cooperative Work (CSCW), p. 241-250, 2000.

HERN, A. 'Partnership on AI' formed by Google, Facebook, Amazon, IBM and Microsoft. The Guardian: International Edition, 28 Sep. 2016. Available at: http://www.theguardian.com/technology/2016/sep/28/google-facebook-amazon-ibm-microsoft-partnership-on-ai-tech-firms. Accessed: 4 May 2018. See also the consortium's official site: http://www.partnershiponai.org/.

LIPTON, Zachary. The mythos of model interpretability. In: ICML Workshop on Human Interpretability in Machine Learning, 2016. Available at: https://arxiv.org/pdf/1606.03490.pdf. Accessed: 17 June 2018.

NADIN, M. In folly ripe. In reason rotten: putting machine theology to rest, 2017. Available at: https://arxiv.org/abs/1712.04306v1. Accessed: 4 May 2018.

PEARL, J. Probabilistic reasoning in intelligent systems. San Francisco, CA: Morgan Kaufmann, 1988.

PEIRCE, C. S. Philosophical writings of Peirce. BUCHLER, J. (Ed.). New York, NY: Dover Publications, 1955.

SANTAELLA, L. O método anticartesiano de C. S. Peirce. São Paulo: Ed. Unesp, 2004.

SEBE, N. Human-centered computing. In: NAKASHIMA, Hideyuki; AGHAJAN, Hamid; AUGUSTO, Juan Carlos (Eds.). Handbook of ambient intelligence and smart environments. Dordrecht: Springer, p. 349-370, 2010.

YOSINSKI, Jason; CLUNE, Jeff; NGUYEN, Anh; FUCHS, Thomas; LIPSON, Hod. Understanding neural networks through deep visualization. In: Proceedings of the Deep Learning Workshop, 31st International Conference on Machine Learning, Lille, France. 2015.

Published

2018-05-29