A common cartography bringing together Artificial Intelligence, Philosophy and Psychology
DOI: https://doi.org/10.23925/1984-3585.2018i17p76-94

Keywords: Artificial Intelligence, Philosophy of Mind, Cognitive Psychology, Knowledge representations, Epistemology of AI

Abstract
This paper presents a discussion of four philosophical problems from perspectives in which Artificial Intelligence, Philosophy and Psychology have a common interest. The first two issues are the classical frame problem, which originated in AI research on the restriction of representations in first-order logic, and David Hume's view on representation and on reasoning about representations. Following a line of argumentation presented by William Frawley, the paper then focuses on two further philosophical problems: (1) Plato's question of how knowledge of the world can grow from the mere fragments of which our cognition of facts is made up, and (2) Wittgenstein's question concerning the compatibility between natural and computational languages. The main purpose of the paper is to show how certain fundamental philosophical premises may exclude the
License
Copyright (c) 2018 Luciano Frontino de Medeiros; Alvino Moser; Marilene S. S. Garcia
This work is licensed under a Creative Commons Attribution 4.0 International License.
This journal offers immediate open access to its content under the CC BY 4.0 license, in accordance with the Directory of Open Access Journals (DOAJ) definition of open access.
When submitting a text to TECCOGS, authors affirm that the material submitted for evaluation and eventual publication does not in any way infringe the proprietary rights or copyright of others. Upon submission, authors transfer the publication rights of the article to TECCOGS. This copyright transfer covers the exclusive rights to publish and distribute the article, including reprints or any other reproductions of a similar nature, as well as translations. Authors retain the right to use all or parts of the text in future works of their own, and to grant or refuse third parties permission to republish all or parts of the text or its translations. To republish complete issues of the journal, any interested party must obtain written permission from both the authors and the editors of TECCOGS. Only TECCOGS may grant rights concerning journal issues as a whole.
Images whose copyright belongs to third parties, and has not been granted to the author of the text, should be used only when essential to the analysis and argument of the research, always indicating their respective sources and authorship. TECCOGS forgoes the use of merely illustrative images. To illustrate a concept, the author should instead indicate, in the form of a URL or bibliographic reference, a source where the illustration is available.