An explainable artificial intelligence – human collaborative model for investigating patent novelty

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

With the accumulation of technology-related big data, including patent databases, existing studies have proposed frameworks for patent analysis using natural language processing models. Beyond model predictive performance, artificial intelligence (AI) applications require human experience and insight grounded in an understanding of complex environments and uncertainties. However, existing research has focused on applying big data and developing automated processes; actual user understanding and model usability have received insufficient attention. AI model development must therefore consider a human–machine cooperation-based approach. This study proposes a collaborative approach in which an explainable AI (XAI) model, a self-explaining deep neural network for text classification, communicates with users. The proposed XAI model provides users with an explanation of each prediction along with the prediction results for patent evaluation. Users provide feedback based on the model's predictions and explanations, and the source XAI model is refined through relearning that reflects this feedback. This study experimentally assesses model improvement under the human-collaboration method, considering both human intervention independent of the XAI model's results and human participation guided by the explanations the XAI model presents. The experimental results verified the XAI model's performance, with the highest accuracy (0.890) and F1 score (0.916), indicating that the model can be applied efficiently to patent evaluation. The XAI–human collaboration model presented in this study can also be extended to other technology intelligence tasks.
However, the collaborative approach in this study places complete trust in the advice of technical experts; subsequent collaborative XAI models could therefore be improved by communicating bidirectionally with human experts in a complementary relationship.
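The predict → explain → expert-feedback → relearn loop described in the abstract can be sketched in miniature. The following is a conceptual illustration only, not the authors' implementation: the class, its weight-based "explanation", and the update rule are all hypothetical stand-ins for the self-explaining deep neural network and its refinement step.

```python
# Conceptual sketch (assumed, not from the paper) of the XAI-human
# collaboration loop: the model predicts, exposes per-feature
# contributions as its "explanation", and relearns from expert labels.
from dataclasses import dataclass, field


@dataclass
class ToyNoveltyModel:
    """Toy bag-of-words scorer standing in for the self-explaining model."""
    weights: dict = field(default_factory=dict)
    lr: float = 0.5  # step size for the relearning update

    def predict(self, tokens):
        # Score is the sum of learned per-token weights.
        score = sum(self.weights.get(t, 0.0) for t in tokens)
        label = 1 if score > 0 else 0  # 1 = novel, 0 = not novel
        # "Explanation": each token's contribution, analogous to what a
        # self-explaining network would surface to the expert.
        explanation = {t: self.weights.get(t, 0.0) for t in tokens}
        return label, explanation

    def relearn(self, tokens, expert_label):
        """Refine weights toward the expert's label (the feedback step)."""
        pred, _ = self.predict(tokens)
        error = expert_label - pred
        for t in tokens:
            self.weights[t] = self.weights.get(t, 0.0) + self.lr * error


# One round of the loop: predict with explanation, then incorporate
# the expert's disagreeing judgment via relearning.
model = ToyNoveltyModel(weights={"graphene": 1.0, "conventional": -1.0})
patent = ["novel", "graphene", "electrode"]
label, explanation = model.predict(patent)  # shown to the expert
model.relearn(patent, expert_label=0)       # expert disagrees; model updates
```

In the paper's framing, the explanation is what lets the expert ground their feedback in the model's reasoning rather than in the prediction alone; here that role is played by the per-token weight dictionary.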

Original language: English
Article number: 110984
Journal: Engineering Applications of Artificial Intelligence
Volume: 154
DOIs
State: Published - 15 Aug 2025

Keywords

  • Explainable artificial intelligence (XAI)
  • Human–machine collaboration
  • Natural language processing (NLP)
  • Patent mining
  • Patent novelty analysis
  • Technology intelligence
