Authors: Ayat A. Najjar, Huthaifa I. Ashqar, Omar A. Darwish, Eman Hammad
Abstract: This study seeks to enhance academic integrity by providing tools to detect
AI-generated content in student work using advanced technologies. The findings
promote transparency and accountability, helping educators maintain ethical
standards and supporting the responsible integration of AI in education. A key
contribution of this work is the generation of the CyberHumanAI dataset, which
comprises 1,000 observations: 500 written by humans and 500 produced by
ChatGPT. We evaluate various machine learning (ML) and deep learning (DL)
algorithms on the CyberHumanAI dataset, comparing human-written content with
AI-generated content from Large Language Models (LLMs), i.e., ChatGPT.
Results demonstrate that traditional ML algorithms, specifically XGBoost and
Random Forest, achieve high performance (83% and 81% accuracy, respectively).
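A minimal sketch of the kind of pipeline the abstract describes: bag-of-words features feeding a Random Forest to separate human-written from AI-generated text. This is illustrative only; the toy sentences stand in for the CyberHumanAI dataset, whose actual preprocessing and features are not given here.

```python
# Illustrative sketch (not the authors' code): TF-IDF features + Random Forest
# for human-vs-AI text classification, assuming a labeled corpus.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Tiny hypothetical examples standing in for the CyberHumanAI dataset.
texts = [
    "We use a firewall to allow only trusted traffic.",
    "You can use this tool to block malicious packets.",
    "Practitioners employ defenses within the realm of network security.",
    "Organizations employ layered controls across the cyber realm.",
]
labels = [0, 0, 1, 1]  # 0 = human-written, 1 = AI-generated

# Vectorize and classify in one pipeline; fixed seed for reproducibility.
clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
clf.fit(texts, labels)

# Predict the class of an unseen sentence (returns a 0/1 label).
pred = int(clf.predict(["Analysts employ tools in the security realm."])[0])
```

In practice XGBoost would be swapped in via its scikit-learn-compatible `XGBClassifier`, with accuracy measured on a held-out split rather than the training texts.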
Results also show that classifying shorter content is more challenging than
classifying longer content. Further, using Explainable Artificial
Intelligence (XAI), we identify discriminative features influencing the ML
model’s predictions: human-written content tends to use practical language
(e.g., "use" and "allow"), whereas AI-generated text is characterized by more
abstract and formal terms (e.g., "realm" and "employ"). Finally, a comparative
analysis with GPTZero shows that our narrowly focused, simple, and fine-tuned
model can outperform generalized systems like GPTZero. The proposed model
achieved approximately 77.5% accuracy, compared to GPTZero’s 48.5%, when
tasked with classifying Pure AI, Pure Human, and Mixed classes. GPTZero tended
to classify challenging, short-content cases as either mixed or unrecognized,
while our proposed model showed more balanced performance across all three
classes.
Source: http://arxiv.org/abs/2501.03203v1