Disclosing Generative AI Use in Digital Humanities Research

Project Developers

  1. Site Under Development
  2. Abstract
  3. Background
  4. Research Questions
  5. Methodology
  6. Main Project Objective
  7. Results
  8. Conference Presentation
  9. Site Code

Site Under Development

This project is under active development; we anticipate completing it in 2026. Get notified about updates using this form.

Abstract

This project investigates how digital humanists perceive and approach generative AI (GenAI) disclosure in research. It is based on a survey that we conducted in 2024-2025. The results indicate that while digital humanities scholars acknowledge the importance of disclosing GenAI use, the actual rate of disclosure in research practice remains low. Respondents differ in their views on which activities most require disclosure and on the most appropriate methods for doing so. Most also believe that safeguards for AI disclosure should be established through institutional policies rather than left to individual decisions. The study’s findings will offer empirical guidance to scholars, institutional leaders, funders, and other stakeholders responsible for shaping effective disclosure policies.

Background

Generative AI (GenAI) now permeates almost every aspect of daily life and has become integral to academic work. Although its use offers clear benefits to researchers, it also raises challenging questions about academic integrity and research ethics (Van Noorden et al., 2023; Hosseini et al., 2023; Abdelhafiz et al., 2024; Dedema & Ma, 2024; Ng et al., 2025). Disclosure has therefore emerged as a potential policy solution.

Resnik et al. (2025) propose a three-tier framework distinguishing mandatory, optional, and unnecessary disclosure. This framework demonstrates the increasingly nuanced guidance now issued by publishers. Publishers agree that GenAI tools cannot be listed as authors, yet their disclosure policies vary:

  • Disclosure Distinctions: Most publishers distinguish between routine language polishing (grammar, spelling) and substantive content generation: the former typically escapes disclosure, whereas the latter demands it (STM, 2023)
  • Strict Implementation: Journals such as Science (2023) and Wiley (2023) titles require detailed statements describing the software, provider, and prompts used, with authors affirming full responsibility
  • Moderate Implementation: Springer Nature (2023) insists that generative use be “properly documented” while exempting minor editing
  • Lenient Implementation: IOP Publishing (n.d.) merely encourages transparency and reminds authors they remain accountable for all content

However, policies that mandate AI disclosure can produce unintended consequences. Empirical research has identified an “AI disclosure penalty” in both professional and creative domains. The magnitude varies by context: professional settings show moderate penalties (Reis et al., 2024; Proksch et al., 2024); creative contexts show smaller but still significant penalties (Horton Jr. et al., 2023). Studies consistently suggest AI disclosure reduces perceived credibility across multiple contexts (Longoni et al., 2021; Henestrosa and Kimmerle, 2024).

Given these context-dependent penalties, understanding how different audiences interpret AI disclosure is important. A recent Nature survey reveals deep divisions among scholars over whether GenAI use should be declared at all and, if so, how much detail such declarations should include (Kwon, 2025).

Research Questions

Summary

This survey study investigates how digital humanists perceive and approach generative AI (GenAI) disclosure in research. We contend that a community-centered lens is essential for navigating the complex terrain of AI disclosure, where formal policies intersect with the professional cultures that ultimately determine their uptake. We therefore adopt a community-based approach, exploring how researchers view GenAI disclosure and how their views might shape future disclosure policies. As a case study, we focus on the digital humanities community, an inherently interdisciplinary field that brings together scholars with diverse expertise (Luhmann & Burghardt, 2022), to gauge attitudes toward declaring GenAI use in research.

Questions

  1. How do digital humanists perceive and approach disclosure of generative AI (GenAI) use in research?
  2. In particular, to what extent do digital humanists regard disclosure of GenAI use as necessary?
  3. More particularly, in which research contexts should GenAI disclosure be practiced, for what specific activities, and in what forms?
  4. Finally, who should be responsible for developing and enforcing GenAI disclosure policies? Why?

Methodology

Qualtrics Survey

  • Distributed 20 February to 6 May 2025 via social media (Bluesky, Slack, LinkedIn) and digital humanities mailing lists
  • Respondents self-identified as digital humanists, given the lack of a clear definition of “digital humanist” (Ma, 2022; Ramsay, 2011)
  • 30 questions organized into 3 sections:
    • Demographics
    • GenAI Disclosure Perceptions and Practices
    • GenAI Personal Use and Literacy

Responses

  • 152 total responses; after data cleaning, 99 fully completed surveys remain
  • Respondents:
    • span a wide range of digital humanities subfields: e.g., literary and cultural studies, history, information science
    • represent a geographically diverse community across the United States, United Kingdom, European Union, China, and beyond
    • hold positions ranging from undergraduate to full professor; independent scholars also included

Main Project Objective

Our project seeks to offer empirical guidance to scholars, institutional leaders, funders, and other stakeholders who are responsible for shaping effective AI disclosure policies.

Results

The survey study reveals a shared recognition of the importance of disclosing GenAI use, but divergent views on how such disclosure should be integrated into research practices.

Respondents differ in their views on which activities most require disclosure and on the most appropriate methods for doing so.

Most also believe that safeguards for AI disclosure should be established through institutional policies rather than left to individual decisions.

Results: Overview

  1. There is a mismatch in GenAI disclosure perceptions among digital humanists:
    1. 72% of researchers who have never disclosed GenAI use consider such disclosure “very necessary.”
    2. Only 43% of researchers who have voluntarily disclosed GenAI use consider it “very necessary.”
  2. Researchers consider it more important to disclose GenAI use in the developmental and dissemination stages of research.

Conference Presentation

The project developers submitted a research abstract and poster (Figure 2) on "Disclosing Generative AI Use in Digital Humanities Research" to the Association for Information Science and Technology (ASIS&T) Conference, held in Washington, D.C., on November 14-18, 2025.

ASIS&T 2025 Conference Poster: Disclosing Generative AI Use in Digital Humanities Research
Figure 2. Poster presented at the Association for Information Science and Technology Conference (2025). Click image to view full size file.

At the conclusion of the ASIS&T Conference, the project developers learned that the conference had awarded their submission "Best Poster Award, 1st Place" (Figure 3).

Best Poster Award, 1st Place certificate from ASIS&T 2025
Figure 3. The project developers received the "Best Poster Award, 1st Place" from the Association for Information Science and Technology Conference (2025) for their submission. Click image to view full size file.

Site Code

We have aimed to make the site fully accessible, but if you have suggestions for improvement, please let us know.

The code for this site is adapted from the One More Voice project site. The code for that site is released under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license and may be reused for educational and other non-commercial purposes as long as proper attribution is given.

The code for the present site was further developed through a combination of human effort and generative AI, using Claude Sonnet 4.5 via the Claude website and Claude Code (v.2.0.1 and v.2.0.37).