Artificial Intelligence Article and Policy
Artificial Intelligence (AI) Policy and Its Considerations for Case Writing
Connie Allsopp, The World’s Registrar, Canada
The Society for Case Research (SCR) has a general AI statement that serves as guardrails, posted on its website and outlined in Appendix A – Generative AI Statement, which applies to all cases submitted for publication. These guidelines ensure that disclosure of AI use is documented in the teaching note and that authors take full responsibility for what is written. However, authors are not to list AI as a co-author in the references, nor watermark any sections written by AI. In essence, these guardrails serve an ethical purpose, and this author believes there remains a need for a formal AI policy. Several other organizations have already posted effective policy tools, such as the AI Policy used by Taylor & Francis (2025). This author proposes an AI policy and a panel discussion on conference talks and meeting notetaking, as well as the overall use of AI in case writing in the three SCR journals – Journal of Critical Incidents, Business Case Journal, and Journal of Case Studies.
While the discussion questions listed below are not exhaustive, they serve to stimulate a panel dialogue around a proposed AI policy; see Appendix B – SCR AI Policy Proposal. The purpose of this article is to invite colleagues to dialogue on the best path forward for the organization and to build professional capacity within the general membership. Other members are welcome to suggest additional questions based on their own experiences with AI platforms, the Sage Breakout Learning discussion, or their experiences after attending the AI workshop hosted at the 2024 SCR Summer Workshop.
Since November 2022, when Altman posted on X, “today we launched ChatGPT. try talking with it here” (Altman, 2022), this technological advancement has expanded exponentially. Several articles have been written to build awareness, and many have suggested that “disciplines across the globe have grappled with questions about how emerging artificial intelligence will impact their fields” (Mason, 2023). In general, all educators are required to have a greater understanding of AI usage given its expansion within every field; a lack of understanding is no longer acceptable. One example of naivete: a scribe recently felt it would be helpful to speakers for AI to transcribe their case presentations at the conference, but this meant the authors’ original works were inadvertently uploaded to the large language model (LLM), and thus these cases were now published. So, how would an organization handle this type of situation when the authors subsequently submitted their cases for publication in a journal? How informed are all members on the impact of AI in the field over the last three years?
While education has continued to emphasize the use of technology in general for research purposes and case writing, the future of AI usage remains mostly uncharted and up for debate. At the 2025 MBAA-SCR meeting in Chicago and in subsequent discussions, several unanswered questions surfaced, such as the following:
1. Does SCR encourage the use of AI?
2. What specific AI platforms are acceptable for use in case writing?
3. What would an AI policy entail?
4. When does SCR plan to expand the current statement on AI into a policy?
5. How are authors responsible for their use of AI in case writing?
6. Is there interest in publishing a special journal issue concerning AI and case writing examples?
An initial response to these key questions might be as follows; however, the author realizes these are only initial insights and remains open to a full discussion at the 2025 SCR Summer Workshop:
1. This author proposes that SCR support the use of generative artificial intelligence (GenAI) tools, such as using AI to generate ideas and brainstorm potential discussion questions, to search for relevant data on a topic related to a real issue, and so forth. As a 2024 Harvard Business Review (HBR) article put it, “AI won’t give you a new sustainable advantage: But using it may amplify the ones you already have” (Barney & Reeves, 2024). However, authors must refrain from uploading a company’s real data, as well as any confidential or proprietary knowledge, to AI so that such information remains unpublished.
2. While specific AI platforms are often used for different tasks, and the AI field has rapidly expanded, no one platform is preferred over another. At the time of writing, AI platforms such as ChatGPT (Hu, 2023), Perplexity, Copilot, Claude, DALL-E, Runway, etc. are widely available. Authors’ and reviewers’ preferences would remain open and largely depend upon their purpose for using the AI tools.
3. The proposed AI policy in Appendix B serves to encourage discussion on the various aspects of a policy. Readers are invited to consider what they would support, what they would use, and which policy statements would inhibit usage of AI. The proposed policy attempts to overcome key issues that Freitas surfaced in a recent HBR article as myths and misconceptions about AI: that it is “too opaque, emotionless, too flexible, too autonomous, and people would rather have human interactions” (Freitas, 2025). Other issues remain to be discussed and are welcome at a panel discussion.
4. In April, at the 2025 SCR board meeting and journal editors’ meeting, this author proposed that a policy be written to clarify the parameters for the use of AI in case writing and reviewing. Please see Appendix B for such a proposed AI policy for discussion purposes. This author believes a follow-up conversation and board agenda item at the July 2025 Summer Workshop, along with a motion to vote on an AI policy, would be prudent.
5. To date, the AI guidelines require transparency for ethical case writing, thus authors are required to add an AI disclaimer to their teaching notes. Two recent examples of AI Statements are listed below; however, these are not shared as exemplars and further discussion is necessary.
a. Artificial Intelligence was used as a research assistant to suggest potential discussion questions for the teaching note. The actual discussion questions are the sole responsibility of the authors.
b. AI assistance (OpenAI’s ChatGPT, GPT-4) was used in drafting and revising both the case and this teaching note. AI was used for revision of drafts, refinement of discussion questions, and alignment with Bloom’s Taxonomy. All AI-generated content was reviewed and edited by the author. Use of AI is cited per APA 7 in the References section, as: OpenAI. (2023). ChatGPT (May 24 version) [Large language model]. https://chat.openai.com
Wilson & Daugherty (2024) suggested interrogating AI intelligently, which means asking “let’s think step by step” about how AI was used to improve discussion questions (DQs), how AI responded to the DQs, and/or what part was different from what an A-level student might respond.
6. It is the belief of this author that a special journal edition focused on the use of AI for case writing and case reviewing would serve to expand knowledge, thus building capability for the membership. Examples of cases might be:
a. What AI platforms or tools were the best for research, editing, writing text, or even finding new titles for case writing?
b. Does the use of AI tools with students provoke higher class engagement, and why or why not?
While these two case themes are very limited, the author understands there is a wide range of additional intriguing case proposals.
In conclusion, this article outlines issues, questions for a panel discussion, and an AI policy for review by the membership. I have proposed this dialogue to build capacity on the topic of AI for case writing and notetaking practices at the 2025 SCR Summer Workshop. AI has created a significant shift in case writing, teaching, and conference presentations. I believe that the sooner we all develop a keen grasp of our role within the “human in the loop” framework (Michigan Online, 2024), the better we will serve each other and improve the state of education.
References
Altman, S. [@sama]. (2022, November 30). today we launched ChatGPT. try talking with it here: http://chat.openai.com [Tweet]. Twitter. https://twitter.com/sama/status/1598038815599661056
Barney, J., & Reeves, M. (2024, September–October). AI won’t give you a new sustainable advantage: But it may amplify the ones you already have. Harvard Business Review, 72–79.
Freitas, J. (2025, January–February). Why people resist embracing AI. Harvard Business Review, 53–56.
Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base—Analyst note. Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
Mason, S. (2023, November 3). Evaluation and artificial intelligence. New Directions for Evaluation. https://doi.org/10.1002/ev.20554
Michigan Online. (2024, June 4). “Human in the loop” framework: Leveraging generative AI for social impact organizations. https://youtu.be/98RRUXfDIX8?si=a5ZASp-1kwpemy2u
Taylor & Francis. (2025). AI policy. https://taylorandfrancis.com/our-policies/ai-policy/
Wilson, H. J., & Daugherty, P. (2024, September–October). Embracing gen AI at work: The skills you need to succeed in the era of large language models. Harvard Business Review, 151–155.
Wren, H. (2024, January 18). What is AI transparency? A comprehensive guide. Zendesk. https://www.zendesk.com/blog/ai-transparency/#AI%20transparency%20requirements
Appendix A – Generative AI Statement
[SCR Generative AI Statement – currently in place, but no date is listed; found 2025/04/22]
https://ignited.s3.amazonaws.com/SCR%20Generative%20AI%20Statement.pdf
This statement is designed to provide guardrails to the use of generative AI in the research process for case studies, teaching notes, or articles.
a. The term generative AI applies to large language models (LLM) like ChatGPT, Llama, Claude, or any current or future LLM. These tools should not be used to write content.
b. These tools are trained on large data sets and work only to guess the next word in sequence. They do not think; they can make errors, incorporate bias, and hallucinate. Another issue is that all the information you share with the LLM becomes part of its data and is no longer private.
c. Authors will need to continue to write their own work and use research methods like internet searches, fieldwork, or library visits. Generative AI is not and should never be used as a source of facts.
Do not list AI as an author or co-author. The appropriate use of generative AI is as an assistive technology. Authors are still responsible for all aspects of their submission. AI professionals and users understand that there are questions and problems that are inappropriate for use with LLMs. Their purpose is to help humans think and be creative. They are not human.
The appropriate use of AI involves using appropriate prompts to brainstorm and organize your thoughts and ideas. Authors are encouraged to understand the appropriate prompts that can help the brainstorming process. AI can also engage in data analytics, visualize data, and create and explain statistical results. Authors are responsible for discerning errors and using the tool for creative interaction and not content creation.
Disclosure
Creative use. This work includes output from brainstorming sessions with [name the LLM]. The prompts the authors used and the output should be provided in a separate file. Please label each file with the prompt you used and then provide the Word document where you copied and pasted the output.
Data analysis. This work includes output, the creation of visualizations, statistical output, and interpretation based on the use of [LLM Data Tool]. The authors understand that they are responsible for any errors in the output, interpretation, or visualizations. Authors must label any graph, table, or output they use in their submission with appropriate APA citation.
Grammar checks. This statement does not apply to grammar and spell-checking software or AI. Authors are encouraged to use these as a final step before submission to resolve these kinds of issues and take the opportunity to consider suggestions the software makes for improvements. The software does not replace proofreading.
Appendix B – SCR AI Policy Proposal
Why: AI transparency has ethical, legal, and societal implications for everyone, and the disclosure of how, where, and when it has been used builds trust.
Purpose of Policy: Artificial Intelligence (AI) Transparency and Ethical Considerations
Audience: Society for Case Research - Case Writers, Editors, and Reviewers.
Policy Approval Authority: 2025 SCR Board of Directors
Responsible Division: Journal Editors and SCR Board Members
Responsible Officers: Journal Editors and Reviewers
Contact Person: Executive Director of SCR or Connie Allsopp, Coeditor of Journal of Critical Incidents
Submission Date: July 1, 2025, for annual review at the MBAA – SCR Conference
Approval Date: TBD
Review Date: TBD
Publications: As SCR has three journal publications, this policy applies to all journals – Business Case Journal, Journal of Case Studies, and the Journal of Critical Incidents.
Wise Practices include, but are not limited to:
1. Explain the use of AI and be transparent about what is included in the case and how you collected, stored, and used this data; specifically, how primary interviews, scribed notes, and research evidence were collected and documented.
2. Outline details on how you prevented biases in the case writing.
3. Compare and contrast what data was included by different AI platforms for your case/teaching note, and how you verified this data.
4. Evaluate and disclose what types of prompts were used to obtain what data, and how you reviewed it for any discrepancies or hallucinations.
5. Understand the implications for AI usage as a method for note-taking at conferences and at confidential meetings.
Mandate: Effective (date), all cases, articles, critical incidents, teaching notes, and notetaking submitted to SCR will need a disclosure statement on the use of AI, as outlined above in this policy.
Disclosure: Whether or not the authors used AI, a statement is required declaring that choice. (See the Disclosures in Appendix A.)
Implications: If the Editors and Reviewers discover that AI was used and the author(s) did not disclose it, then the case or critical incident, along with the teaching note, will be rejected. It will be returned to the author(s) with a letter stating their work will not be published given its lack of transparency. All future cases, critical incidents, and teaching notes by this author or authors will be double-checked before acceptance for a period of two years.
Definition: Artificial Intelligence (AI) transparency means understanding how artificial intelligence systems make decisions, why they produce specific results, and what data they are using. Simply put, AI transparency is like providing a window into the inner workings of AI, helping people understand and trust how these systems work (Wren, 2024).