Guidelines for the Use of Artificial Intelligence (AI) for the MD Program

A task force was assembled to develop guidelines for the use of AI in the MD program at GW SMHS. The purpose of these guidelines is to establish clear expectations for the responsible, ethical, and professional use of Artificial Intelligence (AI), including Generative Artificial Intelligence (GenAI) tools, within the MD Program. The guidelines are intended to safeguard academic integrity, uphold professionalism, protect learners, and ensure that AI use enhances, rather than undermines, undergraduate medical education.

To address the concern that disclosure may feel dangerous or stigmatizing, the task force encourages using standardized, neutral disclosure language so that students, faculty, and staff are not asked to “confess” AI use, but simply to document it in the same way they might document other learning supports or productivity tools. We also aim to draw very clear distinctions between permitted, limited, and prohibited uses of AI in each course or clerkship. When expectations are explicit, disclosure becomes safer and more routine.

Equally important, we recognize that faculty interpretation plays a critical role. Faculty development will emphasize that appropriate, disclosed AI use, when allowed, should not be interpreted as laziness or lack of preparedness. Instead, transparency should be understood as a marker of professionalism. The guidelines will also be explicit that disclosure of permitted AI use, by itself, should not result in disciplinary or professionalism actions. Consequences apply to inappropriate or undisclosed misuse, particularly where patient safety, confidentiality, or academic integrity is put at risk by noncompliance with other existing policies or course rules.

II. Scope

These guidelines apply exclusively to Undergraduate Medical Education (UME), including all pre-clerkship, clerkship, and advanced clinical phases of the MD Program. They govern the use of AI by medical students, faculty, course directors, and instructional staff in educational, assessment, and academic contexts.

These guidelines do not apply to Graduate Medical Education (GME), residency or fellowship training, admissions or selection processes, faculty employment decisions, or clinical operations, all of which are governed by separate institutional, hospital, or accrediting-body policies.

III. Definitions

Artificial Intelligence (AI): Computational systems capable of performing tasks that typically require human intelligence, including reasoning, pattern recognition, language generation, and evaluation. 

Generative Artificial Intelligence (GenAI): A subset of AI tools capable of generating text, images, code, questions, feedback, or other content based on user prompts (e.g., large language models).

IV. Guiding Principles

  1. Professional Accountability: AI tools may support educational activities, but accountability for accuracy, judgment, and ethical conduct always rests with the human user. 
  2. Academic Integrity and Intellectual Ownership: All work submitted for evaluation or delivered instructionally must reflect the user’s own understanding, reasoning, and scholarly contribution.
  3. Transparency and Disclosure: Use of AI must be disclosed clearly and appropriately.
  4. Bias Awareness and Learner Safety: AI outputs must be critically evaluated for bias, inequity, inaccuracy, or potential educational harm. 
  5. Educational Primacy: AI should augment, not replace, the development of foundational knowledge and skills, clinical reasoning, professionalism, and ethical judgment.
  6. Security: All AI use must comply with GW data classification, privacy, FERPA, HIPAA, and information security requirements.
  7. Accessibility: Faculty and instructional staff must ensure that any AI tools used for teaching, learning, assessment, or educational support are accessible and inclusive, in accordance with applicable university accessibility policies and federal requirements. AI-enabled educational materials should not create barriers for learners with disabilities, and reasonable accommodations must be maintained regardless of the use of AI technologies. Faculty should also consider recommending GWU Approved AI Tools to minimize financial burdens associated with students using AI tools in courses where possible. Responsibility for accessibility remains with the faculty member or course director utilizing the AI tool.

V. Relationship to Academic Integrity and Professionalism Policies

Use of AI within the MD Program is governed by, and explicitly linked to, the MD Program Code of Conduct in the Learning Environment policy, medical student professionalism standards, and applicable university-wide (e.g., FERPA) or healthcare organization-wide (e.g., HIPAA) policies on ethical conduct, responsible computing, and data stewardship. Violations of these policies may constitute academic dishonesty, failure to attribute sources, misrepresentation of intellectual contribution, or unprofessional conduct, and may result in remediation, grading consequences, course failure, or referral to appropriate accountability offices. HIPAA violations are serious offenses, and patient data must be protected in all cases.

VI. Appropriate Uses of AI

When consistent with course or program expectations and authorized by the faculty or course director, appropriate uses of AI may include: 

For Students:

  • Brainstorming ideas or outlining approaches 
  • Clarifying complex concepts for personal learning 
  • Editing for grammar, clarity, or organization
  • Literature search and summarization of articles/research
  • Generating practice questions or explanations for self-study (not submission) 
     

For Faculty and Staff:

  • Drafting lecture outlines or educational materials, with independent verification 
  • Supporting curriculum design, instructional planning, or course administration
  • Generating exam questions  
  • Assisting with evaluative educational workflows


AI-generated content can be inaccurate, misleading, or entirely fabricated and may contain copyrighted material. In all cases, users are responsible for verifying accuracy, relevance, and alignment with learning objectives. The use of AI tools should occur at an appropriate point in the learning process and should not prematurely replace critical cognitive tasks such as independent analysis, synthesis, or clinical reasoning. Faculty are responsible for clearly communicating when AI use is permitted or restricted within specific phases of instruction, assessment preparation, or clinical education.

Clinical training sites may maintain additional policies, procedures, or restrictions governing the use of AI tools. Medical students, faculty, and staff are required to comply with all site-specific guidelines or policies when participating in educational activities at affiliated hospitals, clinics, or other clinical learning environments.  

VII. Examples of Inappropriate Uses of AI by Students, Faculty, and Staff:

Unless explicitly authorized in writing, the following uses are inappropriate:

  • Submitting AI-generated content as one’s own intellectual work or without disclosure (considered plagiarism); see the MD Program Code of Conduct in the Learning Environment policy. 
  • Using AI tools during examinations, quizzes, or graded assessments unless otherwise specified
  • Using fabricated citations, references, data, or sources
  • Using AI to mask lack of understanding or substitute for required learning activities or goals
  • Entering protected health information, student records, or confidential institutional data into AI platforms without prior approval. Information provided in prompts to AI tools is often stored by the software vendor to be used to train AI models.
  • Uploading educational materials to public or non-approved AI tools

Specific faculty and staff guidelines:

  • Faculty and staff retain responsibility for curriculum, assessment, grading, and feedback.
  • AI may be used to enhance efficiency and educational quality but not to replace human judgment.
  • Faculty and staff AI use must comply with FERPA, HIPAA, and GW IT security requirements.
  • Courses and clerkships should publish clear parameters and expectations for students regarding AI use or refer back to these guidelines as needed.
  • Public or non-approved AI tools may not be used to upload student work or generate grades.
  • AI may assist with drafting formative feedback or learning resources only within GW-approved tools and with human review.
  • Fully automated grading or summative assessment by AI is not appropriate.
  • Faculty and staff should be transparent with students when AI is used in teaching, assessment, or feedback.

VIII. Disclosure Requirements

When AI meaningfully contributes to work submitted for evaluation, instruction, or dissemination, disclosure must include:

  1. The name and version of the AI tool used 
  2. The purpose and nature of use 
  3. The user’s intellectual role in reviewing, revising, and finalizing the material

For formal citations, follow the applicable guidelines for citing AI (e.g., APA, AMA). 

Disclosure alone does not make AI use acceptable; the work must still demonstrate intellectual ownership and professional judgment. Disclosure alone also should not subject individuals to disciplinary action. Course directors and faculty may request in advance that AI users save a log of relevant prompts and outputs so that they can be provided in the event of further review of AI use.

IX. Faculty and Staff-Specific Disclosure and Accountability

Course directors are encouraged to link to these guidelines in their course syllabi. Course directors and instructors should determine, and inform students, in which parts of the course AI use is and is not allowed. Depending on the needs, goals, and objectives of each course, the instructional team may be more restrictive on AI use but should not issue blanket bans on AI use without permission from the Office of Medical Education.

Faculty should include in their course syllabus a reference to the MD Program AI guidelines, with specific instructions available in the SMHS MD Program Syllabus Appendix (Appendix 2).

Faculty are expected to disclose meaningful AI use when it contributes to: 

  • Lecture slides, presentations, or recorded instructional materials 
  • Educational content delivered to students 
  • Assessment item development (e.g., exam questions, cases, rubrics) 
  • Automated or semi-automated scoring, feedback, compilation of narrative comments or learning analytics 

Disclosure should be proportionate and transparent (e.g., syllabus statements, slide footnotes, or assessment notes). Faculty remain fully responsible for the accuracy of content, mitigation of bias, fairness of assessments, and integrity of grading decisions, regardless of AI assistance.  

X. Data Privacy and Security

Only institutionally approved AI tools may be used with sensitive or regulated educational data. Public AI platforms must not be used for student records, assessment materials prior to administration, or confidential institutional information. All users must comply with university IT, cybersecurity, and data governance policies. Activity conducted through institutionally provided AI tools may be logged, retained, and reviewed by the university and its vendors to ensure compliance with institutional policies. Users should not assume confidentiality.

XI. Oversight and Review

These MD Program-specific guidelines will be reviewed as needed and as determined by the Office of Medical Education and relevant educational governance or accrediting bodies. Questions, exceptions, or ambiguities regarding AI use should be referred to the Office of Medical Education to ensure consistent, principled decision-making.

XII. Effective Date

These guidelines take effect upon approval by the GW SMHS Committee on Undergraduate Medical Education (CUMEC). For any questions regarding the guidelines, email Ioannis Koutroulis (ikoutroulis[at]gwu[dot]edu) or the Office of Medical Education (ome[at]gwu[dot]edu).

These guidelines will be reinforced through educational and administrative mechanisms, including but not limited to course syllabi, clerkship orientation materials, faculty development sessions, and learner professionalism communications. The Office of Medical Education, in collaboration with course directors and clinical leadership, will support consistent dissemination, interpretation, and reinforcement of expectations related to AI use across all phases of the MD Program.

XIII. Relevant policies/guidelines

Established in April 2026, these AI use guidelines were created by a multidisciplinary task force of MD program faculty, staff, and students, chaired by Ioannis Koutroulis, MD, PhD, MBA.


Appendices

Appendix 1. Framework of good practice for writing AI disclosures

(Adapted from: Cleland J, Driessen E, Masters K, Lingard L, Maggio LA. When and how to disclose AI use in academic publishing: AMEE Guide No. 192. Medical Teacher. 2025 Dec 29:1-2)

  1. Specify the AI model used: Clearly name the AI system, including version numbers or custom/local model information.
  2. Describe the specific activities for which AI was used: Provide precise details of how the AI contributed and avoid vague descriptions.
  3. Declare consent if participant data were uploaded: Confirm that appropriate consent was obtained for uploading any learner, patient, or participant data.
  4. Describe data‑protection measures taken: Explain safeguards such as disabling model‑training settings or using privacy-preserving mechanisms.
  5. State when AI was NOT used (when relevant): Clarify omissions where a reader might reasonably assume AI involvement.
  6. Reference relevant journal, publisher, or institutional AI policies: Affirm that AI use aligns with applicable guidelines.
  7. Include a final statement of responsibility: Assert full responsibility for the accuracy, integrity, and validity of the content.
  8. Maintain a record of prompts and interactions: Keep documentation of AI interactions in case editors request verification.

 

AI Use Disclosure Template (Fill‑in‑the‑Blank)

  1. AI Tool Used: ..............................................................
  2. Specific Activities for which AI was used: .............................................
  3. Consent obtained for uploaded data (if applicable): .....................................
  4. Data protection measures taken (if applicable): .........................................
  5. Statement on what AI was NOT used for (optional): ......................................
  6. Statement of compliance with journal/institution AI policies: ............................
  7. Final responsibility statement: ..........................................................
  8. Location of prompts/records (if retained): ............................................... 

 

Appendix 2. Model language to include in course syllabi regarding AI use

The SMHS encourages instructors to state explicitly and affirmatively their expectations regarding student use of GAI tools. If an instructor wishes to permit certain uses of GAI tools, such uses must be set forth explicitly in the course syllabus and/or assignment instructions.

Below is some model language to include in course syllabi:

Generative Artificial Intelligence (GAI) tools are becoming important resources in many fields and industries. Accordingly, you are permitted to use such tools to generate content submitted for evaluation in this course, including [papers; take-home examinations; specified other assignments]. Your instructor will explain to you the uses of GAI tools that are permitted or prohibited in this course, including on what specific assignments use of GAI tools is permitted. You remain responsible for all content you submit for evaluation.  

[Instructors might also wish to include language regarding pitfalls, such as the following:] You may use GAI tools to help generate ideas and brainstorm. However, you should note that the material generated by these tools may be inaccurate, incomplete, or otherwise problematic. Beware that use may also stifle your own independent thinking and creativity.

[Instructors might also wish to include language regarding citation, such as the following:] If you include content (e.g., ideas, text, code, images) that was generated, in whole or in part, by Generative Artificial Intelligence tools (including, but not limited to, ChatGPT and other large language models) in work submitted for evaluation in this course, you must document and credit your source. For example, text generated using ChatGPT-4 should include a citation such as: “ChatGPT-4. (YYYY, Month DD of query). ‘Text of your query.’ Generated using OpenAI. https://chat.openai.com/.” Material generated using other tools should be cited accordingly. Failure to do so in this course constitutes failure to attribute under the GW MD Program Code of Conduct in the Learning Environment policy.

 

Appendix 3. Examples of AI Use by Medical Students and Professionalism Concerns

This table provides illustrative examples of how the use of artificial intelligence (AI) by medical students may intersect with professionalism expectations.

AI Use Scenario: Undisclosed AI assistance on written work
  • Typical Policy Language / Expectation: Permitted AI use must be disclosed when allowed; failure to disclose constitutes a professionalism lapse.
  • Example Student Behavior: A student uses ChatGPT to rewrite parts of a reflective essay but submits it as original work.
  • Why This Is a Professionalism Concern: Lack of transparency and misrepresentation of authorship undermine trust and integrity.
  • Common Institutional Response: Professionalism review; documented warning if repeated.
  • Educational Emphasis: Honesty, transparency, professional identity formation.

AI Use Scenario: AI-generated responses for graded assessments
  • Typical Policy Language / Expectation: AI may not be used for graded work unless explicitly authorized.
  • Example Student Behavior: A student uses AI to draft short-answer exam or clinical reasoning responses.
  • Why This Is a Professionalism Concern: Misrepresents competence and interferes with valid assessment of readiness for patient care.
  • Common Institutional Response: Academic integrity process and professionalism remediation.
  • Educational Emphasis: Accountability and accurate self-representation of competence.

AI Use Scenario: Entering patient information into public AI tools
  • Typical Policy Language / Expectation: PHI or identifiable patient data may not be entered into public or non-approved AI tools.
  • Example Student Behavior: A student pastes real patient details into a chatbot to obtain diagnostic suggestions.
  • Why This Is a Professionalism Concern: Breach of confidentiality and poor judgment regarding patient privacy.
  • Common Institutional Response: Professionalism and privacy review; required HIPAA remediation.
  • Educational Emphasis: Confidentiality, patient trust, ethical judgment.

AI Use Scenario: Unapproved AI use during clinical rotations
  • Typical Policy Language / Expectation: AI use in clinical settings must be explicitly approved and not replace supervision.
  • Example Student Behavior: A student uses an unauthorized AI app during rounds to suggest management plans.
  • Why This Is a Professionalism Concern: Blurs role boundaries and may compromise patient safety.
  • Common Institutional Response: Clerkship counseling and professionalism evaluation impact.
  • Educational Emphasis: Role clarity, supervision, patient safety.

AI Use Scenario: Fabricated or AI-hallucinated information and citations
  • Typical Policy Language / Expectation: Students are responsible for verifying the accuracy of all cited information and providing accurate citations.
  • Example Student Behavior: AI generates information and/or nonexistent references that the student submits without verification.
  • Why This Is a Professionalism Concern: Undermines scholarly rigor and evidence-based medicine.
  • Common Institutional Response: Educational remediation and warning for first offense.
  • Educational Emphasis: Scholarly integrity, evidence appraisal.

AI Use Scenario: Repeated or escalating AI misuse
  • Typical Policy Language / Expectation: Patterns of misuse may indicate professionalism deficiencies.
  • Example Student Behavior: A student repeatedly violates AI rules across multiple courses.
  • Why This Is a Professionalism Concern: Raises concerns about readiness for professional responsibilities.
  • Common Institutional Response: Referral to professionalism or promotions committee.
  • Educational Emphasis: Professional identity formation and accountability. [1]

 

Appendix 4. Examples of Poor AI Use by Faculty and Staff and Modeling Concerns

This table provides illustrative examples of how the use of artificial intelligence (AI) by faculty and staff may constitute poor professional modeling for students:

AI Use Scenario: Undisclosed AI assistance in creating teaching materials
  • Typical Policy Language / Expectation: Permitted AI use must be disclosed.
  • Example Faculty or Staff Behavior: A faculty member uses ChatGPT to develop teaching slides but fails to disclose.
  • Why This Is Poor Professional Modeling: Lack of transparency.

AI Use Scenario: AI-generated assessments
  • Typical Policy Language / Expectation: Assessments likely fall under restricted data, as they are non-public and confidential.
  • Example Faculty or Staff Behavior: A faculty member uploads the prior year’s exam and answer key to create a new written assessment.
  • Why This Is Poor Professional Modeling: Compromises exam security and the fairness of the assessment.

AI Use Scenario: Entering student grade information into public AI tools
  • Typical Policy Language / Expectation: FERPA-protected student data may not be entered into public or non-approved AI tools.
  • Example Faculty or Staff Behavior: A staff member pastes real student grade details into a chatbot to draft a course report.
  • Why This Is Poor Professional Modeling: Breach of confidentiality and poor judgment regarding student privacy.

AI Use Scenario: Unapproved AI use during patient care
  • Typical Policy Language / Expectation: This is unauthorized access, use, or disclosure of Protected Health Information (PHI).
  • Example Faculty or Staff Behavior: Use of an unapproved AI scribe or ambient dictation tool to assist in documentation.
  • Why This Is Poor Professional Modeling: HIPAA violation.

AI Use Scenario: Fabricated or AI-hallucinated information or citations
  • Typical Policy Language / Expectation: Faculty are responsible for verifying the accuracy of all AI-generated content.
  • Example Faculty or Staff Behavior: A faculty member uses an AI tool to generate images for a lecture that include multiple inaccuracies.
  • Why This Is Poor Professional Modeling: Undermines faculty-learner trust and demonstrates lack of scholarly rigor in teaching.

Appendix 5. Data Classification

Regulated Data

Data protected by law or regulation; unauthorized disclosure can cause serious harm and legal penalties (e.g., PHI/HIPAA data, FERPA‑protected student records, Social Security numbers, full medical records).

Restricted (Confidential) Data

Sensitive institutional or personal information that must be protected due to privacy, contractual, or ethical obligations, though not always legally regulated (e.g., personnel files, internal assessments, non‑public research or financial data).

Public Data

Information approved for unrestricted public release; disclosure poses little or no risk (e.g., public websites, published curricula, directory information, publicly released reports).

 

 

Examples of Data Classifications and AI Use:

Scenario 1

A third‑year medical student is preparing for rounds and wants help refining a differential diagnosis. They paste details from a real patient’s chart (age, symptoms, lab values, and imaging findings) into a public AI chatbot to “double‑check” their thinking.

Why this is regulated data:

Even if the student omits the patient’s name, the information still constitutes protected health information (PHI) under HIPAA. Entering this data into an AI tool without approval exposes it to outside non‑authorized users.

How guidelines typically address this:

  • This use is never permitted unless there is approval by institutional officials or regulatory bodies, even if the AI tool developers claim that the tool is HIPAA compliant.
  • The issue is treated as both a privacy violation and a professionalism concern.
  • Acceptable alternatives include using:
    • De‑identified fictional cases, or
    • Institutionally approved, EHR‑integrated AI tools.
Scenario 2

A faculty member wants to summarize evaluation data from multiple faculty instructors to create a comprehensive overview of a student’s performance while on a rotation. They upload the student’s grades and evaluations into an AI tool to summarize the information.

Why this is regulated data:

Student academic and financial records are legally protected by the Family Educational Rights and Privacy Act (FERPA). Entering this data into a public AI tool exposes it outside GW‑approved systems and poses a severe risk to the university.

How guidelines typically address this:

  • This use is never permitted in public or non‑approved AI tools.
  • The issue is treated as a privacy violation.
  • Acceptable alternatives include using:
    • Fully de‑identified student data before use
    • Institutionally approved AI tools (e.g., Box)
Scenario 3

A pre‑clerkship student uploads lecture slides, faculty‑written exam review questions, and a draft OSCE prompt into an AI tool to generate practice questions and summaries.

Why this is restricted data:

Although this is not legally regulated like PHI, the materials are GW‑owned, non‑public educational content. Uploading them into a public AI tool risks loss of intellectual property and unauthorized redistribution.

How guidelines typically address this:

  • Restricted data may not be entered into AI tools without institutional approval and consent by the content creator (e.g., faculty).
  • Use may be allowed only in:
    • GW‑licensed AI environments with contractual data protections, or
    • Future GW local tools that do not retain or train on uploaded content.
  • Improper use is often framed as a judgment and professionalism issue, especially if students were previously instructed on data boundaries.
Scenario 4

A staff member uploads internal curriculum committee notes into an AI tool to generate action items for distribution to relevant parties.

Why this is restricted data:

This data is not publicly available and is considered confidential due to policies, contracts, or proprietary considerations. Uploading it into a public AI tool risks loss of intellectual property and unauthorized redistribution.

How guidelines typically address this:

  • Restricted data may not be entered into public AI tools.
  • Use may be allowed only in:
    • GW‑licensed AI environments with contractual data protections, or
    • Future GW local tools that do not retain or train on uploaded content.
  • Improper use is often framed as a judgment issue, especially if faculty and staff were previously instructed on data boundaries.
Scenario 5

A student asks an AI tool to help them create a study schedule and summarize publicly available review articles on hypertension guidelines published by professional societies.

Why this is public data:

The information is already openly available and approved for public dissemination. No personal, institutional, or proprietary information is involved.

How guidelines typically address this:

  • This use is generally permitted.
  • Students remain responsible for:
    • Verifying accuracy
    • Avoiding over‑reliance
    • Ensuring AI use aligns with course policies (e.g., not using AI during exams)
  • This type of use is often encouraged as a low‑risk, educationally appropriate application of AI.
Scenario 6

A faculty member is creating a new elective for fourth‑year medical students. They ask an AI tool to help them draft a syllabus for the course by summarizing publicly available examples from other institutions.

Why this is public data:

The information is already openly available and approved for public dissemination. No personal, institutional, or proprietary information is involved.

How guidelines typically address this:

  • This use is generally permitted.
  • Faculty remain responsible for:
    • Verifying accuracy
    • Avoiding over‑reliance
    • Ensuring AI use aligns with institutional policies
  • This type of use is often encouraged as a low‑risk, educationally appropriate application of AI.

 

Key takeaway language

  • Regulated data (PHI, FERPA records) must never be entered into public AI tools.
  • Restricted data (non‑public GW materials) may only be used in approved, protected AI environments.
  • Public data may be used with AI tools, subject to course and assessment rules.

 


[1] In preparing these guidelines, the task force used Microsoft Copilot to (1) brainstorm and generate initial ideas, (2) help structure an outline and generate examples of professionalism as it relates to AI, and (3) review nationwide AI guidelines for MD programs. Importantly, all conceptualization, interpretation, and argumentation were developed by the task force. Moreover, we critically reviewed and substantively revised all text produced or edited with AI to ensure accuracy, originality, and coherence with GW and SMHS policies. The task force takes full responsibility for the final content and interpretations.