Get Help: Guides and Troubleshooting

Academic Technology Services (ATS) develops and maintains documents, guides, and videos for self-guided solutions to common issues and tasks relevant to the use of academic technologies and services. Don’t see what you need? Contact your department’s consultant for personalized support!

  • Perusall enables collaborative annotation of assigned readings as an asynchronous way to replicate some of what we achieve during in-person class discussions. It’s integrated with the Glow course management system and provides a readings-focused discussion and collaboration forum that your students may find to be richer than Glow Discussions.

    Getting Started

    Perusall can be added to your Glow course from your course Settings: Go to Navigation and drag Perusall from the bottom of the list to your course menu items.

    Full integration of the tool happens when you:

    1. Create an assignment in Glow
      • Set the Assignment Submission Type to External Tool.
      • Click the Find button and select Perusall as your External Tool.
      • Select the check box for “load this tool in a new tab.”
    2. Make sure to set the assignment title in Glow to the exact same name as the assignment in Perusall.
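    For instructors who script course setup, the same External Tool assignment can in principle be created through the Canvas REST API (Glow is a Canvas instance). The sketch below only builds the request payload; the course ID, launch URL, and assignment title are hypothetical placeholders, not Williams-specific values, and most instructors will simply use the Settings UI described above.

```python
# Sketch: the form payload for POST /api/v1/courses/:course_id/assignments
# on a Canvas instance such as Glow. All values below are hypothetical
# placeholders used for illustration only.

def build_perusall_assignment_payload(title: str, tool_url: str) -> dict:
    """Return the form fields for creating an External Tool assignment."""
    return {
        # Must match the assignment name in Perusall exactly (step 2 above).
        "assignment[name]": title,
        "assignment[submission_types][]": "external_tool",
        "assignment[external_tool_tag_attributes][url]": tool_url,
        # Equivalent of checking "load this tool in a new tab".
        "assignment[external_tool_tag_attributes][new_tab]": True,
    }

payload = build_perusall_assignment_payload(
    "Week 3 Reading: Chapter 5",            # exact Perusall assignment title
    "https://app.perusall.com/lti/launch",  # hypothetical launch URL
)
```

    Sending the request would use an authenticated POST to the assignments endpoint with this payload; the authorization token and course ID come from your Canvas account and course.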

    Additional resources

      • Example of Perusall and GLOW
      • Official Documentation

  • GenAI Detectors: Current Research and Considerations

    Fall 2024

    OIT’s mission includes “making available the best information technology resources” and “support[ing] the faculty in communicating knowledge […] to the community of scholars”.

    As Williams faculty develop their responses to generative AI (genAI), we know that software vendors are marketing genAI “detectors” as tools for supporting academic honesty. This resource is designed to share high-quality, research-based information about the performance and use of these detection tools, with implications for student learning and Williams’ institutional commitments to equity. Research is ongoing and is likely to evolve on pace with the technology; nevertheless, this post represents a snapshot of our current understanding, based on research rather than marketing.

    • Detectors are neither accurate nor precise, making them unreliable. Research on widely available genAI detection tools shows that detection accuracy continues to drop as newer LLM models are released, as the ways users engage with genAI in their workflows become more complex, and as adaptive and evasive techniques for defeating detection spread.

      Studies consistently show these tools produce both false positives (identifying human-written text as AI-generated) and false negatives (failing to identify AI-generated text). While some sources point to specific AI detection tools, such as Turnitin and GPTZero, that demonstrate relatively better accuracy than others, the research emphasizes that even these tools are not foolproof and require cautious interpretation. The unreliability of genAI detection tools and the impact of false accusations make them unsuitable for high-stakes situations like academic misconduct investigations.

      Elkhatat, Elsaid, & Almeer (2023), Krishna et al. (2023), Liyanage & Buscaldi (2023), Weber-Wulff et al. (2023)
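      The practical effect of false-positive rates can be made concrete with a base-rate calculation. The numbers below are hypothetical, chosen only to illustrate the mechanism, and are not drawn from any study cited here.

```python
# Hypothetical illustration of the base-rate problem with genAI detectors.
# In a class of 100 submissions where 10 actually used genAI, even a
# detector with a 95% true-positive rate and a 5% false-positive rate
# flags a meaningful number of honest students.

n_submissions = 100
n_ai = 10                      # submissions that actually used genAI
n_human = n_submissions - n_ai

true_positive_rate = 0.95      # hypothetical detector sensitivity
false_positive_rate = 0.05     # hypothetical rate of flagging human work

flagged_ai = n_ai * true_positive_rate          # 9.5 correctly flagged
flagged_human = n_human * false_positive_rate   # 4.5 honest students flagged

# Probability that a flagged submission actually used genAI:
precision = flagged_ai / (flagged_ai + flagged_human)
print(f"Share of flags that are correct: {precision:.0%}")  # ≈ 68%
```

      In this scenario roughly one flag in three would accuse a student who did nothing wrong, which is one reason the research above treats detector output as unsuitable for misconduct findings.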

    • Not all genAI detectors will reach the same conclusion about the same submission. In addition to unacceptable false-positive and false-negative rates within individual genAI detection tools, results will vary greatly depending on factors such as the specific AI model used to generate the text, the type of text being evaluated, and the interpretation of the detector output.

      Chaka (2023), Flitcroft et al. (2024), Otterbacher (2023)

    • Because genAI detection tools are trained on data to distinguish human from non-human writing, the training data plays a critical role. Different data sets prioritize certain culturally prescribed writing norms, so writing from individuals whose styles or patterns fall outside those norms may be flagged as AI-generated, and it is difficult to know which norms were built into the training data. Importantly, evidence suggests that non-native English writers’ samples are significantly more likely to be identified as AI-generated than native English writers’ samples. Colleagues have shared personal experiences in which their writing was assumed to be AI-generated simply because they are autistic. Further, students who have not been taught the “hidden curriculum” of college (including first-generation and/or international college students) may produce work that does not conform to culturally prescribed “academic” writing norms. As a result, students from marginalized and/or historically excluded groups may be incorrectly flagged as using genAI more frequently than their more privileged peers.

      Bond (2019), Liang et al. (2023), Otterbacher (2023), Silva (2023)

    • Not much is publicly known. Most vendors of genAI detectors offer minimal transparency into their proprietary methods, so we have limited insight into how the tools actually function. While there is ongoing research into linguistic features (e.g., “burstiness” and “perplexity”) and technical methods (e.g., watermarking and fingerprinting), we generally have to trust vendors’ marketing to tell us whether, how, or how frequently they incorporate up-to-date research into their products.

      Kassis & Hengartner (2024), UNLV Libraries (2024)
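      To make “burstiness” concrete: it is often operationalized as variation in sentence length or structure, with human writing tending to vary more. The toy measure below is our own illustration of that idea, not any vendor’s actual method.

```python
import re
from statistics import mean, stdev

def burstiness(text: str) -> float:
    """Toy burstiness score: coefficient of variation of sentence lengths.

    Higher values mean more variation between sentences, which some
    research associates with human writing. This is an illustrative
    stand-in, not a real detector's metric.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return stdev(lengths) / mean(lengths)

uniform = "The cat sat here. The dog sat there. The bird sat up."
varied = "Stop. The cat sat quietly on the warm windowsill all afternoon. Why?"
# The varied text scores higher on this toy measure than the uniform text.
assert burstiness(varied) > burstiness(uniform)
```

      Even this toy example hints at the equity concern above: any single statistical threshold will penalize writers whose natural style happens to sit on the “wrong” side of it.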

    • Under typical license terms, content submitted to a detector can be used for “legitimate business purposes”, which often includes improving the product (e.g., expanding the database or training the AI model). If a submission includes FERPA-protected or personally identifiable student information, sharing it would violate those laws and policies.

      An important note: By accepting the terms of use of a genAI detector, the user asserts that they have the legal right to submit the content. Since students hold copyright to their own work in the U.S. and to their intellectual property at Williams, submitting student work requires their consent.

      Copyleaks (2024), GPTZero (2024), Originality.ai (2024)

    • It’s hard to predict how detection will fare going forward. We know that genAI upgrades have set a rapid pace over the past two years (multiple major upgrades by OpenAI, Google, Meta, X, Mistral, and others), with each upgrade introducing new patterns, and most details kept proprietary. We also know that paraphrasing, translating, and other simple user techniques have been shown to be effective at subverting detectors’ results, and novel variations on these techniques will likely make detection a perpetual defensive game of Whac-a-Mole.

      Anderson et al. (2023), Perkins et al. (2024), Sadasivan et al. (2024), Walters (2023)

    • Your options depend upon what your goals are! If you are really interested in identifying every student who uses genAI in your course, there’s no current solution we can recommend. In consultation with the Rice Center for Teaching, below are some ideas you might consider for addressing genAI, based on your goals.

      • If your goal is for students to avoid using genAI, you can focus on improving student understanding of what is acceptable AI use, why unacceptable AI use is not aligned with academic integrity at Williams, and why academic integrity is important. You might co-create a genAI policy with students in your course. Further, you may help students consider that using genAI takes away an opportunity for them to do the hard, rewarding work of learning.
      • If your goal is to help students understand and use genAI appropriately, you may also encourage students to be skeptical about what genAI produces, including the evidence that genAI can and does reproduce bias and that it is sometimes simply wrong. You might provide resources to support their understanding of how to best use genAI tools, or incorporate student use of genAI into classwork or assignments. There are not currently standardized ways of citing or attributing work to genAI (though some are emerging), so make sure you are clear with your students about this.
      • If your goal is to reduce the efficacy of genAI tools in generating results for students that appear reasonable or authentic, you might consider:
        • Designing assignments that incorporate student reflection from class discussions into their answers.
        • Using drafts or other iterative learning processes.
        • De-emphasizing assignments that focus on recalling information in favor of assignments that require deep understanding and synthesis of multiple complex ideas.

      Whatever your goals are, there are a number of offices you can contact to discuss pedagogical strategies: the Rice Center for Teaching, the Davis Center, and/or OIDEI! The Rice Center for Teaching has also shared further thoughts about generative AI.

      Foltynek et al. (2023), Heikkilä (2023), Nicoletti & Bass (2023), Topaz (2024)

    • The consequences for students who are found responsible for violating the Honor Code can be wide-ranging and have long-term impacts, including receiving a “0” on an assignment, failing the course, or being expelled. Even if a student is not found responsible for an accused violation, being brought before an academic integrity committee can be emotionally stressful and can reduce the student’s attention and effort in their other classes as well. In fact, data suggest stress is a predictor of academic integrity violations, and being falsely accused is certainly an added stressor. Some colleagues have commented on the negative experience of being accused of using AI, including one autistic professor who went viral online after having their direct communication patterns attributed to genAI use. Thus, it is important to deeply consider the many potential impacts of accusing students of using genAI.

      Given the weight of these impacts, please remember that there are resources on campus to help you consider how to proceed with potential inappropriate use of genAI, including the Rice Center for Teaching, the Davis Center, the Writing Center, and the Office of Accessible Education.

      Brooks & Greenberg (2021), Peterson (2021), Silva (2023), Sokol & Ellis (2020)

    We in OIT are aware that the uses of technology have implications beyond the technology itself. We know that your situation may be discipline-specific and/or unusual, and we really want to encourage you to talk to us, and also seek support from these offices on campus:

    Rice Center for Teaching | The Davis Center | Office of Institutional Diversity, Equity, and Inclusion | The Writing Center | Office of Accessible Education

    GenAI Detectors: Current Research and Considerations Bibliography