Academic Technology Services (ATS) develops and maintains documents, guides, and videos offering self-guided solutions to common issues and tasks related to academic technologies and services. Don’t see what you need? Contact your department’s consultant for personalized support!
Panopto: FAQs for Instructors
- How do I upload my video files?
- How do I move or copy my video(s) to another course?
- How do I create a prerecorded lecture in GLOW using Panopto?
- Importing Automated Captions for Course Audio and Video Materials
- How do I enable the download option for my video in the Course Media Gallery?
Perusall enables collaborative annotation of assigned readings as an asynchronous way to replicate some of what we achieve during in-person class discussions. It’s integrated with the Glow course management system and provides a readings-focused discussion and collaboration forum that your students may find to be richer than Glow Discussions.
Getting Started
Perusall can be added to your Glow course from your course Settings: Go to Navigation and drag Perusall from the bottom of the list to your course menu items.
Full integration of the tool happens when you:
- Create an assignment in Glow
- Set the Assignment Submission Type to External Tool.
- Click the Find button and select Perusall as your External Tool.
- Select the check box for “load this tool in a new tab.”
- Make sure to set the assignment title in Glow to the exact same name as the assignment in Perusall.
Q: How do I get started with Perusall?
Example of Perusall and GLOW
- Collaborative annotation and asynchronous learning (by April Merleaux)
Official Documentation
- Create an assignment in Glow
Zoom is a video conferencing system that offers breakout rooms and whiteboard tools.
Getting Started
Q: How do I get access to Zoom?
Q: How do I compare Google Meet and Zoom?
Q: How do I join a meeting?
Best Practices
Q: What are the best practices for running a video meeting?
Q: How do I secure my Zoom sessions (a.k.a. avoid Zoombombing)?
Scheduling
Q: How do I schedule Zoom meetings?
Q: How do I use the Google Calendar add-on for Zoom?
Q: How do I schedule recurring meetings?
Q: How do I enable Zoom and schedule meetings within Glow?
Managing
Q: How do I manage a Zoom classroom?
Q: What are the Meeting Controls?
Q: How do I enable and add a co-host?
Q: What is the difference between an alternative host and a co-host?
Q: Can a licensed Zoom user start a meeting and then transfer the Host role to a basic user?
Q: How does the GLOW and Zoom integration work?
Q: Will my Zoom meeting time out during my office hours?
Q: How do I enable closed captioning?
Breakout Rooms
Q: What are Breakout Rooms and how do I manage them?
Q: How do I pre-assign my students to Breakout Rooms?
Panopto and Zoom Integration
Q: How does Zoom Cloud Recording & Panopto integration work?
Known Issues
Q: I get a black screen during screen sharing. How can I resolve it?
A: Windows users: go to this page to find the solution. Mac users: turn off “Automatic graphics switching” or enable “Use TCP connection for screen sharing.”
Product Resources
GenAI Detectors: Current Research and Considerations
Fall 2024
OIT’s mission includes “making available the best information technology resources” and “support[ing] the faculty in communicating knowledge […] to the community of scholars”.
As Williams faculty develop their responses to generative AI (genAI), we know that software vendors are marketing genAI “detectors” as tools for supporting academic honesty. This resource is designed to share high-quality, research-based information about the performance and use of these detection tools, with implications for student learning and Williams’ institutional commitments to equity. Research is ongoing and is likely to evolve on pace with the technology; nevertheless, this post represents a snapshot of our current understanding, based on research rather than marketing.
Q: How accurate are genAI detectors?
Detectors are neither accurate nor precise, making them unreliable. Research on widely available genAI detection tools shows that detection accuracy continues to drop as newer LLMs are released, as the ways users incorporate genAI into their workflows become more complex, and as adaptive and evasive techniques against genAI detection spread.
Studies consistently show these tools produce both false positives (identifying human-written text as AI-generated) and false negatives (failing to identify AI-generated text). While some sources point to specific AI detection tools, such as Turnitin and GPTZero, that demonstrate relatively better accuracy than others, the research emphasizes that even these tools are not foolproof and require cautious interpretation. The unreliability of genAI detection tools and the impact of false accusations make them unsuitable for high-stakes situations like academic misconduct investigations.
Elkhatat, Elsaid, & Almeer (2023), Krishna et al. (2023), Liyanage & Buscaldi (2023), Weber-Wulff et al. (2023)
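To see why even a seemingly low false-positive rate matters at classroom scale, consider a back-of-the-envelope Bayes calculation. All of the rates below are hypothetical, chosen only for illustration; real detectors do not publish reliable figures:

```python
# Illustrative only: hypothetical rates showing how base rates interact
# with a detector's error rates (Bayes' theorem).
ai_rate = 0.10          # assumed share of submissions actually AI-generated
sensitivity = 0.95      # hypothetical true-positive rate
false_positive = 0.05   # hypothetical false-positive rate

flagged_ai = ai_rate * sensitivity              # correctly flagged AI work
flagged_human = (1 - ai_rate) * false_positive  # honest work flagged anyway
share_innocent = flagged_human / (flagged_ai + flagged_human)
print(f"Share of flagged work that is human-written: {share_innocent:.0%}")
# → Share of flagged work that is human-written: 32%
```

Under these assumed numbers, roughly a third of flagged submissions are human-written, and the lower the true rate of genAI use in a class, the larger that share of false alarms becomes.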
Q: Do different detectors agree with each other?
Not all genAI detectors will reach the same conclusion about the same submission. In addition to unacceptable false-positive and false-negative rates within individual genAI detection tools, results will vary greatly depending on factors such as the specific AI model used to generate the text, the type of text being evaluated, and the interpretation of the detector output.
Q: Do detectors evaluate all writers fairly?
Given that genAI detection tools are trained on data to determine what is human or non-human writing, the training data plays a critical role. Different data sets prioritize certain culturally prescribed writing norms, which means writing from individuals whose styles or patterns fall outside those norms is more likely to be flagged as AI-generated. It is difficult to know what norms were incorporated into a tool’s training data. Importantly, evidence suggests that non-native English writing samples are significantly more likely to be identified as AI-generated than native English writers’ samples. Colleagues have shared personal experiences in which their writing was assumed to be AI-generated simply because they are autistic. Further, students who have not been taught the “hidden curriculum” of college (including first-generation and/or international college students) may produce work that does not conform to culturally prescribed “academic” writing norms. Therefore, students from marginalized and/or historically excluded groups may be incorrectly flagged as using genAI more frequently than their more privileged peers.
Bond (2019), Liang et al. (2023), Otterbacher (2023), Silva (2023)
Q: How much do we know about how detectors work?
Not much. Most vendors of genAI detectors offer minimal transparency into their proprietary methods, so we have limited insight into how the tools actually function. While there is ongoing research into linguistic features (e.g. “burstiness” and “perplexity”) and technical methods (e.g. watermarking and fingerprinting), we generally have to trust vendors’ marketing to tell us whether, how, or how frequently they incorporate up-to-date research into their products.
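As a rough illustration of what features like these measure, here is a toy sketch. It assumes an add-one-smoothed word-unigram model for “perplexity” and sentence-length variance as a crude stand-in for “burstiness”; real detectors rely on large neural language models and proprietary pipelines, so this is only a conceptual demonstration:

```python
import math
from collections import Counter

def unigram_perplexity(text, corpus):
    """Perplexity of text under an add-one-smoothed word-unigram model
    fit on corpus. Lower perplexity = text looks more 'predictable'."""
    corpus_words = corpus.lower().split()
    counts = Counter(corpus_words)
    total = len(corpus_words)
    vocab = len(counts) + 1  # +1 slot for unseen words
    words = text.lower().split()
    log_prob = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
    return math.exp(-log_prob / len(words))

def sentence_length_variance(text):
    """Variance of sentence lengths in words -- a crude proxy for
    'burstiness'; human prose tends to vary sentence length more."""
    sentences = [s for s in text.replace("?", ".").replace("!", ".").split(".")
                 if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / len(lengths)
```

Neither feature is diagnostic on its own; both only hint at statistical regularities, which is part of why detector outputs require cautious interpretation.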
Q: What happens to the work I submit to a detector?
Under typical license terms, content submitted to a detector can be used for “legitimate business purposes”, which often includes improving the product (e.g., expanding the database or training the AI model). If a submission includes FERPA-protected or personally identifiable student information, sharing it would violate these protection laws and policies.
An important note: By accepting the terms of use of a genAI detector, the user asserts that they have the legal right to submit the content. Since students hold copyright to their own work in the U.S. and to their intellectual property at Williams, submitting student work requires their consent.
Q: Will detectors get better over time?
It’s hard to predict. We know that genAI upgrades have set a rapid pace over the past two years (multiple major upgrades by OpenAI, Google, Meta, X, Mistral, and others), with each upgrade introducing new patterns, and most details kept proprietary. We also know that paraphrasing, translating, and other simple user techniques have been shown to be effective at subverting detectors’ results, and novel variations on these techniques will likely make detecting a perpetually defensive game of Whac-a-Mole.
Anderson et al. (2023), Perkins et al. (2024), Sadasivan et al. (2024), Walters (2023)
Q: What can I do instead of relying on a detector?
Your options depend upon what your goals are! If you want to identify every student who uses genAI in your course, there is no current solution we can recommend. Below are some ideas, developed in consultation with the Rice Center for Teaching, that you might consider for addressing genAI, depending on your goals.
- If your goal is for students to avoid using genAI, you can focus on improving student understanding of what is acceptable AI use, why unacceptable AI use is not aligned with academic integrity at Williams, and why academic integrity is important. You might co-create a genAI policy with students in your course. Further, you may help students consider that using genAI takes away an opportunity for them to do the hard, rewarding work of learning.
- If your goal is to help students understand and use genAI appropriately, you may also encourage students to be skeptical about what genAI produces, including the evidence that genAI can and does reproduce bias, and that genAI is sometimes simply wrong. You might provide resources to support their understanding of how best to use genAI tools, or incorporate student use of genAI into classwork or assignments. There are not currently standardized ways of citing or attributing work to genAI (though some are emerging), so make sure you are clear with your students about this.
- If your goal is to reduce the efficacy of genAI tools in generating results for students that appear reasonable or authentic, you might consider:
- Designing assignments that incorporate student reflection from class discussions into their answers.
- Using drafts or other iterative learning processes.
- De-emphasizing assignments that focus on recalling information in favor of assignments that require deep understanding and synthesis of multiple complex ideas.
Whatever your goals are, there are a number of offices you can contact to discuss pedagogical strategies: the Rice Center for Teaching, the Davis Center, and/or OIDEI! The Rice Center for Teaching also has additional thoughts about generative AI.
Foltynek et al. (2023), Heikkilä (2023), Nicoletti & Bass (2023), Topaz (2024)
Q: What are the consequences of accusing a student?
The consequences for students who are found responsible for violating the Honor Code can be wide-ranging and have long-term impacts, including receiving a “0” on an assignment, failing the course, or being expelled. Even if a student is not found responsible for an alleged violation, being brought before an academic integrity committee can be emotionally stressful and can reduce the student’s attention and effort in their other classes. In fact, data suggest stress is a predictor of academic integrity violations, and being falsely accused is certainly an added stressor. Some colleagues have commented on the negative experience of being accused of using AI, including one autistic professor who went viral online after having their direct communication patterns attributed to genAI use. Thus, it is important to deeply consider the many potential impacts of accusing students of using genAI.
Given the weight of these impacts, please remember that there are resources on campus to help you consider how to proceed with potential inappropriate use of genAI, including the Rice Center for Teaching, the Davis Center, the Writing Center, and the Office of Accessible Education.
Brooks & Greenberg (2021), Peterson (2021), Silva (2023), Sokol & Ellis (2020)
Q: What’s the bottom line?
Currently, genAI detectors are neither accurate nor reliable in distinguishing genAI-generated from human-written work, and they continue to become less accurate and reliable as genAI is rapidly upgraded. The makers of these tools are not always or fully transparent about their proprietary methods of “detection” and how they use submitted data. We assume any submitted data will not be protected unless the tools explicitly tell us otherwise. Detectors themselves state explicitly that their results should not be used for punishment or disciplinary action. Closer to home, evidence demonstrates that false accusations negatively impact students.
If you decide to use a genAI detector, we know that you’ll be motivated to mitigate its shortcomings so that students are not harmed. The following questions can help you consider how to (at least partially) offset their limitations:
- Is my syllabus policy on genAI clear and comprehensive? Does it adequately address ambiguous situations? (E.g., can students use the genAI-powered features in Google Docs or Microsoft Word to make edits?)
- What will “trigger” my decision to apply a detector in the first place? Is that criterion objective and equitable, or does it rest on intuition (and possibly introduce selection bias)?
- How have I designed my course activities and community to reduce the attractiveness and efficacy of genAI use in the first place?
- Have I had a substantive conversation with my students to explain my reasoning, or just “laid down the law”? (Do my students have a sense of investment or co-ownership in my specific genAI boundaries?)
- Will I implicitly trust whatever results I get from the detector? What will be my strategies for fairly considering the possibility of an inaccurate result?
We in OIT are aware that uses of technology have implications beyond the technology itself. We know that your situation may be discipline-specific and/or unusual, and we encourage you to talk to us, and also to seek support from these offices on campus:
Rice Center for Teaching | The Davis Center | Office of Institutional Diversity, Equity, and Inclusion | The Writing Center | Office of Accessible Education
GenAI Detectors: Current Research and Considerations Bibliography