Agreed with all of the above, and I'm especially glad you highlight the environmental impacts of AI. I don't think we've seen nearly enough discussion on this topic yet, and as we think about equity and AI, it's paramount that we consider the possible harm to our environment and the equity implications of that harm, too.
Absolutely Audrey! Thanks for stressing this pivotal issue.
Thanks for this, Jeppe.
However, I missed the intended link at the end of Step 3 to Stefan Bauschard’s post on AI plagiarism detectors.
His post is easy enough to find without the link (and very informative as well), but anyway…
Thanks Jens Peter! The link was indeed missing in the newsletter email, but I have updated the landing page with the appropriate link for future readers. Thanks again.
Thank you for speaking out against AI detection "technology". As long as it exists in institutional policies, I think it will greatly hinder our ability to actively engage students. Students are understandably very worried about malpractice accusations (especially unfalsifiable ones), and even in courses that have tried to introduce responsible genAI usage and incorporate it into assessments, many have said they steer clear of using any generative AI in their assignments (legitimately or otherwise) because of this worry.
Thanks Cesare, I agree completely! And thanks for reading along, I appreciate it.