In August, my husband and I moved our son into his first dorm room. And since that day, I’ve experienced a lot of firsts when it comes to your kiddo leaving home for college.
The first week without him joining us at the dinner table.
The first realization that I can go to bed without waiting up to see if he makes curfew.
The first time I joined a Facebook page created specifically for parents of students attending his university.
And boy, has that been an experience.
Most of the posted content hasn’t been helpful to me, BUT there are a handful of posts I’ve learned a lot from. The most recent went something like this: Hi, my son recently submitted a paper for X class. After reading it, the professor ran it through some software, which reported that 21% of the paper was written by AI. He didn’t use AI, so what can we do? How do we refute the software?
Yes, the effects of ChatGPT, Bard, and other large language models are hitting every walk of life.
Here’s what I learned, and how you can use this story to set up your nonprofit, consulting business, or college student for success.
1. If you use a tool like Grammarly, that is AI, and software that scans writing for AI use may flag it. So, it’s quite possible this student did not use ChatGPT but relied on an editing tool to clean up some of the grammar issues in the paper.
2. If you add a citation but format it incorrectly, some software will read it as AI-influenced writing.
3. While some may see AI as a great tool to help those who find writing difficult, there are certain institutions that forbid its use, in any form.
4. There are clearly few guidelines on the use of AI, other than banning it completely.
5. If you are accused of using AI but didn’t, is there even a way to refute the accusation?
What does this mean for writers and organizations, especially nonprofits? It means we need to start figuring things out. If your agency does not have a policy on the use of AI, you need to establish one, and soon.
Here’s what I suggest, based on my experience and reading thus far.
1. As with any policy, chances are someone has already written a good one. Ask around and see if there is a document you can edit to meet your own needs – rather than starting from scratch.
2. Talk to your employees and consultants. Find out how they are currently using AI, what works, and what does not. Use this information to help guide your agency’s policy.
3. Consider the ramifications of banning the use of AI entirely versus letting its use run wild. Chances are, there is a happy medium that works best.
4. Know that this is a document that will probably never be set in stone. Make it common practice to review and update it regularly (annually, or maybe even quarterly). As the technology changes and its use grows, we will better grasp what AI should and should not do for nonprofits.
I never heard the outcome for this student. I hope the professor was open to discussion, because AI is here to stay. Simply relying on AI-scanning software as the end-all, be-all answer to what tools a person used in their writing is not the answer. A flat-out ban on all AI usage is going to be harder and harder to uphold, for universities and nonprofits alike, so it’s time we start figuring this out.