I have now been blogging for three years, and writing for well over a decade. My writing focuses on human and social systems, particularly relating to activism and social change.
About a year and a half ago, I published an article on my blog titled “Why I Write with ChatGPT”. In it, I outlined my thoughts on how large language models (LLMs) like ChatGPT, Claude, or Gemini could, if used ethically, make writing more accessible.
I rarely used AI in my writing to begin with, but I felt that standards did not need to be overly strict, and that there is a middle ground between asking AI to write an entire piece and not using it at all.
The core argument of my original article was that artificial intelligence cannot replace what is real in a writer, and therefore it should not be feared. Writing comes from lived experience, and whether you are using a typewriter, a quill, a pen, or a keyboard, the message underneath stays the same.
As someone with dyslexia, this issue is close to my heart. I know better than most what it is like to have institutions lock you out. Without early assistive writing tools such as spellcheck, I would never have graduated high school.
Two peer-reviewed articles that discuss the potential positive impacts of LLMs on disability can be found here and here.
I do not want to look down upon other writers who, like me, struggle with the mechanics of writing but want to communicate their thoughts all the same.
While the premise of my original article is solid, the world has changed since I first wrote it. My reasons for adopting stricter standards are as follows.
First, there is no universally agreed-upon set of rules as to what counts as illegitimate use of AI. Different websites, institutions, and academic bodies set their own rules. Writers need to come together to work out a set of guidelines that is not just ethical but enforceable, and that process could take years.
Until then, the simplest approach is to not use AI at all.
Second, while AI as a tool is incredibly useful, the people and organisations running it are highly problematic. The list of problems with AI companies seems endless: environmental harm, misinformation, and mental health concerns, to name a few. While I am not one to value individual changes over collective ones, I would rather not promote the use of big tech on my blog.
Third, I would like to challenge myself more. The biggest argument against AI in writing that I agree with is that friction is an essential part of the process. Sometimes wrestling with a sentence is the best way to make it perfect. As I grow into a new phase of blogging and publishing, I want to create a little more tension between my page and my head.
Therefore, I am moving away from intentional AI usage. In 2026, AI is everywhere, and my personal blog is not a thesis, so I will not go completely cold turkey. For instance, if I do a Google search and Gemini pops up with an interesting statistic that I can verify elsewhere, I will use it. I hope my readers trust my discretion as to what specifically crosses the line.
When it comes to the actual body of my blog posts, the only time I will use any editing software, AI or not, is for simple spelling and grammar checks. Old posts will remain up for now, but posts published after May 1st, 2026, will be written under the new criteria.
To be clear, these guidelines apply exclusively to my blog. If I publish elsewhere, I want my readers to know that I will, of course, follow whatever standards my publisher lays out, applying even stricter ones as necessary.
What works today will not work forever, and I look forward to continually updating my writing guidelines as technology continues to evolve.
My hope is always to be honest with my readers about my process and methodology, and to encourage other writers with personal blogs to share more about their own processes.
I also hope that readers and publishers alike make an effort not to treat all AI usage equally, since AI sometimes genuinely helps people communicate, and stigma prevents open and honest conversations about it.
The only way humanity is going to get through this new AI chapter is through nuance and evolution as we take it day by day.