I was browsing the internet, looking at recent news, when I spotted the following at the bottom of a particular article:

This got me thinking: is this the way of things? Will we start seeing notes at the bottom of articles, blog posts and the like stating that they were “crafted with the help of generative AI tools”? From a transparency point of view it feels reasonable, in that the organisation in question is being open about how the article was created, but could it simply be a way to absolve them of any issues arising from bias or inaccuracies introduced by an AI solution? And what about less scrupulous organisations: will they bother to tell us they have used a generative AI (GenAI) solution, or will they simply publish articles quickly and easily without any due care and attention?
Taking this and considering the implications for education: what if students took the same approach and simply stated in their referencing that their coursework, thesis, dissertation or other work was “written with the help of generative AI”? Would this be acceptable? I feel this falls into the trap of compliance: the author of an article, or the student ahead of submitting their work, simply puts the statement in place so they can tick a box and claim to be compliant and transparent, when in fact they have told the reader or marker very little. How much “help” did the GenAI solution provide? Did it supply a basic outline to start from, or did it write the whole thing apart from a couple of minor sentence changes? The extent of the “help” matters greatly! Or does it?
I suppose the key question is: why do we need to know whether GenAI was involved in the creation of a piece of content? Is it because the content may contain bias and inaccuracies? I suspect not, as I would expect a journalist or editor to take responsibility and check any GenAI content before it is published. The same goes for a student: I would expect them to have thoroughly checked the work before handing it in; the responsibility is theirs, not GenAI’s. Is it instead because we feel uncomfortable about AI-created content? Consider reading two summaries of a sporting event: if you were told one was written by a human and the other by a GenAI solution, would you have a preference, and where would that preference, which I suspect would lean towards the human-written work, come from? Or do we need to know that the work is the student’s or author’s own so that we can direct our praise or complaints?

But then, do we acknowledge the word-processing software used, the web browser used for carrying out research, or the laptop the content was typed on? Is AI merely a tool in the creation of content, or is it something more than a tool? If a piece of work produced with the help of GenAI, whether that help was small or significant, is a good piece of work, does it matter? We used to focus on mental arithmetic and considered the use of a calculator to be cheating, yet now a calculator is just a tool we use to help with maths; how is the use of GenAI any different?
I worry that the newspaper that placed this little rider at the bottom of its article is approaching the use of GenAI far too superficially, without considering the wider impact. There are many unanswered questions in relation to GenAI, a small number of which are presented above.
Or maybe I just need to accept that at least they have made an effort, and a start, towards greater transparency in the increasing use of GenAI in the creation of online content?
References:
Woman wins £2million house in competition but only receives £5,000 due to small print (msn.com)