A compliance approach to AI

I was browsing the internet looking at recent news when I spotted the following note at the bottom of a particular article:

This got me thinking: is this the way of things? Will we start seeing notes at the bottom of articles, blog posts and so on stating that "this was crafted with the help of generative AI tools"? It feels fine from a transparency point of view, in that the organisation in question is being open about how the article was created, but could it simply be to absolve them of any issues arising from bias or inaccuracies resulting from the use of an AI solution? And what about less scrupulous organisations: will they bother to tell us they used a generative AI (GenAI) solution, or will they simply post articles quickly and easily without any due care and attention?

Taking this and considering the implications for education, what if students took the same approach and simply stated in their referencing that their coursework, thesis, dissertation or other work was "written with the help of generative AI"? Would this be acceptable? I feel this all falls into the trap of compliance. The author of an article, or the student ahead of submitting their work, simply puts the statement in place so they can tick a box and claim to be compliant and transparent, when in fact they have told the reader or marker very little. How much "help" did the GenAI solution provide? Did it provide the basic outline to start with, or did it write the whole thing aside from a couple of minor sentence changes? The extent of the "help" matters greatly! Or does it?

I suppose the key question here is why we need to know whether GenAI was involved in the creation of a piece of content. Is it because the content may contain bias and inaccuracies? I suspect not, as I would expect a journalist or editor to take responsibility and check any GenAI content before it is published. The same goes for a student: I would expect them to have thoroughly checked the work before handing it in; it is their responsibility, not that of GenAI. Is the reason we need to know due to an uncomfortable feeling about AI-created content? Consider reading two pieces of text summarising a sporting event. If you were told one was written by a human and the other by a GenAI solution, would you have a preference, and where does that preference, which I suspect would lean towards the human-written work, come from? Is it that we need to know the work is the student's or author's own, so we can direct our praise or complaints? But do we acknowledge the word-processing software used, the web browser used for carrying out research, or the laptop the content was typed on?

Is AI a tool in the creation of content, or is it more than just a tool? If a piece of work produced with the help of GenAI, be that help little or significant, is a good piece of work, does it matter? We used to focus on mental arithmetic and consider the use of a calculator to be cheating, yet now a calculator is just a tool we can use to help with maths; how is the use of GenAI any different?

I worry that the newspaper that placed this little rider at the bottom of its article is approaching the use of GenAI far too superficially, without considering the wider impact. There are many unanswered questions in relation to GenAI, a small number of which are presented above.

Or maybe I just need to accept that at least they have made an effort, and a start, towards becoming more transparent about the increasing use of GenAI in the creation of online content?

References:

Woman wins £2million house in competition but only receives £5,000 due to small print (msn.com)

Compliance and training

In schools we have a fair number of "compliance" matters to deal with, especially around training: safeguarding (including online safety), data protection and health and safety, to name but a few. But sometimes I wonder whether we can't see the wood for the trees. Do we become so focussed on the compliance measure that we lose sight of the reason the compliance requirements are there in the first place?

Let us take data protection as an example. We must comply with the legislation, and as part of this we need to make sure that staff have received the relevant training. So the compliance measure is to ensure that staff have had training, and we attack this requirement with an annual training session, likely a short session within inset at the start of the new academic year. If we do this correctly, we will have a nice register of all the attendees at the session, which will match the list of staff, thereby showing all staff have been "trained" and we are compliant.

But why do we have the training? The training is not the outcome or objective; it is the vehicle. What we are really trying to achieve is that staff understand data protection: what they should and should not be doing with data, and what to do when things go wrong. So, taking my compliance to the next level (is there such a thing? Do I comply or not comply? Maybe a separate question!), I now want to check understanding. I send all staff a quiz based on some of the content of the training. If most staff, let's say 80%, get the answers correct, I am happy that the training has developed the understanding I seek. If the score is lower, I need to review the training materials and amend them accordingly.
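The decision rule above is simple enough to sketch in a few lines of code. This is purely illustrative, with made-up quiz results; the 80% figure is the pass threshold from the text.

```python
# Toy sketch of the compliance check described above: did enough staff
# get the quiz right, or do the training materials need reviewing?
# The scores below are hypothetical; 0.8 (80%) is the threshold from the text.

def training_effective(answered_correctly: list[bool], threshold: float = 0.8) -> bool:
    """True if the proportion of staff answering correctly meets the threshold."""
    return sum(answered_correctly) / len(answered_correctly) >= threshold

# 9 of 10 staff correct: 90% >= 80%, so the training looks effective.
print(training_effective([True] * 9 + [False]))       # True
# 7 of 10 correct: 70% < 80%, so review and amend the materials.
print(training_effective([True] * 7 + [False] * 3))   # False
```

Of course, as the next paragraph argues, passing this check says nothing about whether the understanding will still be there a term later.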

But again, this has flaws. Is a quiz sent out straight after the training a good measure of understanding? Or is it just a test of short-term memory, where staff who score well immediately after the training will have forgotten most of the information two or three weeks down the line? At this point I may revise my compliance measure again, releasing different short quizzes at the end of each term to get a better view of how understanding is retained over time.

At this stage I may decide that the single training session isn't enough, as I want to go beyond basic understanding and look to change the culture in relation to data protection. I am now looking at a multi-modal approach: a big training session, maybe weekly short facts, termly quizzes, smaller training sessions with targeted departments, on-demand video training, and so on. In looking to change the culture, I am clear that this will involve lots of little changes and activities now and over a longer period, although the prevailing culture is unlikely to show any signs of change until some time in the future. I am accepting that changing culture is the long game.

Conclusion

The issue with all of the above is that I feel we often get stuck at the top, at the simple measure of the number of people who attended an annual training session. I don't feel we always question and explore our compliance, or go beyond the simple and easy to measure.

What if, in accepting the need to comply with training, we accept that our complex approach of briefings, training sessions, standing agenda items and so on forms the training, and that this is better than a one-off training session, even one with a quiz? In doing so, we can demonstrate training through details of our approach and evidence to support each part of it: copies of briefings, PowerPoints from presentations, meeting minutes, etc. Isn't this enough to prove compliance should a relevant authority ask for such proof? Yes, we won't be able to provide a nice simple list of all staff having signed attendance at an annual training session, but was that ever the point?

Internet Filtering

There was a time when safeguarding in relation to technology use was simple. I remember when this was the case, when I was teaching IT in a secondary school as well as acting as the IT coordinator. The only devices with internet access which the students could use were in the school, and the technologies that allowed filtering to be bypassed, or made filtering difficult, were few and far between, and generally only for techie types rather than users in general. Back then it was simple: your internet filtering kept students from harmful content and allowed you to monitor what students were doing online, so you could tick the compliance box in relation to online safety.

The world isn't as simple anymore.

Although you still have your filtering in place, you cannot consider it enough anymore. Firstly, students are now likely to have a mobile phone with data connectivity; the filtering of internet access on your school network is of little use here, whether students are using their connectivity themselves or sharing it as a wi-fi hotspot with their friends. And in some schools students will even be bringing their own devices to school to actively use in lessons.

Tools for maintaining user privacy have also changed significantly. Fifteen years ago, in the secondary school I taught in, students would attempt to bypass filtering using web proxies. These were easy to identify and therefore easy to block; students used them because they were easy to use, requiring only the web address of the proxy. Today students have access to all manner of tools, from VPNs, which are now advertised on TV in relation to personal data security, to the ability to set up a dark web site in only one or two clicks. Some services even market the fact that they don't keep logs. Disposable email and social media accounts can easily be created as and when needed, or a student can spin up a virtual PC in the cloud, use it, then destroy it when done, taking with it any evidence of what it was used for. The tools schools have to keep students within a safe internet bubble haven't changed much, but the user-friendly tools which students can use to bypass any restrictions have grown significantly.

Next, the increasing need for privacy and security online is moving all sites and services towards systems which are less easy to monitor. First it was almost all sites moving from HTTP to HTTPS; the next step seems to be a move to DNS over HTTPS. Given that DNS requests are a key feature of filtering solutions, the encryption of these requests will render filtering solutions unable to see which sites students are actually visiting. A solution here is SSL decryption, which would allow filtering solutions to decrypt and then re-encrypt DNS requests as well as data, however this in itself has its implications. Is it acceptable to break a fundamental security measure built into sites in the interests of safeguarding? By breaking the fundamental security of website traffic, could we put student data at risk as it traverses our filtering solutions, and if so, is this risk acceptable? And is all of this effort worth it if students can simply hop onto their 4G/5G signal and bypass all of these precautions at will?
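To make the DNS-over-HTTPS point concrete, here is a minimal toy sketch (not a real filtering product) of the difference: with plain DNS, the queried hostname is visible on the wire and can be matched against a blocklist; with DoH, the same query travels inside an encrypted HTTPS session, so the filter sees only a connection to a resolver. All names here, including the resolver hostname, are hypothetical.

```python
# Illustrative toy only: contrasts what a network filter can see with
# plain DNS versus DNS-over-HTTPS (DoH). Hostnames are made up.

BLOCKLIST = {"badsite.example"}

def plain_dns_filter(query_name: str) -> str:
    # Plain DNS (UDP/TCP port 53): the queried hostname is sent in cleartext,
    # so the filter can read it and match it against the blocklist.
    return "BLOCK" if query_name in BLOCKLIST else "ALLOW"

def doh_filter(resolver_host: str) -> str:
    # DoH: the same lookup is carried inside an encrypted HTTPS request.
    # The filter sees only a TLS session to the resolver; the queried
    # hostname is inside the encrypted payload and cannot be matched.
    return f"ALLOW (encrypted HTTPS to {resolver_host}; query name not visible)"

print(plain_dns_filter("badsite.example"))        # the filter can block this
print(doh_filter("doh.resolver.example"))         # the filter cannot see the name
```

This is why the only network-level counters are the blunt ones the text mentions: intercepting and decrypting the traffic, or blocking known DoH resolvers outright.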

For me, what was very much a simple compliance measure, the need for a filtering solution, has now changed significantly. We therefore need to stop looking at this issue in terms of simply having filtering and monitoring in place, and consider it from a broader risk point of view. What are the benefits of how we use technology in our school? What are the risks? How do we reduce or mitigate these risks? Do any of our mitigation measures limit potential positive uses of technology, and is this acceptable?

For me it is all about a balance: an open network allowing students to explore the breadth of potential positive uses of technology, along with the corresponding risk, versus a closed environment where technology usage is limited in the name of safety but where potential beneficial uses are equally limited. Each school needs to identify where it stands on this continuum, what it supports in terms of technology use and what mitigation measures will be put in place. This then needs to be reviewed regularly in relation to new technologies and to new or changing uses of technology within school.

Safeguarding in relation to technology use is no longer simple. It is no longer a simple compliance tick box, or a simple internet filtering box, but instead a larger conversation around the benefits and risks of technology use in school, by staff and by students.

Online compliance courses

Education and schools have to cover a number of risk areas which staff need to be aware of, including safeguarding, health and safety, and data protection, to name but three. The wider world, beyond education, has similar requirements, which might also include COSHH, lifting and handling, and personal protective equipment (PPE). So how do we address these issues, and how do we "train" staff?

Recently I have had the opportunity to see a number of online training platforms, in different contexts, being used to address some of the above. The idea is that these platforms allow staff to receive training on the areas which relate to them, while maintaining a central record of what training has been done and sending out notifications and reminders when training has to be renewed. All sounding good so far?

The issue I have with this is that the focus has almost totally shifted to compliance rather than to developing learning in the risk area being covered. The platform shows who has done which training courses and ensures that people do them, but does this actually improve learning in the particular risk area?

One look at some of the online training content shows multiple ways in which content can be quickly skipped through or missed out altogether. I must admit my own urge, when presented with some of these courses, is simply to get them finished as quickly as possible so I can get on with matters I deem more pressing. In addition, the content is not particularly engaging, taking the form of video lectures or large amounts of text with only minimal interaction. Even the attempts at testing user knowledge at the end of units or modules are superficial, and very much dependent on short-term memory of facts rather than testing longer-term, deeper learning of the subject matter. A user may therefore appear proficient in a given area such as cyber security, having completed the relevant online course, yet may have learned very little, if anything, from it.

Here we see an example of the focus shifting from developing an understanding of health and safety, say, to ensuring all have done the health and safety online course. We stop worrying about understanding of health and safety because we can demonstrate that all staff are deemed proficient, having completed the relevant online course. We have achieved compliance but not competence. We are treating what we can measure, the completion of online training, as what matters, instead of trying to measure what matters.

I think we need to take a step away from the compliance culture. Yes, it is easier to measure an organisation's health and safety awareness by the number of people who have completed the annual training, but does this mean the understanding and practice are there? I believe it doesn't. And if it doesn't, why should we spend the time, money and effort on these courses? Surely we need to find a better way?

The key for me lies in two areas: first, how we educate, and then how we measure that learning has taken place. In the area of education, I think it is about making use of multiple delivery methods, from short online content to in-person training, posters and email awareness programmes. We also need to continually adapt and revise our approaches, which brings me neatly onto measuring. We need to find methods of measuring, whether that is short tests at intervals throughout the year, playing out scenarios, audits or focus group discussions. These can inform us as to what has been learned and what has not, and in doing so help us revise and redesign. In revising and redesigning we can then seek to build better understanding in our staff. Yes, this is all much more difficult than simply firing out an online course for staff to do, but it builds deeper learning.

Deeper learning is likely to serve a staff member and the organisation much better than a tick against an online training course in the event of a cyber, health and safety, COSHH or other incident.