Technology: Balancing Benefits with Risks

In our modern era, technology permeates every aspect of our lives, transforming how we work, communicate, and live. The advent of the internet, smartphones, artificial intelligence, and other technological innovations has brought unprecedented convenience and immediacy, significantly improving efficiency in countless areas. However, this rapid advancement is not without its downsides. As we become increasingly reliant on technology, we must grapple with the risks and challenges that arise, including cybercrime, data protection concerns, and the detrimental effects on our ability to focus. So how do we find an appropriate balance?

The Benefits: Immediacy and Convenience

One of the most significant advantages of modern technology is the immediacy it affords. The ability to access information instantly, communicate across vast distances in real time, and perform tasks that once took days or weeks in a matter of seconds has revolutionised the way we live and work. This immediacy extends beyond communication to other areas, such as online shopping, where you can order products with just a few clicks and expect next-day or even same-day delivery, or the healthcare sector, where telemedicine enables patients to consult with doctors without needing to visit a clinic in person.

Convenience is another major benefit of technology. The rise of smart devices and automation has simplified tasks that used to require considerable effort. For instance, smart home systems can control lighting, temperature, and security, while virtual assistants like Siri and Alexa can perform tasks such as scheduling appointments, sending messages, or even ordering groceries. In the workplace, technology streamlines operations, with software automating repetitive tasks, allowing employees to focus on more complex and creative aspects of their jobs. In schools, AI can help students and teachers create, refine, or assess materials, and can assist with translation, simplification, and other processes which support or even enhance learning experiences.

This immediacy and convenience should improve quality of life, offering more time for leisure and reducing the stress associated with many day-to-day tasks. However, my sense is that they often just allow for more to be expected of us, reinforcing the “do more” and efficiency cultures which I feel exist.

The Risks: Cybercrime, Data Protection, and Cognitive Impact

The advantages of immediacy and convenience come with significant risks. One of the most pressing concerns is the rise of cybercrime. As more sensitive information is stored and transmitted digitally, individuals, businesses, and governments are increasingly vulnerable to hacking, data breaches, and other forms of cyberattack. Cybercriminals exploit weaknesses in software and networks to steal personal data, financial information, or intellectual property. The consequences of these breaches can be devastating, leading to identity theft, financial loss, and reputational damage. You don’t need to look too hard at the current news to find an organisation which has suffered a cyber incident.

In tandem with cybercrime is the issue of data protection and privacy. In the digital age, vast amounts of personal data are collected by companies, governments, and online platforms, often without individuals being fully aware of how their information is being used. This has raised significant concerns about privacy, with many questioning whether individuals have enough control over their personal data. The rise of surveillance capitalism, where companies monetise personal data to drive targeted advertising, has sparked debates about ethical boundaries and the need for stricter regulations. High-profile scandals, such as the Cambridge Analytica case, where millions of Facebook users’ data was harvested without consent for political purposes, have highlighted the potential for misuse and the lack of transparency in data collection practices.

Beyond the security and privacy risks, the very immediacy and convenience that make technology so appealing can also have negative cognitive effects. The constant stream of notifications, emails, and messages can fragment our attention and make it difficult to focus on tasks that require sustained concentration. Research has shown that multitasking with technology can reduce productivity and impair cognitive function. This “always-on” culture, fuelled by smartphones and social media, can lead to stress, anxiety, and burnout, as individuals struggle to disconnect from the digital world.

Moreover, the overreliance on technology can erode essential cognitive skills, such as problem-solving, critical thinking, and memory. With information just a click away, individuals may become less inclined to engage in deep thinking or retain knowledge. The rise of artificial intelligence and machine learning also raises concerns about the future of human skills and the potential for automation to replace jobs, leading to economic inequality and social disruption.

Striking a Balance

Given the immense benefits and equally significant risks, it is crucial to strike a balance between embracing technology and mitigating its drawbacks. On the one hand, the conveniences of immediacy and efficiency are undeniable and have improved many aspects of modern life. However, these advancements should not come at the expense of privacy, security, or cognitive well-being.

One way to maintain this balance is through stronger regulations and policies that protect individuals’ privacy and data. Governments and organisations must implement robust cybersecurity measures and transparent data collection practices to safeguard against cybercrime and misuse of personal information. Additionally, educating the public about digital literacy and security can empower individuals to protect themselves online.

At an individual level, it is also essential to cultivate mindful technology use. Setting boundaries around screen time, practising digital detoxes, and focusing on single-tasking rather than multitasking can help mitigate the cognitive impacts of constant connectivity. Encouraging critical thinking and problem-solving in education and the workplace can also help individuals develop skills that are less susceptible to automation.

Conclusion

Technology exists in a delicate balance between its undeniable benefits and the risks it poses. Immediacy and convenience have transformed society, making life easier and more efficient in many ways. However, these benefits come with the trade-offs of increased cybercrime, data protection concerns, and cognitive challenges. As we continue to innovate, it is vital to remain vigilant about the potential risks and take steps to mitigate them, ensuring that technology enhances rather than undermines our well-being.  I also wonder whether the drive for efficiency and immediacy is reducing the time for us to be human and to interact with other humans directly and in-person, as we have since the dawn of mankind, but that’s a whole other post!

InTec IT Innovation in Education

This week saw me taking a trip to Mercedes World to speak at the InTec IT Innovation in Education event in relation to esports, and also to host a little esports round table. Now, as usual, my travels weren’t without their issues, which started from the outset with the car park at the station being full, so no spaces, and was promptly followed by a delayed train, meaning I missed my connection. I do sometimes wonder why I continue getting the train; however, I suspect that if I drove instead there would just be significant traffic jams, plus I wouldn’t be able to work, or have a beer, in the process of travelling. As it was, the already long journey took just over 5 hours to complete.

So, to the event itself. The first topic covered was AI in education, and in particular Microsoft’s Copilot. This session focussed on the paid version of Copilot, where it exists in Word, PowerPoint, Outlook, etc., rather than the free version. The capabilities are impressive, as evidenced by the demo video which was worked through; however, two challenges currently exist in schools. One is cost: at around £25 per user per month, the scalability of Copilot in its paid form across whole-school staff bodies is rather limited, although it could be issued to key users. The other issue is data protection and data security, in relation to how Copilot may surface data which it shouldn’t, where permissions and labelling of data have historically been poor. An example I used here, and experienced recently, albeit not actually involving Copilot, involved a poorly configured MS Team containing data pertaining to a trip. Permissions made the team available to everyone in the organisation, including students. In the past this wouldn’t have been a problem, as students would either need to find the link or get very lucky in stumbling across the Team; however, in this case the AI in Office 365, which tries to predict what might be useful, surfaced some files from this team after a number of staff had accessed them. Office 365 was just presenting “this file might be of interest”, yet it surfaced information which wasn’t meant to be available to students. In a world of Copilot this is likely to happen all the more often, presenting significant potential risk.
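
As an aside, this kind of oversharing can at least be audited. Below is a minimal sketch, my own illustration rather than anything shown at the event, which lists Microsoft 365 Groups (and hence Teams) whose visibility is “Public”, i.e. discoverable by everyone in the organisation, using the Microsoft Graph API. It assumes an existing app registration with the Group.Read.All permission; the token value is a placeholder.

```python
# Hedged sketch: list org-wide ("Public") Microsoft 365 Groups / Teams via
# Microsoft Graph, as a first pass at spotting content that assistants like
# Copilot could surface to every user. TOKEN is a hypothetical placeholder;
# obtain a real token via your app registration (Group.Read.All permission).
import requests

TOKEN = "<access-token>"  # placeholder, not a real credential
url = "https://graph.microsoft.com/v1.0/groups?$select=displayName,visibility"

while url:
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    payload = resp.json()
    for group in payload.get("value", []):
        if group.get("visibility") == "Public":
            # "Public" means anyone in the tenant can discover the content.
            print("Org-wide visible:", group.get("displayName"))
    url = payload.get("@odata.nextLink")  # follow paging until exhausted
```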

Next up was a discussion on cyber security and safeguarding. I liked the strong linking here between safeguarding, which is rightly viewed as critical, and cyber security, which is often given lesser consideration. It may be that the best way forward for schools and cyber security is to view it as an aspect of safeguarding: keeping student and staff data safe and secure, and through this protecting them from potential harms. And isn’t protecting students from harm exactly what safeguarding is about?

During the lunch break I got my hands on a very nice sim racing rig and got to do a bit of racing. To start with I didn’t do too well, treating the pedals like an Xbox controller, with the brake and accelerator having an up and down position and nothing else. Cue spinning off the course and missing corners. I joked with one of the Mercedes staff that I was driving a lawnmower given the amount of time I was spending on the grass. Later I got a better feel for things, being more careful with my acceleration and braking, at which point I started to make gradual improvements, eventually getting my lap time below one minute and coming 5th on the leaderboard.

After lunch there were sessions on infrastructure and IT planning. I think the key messages were the importance of a modern infrastructure to support the increasing number and differing types of devices, including VR headsets and 3D printers among many other items, and the need to plan, and to plan early. This always makes me think of “failing to plan is planning to fail”; however, in this case it’s not just about planning but about planning early, to allow time for those things we can’t predict.

My session was largely on esports, talking about how easy it is for schools to get involved, about the potential benefits in terms of soft skills development, and about the career pathways which esports, and the soft skills it helps develop, might open up for our students. I still sense that esports continues to be adopted more by Further Education colleges than by schools, and I feel this continues to be a shame, as the benefits are not limited to those aged 16 and over.

My session also had a second topic: the ISBA Technology Survey. I led on the development of the 2024 survey and the resultant report, picking up from the work of Alan Hodgin and Ian Philips, who developed the 2018 survey. I continue to feel that technology changes so fast that no single school, or the staff within it, can effectively adapt alone, and therefore we need to seek collective solutions. To that end, the ISBA Technology Survey is about gathering data and presenting baseline information on how technology is being used across schools, to help with comparison and planning.

Conclusion

The event was very enjoyable and the Mercedes World venue was perfect, especially given the opportunity to get some sim racing done before presenting on esports. It was also a great opportunity, like so many similar events, to network and share thoughts and ideas, including catching up with a few colleagues from other schools whom I haven’t seen in person for a number of years now.

AI continues to be a common topic in education circles at the moment, and this event was no different; however, I am increasingly seeing discussions of esports, which I find very heartening and hope continues. It would be great to see more and more schools get involved in esports, helping students develop the soft skills which esports supports, plus introducing them to the many career paths which esports links to.

Tech vendors should do more?

There is a lot of discussion about how tech vendors, and particularly big tech vendors, need to do better, whether in relation to data protection, online safety, addressing fake news, or many other considerations. A recent presentation by Laura Knight at FutureShots24, where she spoke of finite and infinite games, and of Simon Sinek’s book “The Infinite Game”, got me thinking about this again.

Tech vendors need to sort it

Firstly, it is important to acknowledge the benefits of technology. The tools we have exist because they are useful, and the tech companies that continue to operate do so because we, as users, choose to use their solutions; but there are also challenges and drawbacks associated with most technologies. It is pretty clear that tech vendors need to do more to address the various challenges and risks which come about as a result of their products. They provide a tool, whether a productivity suite, a social media application or a generative AI tool, among many others. Many people use these tools appropriately and for good; however, there are also those who use them for ill, for criminal, unethical and immoral purposes. Now, I have blogged on this before: tools are neither good nor bad, it is their use which is good or bad. The challenge is that through technology the resulting impact is magnified. I have talked of a hammer as a tool, and how it could be used for assault, but unlike a hammer, a maliciously used social media tool can impact hundreds or thousands of people at once; the potential impact of the tools is much broader. So it seems clear that tech vendors need to consider this negative impact and seek to mitigate the risk in the design of their platforms and through their processes.

The key here is that we are not really looking at these tools, but at their impact on wider society. Society will continue, for good or for ill, long into the future. It is an infinite game. Long after I am worm food, society will continue. Likely long after many of these tech platforms have been and gone (think MySpace, Friends Reunited and the like), society will continue.

And so we look to rules and to laws to provide us with frameworks and protections, where these rules and laws will exist long into the future, although they may evolve and be adjusted over time. Sadly, though, these laws and rules are designed for the long, infinite game and are therefore slow to change, relying on established processes and methods not designed for the fast-changing technological world we find ourselves in.

With laws unable to keep up, we find ourselves complaining that the tech vendors need to do more, and this is likely the case. But the tech vendors know their time is limited, as they may be dispatched to the bin should the next viral app come along, so they don’t want to expedite this by making a safer but less usable, less enjoyable, or less addictive platform. We have a problem!

But the tech companies are important

The tech companies are driven by profit; they are, after all, money-making companies with shareholders to answer to. That said, many of the big tech companies do try to establish the moral and ethical principles by which they operate. It is their drive for money which leads them to “move fast and break things”, to innovate and disrupt as they seek the next big thing and the corresponding profits which come with it. And we need this innovation. If we left innovation to governments, their processes, laws and rules would make it so much slower than it is in the hands of tech companies. I suspect we would still be using 5¼” floppy discs at this point!

The tech companies play the finite game, knowing that in this game there will be winners and losers, so moving fast, disrupting and innovating is the only way to avoid being confined to the technology bin of history; think the Polaroid camera, the MiniDisc, and the platforms I mentioned earlier. So, if the choice is spending longer to create a safer platform, but possibly being second to market, or getting it out quickly and being first but then having to address issues later on, closing the gate after the horse has bolted, it seems pretty clear which the tech companies will choose. Being first means survival, while being second might spell doom.

Solution?

I am not really sure that there is a solution here, or at least not a perfect or near-perfect one. Things will go wrong, and when they do we will be able to highlight what could or should have been done by tech vendors, governments or individuals to prevent the outcome. But we have to remember we are dealing with technology tools operating at scale; take TikTok, for example, with its roughly 1 billion monthly users. We haven’t banned cars, even though car accidents continue to happen!

Tech companies will continue to focus on the finite game, on maximising profit for their shareholders and on remaining viable, while politicians also play the finite game, focussing on policies and proclamations which are more likely to be positively received and to keep them in power, or help them to power. But the world and society are an infinite game, where what we do now may impact how things are for future generations.

I think we need to be pragmatic, and I also think it’s about partnership and working together. If governments, tech vendors and user groups can work together, discussing the benefits, the concerns and the issues, maybe we can make some progress. Maybe we can find the best “reasonable” options and the “good enough”. I feel some of this is already happening within some companies. I suppose my one conclusion is simply that it isn’t just for tech vendors to do more; it is for us all to do more: tech vendors, governments, schools, parents and adults more broadly, communities, and more. And if we can discuss and explore, find and test solutions together, then maybe we can start to address some of the challenges.

Who poisoned the AI?

One of the challenges in relation to Artificial Intelligence solutions is cyber risk, such as that presented through AI poisoning. When I seek to explain poisoning, the example I often use is of an artist who sought to keep traffic away from a particular street. To do this he simply purchased a number of cheap smartphones, put them in a little trolley, and walked this trolley slowly down the chosen street. To Google Maps, a number of smartphones progressing very slowly down a street looked like a traffic jam or an accident, and so it sought to redirect people away from the street. Basically, the individual had poisoned the AI data model to bring about a generally unwanted outcome, at least from the point of view of Google Maps.
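
To make the mechanism concrete, here is a toy sketch of my own (Google Maps’ real logic is, of course, far more sophisticated and not public): a naive congestion heuristic that trusts raw speed reports can be steered by whoever fabricates enough of them.

```python
# Toy illustration of input-data poisoning (my own example, not Google's
# actual algorithm): a naive congestion detector that trusts raw speed
# reports can be steered by anyone who fabricates enough of them.

def looks_congested(speeds_kmh, jam_threshold_kmh=10, min_slow_reports=30):
    """Flag a street as jammed if enough reports show very low speeds."""
    slow = [s for s in speeds_kmh if s < jam_threshold_kmh]
    return len(slow) >= min_slow_reports

# Genuine traffic: a handful of cars moving normally -> not flagged.
real_reports = [42, 38, 51, 45, 40]
print(looks_congested(real_reports))      # False

# The trolley of 99 phones, all reporting walking pace -> flagged as a jam.
spoofed_reports = [4.5] * 99
print(looks_congested(spoofed_reports))   # True
```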

Poisoning might take a number of forms: through the input data received by the AI, such as the position information from the phones; through the prompts made to a generative AI solution; or through the training data provided, which might itself include those prompts. The key is that the AI solution is being manipulated towards an output that wouldn’t normally be anticipated or wanted. There are also concerns from a cyber security point of view in relation to poisoning being used to get AI solutions to disclose data.
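
To illustrate the training-data variant, here is a minimal sketch using scikit-learn. This is my own toy example, far cruder than real poisoning attacks: flipping a portion of the training labels measurably degrades the model’s accuracy on clean test data.

```python
# Minimal sketch of training-data poisoning via label flipping. An attacker
# who can corrupt a slice of the training set degrades the resulting model,
# even though the test data is untouched. Purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Poison 30% of the training labels by flipping them (binary 0/1 labels).
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```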

That said, I previously read an article on AI poisoning where the poisoning was presented as a solution to a problem rather than a risk. In this case the problem is ownership and copyright of image content, where an AI vendor might scrape such content from the internet, often without permission or payment to the creator, and use it to train an AI. The concern from copyright owners and artists is that they are creating works of art, images, etc., but as generative AI solutions are fed this data, the AI either copies elements of their works, or could even be asked to create new works in their style. Given that the creator receives no remuneration for the use of their works in training an AI, and that the AI might lead them to receive less business, they are understandably concerned. Enter Nightshade, a solution for poisoning an image. Basically, what the solution does is change individual pixels within an image, where this isn’t perceptible to the human eye but will influence an AI solution. The poisoned images therefore negatively impact the functionality of AI solutions which ingest them into their training data, while still being totally acceptable from a human’s point of view.
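
To give a feel for what “imperceptible pixel changes” means in practice, here is a toy sketch of my own. This is emphatically not Nightshade’s actual algorithm, which optimises perturbations against specific models so as to mislead training; it simply demonstrates that an image can be altered below the threshold of human perception while every pixel value a training pipeline ingests has changed. The file names are hypothetical.

```python
# Toy illustration in the spirit of image-poisoning tools: nudge every pixel
# by a tiny random amount that a human won't notice. NOT Nightshade's real
# method, and random noise alone is not an effective attack; this only shows
# how small the changes can be.
import numpy as np
from PIL import Image

def perturb(image_path, out_path, strength=2, seed=0):
    """Add small random offsets (at most +/-strength per channel) to each pixel."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng(seed)
    noise = rng.integers(-strength, strength + 1, size=img.shape)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(out_path)

# perturb("artwork.png", "artwork_poisoned.png")  # hypothetical file names;
# the output is visually indistinguishable from the original.
```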

The above highlights technology and AI as a tool: poisoning can be used for malicious purposes, but in this case can be used positively, to protect the copyright of image creators. The challenge, however, is that this technology for poisoning images will likely lead to AI solutions either capable of identifying and discarding poisoned images, or tolerant of them. It will end up as a cat and mouse game of AI solution vendors versus copyright holders, much like the cat and mouse between tech vendors seeking to create generative AI solutions which produce near-human-like content and the detection tools seeking to identify where AI has been used. Another challenge might be the malicious use of poisoned images to disrupt AI solutions, such as feeding poisoned images into a facial recognition or image recognition system in order to disrupt its operation.

I also think it is worth stepping back and looking at us as humans, and how poisoning might work on human intelligence rather than artificial intelligence. One look at social media, at propaganda, and at the Cambridge Analytica scandal shows us that the poisoning of intelligences, such as human intelligence, isn’t something new; I would suggest fake news is a type of intelligence poisoning, albeit possibly at a societal level. Poisoning has been around for a while, and I am not sure we have a solution. So maybe, rather than looking at how we deal with or positively use the poisoning of artificial intelligence, we need to go broader and consider the poisoning of intelligence in general, both human and artificial?

References

Melissa Heikkilä (2023), “This new data poisoning tool lets artists fight back against generative AI”, MIT Technology Review. Accessed 07/11/2023.

Alex Hern (2020), “Berlin artist uses 99 phones to trick Google into traffic jam alert”, The Guardian. Accessed 07/11/2023.