Keeping students safe in a digital world

It is becoming increasingly challenging for schools to keep students safe in a digital world.  This is largely due to the ease with which students can make use of solutions designed with privacy in mind.  These technologies weren’t designed with the safeguarding needs of schools in mind.  As a result, I believe we need to be increasingly pragmatic in our approach.

One big factor in keeping students safe is whether the devices being used belong to the students and their parents or belong to the school.  Where they belong to the school there is greater potential to use technology solutions to help keep students safe; however, these same solutions can easily be circumvented or removed where the devices belong to the students, e.g. where a Bring Your Own Device (BYOD) scheme is in place.  Personally, I suspect BYOD will only become more common in schools.  It is also important to note that students will bring their own devices to school regardless, most likely in the form of personal mobile phones, so the protections in place on school-issued devices are rather limited in their effect: students can simply switch to their personal mobile phone should they not wish to be filtered or monitored.

The big reason for writing this post is a post I recently read which recommended advising students to make use of VPNs in order to keep their communications safe and secure.  From a cyber security point of view I can understand this: using a VPN will stop someone snooping on my personal data in transit.  Thinking about it a bit more broadly, however, I think it would be a bad idea.  Firstly, it would hamper school filtering and monitoring, which is in place largely for safeguarding reasons.  Also, although there are very good VPNs available, these tend to be paid services.  Parents and students are unlikely to want, or possibly even be able to afford, to spend money on these services, which will push them towards the various free VPNs which appear so readily available.  These free VPNs may be fully malicious in nature, not being a VPN at all, or may be gathering and selling user data.  Either way, I am not sure the cure, in a free VPN, is any better than the risk.

I think schools must now look to tackle safeguarding in a digital world in a more holistic way.  It isn’t down to the safeguarding and pastoral team alone to define the filtering of sites or access times for students, nor is it down to the IT department alone to make sure firewalls and filtering are in place.  It needs to be a collective approach where all involved discuss the risks, what they have in place, and what they can put in place going forward.  Within this, I continue to believe the principal focus needs to be on awareness rather than seeking technology solutions: ensuring students, teachers and parents are all aware of the benefits and risks of technology use, and of how to keep themselves safe and secure online.

As privacy online continues to grow in focus, and as technology companies increasingly bake privacy and security into their solutions, keeping students safe in a digital world will only become more challenging.

Tech Confident Students

I previously wrote a post on developing confidence in the use of technology among teachers.  I therefore thought it would be appropriate to turn my attention to building student confidence.

One of the challenges with student confidence in the use of technology is possibly the outward perception students convey of being highly confident and competent.  They naturally seem to swipe and scroll through different apps, taking on new apps as they arise.  This may be the reason the idea of students as “digital natives” seemed to ring true; it superficially fits the situation and the appearance students convey.  The issue is that it simply isn’t, in my view, true.

One of the first things to pull out is student engagement with social media.  Social media apps are designed to be easy to use and to grab and keep user attention.  As such, it isn’t that difficult to become familiar and comfortable with using them, and then to spend many hours swiping and scrolling through their content.  The apps are designed this way: to be easy to use and to, dare I say it, be addictive.  As students likely spend a fair amount of their time on such apps, and as our perception of them as confident may partly come from observing them on such apps, we need to rethink our view of how confident they really are.

In discussing digital citizenship with students, the one thing which has often surprised me is students’ lack of awareness of, and even lack of concern about, the implications of their social media use.  Students are not fully aware of the extent of the data being mined about them through their continued use of apps.  And even when made aware, they express apathy: all our friends are on the apps, so we need to use the apps; what really can we do about the data being gathered about us?  This for me is a very big challenge, and one for which I don’t have the answer.  I too find social media useful to stay in touch, share ideas, etc., but am conscious of the data being mined from my interactions.  Am I going to stop using social media to stop the mining of data?  I think not.  The key, however, is being conscious and aware of the fact that data is being gathered, and then making balanced and informed decisions.  I choose to use social media because the ability to stay in touch, share ideas and collaborate with others outweighs the risk associated with the data being gathered.  Aware of the risk, though, I sometimes choose not to share some things on social media.

Thinking about being confident always reminds me of the concept of mastery and the need for 10,000 hours of practice.  I feel confident in the use of technology, and I am certainly way beyond the 10,000 hours.  If we want students to be confident with technology we need them to be using it, and not just for social media engagement, which, as I mentioned previously, has been designed to be easy.  We need students using technology in different contexts, ideally across the curriculum.  If students are using technology to explore concepts, to collaborate on projects, to present ideas, etc., and they are using it in Maths, in English and across the curriculum, there is a far greater chance of them developing the necessary confidence.  Linked to this, though, is teacher confidence: being able to support the students in lessons, set learning activities suited to the use of technology, and even facilitate students to support each other.

When thinking about student confidence in using technology my focus is on technology as a tool: knowing what technology tools are available, what they do, and the benefits and drawbacks of their use in different situations and contexts.  This generally raises the question of whether to teach about technology within the context of subjects across the curriculum, or as a discrete subject, another of the binary arguments all too common on social media.  For me the answer isn’t binary, one or the other, but to treat these two standpoints as either end of a continuum, with the answer lying somewhere in between.  It is important for students to see technology being used, and to be supported to use technology within different contexts; it is also important for them to learn about the general tools and the implications of using them.  As such, it is my view that we need cross-curricular use of technology combined with some discrete teaching about technology and digital citizenship.

I find students in schools are less afraid of things going wrong when using technology than their adult counterparts, and they are therefore happy to try new apps and tools as needed.  They are also less concerned about using a wide range of tools.  This is both a benefit and a risk: it means students are less afraid of change in how the tools they use work, or in the tools they are expected to use, but it also means they are likely to create a larger digital footprint and are less likely to consider data protection and cyber security.  Their lack of fear, though, shouldn’t be confused with confidence; they are willing to try tools, but this doesn’t mean they know how to apply them effectively to a particular problem.  This is one of the things schools need to address: ensuring students know which tools to use and when, how to use and how not to use them, which tools go together and which don’t, and overall how to be effective in their use of technology.  It is important we harness their lack of fear and explore how they can positively use technology to enable their needs and interests.  We also need to ensure they are able to make educated and informed decisions about their technology use, so as to manage the risks which may exist.

Technology, in my view, doesn’t scare our students, so they are largely willing to try, but what we really need is for them to be sure of their ability to use technology tools well, and to have the relevant skills and experience.  We need them to be competent and confident.

Social Media: Just the messenger?

Social media shares the goings-on and news of the world, but it also shapes and creates them.  This has become all the more apparent to me, and it worries me how often we may not be conscious of it.

Social media messages are short and simple.

The social media world relies on simple messages: a single image, 280 characters or a 60-second video.  The message needs to be simple because it is designed to be addictive and to get our attention, to fit into busy lives and to encourage us to flick from one message to the next, then the next.  I know I have found myself wasting 20 minutes just flicking through amusing TikTok videos, for example.  This is what the platform providers want us to do.  They want to keep us on their platform, as this is how they achieve their revenue, via advertising; the longer we are on their platform, the more data they can mine and the more advertising revenue they can achieve.

The world is simple?

My concern is that the features of the medium, in this case social media, influence the messages being conveyed.  But what does this mean for how we perceive the challenges of the world, as shared via social media?  I would suggest it encourages us to increasingly see the world’s complex problems and challenges as simple.  Almost every problem, whether it be global warming, the Covid-19 pandemic or racism, can be boiled down into a social media message.  And for every message sent, someone will be able to argue a counter position using the same medium and the same inherently short, bite-sized social media message.  As such, I think we may become less aware of the nuanced nature of the problems we face.  Seeing every problem boiled down to a simple message may convince us that the problems themselves are simpler than, in the real world, they are.

Increasing binary viewpoints

This simplicity also makes it easier to see problems in terms of black and white, or binary, positions.  It makes it easy to see a statement on social media as either true or false, rather than seeing the complexity and therefore the infinite number of possibilities which may exist between two extreme positions.  And again, for every post stating one position there is at least one person, and often many, many people, able to reply with an opposing binary view.  This in turn could help to explain the increasing divisions in society, whether in relation to Brexit in the UK, Trump’s presidency in the US, or a multitude of other news stories.

Reinforcement learning

Social media also makes us hyperconnected.  Having identified data about our usage patterns, social media platforms will purposefully expose us to content which fits with these patterns.  As a result, we will be repeatedly exposed to consistent messaging which, through reinforcement, may strengthen our commitment to the binary viewpoints we are encouraged to develop.  This may make our commitment to our positions on specific issues, and to defending such positions, more fixed and immovable.  It can also impact our world view, including what we see as truth and how positive, or how divided, we see the world we live in.
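The reinforcement loop described above can be sketched as a toy simulation.  To be clear, this is purely illustrative and is not how any real platform works: a hypothetical “recommender” shows whichever topic has the most engagement so far, with a little random exploration, and each impression reinforces the topic shown, so the feed narrows over time.

```python
import random

def simulate_feedback_loop(topics, steps=500, explore=0.05, seed=0):
    """Toy model of an engagement-driven recommender.

    The platform shows whichever topic has the highest engagement
    score so far; each impression slightly increases that topic's
    score (the user 'engages'), so early preferences get reinforced.
    """
    rng = random.Random(seed)
    scores = {t: 1.0 for t in topics}
    shown = []
    for _ in range(steps):
        if rng.random() < explore:
            # Occasional random exploration of other topics.
            topic = rng.choice(topics)
        else:
            # Otherwise recommend the currently highest-scoring topic.
            topic = max(scores, key=scores.get)
        shown.append(topic)
        scores[topic] += 1.0  # engagement reinforces the choice
    return shown

topics = ["news", "sport", "music", "politics"]
feed = simulate_feedback_loop(topics)

# Share of the last 100 recommendations taken by the single most-shown topic.
dominant_share = max(feed[-100:].count(t) for t in topics) / 100
```

Even with some exploration built in, the feed ends up dominated by a single topic, which is the narrowing effect I am describing, in miniature.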

Social: Time to consider the medium and the message

Social media is here to stay.  It may not be Facebook, Twitter or TikTok in the future, but social media is highly likely to continue in some shape or form.  Watching news reporting on the concerns around social media, it tends to focus on the messages being shared: on what social media companies should be doing to prevent extremism, suicide, etc.  I agree work needs to be done here; however, I also think we need to look at the vehicle for these messages and what it may mean for the society we live in.

Final Thoughts

Social media represents a magnified version of real life due to the nature of the medium.  This has its advantages in making it easy to consume and contribute to.  The flipside, however, is that it lacks the detail, nuance and complexity of real life.  We need to be more conscious of this, and to ensure our students are also conscious of it.  Only through being conscious of the impact of the medium can we seek to adjust for and minimise it.

AI and Bias

I recently saw an article in the Guardian regarding a call from an artificial intelligence expert to end the UK’s use of certain AI algorithms due to concerns that they were “infected with biases” and couldn’t be trusted (McDonald, 2019).

I too have concerns about bias in AI, particularly where AIs are black-box systems and we are unable to ascertain how an AI arrived at a specific decision.  For example, the Guardian article references immigration-related applications of AI: an AI might decide to approve or reject an immigration application based on the data available to it.  The danger here, in my view, is the potential lack of transparency in the AI’s decision-making process.

Despite my concerns, however, I do not advocate banning the use of AI, as the alternative to using AI is human decision-making, and human decision-making is far from free of bias.  In Sway (2020), P. Agarwal states that “we are all biased – to a certain degree”, going on to discuss human bias, and particularly unconscious bias, in detail.  Agarwal also states that “we cannot erase our biases completely” and, in relation to technology use, suggests that technology solutions, which therefore include AI, “incorporate the biases from the designers and data engineers” who design them.  As such, it doesn’t seem fair to hold AIs to a standard, that of being free of bias, which the human designers and users of such systems are themselves unable to achieve.

For me the critical issue is being aware of the bias which may exist and seeking to mitigate and manage the resultant risks.  We have to accept that bias is unavoidable: unavoidable in us humans, and also unavoidable in the systems and AIs we create.  It is from this need for awareness that my concern regarding the potential lack of transparency arises.

References:

McDonald, H. 2019. AI expert calls for end to UK use of ‘racially biased’ algorithms. [Online]. [Accessed 27 December 2020]. Available from: https://www.theguardian.com/technology/2019/dec/12/ai-end-uk-use-racially-biased-algorithms-noel-sharkey

Agarwal, P. 2020. Sway: Unravelling Unconscious Bias. United Kingdom: Bloomsbury Publishing.

Huawei: National needs vs. World Internet

The recent issue of Huawei equipment in the UK’s 5G infrastructure highlights a challenge of the internet and technology: they often cross international borders, yet the services and hardware are produced by companies which exist clearly within the borders of particular countries, and therefore potentially within the influence of their governments.  There is a clear tension here between the services provided to the internet and the companies providing them.

The Huawei case is very much about internet security.  The implication is that Huawei could be influenced by the Chinese government, which could then leverage the Huawei equipment installed in foreign countries’ telco infrastructure to gather intelligence, modify or filter communications, or otherwise impact the operation of a country through control of its communications systems.  This all seems quite logical.  Who would want a foreign government to be able to exercise power over their infrastructure?

The issue for me is that technologies, whether hardware or software, have to be created, developed and deployed from somewhere in the world.  Apple devices, Microsoft Windows, Facebook and Google all have to come from somewhere, and in doing so could be influenced by governments or political powers in that given location.  So the Huawei argument, from the perspective of a UK citizen, may equally be matched by Chinese concerns over Apple from the perspective of a Chinese citizen.  Looking to the US, there is even some precedent for being suspicious, with Kaspersky, which I note is a Russian firm, highlighting in 2015 that the NSA, a US intelligence agency, could “implant spyware on hard drives to conduct surveillance on computers around the world”.

Technology and technology services are used internationally, whether a Dell laptop, Dropbox cloud file storage or a newspaper website.  Often these products or services use components from other organisations, such as Seagate hard drives in a laptop, or Google Analytics or Facebook’s share and like buttons on a company’s website.  This further complicates things.  The devices, services and components are all used without consideration for international borders.  Yet we live in a world where international borders exist, and where governments may have a stake in, or influence over, technology companies.  The risk of influence exists.

One solution to this is to block and to ban.  China blocks Google and YouTube, for example, and now it looks like the US and UK are banning Huawei.  Meanwhile, Russia is testing its own national internet system, separate from the “real” internet.  This may be the direction governments increasingly pursue, to block, ban or create in-country copies, but I don’t see how this will work.  In China, VPNs provide a means to circumvent blocks, while I am sure Chinese semiconductors and microchips are already in many of our devices in the office and at home.  If a service or device works for users, it will find its way into use no matter what governments choose to do.

The answer for me is an acceptance of the complexity of this predicament, and that countries will have their own motives and ends they wish to pursue.  It is, in my view, a lose-lose situation.  Leave Huawei in place and accept the risk of Chinese influence, or remove Huawei, which will likely result in counter moves by the Chinese plus, assuming they are seeking to exert influence via technology, their targeting of other parts of the worldwide internet infrastructure and services.

All we are left with is a risk-based judgement, which is what I believe must have been made here.  The risk of counter action, of Chinese influence over other parts of the internet, and the additional cost of changing supplier, including removing existing Huawei technology, must have been judged to be less than the risk created by Huawei technology within the UK’s core or edge network.  My worry is the potential for bias in the decision-making.  As Pinker (2018) points out, “people are poor at assessing probabilities”, so “if two scenarios are equally imaginable, they may be considered equally probable”.  Potentially, the probability of destructive Chinese action against the UK may have been overestimated, and the preventative action taken in blocking Huawei may therefore be excessive.  Or maybe it isn’t!

And if you want to take this whole discussion a stage further, let’s consider how companies might now influence the world.  Take, for example, Facebook which, if it were a country, would based on users be the biggest in the world.  What if we accept that it too may have motives and ends to its actions, beyond simply providing the Facebook platform?  Google, Microsoft, Apple, Twitter, etc., may all be the same.  But that is possibly for another post.

 

References:

BBC News. 2020. Huawei 5G kit must be removed from UK by 2027. [ONLINE] Available at: https://www.bbc.co.uk/news/technology-53403793. [Accessed 16 July 2020].

CNet. 2015. NSA planted surveillance software on hard drives, report says. [ONLINE] Available at: https://www.cnet.com/news/nsa-planted-surveillance-software-on-hard-drives-report/. [Accessed 16 July 2020].

Pinker, S., 2018. Enlightenment Now. 1st ed. UK: Penguin House.

TechCrunch. 2019. Russia starts testing its own internal internet. [ONLINE] Available at: https://techcrunch.com/2019/12/26/russia-starts-testing-its-own-internal-internet/?guccounter=1. [Accessed 16 July 2020].

World Economic Forum. 2016. If social networks were countries, which would they be?. [ONLINE] Available at: https://www.weforum.org/agenda/2016/04/facebook-is-bigger-than-the-worlds-largest-country/#:~:text=If%20Facebook%20were%20a%20country%2C%20it%20would%20be,it%20each%20month%20-%20around%201.9%20billion%20people.. [Accessed 16 July 2020].

Digital Citizenship and Covid19

Before the Covid-19 crisis began I presented on digital citizenship at the JISC DigiFest event, and previous to that at the ISC Digital event in Brighton.  In both cases, one of my reasons for presenting was my concern that students’ increasing use of technology was not being matched by appropriate consideration or awareness of the risks.  I was worried that students were giving away large amounts of data without considering who they were providing it to, how it might be used, how long it would be kept, or how it might impact or be used to influence them as individuals, and as groups, in the future.  I was worried, and believed education and its educators needed to start to do something about this.

Since then the Covid-19 crisis has deepened, with lockdown rules introduced.  I have seen large numbers of schools and other organisations rushing to find solutions to the challenges this crisis has created.  New applications have been put into use, devices issued and configurations quickly changed to enable remote working and remote learning.  Applications such as Zoom and Houseparty have quickly achieved viral status as people adopt solutions to keep in touch and maintain some form of social interaction and community spirit in a time of isolation.  Decisions as to what to use have been based on which apps are easy to use and which are commonly used by others.  In all of this fast-paced change, home working and learning, and technology-enabled socialisation, the phrase that jumps to mind is “act in haste, repent at leisure”.

Teachers, parents, children, the elderly: almost everyone is now using more technology than they previously did, whether through new devices, new apps or simply more time spent using technology.  With this use comes greater exposure to online risks such as cyber crime, and to risks associated with the data trail we leave behind us.

There is also the discussion of tracking apps to help identify where people have come into close contact with someone with the coronavirus, or the use of thermal cameras and facial recognition for a similar purpose.  Although in the short term this appears to be in the interests of the public good, this data might be used in the future for other purposes related to the tracking and profiling of individuals, or even to help with targeted campaigns aimed at influencing the behaviour of individuals or groups.

To be clear, the current situation is far from normal and far from ideal.  No-one chose for this to happen.  As a result, the decisions which are being made, and which need to be made, are borne out of necessity rather than being purposeful, fully planned and implemented with the implications carefully considered.  There is also the issue that, at the best of times, we can’t predict the future, yet we are now presented with an even more unpredictable future and the need to make decisions now that will influence it.

If I was worried about Digital Citizenship before the Covid-19 crisis, I am even more worried now.

JISC DigiFest: Digital Citizenship

Following my DigiFest session I thought I would share some of the thinking which went into it.

It is important firstly to acknowledge that our views on technology are very much the result of our experiences.  My experiences include learning to code in BASIC on the Commodore 64 at an early age, before moving on to AMOS BASIC on the Amiga and then QBasic, Visual Basic and C++ on the PC.  This early use of technology, and the ability to develop software to solve problems, has very much shaped my views.  Today I walk around with a mobile phone with over a million times more memory than my Commodore 64 from less than 30 years earlier, and the growth across that period has not been linear.  A perfect illustration of this lies in how long it took various technologies to reach 50 million users.  Radio took 75 years, whereas TV took only 38 years.  Closer to today, Facebook got the time to 50 million users down to 3.5 years, before Pokémon Go managed it in less than a single month.  It is clear from this that the pace of change is quickening.
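The “million times” figure is easy to sanity-check.  As a rough sketch (the phone capacity is my own assumed figure, and it compares the C64’s RAM with a phone’s storage, so it is indicative only):

```python
# Back-of-envelope check of the "million times more memory" claim.
# Assumed figures, mine for illustration: a Commodore 64 has 64 KB
# of RAM, and a modern phone is taken as having 64 GB of storage.
c64_bytes = 64 * 1024        # 64 KB
phone_bytes = 64 * 1024**3   # 64 GB
ratio = phone_bytes // c64_bytes
# ratio is 1024 * 1024 = 1,048,576: just over a million times more.
```

Whatever exact figures you assume, the ratio lands comfortably above a million, which is the point.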

Looking at our use of technology today, we find that most of us now use technology for communication or entertainment in the form of mobile phones, social media and on-demand TV.  We are also increasingly required to use technology to access government services, council services, banks, etc.  Technology is now integral to our lives and here to stay, complete with the ever-quickening pace of change mentioned earlier.

The more I think about the pace of change and the way technology is becoming an integral part of our everyday lives, the more the movie Ready Player One comes to mind.  In the movie, Wade Watts uses virtual reality to live a double life, living as Parzival in VR.  As the film progresses it becomes clear that his two lives aren’t as separate as he would like, and that events in virtual reality impact real life and vice versa.  For us, like Wade Watts, our real lives are inseparably linked to our digital lives.  In fact, I believe it no longer serves us to think of digital citizenship, as the term implies there is something else available, a non-digital citizenship, when in fact there is not.  Possibly the discussion should not be of digital citizenship at all, but simply of citizenship.  As Danah Boyd said in her book It’s Complicated, although the apps might change, our online connectedness, our need to share and the challenges around privacy are “here to stay”.

Resulting from new technology there are benefits, or potential benefits, and we need to acknowledge this.  A couple of examples include the current exploration of self-driving vehicles, and the recent use of choreographed drones as an alternative to traditional New Year’s Day fireworks.  In relation to current events around the globe, there is also the use of Artificial Intelligence (AI) to identify new antibiotics and other drugs.  We need to prepare to make the best of these new opportunities, and to ensure the students in our educational establishments are prepared.

But alongside these benefits there are also risks.  Fake news, and the ease with which videos, including interviews, can be faked, will increasingly make it more difficult to tell fact from fiction.  We also have challenges to individual privacy, risks around habits and potentially addictive behaviour, plus the potential for platforms to go so far as to actually shape and influence human behaviour.

The danger in weighing the benefits and risks of technology is the all-too-common resultant binary view of technology as either infinitely good or inherently bad and evil.  Sadly, these views are of little use: to view technology as purely good is naïve, while to consider it purely negative is equally naïve and simplistic.  The reality is that technology, and more particularly the use of technology for a given purpose, will lie between the extremes of good and evil, positive and negative.  Any use of technology is likely to have its positives but also its drawbacks or unintended consequences, and therefore we need to consider the pros and cons carefully and seek a balance.

Looking at how we prepare our students for the world and the issues listed above, I can see the things we do satisfactorily, through our eSafety programmes, but I can also see those areas where little or nothing is currently offered.  We currently discuss the importance of privacy settings on social media, of having strong passwords, of how online content, once posted, will remain permanently available, and of the need to be aware of bullying online.  These areas are covered.  Sadly, however, little is said about the conflict between user convenience and individual privacy, between individual privacy and public good, and between social media reporting on, or actually creating, the news and truths which we come to believe.  These are the areas we need to discuss, for which there isn’t a single answer, and therefore where the most we can do is help students develop their own views through discussion.  It is through discussion that we can hopefully ensure that students, when presented with the infinite challenges of technology use, will approach them with their eyes wide open.

This brings me nicely to raising a couple of examples, from the many available, which would make valuable discussion topics for use in our schools.

Algorithms and AIs can be manipulated by an individual or organisation, to their own ends.   

Do we understand why algorithms might exist?  Do we understand why an individual or organisation might seek to “game” an algorithm, and the potential gains which may arise?  The use of a collection of mobile phones to fool Google’s traffic analysis algorithm into identifying a traffic jam where one does not exist, resulting in traffic being redirected away from a given street, is one simple example of what is possible.

Governments can filter and censor content based on political motivations.    

Do governments need to be able to filter content for public safety?  Could their filtering be used to shape public perception, or to revise facts to their own political ends and political gain?  What is truth, and should governments be allowed to control and revise it?  We have already seen governments filtering internet content, with that filtering then being identified as lacking transparency and serving their own self-interest; the filtering of TikTok being one possible example.

Online companies can gather and sell your data for profit.  

Do companies need to gather all the data they gather?  Do they have the right to sell this data?  Where data is anonymised, is it possible for data sets to be combined in ways which reverse the anonymisation process?  A simple example of this is a cellular carrier selling on viewing-habit data.
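The re-identification risk above can be made concrete with a small, entirely invented example, a so-called linkage attack: an “anonymised” dataset, stripped of names, is joined to a public dataset on shared quasi-identifiers (here, postcode and birth year), and the names come back.  All records and names below are made up for illustration.

```python
# Hypothetical linkage attack: re-identify an "anonymised" viewing
# dataset by joining it with a public register on quasi-identifiers.

anonymised_viewing = [
    {"postcode": "AB1 2CD", "birth_year": 1984, "watched": "Documentary X"},
    {"postcode": "EF3 4GH", "birth_year": 1990, "watched": "Series Y"},
]

public_register = [
    {"name": "A. Smith", "postcode": "AB1 2CD", "birth_year": 1984},
    {"name": "B. Jones", "postcode": "EF3 4GH", "birth_year": 1990},
]

def reidentify(anon_rows, public_rows):
    """Join the two datasets on the shared (postcode, birth_year) pair."""
    index = {(p["postcode"], p["birth_year"]): p["name"] for p in public_rows}
    matches = []
    for row in anon_rows:
        key = (row["postcode"], row["birth_year"])
        if key in index:
            matches.append({"name": index[key], "watched": row["watched"]})
    return matches

matches = reidentify(anonymised_viewing, public_register)
# Each "anonymous" viewing record now carries a name again.
```

The point for discussion with students is that removing names alone is rarely enough: any combination of attributes that is unique to a person can act as an identifier once a second dataset is available.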

Mary Aiken, in her book The Cyber Effect, identifies the need for us to “make sense of what's happening”, and only through discussion is this likely to occur.   One concern I have, however, is where these discussions might happen.   In the current crowded curriculum they tend to be banished to the IT classroom, a subject which not all students will study.   I don't think this is sufficient.   These discussions need to take place throughout schools, across the subject areas, across the stages, with students, with staff, with parents and with the local community.   Discussing the challenges of technology needs to become part of the culture: simply the way we do things around here.

As Danah Boyd stated, “Collaboratively, adults and youth can help create a networked world that we all want to live in”.  If I am to ask anything following my session at DigiFest, I would ask this: let's begin with a discussion in our schools, colleges and universities, a citizenship-related discussion in which technology plays its part, complete with its pros and cons, and let's do it today.

You can access my full presentation from DigiFest 2020 here.

Final Note: As we now engage in much more home and distance learning due to the coronavirus, it may be more important than ever for these discussions to happen, and to happen now!

JISC DigiFest: Thoughts from Day 1

I thought I would share some initial thoughts following day one of JISC DigiFest.  The event was launched with a very polished and professional pre-prepared video, displayed on screens scattered around the event's main hall, focussing on the rate of technological change and some of the implications of technology for the world we live in.   The launch session also included a room-height “virtual” event guide introducing the sessions and pointing you in the direction of the appropriate hall.    In terms of the launch of a conference, this was the most polished and inspiring I have seen, albeit on reflection there wasn't much particularly innovative or technically complex about it.

The keynote speaker addressed the changing viewpoints of different generations of people, focussing particularly on Generation Z, the generation currently in our sixth forms, colleges and universities.   I took away two key points from the presentation.   The first was how each generation's views are shaped by their experiences, particularly between the ages of 12 and 20.   Jonah Stillman used thoughts on space as an example, showing how Generation X might have positive views focussing on the success of the moon landings, whereas Millennials may have a more cynical view following the Challenger disaster.   Additionally, Jonah mentioned movies as a social influencer and how those in the Harry Potter generation may view cooperation and trying hard, even where unsuccessful, in a positive manner.  Those born later may draw on another series of films, The Hunger Games, resulting in a greater tendency towards competition and the need to succeed, in line with the movies' storyline in which it is everyone for themselves and failure results in death.

The second takeaway from the session arose from the questioning at the end around what some saw as the absoluteness of the boundaries between generations.    I think Jonah's use of the word “tendency” addressed this concern, in that the purpose of the labels is simplicity and to indicate a general trend rather than to suggest that all people born on certain dates exhibit a certain trait.  It increasingly concerns me that this argument keeps coming up, when surely it is clear that there is a need for simplistic models to aid clarity of explanation, and that no model, no matter how complex, will ever truly capture the real complexity of the world we live in.

My second session was actually the delivery of my own.   I will be sharing some thoughts in relation to my presentation, along with my resources, in the near future.   For now I will simply say that the session was not one of my best.   I do, however, hope that my main message, the need for greater and broader discussion of citizenship in the digital world we now find ourselves living in, was clear.

The third session of the day focussed on a digital literacy programme one particular university had developed.   I found it interesting, in this and a later presentation, how digital literacy or digital citizenship often appeared to fall to the library in universities in terms of developing and delivering a programme.    In schools I feel the same topics tend to fall on the IT teaching department rather than libraries; either way, it is interesting that something which should be pervasive finds itself localised within a single department rather than being supported across the institution.   It was interesting how the programme the university developed had evolved over time, which seems to me to be the correct approach given how quickly technology is changing.  I also found it interesting that student voice suggested needs which students later indicated they did not find useful; in other words, students themselves were not an accurate judge of their own wants and needs.     Session five followed a similar topic, again looking at digital literacy, however the presenters made use of fairy tales as a vehicle to deliver their message on the pros and cons of the digital world we live in.   I must admit I enjoyed this presentation for its novel approach to delivering the concept in hand.

Session four focussed on partnerships between a university, a local council and a number of corporate organisations, focussing in particular on data analysis and business intelligence.  I think schools have some way to go in this area, as they regularly gather huge amounts of data yet little is actually done with it beyond reporting it to school leaders, parents, etc.   I think the challenge is that schools often lack the resources which a college or university may have at their disposal, such as data scientists as part of the staff body.   That said, the session seemed to indicate the potential for schools to leverage partnerships to fill this gap with minimal to no outlay of their own resources.

My final session of day one focussed on digital transformation and, like the keynote, was insightful and inspiring.    Lindsay Herbert discussed the bear in the room, which is similar to the elephant in the room but more dangerous.     I particularly liked the way Lindsay stated early on that the world was a “terrible place”, citing issues such as the coronavirus, fires in Australia, storms across the UK and ongoing technological change.   She then quickly moved on to the fact that we are inherently brave in our attempt not only to exist but to strive to flourish in this world, before going on to identify various success stories where the bear in the room was tackled.    She left us with three main tips, all of which struck a chord with me: transformation starts with a worthy cause, requires lots of people, and needs to be learned and earned rather than purchased.   The third tip in particular strikes a chord, as I have encountered change which has not gone as smoothly as I would have liked, and where significantly more effort was expended than had originally been intended; in retrospect this may have been the change being earned, and it certainly involved a lot of learning.

Day one was useful, with the keynote and the closing session of the day being my highlights.    I have plenty of notes to digest when I get back home.  Roll on day two.

Digital Citizenship

For a while now I have been sharing, via Twitter and sometimes via LinkedIn, various online articles which I believe relate to Digital Citizenship.  It recently occurred to me that it might be useful to curate these tweets so that teachers looking for discussion material on specific aspects of Digital Citizenship might be able to use them.

To that end I created three Wakelets based on three themes which I thought were reasonably common in relation to Digital Citizenship.

  • AI, Drones, Driverless cars and the other societal changes which Tech may bring

https://wke.lt/w/s/kJ3z2B

  • Cyber Security, Data Protection and Big Data

https://wke.lt/w/s/XFOeIs

  • To ban or not to ban?

https://wke.lt/w/s/09MVpQ

It may be that in future I expand the number of themes; in fact I suspect this is highly likely, but for now the above are hopefully a good starting point.

In addition, for ease, I have created a separate section on my site for this curated Digital Citizenship content, in case anyone wants to bookmark it.  This section is also available via the site's menu structure.

EdTech Summit, Brighton

I had the opportunity to present at the Brighton ISC Digital EdTech Summit during the week.  My talk, “Common Sense Safeguarding”, focussed on the need for schools to take a broader, more risk-based view of online safety, as opposed to the previous, more compliance-driven approach.    Given the number and range of technologies students have access to, the tools available to bypass protective measures put in place by a school, and even the ability to negate such measures entirely by switching to 4G, online safety is no longer as simple as it once was.    A broader view therefore needs to be taken.

In addition, I identified that in our dealings with online safety we are not yet effectively addressing the issues which are growing with our increasing use of digital resources and services.    Cyber security, big data, profiling, artificial intelligence and bias, the ethics of IT systems and similar broad topics don't yet have a key place in the general curriculum, albeit opportunities exist across different subjects.    We need to ensure these issues are discussed with all students.   It was to that end that I proposed a cross-school discussion group focussed on Digital Citizenship.

Overall, my view is that we need to be more aware of the limitations of preventative measures such as web filtering, and need to focus more on user awareness and on having discussions with students regarding the wider implications of staying safe and being successful in a digital world.

If you are interested in being part of a group of schools discussing Digital Citizenship please fill out this Microsoft Form and to access my slides from the EdTech Summit please click here.