Words by Sabrina, 19 VIC
It’s a marvel how much we’ve come to trust technology.
Recently I was sitting in a Zoom class where we were discussing the impact of deep fakes. Deep fakes, for those who don’t know, are digitally manipulated videos or images that depict someone doing or saying something that never actually happened. Creepy, right? Ever seen that video of a kid supposedly being swept away by an eagle? Oh, and it was hard to miss Barack Obama apparently calling Donald Trump a ‘total and complete dipshit’.
Many of these videos are labelled upfront as fake, but sometimes this doesn’t happen until after the video’s already gone viral. By that point the damage is done; after all, first impressions are strongest. Perhaps if this normalisation of fake media continues, we’ll eventually arrive at a “zero trust point” where, by default, we question the authenticity of any and all media, as suggested by Dr Matt Turek. However, that possibility is a far cry from where we are now. Sometimes, when you’re unknowingly watching a deep fake, you can sense something is slightly off. But it’s a video. It looks like it could have been shot on any old iPhone. It’s probably real, right?
So anyway, I’m sitting in this Zoom class. The bright green light next to my webcam tells me I’ve given Zoom access to my camera. Just beside my laptop is my phone. It takes a single touch to unlock – I’ve registered my fingerprint on it. (Clearly, we’ve gotten to a point where even entering a four-digit passcode is just too much.) I know on some phones all you have to do is hold the phone up to your face and bam. You’ve unlocked it. How secure, right? Only you can open your phone.
The problem is, I don’t know where my fingerprint went when I gave it to my phone. Let’s be real, no one reads the fine print. Does it stay localised on my phone? Does it go to Apple? Upon investigation I discover that no, fingerprints used for Touch ID aren’t stored on Apple’s servers, nor are they backed up anywhere – a mathematical representation of the fingerprint stays in a secure chip on the device itself. Nonetheless, I don’t think it’s silly of me to ask these sorts of questions.
What does Apple keep stored about me? Well, I know they have my email, my location (IP address), my purchase history, the music I like, my contacts, my search history, my calendar… Huh. Apple might know me better than some of my oldest friends.
We ask Siri what the weather’s going to be like this week. Alexa has become a meme for her DJ abilities. There are heaps of ways that technology can help us: for example, deep learning is a machine-learning technique in which Artificial Intelligence (AI), using layered neural networks loosely inspired by the human brain, learns by processing large amounts of data. AI presents huge potential benefits in the fields of healthcare, education, customer service, and so many more. For example, check out how chatbots are being used to check patients for COVID symptoms.
Data collection is integral to the progression of technology. Every Google search made, every Facebook link clicked, every Instagram like contributes to surveillance capitalism – the harvesting and sale of our personal data to companies without our knowledge or consent. With the normalisation of AI through technologies like Alexa, Google Assistant, and Siri coming into our homes and becoming part of our everyday lives, it’s important that we educate ourselves about our privacy rights and make informed decisions about where our data is going.
Australian privacy legislation can be jarring to read when you’re trying to understand how the law protects your personal information. The everyday technology user is only now beginning to learn about the genuine risks surrounding information privacy and data use. What is needed is a change in the Australian STEM curriculum. We need to mandate teaching the next generation not only how AI is developed and how it works, but also what our role as citizens is in protecting our own information.
We need to find ways to introduce these ideas to primary school students, who are spending more time on the internet than ever. The pandemic has amplified this, with kids spending even more time online, not only for their schoolwork but to stay occupied when they would usually have been socialising with friends in real life. If technology is such a key aspect of how children are being brought up, and if it will define their future – the future, after all, is in technology – we have a responsibility to protect our youth by educating them from a young age to make informed choices. We need to equip them to distinguish whether the images sprawled across their screens are genuine or not, at a time when it is sometimes difficult even for discerning adults to know, and a “zero trust point” looms.