Written By: weka
Date published: 11:19 am, May 10th, 2025 - 41 comments
Categories: national, social media
Tags: Member's Bill
Have to admit I have days where I think we should ban social media altogether, and I’m still in two minds about whether the internet is a net benefit to the world (I say that as someone who spends a lot of time online). The tech isn’t the problem; how humans are using it is. How it is used is a reflection of humanity generally, and at this point in history, it’s not looking good.
There’s been a lot written for a long time about the problems with social media giants like Facebook, who intentionally use software to emotionally and cognitively manipulate people into staying on the platform and being influenced by specific content. If you’re unfamiliar with these dynamics and how people are affected, take a look at the film The Social Dilemma, or the Center for Humane Technology.
The commercial purpose of that is obvious, as is the lack of concern for societal wellbeing including children, but it’s also increasingly obvious that there are political and ideological motivations as well. The attention economy is not just about making money, it’s also about power and control, and the left is incredibly naive if it thinks we can ignore the tech bro takeover at a time of rising fascism. What we do, each step of the way, has long-term implications.
National MP Catherine Wedd’s Member’s Bill, the Social Media (Age-Restricted Users) Bill, was announced by Christopher Luxon last week. It follows Australian legislation passed last year that aims to prevent those under 16 from accessing certain social media (but not all). The Australian Act puts the legal onus on social media companies, not children and their parents.
In addition to the inbuilt harms in social media use, children are at particular risk from behaviours like bullying, and access to age-inappropriate content. Like other safeguarding measures, it’s never going to be perfect; kids will still try to find ways around it, but it’s a tool for minimising harm. In Australia the companies have a year to make the change.
The best argument against a ban I have seen is that young people use social media to network politically, and a ban unfairly excludes youth from aspects of political life. However, a ban doesn’t have to cover all platforms or all kinds of online networking (the relevant Minister would establish which platforms are age-restricted). Which leaves the door open to the development of ethical social media, starting with youth. There’s a real opportunity here.
The big question isn’t whether a ban would be justified. We have no problem with government-mandated protection of children from tobacco and alcohol; social media is similar.
The question is whether and how it’s technically possible, particularly without breaching privacy rights or conventions. How do you exclude a certain age group without needing all other age groups to prove how old they are? It’s not like being at the pub where staff can reasonably figure out which people need to be asked for age ID.
The Australian legislators rightly rejected requiring government-issued ID. No-one should be trusting social media companies with that (see the first paragraphs above), but nor should governments be trusted. Exhibit A: the United States, where Tech Bros and the President have joined forces in undermining democracy, and where the Muskites are harvesting personal data en masse to use against the population. People also have a right to access the commons without showing an ID card at the gate.
In a piece by NPR, Australia’s eSafety Commissioner said this,
Grant: There are really only three ways you can verify someone’s age online, and that’s through ID, through behavioral signals or through biometrics. And all have privacy implications. There was big concern with providing government ID. But there are digital identity providers, like one called Yoti, that can estimate someone’s age using facial recognition technology.
But we do want to make sure there is not discrimination, or bias, and some of these technologies are less accurate depending on the kind of face being scanned. I met with an age assurance provider last week in Washington, D.C., who is using an AI-based system that looks at hand movements and has a 99% success rate.
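To make the age-estimation idea concrete: in principle a platform only needs a yes/no answer, not the image and not your identity. Here’s a minimal sketch of that shape, where the estimator function is a hypothetical stand-in for a service like Yoti (the names and threshold are illustrative, not anyone’s actual API):

```python
def estimate_age_from_image(image_bytes: bytes) -> float:
    """Hypothetical stand-in for a remote face-analysis service.

    A real provider would run a model here; returning a fixed value
    just lets the sketch run end to end.
    """
    return 19.4

def passes_age_gate(image_bytes: bytes, minimum_age: int = 16,
                    error_margin: float = 2.0) -> bool:
    """Return True only if the estimate clears the threshold with a margin.

    The margin allows for estimation error (which, as Grant notes, varies
    by face type); borderline cases would fall back to another method.
    The image and the raw estimate are discarded – only the boolean survives.
    """
    estimated = estimate_age_from_image(image_bytes)
    return estimated >= minimum_age + error_margin

print(passes_age_gate(b"selfie bytes here"))  # True with the dummy estimate
```

The code is the easy part; the privacy question is whether the deletion actually happens and can be audited.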
There are legitimate concerns about whether that kind of legislation is opening the door to further degradation of privacy. While I can conceive of a system that might protect the ongoing privacy of young people, I am sceptical that this is what will be attained. For instance, asking kids to provide facial scans might be protected if the data is deleted when they turn 16. But it should still require parental consent.
Beyond that, it would be based on trusting big tech companies and they’re why we are in this situation in the first place. Google is called gevil for a reason, and Twitface et al have demonstrated repeatedly that social wellbeing is an add-on, not a primary concern. Worse, nation states let that situation develop over a long period of time. Can MPs be trusted to design a good law?
The other thing that jumps out about the eSafety interview is the glaring omission of concern about the societal importance of pseudonymity. A variety of people have valid reasons for not using their real life details online: politicised women experience huge amounts of online violence, including sexualised violence. Women of colour particularly so. Women escaping domestic violence. People in jobs where it’s become difficult to express political views online even in a personal capacity. One of the biggest issues would be the suppression of political expression, because some people aren’t free to do that using their real life names, and that issue is only getting worse.
It’s also alarming that the eSafety Commissioner appears to be leaving the tech up to the social media companies, who are the ones that created the problem in the first place,
So there are some innovative solutions out there. But whatever social media companies end up using, it’s going to be balanced against privacy, and it must ensure it does not undermine a user’s security.
I haven’t read the Australian legislation to know what safeguards are in place there. But as a society we are generally cavalier about technologies like facial recognition, not to mention data sharing. Many people have given up all sorts of data to the SM giants, and basically don’t care that much.
I don’t trust National on the privacy aspects at all and consider them incapable of writing a Bill with our best interests in mind. The only reason Bill English’s big data plans got stopped was a surprise election result in 2017. Maybe the Select Committee process would sort some of that out, but that’s always a compromise.
Here’s what the Bill says,
7 Provider must take reasonable steps to prevent age-restricted users from having accounts
A provider of an age-restricted social media platform must take all reasonable steps to prevent an age-restricted user from being an account-holder with their age-restricted social media platform.
8 What constitutes reasonable steps
Without limiting section 7, reasonable steps means that which is, or was, at a particular time, reasonably able to be done by a provider in relation to preventing an age-restricted user from being an account-holder with their age-restricted social media platform, taking into account and weighing up all relevant matters, including—
(a) the privacy of the age-restricted user; and
(b) the reliability of the method used for a person to assure the provider that they are not an age-restricted user.
This seems incredibly vague.
I’d also like to know what plans National have for not simply banning, but for education, improving social media culture, and protection for teens once they turn sixteen. Unless of course, the timing of the Bill was a dead kitten to distract from their urgency to undermine pay equity.
Fortunately it’s a Member’s Bill which means it has a low chance of even making it into the parliamentary process. But we should be having a wide-ranging conversation about it now; the issues are complex and we need time to make sense of them. In particular I hope there are people who can talk us through the technical side of what is possible.
Facial recognition is commonplace in NZ already.
Airports. Banks. Some major malls. Supermarkets. Trains. Speed cameras have the capacity. Police stations. Parliament. Relax, you're soaking in it.
Anyone wonder why Google always gets the right ads for you? It's not just your search history.
It's easier to identify the areas they aren't.
If Labour can't yet support it, they can just wait for the Australian implementation and adjust based on the lessons.
I never use facial recognition because it fails repeatedly. Way too inaccurate and susceptible to environmental factors.
Each time I shave, each time I change glasses, each time I get a new set of wrinkles, even if I am tired. But also just depending on the light…
This isn't one system – this is on every system from my apartment block's front door, to my cellphone, to my laptop. My success rate is best on the door… provided the sun isn't too bright.
Been testing up-close facial recognition since the 1990s. As a technology it actually appears to be getting worse as I age and as the technology gets used in less constrained environments….
Mostly I use encrypted passkey devices. My phone, YubiKeys, and authentication apps – usually protected with master keys and fingerprints. That is reasonably reliable.
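For anyone wondering why passkeys are so much more dependable than face scanning: there's no fuzzy matching at all, just challenge-response cryptography. A simplified sketch (Ed25519 via Python's cryptography library; real WebAuthn adds more on top, but this is the core idea):

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrolment: the device makes a keypair; only the PUBLIC key goes to the
# server. The private key never leaves the device (or the YubiKey).
device_key = Ed25519PrivateKey.generate()
server_public_key = device_key.public_key()

# Login: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it (the step your fingerprint or PIN unlocks)...
signature = device_key.sign(challenge)

# ...and the server verifies against the stored public key.
try:
    server_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Nothing in that exchange depends on lighting, glasses or wrinkles – the signature either verifies or it doesn't.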
I'm constantly amazed at how badly wrong Google gets ads for me. I know that I have eclectic searching (part of my job) – but even so….
And, I don't see that biometrics (if it's being used) is helping the result targeting (I am not in the stage of my life when I would need ECE education – but have been hit by a repeating rash of ads for it, over the last couple of months).
I would suggest the wording in the Aussie bill was chosen specifically to make it vague enough that it wouldn't work at any level beyond a splash screen you have to tick, verifying you are over 16, before being able to enter a web site.
The point isn't to make workable legislation.
The point is squeezing a wee drop of support out of those in the electorate that love this type of nanny state populist bullshit without crossing over into the complicated area of personal liberty and online privacy issues.
The NZ bill will likely use the same wording for the same reasons.
What a waste of time and money from a government that seems to love doing both.
Imprecision in legislation isn't uncommon in a new regulatory field.
In this case the Amendment Act doesn't prescribe what measures will constitute 'reasonable steps' to identify and prevent usage by an Australian under 16.
Australia has an eSafety Commissioner. It's they who will be tasked with forming regulatory guidelines to assist in determining those 'reasonable steps'. And after that it will be up to the Prosecutors to put up some test cases to further clarify how the legislation will be enforceable.
Pretty normal in a new field. But if we want to stop the accelerated decline of stable facts that form reasonable opinions that in turn enable young people to make actual decisions, the new regulatory field has to be faced.
Good summation of the issues at play.
I mentioned this bill on Daily Review somewhat dismissively. Less about the legislation itself, more about the timing, the likelihood of it seeing light and the improbability of achieving its aims.
While not essentially a culture war issue, it's more of a low-level skirmish that distracts and gets attention and resources where there are bigger fish to fry. Regulatory Standards Bill, repugnant, immoral, dishonest treatment of survivors in state care, Pay Equity, amendments to the Fisheries Act etc etc.
The idea of making the companies restrict access to their own sites is a tad optimistic.
The best I knew, someone had to be 15 to have a FB account. Going back a few years I knew of three youngsters that had an account, with their parents' blessing. A bit like R16, R18 ratings on video games – widely ignored.
To play devil's advocate, this sort of legislation would put us in breach of son of TPPA (or however that acronym designed to protect corporate profits is known) in that it would be a restraint of trade. I wouldn't want to compare the depth of pocket of these multinationals and their will with NZ Govt. Inc.
I've never been drawn to the idea we can ban our way out of a problem. Prohibition rarely works.
Definitely think the way it could be done needs to be looked at closely. I'll post some links in a bit about other areas where governments are regulating internet tech companies (Australia and Europe).
I'd like to push back on the culture war angle, and the idea that prohibition rarely works.
For instance, restricting alcohol sales to adults definitely means it's less available to children. Doesn't mean some teens don't find workarounds at some time, but it's harm minimisation that wouldn't exist if a 13 year old could walk into a bottle store and buy a bottle of vodka. Most 13 year olds aren't going to get into an R18 film in a cinema.
I agree that National tried to use this as a distraction (but failed 😈). But it's not a low-level skirmish. AI is going to change SM massively; we can't even imagine what it's going to be like in 5 years' time. But we do have increasing and compelling evidence of the harm being done by both screen time and SM use. We can continue to ignore this, but it will just get harder to mend in a full-on AI world.
Already AI manipulation of images into sexualised imagery is a thing, affecting women in particular. This is a massive overriding of consent, and it's not trivial to try and sort this out now.
All good.
I don't think there is a right and a wrong in this just differing attitudes.
A key influence that folk won't want to acknowledge is the modelling that is set by the elders. Just like yr alcohol/youth example, kids that will drink inappropriately are likely to have seen that. If adults could curtail their SM exposure (they can't) then the kids will be alright.
At the bottom line, what is the problem still that is trying to be solved? I will wager you dollars to doughnuts it won't be.
But at least we aren't formulating plans for push back on behalf of victims of state torture.😉
Adults' use of SM is most definitely a problem too. I would count that as a major reason why we don't do something about SM 😉
The difference is that their brains are more developed, and they have legal adult agency. The more generations that grow up online (in its current form), the worse the outcomes for the world. Also fucking annoying, because the internet could be an amazing thing.
"and they have legal adult agency."
A powerful insight I was given is that the role of the parent is to be the chitta for the child till they are 16.
Chitta is Sanskrit and in this context loosely means consciousness, or mental field. So, again, we come back to what are the adults in the room doing?
I would argue, way more damaging for youngsters, is the screen (laptop, telephone) being the babysitter for toddlers in busy households. Way more damage being done earlier and more profoundly than a 12 yr old acquiring a SM account.
the adults in the room are on their phones.
I think screen baby sitters aren't great either, but if you think SM is mild by comparison, with all due respect, I think you don't understand what SM is doing and how it is escalating.
Not so much mild in comparison, but the impact of SM is profoundly exacerbated in those who were parented by screens in those very important early years.
From Ad's link:
"First, some students felt stressed and anxious when they couldn’t contact their parents or caregivers during the day."
This is the issue. Anxiety in youth at unprecedented levels. That has to come back to parenting. Youngsters not having enough responsibility in their lives. Amongst a bunch of other stuff.
Deal with anxiety in kids, build resilience, and all the online bullying in the world won't be worth the bandwidth it uses.
Annnd.. if the government is serious about the health of kids they would make vapes prescription only.
Remember when banning cellphones in school was going to cause the heads of students to implode?
In fact, zero drama, better learning.
That’s too flippant and reality is a little more nuanced than you like to portray it.
https://theconversation.com/school-phone-ban-one-year-on-our-student-survey-reveals-mixed-feelings-about-its-success-252179
The article that article links to refers to research that apparently shows that increased cell phone use has adverse effects, and that school bans need to be alongside less exposure outside of school.
https://www.theguardian.com/education/2025/feb/05/school-ban-phones-not-improve-grades-health-uk-study
If the research doesn't show a positive shift from the bans, then they need to look at why. Was it the inconsistency across schools? Other effects? Not enough time has elapsed since the bans?
cell phone as a distraction is only part of it, and that piece doesn't really get to grips with the degree to which smart phones cause problems.
Yr funny, thinking that because there is a cell phone ban in schools there aren't cell phones in schools. How sweet.
The school I work in had banned them a few years before the school cell phone ban came in. I can assure you, there are cell phones in schools.
related,
https://x.com/mike_salter/status/1921348328538743185
We already know that teen and pre-teen exposure to porn is having a huge influence on teen and then adult sexuality, in particular because teens are being taught the kinds of sex portrayed in porn as normal.
I remember more than a decade ago reading accounts from young women about the kinds of sex they were expected to engage in, because their boyfriends were watching porn that degrades women in sex. It's escalated since then; increasingly, being strangled during sex is seen as normal. I'm now wanting to see what is happening with age pressure and sex.
Obviously that's a much bigger problem than SM under 16 and phone use in schools, and needs a wide range of remedies, but the case for harm minimisation to children and society is strong and phone bans in schools and under 16 SM bans can be part of that.
The Heraldine is running the narrative that restriction on under 16 access to social media now has government agenda status.
Because the PM has asked the education minister to work on this.
https://www.nzherald.co.nz/nz/politics/social-media-restrictions-for-under-16s-work-to-be-part-of-governments-agenda/XXXPTWCN2FA43OSV5237QVEK5Q/
1. What is "social" media?
2. How to verify age to social media platforms without compromising identity?
Also
Prohibiting sexually explicit deep fakes of real people is fairly obvious.
https://www.nzherald.co.nz/nz/politics/social-media-restrictions-for-under-16s-work-to-be-part-of-governments-agenda/XXXPTWCN2FA43OSV5237QVEK5Q/
Seymour has a point about the Bill being badly written, but that's fixable so I wonder what his agenda is.
Social media can be defined in the legislation, and then updated as needed by the relevant Minister.
Age verification is the tricky one. Grappling with that issue is worthwhile even if the legislation is never developed.
Parents can block access to sites now. On either their child's device, or the home network.
Whether a G rated social media site, or an R 18 one.
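For instance, here's a minimal sketch of the device-level version – the hosts-file trick most parental-control tools automate (the site list and path are examples only; router-level blocklists or a Pi-hole do the same job for the whole household):

```python
# Sketch: generate hosts-file entries that send blocked domains nowhere.
# Appending these to /etc/hosts (or the Windows equivalent) needs admin
# rights, so this prints them rather than editing the file itself.

BLOCKED_SITES = ["facebook.com", "www.facebook.com", "tiktok.com"]  # examples
HOSTS_PATH = "/etc/hosts"  # on Windows: C:\Windows\System32\drivers\etc\hosts

def block_entries(sites):
    # 0.0.0.0 is a non-routable address, so lookups resolve to nothing.
    return "\n".join(f"0.0.0.0 {site}" for site in sites)

print(f"# Append to {HOSTS_PATH}:")
print(block_entries(BLOCKED_SITES))
```

Trivially circumvented by a determined teen with a VPN or their own data plan, of course – which is the whole argument about where the onus should sit.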
What I don't get is why, if an MP thought it was technically and electorally feasible to ban under-16s from various kinds of web applications, their first thought would be "Let's ban them from social media." We know that children are accessing the huge volume of freely available online porn and we now have an epidemic of teenagers and young adults who assume heterosexual sex involves anal sex, hair-pulling, slapping or strangling. That strikes me as a way, way bigger problem than that some of them will suffer emotional harm from mean people on social media.
I've been on the internet since 1999 and the whole time I've been hearing people say that you can't control internet porn. I'd like someone to read this article and then explain to me why pornhub can't be burned to the ground and senior management put in jail.
Pornhub are intentionally trading in the sexual abuse of children.
https://www.nytimes.com/2025/05/10/opinion/pornhub-children-documents.html
https://archive.is/fLAW6#selection-925.0-925.436
It looks like an economic and sociopolitical issue to me, not a technical one. Which is why I'm not convinced by the 'can't be done' argument on SM bans.
Basically, I think online porn is so widely accessed by many men, it'll be hard to regulate it in any way, maybe because many of the people capable of making it happen, are male porn users.
Porn was one of the earliest topics of some of the earliest computer mediated communications, going back at least to the 1980s. I recall reading about the early history that preceded the World Wide Web and the development of the capability to share still and moving images.
I think it's useful to make clear the differences between what's technically possible, and what is social and political lack of will.
But yes, the problem is men and vested interests.
19 US states have age verification for porn sites – by face, or uploaded document.
What the story does not say is that if the site blocks the state user, adults and/or under-18ers can get in by a VPN set to another state.
Maybe they need VPNs to be for those over 18?
It is Project 2025 policy to ban pornography, but the first phase is age verification.
https://edition.cnn.com/2025/01/11/politics/invs-porn-age-verification-laws-supreme-court
Pornhub is now required to verify everything uploaded to its site – from content creators through to verifying the identity and consent of every performer seen in user-generated content.
https://itif.org/publications/2025/02/05/congress-should-lead-way-in-childrens-online-safety-access-adult-content/
https://www.freedomforum.org/age-verification-laws-first-amendment/
https://theintercept.com/2025/01/15/supreme-court-porn-age-verification/
is the italics a quote? Where from?
The thing to do about pornhub publishing and monetising the sexual abuse of children, is to seize all the assets, destroy them, and prosecute senior management. They've had ample time to stop the exchange of CSAM on their site, and they haven't.
No-one has the right to access porn or to make money from porn at the expense of children, but that is exactly what is happening. Shut it down.
https://help.pornhub.com/hc/en-us/articles/34927506861843-Verification-Upload-and-Content-Moderation-Process
It was a requirement after a court case in New York – they accepted videos from a content creator who lied about the workers (they were over 18) giving consent to online use.
The underage “model” (term used) matter appears to be historic.
and yet they're still hosting videos of the sexual abuse of children.
2020 is not 2025.
The 2023 deal with New York involves a Monitor in place (for 3 years).
https://www.justice.gov/usao-edny/pr/pornhub-parent-company-admits-receiving-proceeds-sex-trafficking-and-agrees-three-year
I can only assume the answer is "Because money talks and politicians listen," because what else could explain the lack of political interest in dealing with people publishing child sexual abuse? There was a lot to be said for times when people could make a living from investigative journalism.
But Pornhub's a good example: if politicians can't/won't keep 12-year-olds off Pornhub, why should anyone believe they can/will keep them off social media apps? I think you're right it's an economic and social issue rather than a technical one, and "money talks" is always going to be the biggest hurdle to get over.
as difficult as it is to ban SM accounts for youth, it's nothing compared to doing something meaningful about porn. Everyone should be kept off pornhub though, so maybe that part of it is not complicated.
Seems they can.
Wednesday 15 January 2025 09:50 GMT
[…]
Nearly one third of U.S. states — with a combined population of just over 104 million people — are now unable to access PornHub, the world’s largest pornography site.
That is because all those states have passed laws requiring porn websites to verify that their users are over-18, such as by checking their driver’s licenses, or else be vulnerable to civil lawsuits.
The result is that online porn giant Aylo, which owns PornHub, RedTube, and YouPorn — has simply stopped operating in those states, arguing that it cannot comply with the laws without violating its users’ privacy.
https://www.independent.co.uk/news/world/americas/pornhub-blocked-us-states-age-verification-b2679211.html
Naive: those over and under 18 in those states just use a VPN linked to another of the 50 states.
The real issue is, as usual, down the line …
https://www.reddit.com/r/LeopardsAteMyFace/comments/1bfvt2t/searches_for_vpns_spike_in_texas_after_pornhub/
Some clarity as to what the Oz legislation covers.
Their test is addictive algorithms.
The rationale for a ban, rather than parental control, is the impact of social isolation (from the herd).
https://www.rnz.co.nz/news/national/560655/australian-politician-who-sparked-social-media-ban-says-it-s-worth-it-even-if-kids-find-way-around-it
Some would want a group chat feature that includes those under 16
https://www.1news.co.nz/2025/05/12/social-media-can-be-positive-for-teenagers-coaches-and-teachers-say/
The element that I see missing from this is the legitimate use that under 16s make of social media in connecting geographically diverse people into interest groups. I see it particularly in friends with kids at the high-functioning end of the autism spectrum – who use the niche online social media groups to pursue their passion for trains, or a particular TV programme, or the lesser-spotted dotterel – or whatever.
Kids with a passion for Minecraft or whichever online game is cool now – also heavily use social media platforms associated with the game (and, according to my kid – are well able to spot potential pedophiles attempting to groom them. I guess that kids with less savvy may be more taken in).
How do you stop the 'problematic' social media groups? I'm thinking here of ones where online bullying is rife, ones which promote unhealthy states of body (super dieting) or mind (fill in the gaps yourself). But still allow ones which give isolated or alienated kids a social grouping to belong to.
And, while I'm fairly relaxed with my own kid's participation in social media (I made sure he had the tools to detect predation, encouraged communication, and did occasional reviews of his chat logs (with some discussion over the way that text and verbal comms differ, and you can't read tone online) – I wouldn't feel happy handing his ID over to the major tech companies – knowing how problematic their security has been in the past – and how they've mis-used data for their own corporate profit. And, while I don't anticipate that the Government would sell off the data – I don't trust their online security either: too many data breaches…
The best way to manage proof of age is to separate it out from ID (name, date of birth etc). That is, it verifies only that the user is above 13, above 16 or above 18.
The irony of all this is that those with tracking cookies are able to do this (the over-18 part of it) already, unless a VPN is used.
At some point proof of age for access to a VPN.
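That separation is technically feasible. A minimal sketch, assuming a trusted verifier (a bank, school or agency that has already checked the user's age once) signs a bare age-band claim; the platform checks the signature and learns one bit, never a name or birth date. Everything here is illustrative – it's not any real scheme's token format:

```python
import json, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The verifier (who actually checked the user's age, once) holds a signing key.
verifier_key = Ed25519PrivateKey.generate()
verifier_public_key = verifier_key.public_key()  # published for platforms

def issue_age_token(over_16: bool) -> tuple[bytes, bytes]:
    """Issue a signed claim carrying ONLY an age band – no name, no birth date.

    The random nonce makes each token unique, so it can't be reused to
    link one person's accounts across platforms.
    """
    claim = json.dumps({"over_16": over_16,
                        "nonce": os.urandom(16).hex()}).encode()
    return claim, verifier_key.sign(claim)

def platform_accepts(claim: bytes, signature: bytes) -> bool:
    """The platform learns one bit (over 16 or not) and nothing else."""
    try:
        verifier_public_key.verify(signature, claim)
    except InvalidSignature:
        return False
    return json.loads(claim)["over_16"]

claim, sig = issue_age_token(over_16=True)
print(platform_accepts(claim, sig))  # True – and the platform never saw an ID
```

What the sketch glosses over is stopping tokens being shared or borrowed from adults – that's where the genuinely hard design work, and the privacy trade-offs, sit.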