Meta has reported removing nearly 550,000 accounts belonging to users under 16 across its platforms in response to Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024, which took effect in December. The social media ban restricts access to platforms like Instagram, Facebook, and Threads. While complying with the law, Meta is urging the Australian government to collaborate with the industry on what it calls a more effective solution: incentivizing companies to improve safety standards and implement age verification tools that protect young users across all apps, thereby avoiding the need for blanket bans.
Meta is arguing for a “safer” and more “age-appropriate” online experience for kids, not just a straight ban. Honestly, that feels a bit hollow coming from them. It’s like a tobacco company urging us to reconsider an under-16 smoking ban. You have to wonder about the sincerity when their entire business model is built on algorithms that thrive on user engagement, even if that engagement comes at a cost. It’s also worth noting that many tech executives, including those at Meta, don’t even let their own kids use the platforms.
Given that Australia has already enacted this, it’s pretty clear where public sentiment lies. Australia seems to be doing something right by taking a hard line. The desire to see the US follow suit is understandable, though doubtful, given the deep ties between Congress, the executive branch, and tech billionaires. Many people regard Meta’s CEO as an unethical businessman, and there’s a growing recognition that the platform has become a vehicle for right-wing American propaganda.
Meta is, essentially, pleading its case against the ban, and this is where we need to be wary. If Meta is pushing for a change, that’s probably a good sign the current stance is the right one. Australia may be on the right track, and it would be great to see other countries follow suit. Meta’s AI has already had problems, with bots holding “sensual” chats with children and offering false medical information.
When Meta says it wants to provide a safer experience, it’s hard not to be cynical. It’s the same kind of energy as “Oil company urges drivers to rethink electric vehicles.” If Meta is upset, you know it’s probably the correct decision. The company doesn’t do anything to be helpful or benign; it profits by making people angrier, less informed, less connected to reality, and meaner.
The core issue boils down to money and data. Meta has lost hundreds of thousands of potential users under 16, and with them a trove of data it could use, and sell, to others. Its track record on moderation is also poor, with reports of harmful content (Nazi profiles, soft porn, disturbing gore) often going unaddressed.
This is the opposition’s core argument: to be blunt, Meta doesn’t care about the harm its platforms might cause; it just wants the money. It’s like a skydiving school that can’t afford to check parachutes. If you can’t provide a safe, well-monitored experience, you don’t have a viable business.
The ban, and Meta’s pushback against it, highlight a fundamental conflict. The company’s profits rely on user engagement, and specifically that of teens. It wants access to that data, that market. It’s easy to see why, for Meta, a ban is an existential threat: it wants the revenue, it wants the data, and it’s willing to fight for both. It’s the same kind of manipulative tactic we see from other industries where profit trumps safety.
This isn’t just about Australia, though; it’s a broader warning. Let kids live life as kids. There is enough moral corruption in society already; we could do with less. Tech companies are afraid of any limit to their reach, because they want to shape how children think and act.
The reality is this: Meta’s platforms have been linked to serious harms, including mental health issues, exposure to inappropriate content, and even the facilitation of real-world violence. And if the company is protesting so vocally, it’s likely because the ban cuts right into its financial bottom line. As one American parent put it, Australia is doing a much better job of showing that it cares about the health, safety, and well-being of its children.
The underlying sentiment is simple: if Meta is urging you to do something, do the opposite. Its business model is built on the exploitation of data, and its AI rules have already let bots hold “sensual” chats with kids and offer false medical information. Pushing to keep under-16s on the platform is not the right move. The company does not seem to care about its negative impact on individuals, or on society as a whole. Meta needs to rethink its entire business model, and many argue it should be banned outright in countries around the world.
Meta is on the wrong side of this debate. Its plea to rethink the ban is nothing more than a desperate attempt to protect its bottom line. A company that has facilitated genocide, allowed AI bots to groom children, and peddled dangerous content has no place in the lives of young people, and is a net negative for humanity as a whole.
