Preliminary findings by the European Commission suggest that TikTok’s design, with elements like infinite scroll and personalized recommendations, may place users’ brains into “autopilot mode,” potentially leading to compulsive behavior. The Commission alleges that TikTok violated EU law by failing to adequately assess the harm these “addictive features” pose to users, including minors. TikTok has strongly denied the accusations, calling them “categorically false” and vowing to challenge the findings, which could result in significant fines if confirmed under the Digital Services Act.

The European Union has voiced serious concerns that TikTok’s design is contributing to harmful, addictive behaviors, particularly among children. This isn’t a groundbreaking revelation for many, as the core mechanisms of social media platforms, including TikTok, YouTube Shorts, and Facebook Reels, seem inherently designed to capture and hold user attention for as long as possible. The very essence of these platforms is to maximize engagement, and the algorithms are fine-tuned to achieve precisely that.

A significant part of the discussion revolves around the impact on younger generations. Some argue that smartphones, in general, should be withheld from children until they reach a more mature age, perhaps 16, citing the detrimental effects on mental well-being. This perspective suggests a need to re-evaluate how children are raised in the digital age, in stark contrast to previous generations, when distractions were far less pervasive.

The implications of these addictive designs extend beyond just children, affecting adults as well. There’s a sentiment that if the technology is designed to be addictive, it’s inherently harmful to anyone susceptible, regardless of age. This raises questions about whether governments should prioritize the well-being of all citizens, not just the youngest.

Another layer to this issue involves potential political motivations. Some speculate that governments might be hesitant to take strong action due to their own reliance on social media for spreading information or propaganda. TikTok, in particular, has been described by some as a conduit for state propaganda and a tool for foreign political influence, suggesting it poses a threat to everyone.

There’s a strong call for more decisive action, with some advocating aggressive restrictions on short-form media in general. The argument is that these platforms are profoundly affecting people’s cognitive abilities and that half-measures are insufficient; the desire is for an outright ban or a significant overhaul, as some view the current situation as akin to “mental cancer.”

Teachers, on the front lines of education, have reportedly observed a noticeable decline in children’s attention spans. Lesson plans are being adapted to keep pace with these shrinking attention spans, indicating a tangible impact on learning and development. The fear is that exposure to biased or manipulative content, such as propaganda, will further exacerbate these issues.

The timing of these pronouncements also sparks skepticism for some. The idea that this is only now being “discovered” in 2026 seems to miss the mark, as platforms with similar addictive qualities have existed for years. There’s a suspicion that these findings, while perhaps valid, will be leveraged to impose hefty fines on TikTok, potentially as a way for the EU to address budgetary shortfalls.

This leads to comparisons with other industries. If alcohol producers can face massive fines for products deemed harmful to children, perhaps similar tactics could be employed against social media. A comparison is also drawn to the slow recognition of the dangers of smoking, suggesting that the insidious effects of social media have likewise been overlooked or downplayed for too long.

The pervasiveness of short-form video is undeniable, with platforms like Instagram now heavily featuring “Reels,” mirroring TikTok’s format. This widespread adoption makes the EU’s focus on TikTok seem, to some, like a partial solution when the problem is systemic across social media.

There’s frustration with those who adamantly refuse to engage with platforms like TikTok, often citing concerns about propaganda and data privacy, only to consume similar content repackaged as “Reels.” This highlights how difficult it is to distinguish between platforms when the underlying mechanics are so similar.

Reflecting on the evolution of social media, people who have watched it grow from its early days express a sense of saturation and a desire to return to simpler times. The current algorithmic, scroll-style content is seen as having “jumped the shark,” becoming toxic for everyone. Personal anecdotes describe individuals becoming so engrossed in these platforms that they spend unhealthy amounts of time on them, sometimes to the detriment of essentials like sleep and eating.

While acknowledging the enjoyment derived from specific content, such as cooking channels, there’s a recognition that the addictive nature of these platforms can lead to obsessive usage. The sentiment is that these platforms, in their current form, “need to die.”

The core issue is identified as algorithms prioritizing “engagement” above all else. This often means promoting sensational, misleading, or outright false content because it provokes a stronger reaction. This isn’t exclusive to social media; many screen-based activities, including television and gaming, have long exhibited addictive qualities, drawing people into hours of consumption.
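To make that incentive concrete, here is a minimal sketch of an engagement-only ranking objective. It is purely illustrative, not a reconstruction of TikTok’s or any other platform’s actual system; the field names and weights are invented assumptions, and real platforms use learned models rather than hand-tuned scores. The point it demonstrates is simply that an objective rewarding only reaction and watch time is indifferent to accuracy.

```python
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    expected_watch_seconds: float  # predicted time a user will spend on the post
    expected_shares: float         # predicted reshares
    outrage_reactions: float       # predicted angry/controversial reactions
    factual_accuracy: float        # 0.0-1.0, known to the platform in this toy model


def engagement_score(post: Post) -> float:
    """Toy ranking objective: reward whatever keeps users watching and reacting.

    Note that factual_accuracy appears nowhere in the score -- the objective
    is indifferent to whether the content is true.
    """
    return (
        1.0 * post.expected_watch_seconds
        + 5.0 * post.expected_shares
        + 8.0 * post.outrage_reactions  # provocative content is weighted heavily
    )


feed = [
    Post("Calm, accurate explainer", 40, 2, 1, factual_accuracy=0.95),
    Post("Sensational but misleading claim", 55, 12, 20, factual_accuracy=0.20),
    Post("Pleasant cooking clip", 50, 6, 0, factual_accuracy=0.90),
]

# The misleading post ranks first because the objective measures reaction, not truth.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.title}")
```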

Despite the acknowledgment of the problem, finding effective solutions remains a challenge. Some believe that transparency regarding algorithms is crucial, perhaps even warranting a ban on social media if algorithms aren’t made public. The call also extends to broader societal issues, with some suggesting a ban on billionaires as well.

There’s a critique of labeling Gen Z as unintelligent while simultaneously citing potentially clickbait-driven articles about declining intelligence. The importance of using credible sources when discussing such serious issues is emphasized to avoid spreading further misinformation.

The inherent addictiveness of these platforms means that even adults, who are theoretically capable of recognizing the harm, often fall victim to spending excessive hours scrolling. The argument is that the addiction is so potent that it overrides rational decision-making.

The idea that social media is inherently harmful isn’t new; it’s been an ongoing concern for some time. The suggestion of running constant videos like “Subway Surfers” is a somewhat tongue-in-cheek way of illustrating the overwhelming and potentially mind-numbing nature of the content.

Looking back to the early days of platforms like MySpace, the focus was on presenting idealized versions of life to gain peer validation. While this was about popularity, the current landscape feels like a significant escalation in terms of damage and potential for manipulation.

There’s a shared sentiment that these companies and industries are actively “brainwashing” youth and fostering habits of instant gratification, which is detrimental to overall well-being.

The argument that governments should protect those susceptible to addiction, similar to how they might regulate substances, is also put forward. However, this raises a counterpoint: if addiction is the primary concern, shouldn’t individuals have more freedom in other areas, like accessing any medication they choose?

The current state of social media is seen as a breeding ground for misinformation campaigns, and the revenue generated from enraged and engaged users is a key driver. Features that would genuinely improve user experience are deliberately omitted in favor of those that maximize engagement, even if it means creating a more toxic environment.

The realization that the internet has devolved from “bad and damaging” to “critically bad and damaging” is a somber reflection for many. The feelings of inadequacy and comparison that some recall from their college years, driven by peers’ seemingly better lives, are dwarfed by the current digital landscape, which is far more profoundly damaging than anyone could have imagined.

While the harm is widely acknowledged, the reliance on questionable sources, such as clickbait articles about Gen Z’s IQ, draws repeated criticism. Accurate and reliable sourcing remains paramount when addressing such significant societal concerns, lest the discussion inadvertently contribute to the very misinformation it decries.