They’re Hacking Our Minds and Calling It Freedom: The Silent War on American Democracy
How Technology and Social Media Are Undermining American Democracy
Before You Read
They don’t need bullets. They don’t need tanks. All they need is your attention.
While you’re scrolling, liking, sharing—laughing at memes and arguing in comment sections—a quiet war is being waged against the foundation of your democracy. Not with force, but with algorithms. Not with armies, but with targeted ads, deepfakes, and endless streams of content designed to manipulate what you think, how you feel, and even how you vote.
It’s not just that your personal data has been weaponized—it’s that your psychology has been turned into a political tool. And the most terrifying part? Most Americans still think this is just “normal.” It’s not. It’s warfare. It’s profit. It’s power. And if you don’t call it what it is, you’re going to lose something you can’t get back.
Introduction
In the past decade, American democracy has been jolted by a series of unsettling events that revealed a new threat from an unlikely source: our own technology. Events from the Brexit referendum in the U.K. and the 2016 U.S. presidential election to the Capitol insurrection on January 6, 2021, have made it painfully clear how social media and other digital technologies can be weaponized against democratic society.
Our personal data has been exploited, disinformation has been amplified at lightning speed, and even our own psychology has been used as a pawn in a grand political game. Foreign adversaries have joined the fray, information warfare is raging online, and public trust in journalism and shared facts has plummeted. Meanwhile, Big Tech companies reap enormous profits and face little meaningful regulation, allowing these problems to fester.
As we look ahead to the 2026 midterms and the 2028 presidential election, the warning signs are flashing red. This is not a far-off, abstract issue—it’s an urgent threat to the future of American democracy and to the very idea of an informed, united republic. In this article, I investigate how technology and social media are undermining American democracy, examining the key factors at play and how they intersect with politics, money, and power. Consider it a lawyer’s closing argument to everyday American voters, made with urgency and clarity: our democracy is at stake, and we must understand what’s happening in order to defend it.
Weaponizing Personal Data: The Microtargeting Machine
One of the first cracks in the democratic armor appeared with the weaponization of personal data for microtargeting. In the lead-up to both Brexit and the 2016 U.S. election, political operatives discovered that social media platforms like Facebook were treasure troves of personal information that could be used to influence voters at an individual level.
The most infamous example was Cambridge Analytica, a political consulting firm that harvested tens of millions of Facebook profiles without consent. Armed with this data, they built detailed psychological profiles of voters. Why? So they could craft hyper-targeted political advertisements and messages tailored to exploit each person’s specific fears, biases, or desires.
Consider what this means: instead of a shared campaign message aired publicly for all to judge, microtargeting allows political actors to whisper different promises (or lies) into every voter’s ear via their private social media feeds. During Brexit, ads stoking fears of immigration were shown precisely to the Facebook users most anxious about that issue, while others saw entirely different tailored messages—each group unaware of what the other was being told.
In the 2016 U.S. election, voters in key swing states were bombarded with customized posts and ads – some true, many misleading or false – calibrated to push their emotional buttons. This data-driven manipulation is profoundly dangerous to democracy. It creates a “divide and conquer” information landscape, where citizens no longer even debate the same set of facts because each person is seeing a bespoke reality crafted to influence them.
Personal data has become a weapon. Our own Facebook “likes,” Google searches, and online purchases are compiled and sold to the highest bidder, often with minimal oversight. Those bidders can be legitimate campaigns or malicious actors. Either way, the microtargeting machine means elections can turn on invisible, highly personalized propaganda campaigns.
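To appreciate how little machinery this requires, consider a deliberately toy sketch in Python. Every profile, trait, and ad variant below is invented for illustration; real systems infer thousands of behavioral signals from likes, searches, and purchases. But the core logic, matching an inferred anxiety to a message engineered for it, really is this simple:

```python
# A deliberately toy sketch of ad microtargeting. All profiles, traits, and
# ad copy are invented; real systems use thousands of behavioral signals.

AD_VARIANTS = {
    "immigration_anxiety": "Candidate X will secure the border. Share before it's too late.",
    "economic_anxiety": "Candidate X will bring the factories back. Your town deserves it.",
    "default": "Candidate X: leadership you can trust.",
}

def pick_ad(profile: dict) -> str:
    """Serve each voter the variant their inferred traits suggest will move them."""
    for trait in profile.get("inferred_traits", []):
        if trait in AD_VARIANTS:
            return AD_VARIANTS[trait]
    return AD_VARIANTS["default"]

voters = [
    {"name": "Voter A", "inferred_traits": ["immigration_anxiety"]},
    {"name": "Voter B", "inferred_traits": ["economic_anxiety"]},
    {"name": "Voter C", "inferred_traits": []},
]

for voter in voters:
    # Each voter gets a different pitch -- and never sees what the others saw.
    print(voter["name"], "->", pick_ad(voter))
```

Multiply this by millions of profiles and thousands of message variants, and you have the invisible campaign described above.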
Voters might never know that an ad claiming a candidate supports something heinous was shown only to them and a select group, and because it never surfaces in public media, it may never be fact-checked. This undermines the fundamental transparency of democratic discourse. If lies are told in mass media, they can be publicly refuted. But if millions of lies are told privately, targeted to each individual’s profile, how can they all be exposed? Democracy dies in darkness, and microtargeting paints layers of darkness over our electoral process.
Beyond the immediate impact on a single vote, the weaponization of personal data erodes trust long-term. When the Cambridge Analytica scandal broke in 2018, many Americans were horrified to learn how their personal information had been exploited without their knowledge. It was a wake-up call that in the digital age, privacy isn’t just about personal comfort – it’s about power. The power to nudge people’s behavior, opinions, and ultimately votes now lies in the hands of whoever holds the data. And in 2016, those hands were not always benign.
This was only the beginning of the story, however. As we shall see, data-driven microtargeting is just one aspect of a much larger problem. The stage was set for a new era of political persuasion – one that operates in the shadows of our social feeds, and one that traditional campaign rules weren’t prepared to handle.
Algorithms of Division: Amplifying Disinformation by Design
If personal data is the ammunition, social media algorithms are the delivery system that sprays disinformation far and wide. In the old days, propaganda or false rumors spread person-to-person slowly, or via limited media channels. Today, a lie can be launched on Twitter or Facebook and go viral globally within minutes, boosted by algorithms that are literally designed to maximize engagement. Unfortunately, falsehood and outrage are extremely engaging. The result is an algorithmic amplification of disinformation that has proven incredibly effective at undermining our shared reality.
Think back to 2016: fabricated news stories like “The Pope endorsed Donald Trump” or wild conspiracy theories got millions of shares on social media. Many people believed these false stories, at least long enough to influence their perceptions or votes.
Why did these outlandish falsehoods spread so quickly? It’s partly human nature – we are drawn to shocking or emotionally charged content – but it’s greatly magnified by the algorithms that run our social networks. Facebook, Twitter (now X), YouTube, TikTok – all of them use recommendation algorithms that try to keep users glued to the platform. To do that, they show us content they think we will respond to, content that others like us have interacted strongly with. And time and again, the content that triggers quick likes, comments, or shares is the content that provokes strong emotions, especially anger or fear. A well-crafted lie often fits the bill far better than a nuanced truth.
Over the past several years, internal studies from Facebook and others have confirmed our worst suspicions: the companies knew their algorithms were turbocharging divisive and false content, and they struggled (or in some cases refused) to significantly change the formula. In one Facebook memo, researchers bluntly warned that “Our algorithms exploit the human brain’s attraction to divisiveness.” They found that if the algorithm had free rein, it would keep feeding users more extreme and polarized content to keep their attention. That’s good for keeping someone scrolling endlessly through Facebook; it’s terrible for maintaining a healthy democracy. In fact, another internal finding was that a huge portion of people who joined extremist groups on Facebook were led there by Facebook’s own recommendations.
Consider the implications: A young man starts by watching a few political commentary videos on YouTube. The algorithm notes his interest and starts suggesting slightly more sensational content – perhaps a video flirting with a conspiracy theory. If he clicks, the algorithm gets even more confident that controversial content keeps him engaged, and soon he’s seeing recommendations for outright extremist or false material.
This “rabbit hole” effect has been reported by countless users. It’s not an accident; it’s a result of systems optimized for one thing – engagement – not truth or civic virtue. The algorithm doesn’t care if it’s feeding you the truth or poisonous lies; it only cares that you stay online.
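The dynamic is simple enough to sketch in a few lines of Python. Every title and number below is invented, and real recommenders use learned models rather than a hand-tuned rule, but the objective the sketch encodes is the real one: predicted engagement, not truth.

```python
# A minimal sketch of the "rabbit hole" dynamic described above. Titles and
# numbers are invented; the point is the objective, not the implementation.

posts = [
    {"title": "City council budget recap", "intensity": 0.10},
    {"title": "Pundit DESTROYS opponent", "intensity": 0.40},
    {"title": "What the media WON'T tell you", "intensity": 0.70},
    {"title": "PROOF the system is rigged", "intensity": 0.95},
]

affinity = 0.2  # the user's current appetite for emotionally charged content

def recommend(affinity):
    # Heuristic stand-in for "show slightly more sensational content than
    # the user already consumes" -- the pattern internal studies flagged.
    target = affinity + 0.25
    return min(posts, key=lambda p: abs(p["intensity"] - target))

for step in range(3):
    post = recommend(affinity)
    print(f"step {step}: recommending '{post['title']}'")
    # Simulated click: the system learns the user engaged, and their
    # measured appetite drifts toward what they were just shown.
    affinity = post["intensity"]
```

In three simulated clicks, the reader’s feed escalates from punditry to outright conspiracy, exactly the drift former insiders have described.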
By the time of the 2020 election and its aftermath, algorithmic amplification helped spread the “Stop the Steal” lie (the false claim that the election was stolen) to millions. Posts, groups, and videos promoting this baseless conspiracy were promoted and shared across platforms. Each share and comment acted like fuel on a fire, boosting the posts to more eyes. When fact-checkers or responsible journalists tried to debunk these falsehoods, they were already chasing a runaway train.
Studies have shown that false news travels significantly faster and farther on social media than true news – often many times over. In a democracy, truth has to matter and facts must catch up to lies, but on today’s internet, lies are often sprinting ahead while the truth is still tying its shoes. This dynamic is corroding the factual foundation that democracy rests upon.
Hijacking the Mind: How Tech Exploits Our Psychology
Why do these lies and divisive messages work so well? The uncomfortable answer is that the technology is tapping into our own human psychology and exploiting it. Social media platforms and many digital apps are built using insights from behavioral science to hook users in and keep them engaged. This isn’t a paranoid conspiracy; it’s something even tech insiders have openly admitted with remorse.
The former president of Facebook and co-founder of Napster, Sean Parker, confessed that the platform was designed to give you a “little dopamine hit” whenever someone liked or commented on your post, exploiting a “vulnerability in human psychology.” Another early Facebook executive, Chamath Palihapitiya, has said he feels “tremendous guilt” for helping create tools that are “ripping apart the social fabric of how society works.”
These are powerful words, and they get at a fundamental point: social media did not become addictive and all-consuming by accident—it was engineered that way. Features like infinite scroll (feeds with no natural stopping point), autoplay videos, and red notification badges are all built on psychological triggers that keep us coming back for more.
The platforms run constant experiments on users (often without us realizing) to see what keeps us clicking, watching, and sharing. Over time, they have learned to serve each person content that perfectly fits their interests and biases. It’s comfortable, like a warm cocoon of information that always validates what you already think. But that comfort is a trap.
By feeding us only what we want to see or what pushes our emotional buttons, the tech companies are preventing us from seeing what we need to see: the other side of the argument, the hard truths, the context and nuance. Instead, we get more of what makes us react quickly – often that’s content that makes us outraged or upset, because those emotions keep us glued to the screen.
Psychologically, humans are vulnerable to confirmation bias (believing information that confirms our pre-existing beliefs) and to emotional reasoning (feeling before thinking). Social media pours gasoline on these tendencies.
If you’re already worried about something, say immigration or election fraud, the algorithms will sense that and show you post after post confirming your fears, each post more extreme than the last. It can lead people into paranoid, extremist, or intolerant mindsets without them even realizing it’s happening. One day you’re a normal user reading the news; a few months later you might be convinced of bizarre, false notions (for example, that a cabal of satanic pedophiles secretly controls the government, as QAnon followers believe) because you were gradually fed more sensational content that appealed to your psychological profile.
It’s critical to understand that these tactics have been guided by behavioral science and data analytics. This is not simply random happenstance; it’s the result of deliberate choices by tech companies to prioritize engagement over everything else.
They have basically conducted mass psychology experiments on the American people, tweaking what we see to test our reactions, all in service of profit and growth. The ethical guardrails usually applied to experiments on human subjects were absent – there was no informed consent for how our minds would be manipulated by these feeds. And into this Wild West of psychological manipulation stepped not just advertisers selling sneakers, but political propagandists and hostile actors selling falsehoods and division.
The combination has been toxic. When you hijack someone’s attention and emotions, you can hijack their vote, their voice, and even their sense of reality. This is exactly what has been happening, and it’s tearing at the mental and social fabric that holds our democracy together.
Foreign Interference: Information Warfare on Social Media
While tech companies were busy exploiting our data and psychology for profit, foreign adversaries saw a golden opportunity to exploit these same systems to undermine our democracy from the outside. The most notorious example is Russia’s interference in the 2016 U.S. presidential election. Using a mixture of hacked information and armies of fake online personas, Russian operatives launched a coordinated information warfare campaign against the American public. They didn’t need bombs or spies on the ground; they had Facebook and Twitter.
Russian actors, most prominently the Internet Research Agency (IRA), created thousands of fake social media accounts that impersonated Americans. These accounts infiltrated online groups and communities, pretending to be passionate citizens on both ends of the political spectrum. They posted inflammatory memes, disinformation, and conspiracy theories aimed at widening every possible fault line in American society—race, religion, gun rights, immigration, you name it.
They even organized real-world rallies by posing as activists from opposing sides, in one case convincing Americans to show up to the same place to protest each other, not realizing the event was orchestrated by trolls in St. Petersburg. Through Facebook alone, it’s estimated that Russian-produced content reached well over 100 million Americans in 2016 – a stunning penetration of our information space by a foreign power.
And it wasn’t just random chaos they sowed; it had a goal. They wanted to hurt Hillary Clinton’s campaign, boost Donald Trump’s, and generally shake Americans’ faith in their electoral system. They succeeded to a disturbing degree.
The Russians weaponized the features of social media I’ve been discussing – microtargeting, virality, psychological triggers – to exacerbate distrust and hatred. By Election Day, American discourse was poisoned with conspiracy theories (like the absurd “Pizzagate” claim that Democratic officials were running a child trafficking ring out of a D.C. pizzeria) that had been incubated and amplified on social media, often with foreign assistance.
After the election, Russia’s disinformation campaign didn’t stop; it evolved. It sought to fan the flames on divisive issues, supporting extremist movements and later even amplifying lies about voter fraud in 2020. They recognized that a polarized, internally divided America is weaker on the global stage.
Russia is not the only player. Other countries saw this success and have copied the playbook. China, Iran, and other state and non-state actors have run their own disinformation operations targeting the United States. Sometimes the goal is to influence an election’s outcome; other times it’s simply to confuse us, pit us against each other, and weaken our democratic institutions.
These efforts amount to a kind of digital-age warfare—one that doesn’t attack physically but seeks to hack the minds of citizens and the information ecosystem of the country. It’s a war where Facebook pages and Twitter bots are the battalions, and viral lies are the missiles.
For years, America was caught flat-footed by this new kind of warfare. Our national security and law enforcement agencies were initially slow to understand or respond to attacks that took place on servers and social networks rather than in the physical world. By the time we did wake up—U.S. intelligence officially confirmed foreign interference and social media companies admitted as much—the damage was done.
Trust was eroded; divisions widened. The January 6 insurrection, while largely driven by domestic actors, was nourished in an environment where years of both foreign and domestic disinformation had made millions of Americans susceptible to a blatant lie (that the 2020 election results were illegitimate). In essence, foreign interference primed the pump, helping to create a hyper-suspicious, conspiracy-minded portion of the electorate that domestic demagogues could then easily exploit.
We must recognize that information warfare is now a permanent feature of the geopolitical landscape. American democracy is not just being challenged by internal disagreements, but by an ongoing assault from foreign adversaries who see advantage in our discord.
Social media is their battleground of choice. Every time we log on, there’s a chance that the provocative “fellow American” arguing in our feed is actually a fake account run from overseas. This is a direct assault on our sovereignty as a people – an attempt to manipulate Americans’ opinions and votes without us even realizing it. Fighting this threat will require vigilance and likely new defenses (both technological and legal) that we have barely begun to develop.
Erosion of Trust: The Collapse of Shared Truth
As disinformation spreads and foreign interference muddles the waters, a deeper, more insidious crisis has emerged: a collapse of public trust in journalism and in the idea of shared facts. Democracy relies on a baseline of common understanding – a sense that while we might disagree on opinions or policies, we at least operate from the same set of facts about the world. That baseline has been shattered. Today, Americans often seem to live in parallel universes of information, largely shaped by which media and social feeds they follow.
Journalist Carole Cadwalladr has chronicled this collapse firsthand. “We’re watching the international order fall apart — and this is just the beginning,” she warns in a scorching, must-watch TED Talk, describing a fast-moving digital coup led by the “broligarchy”: tech titans like Elon Musk helping dismantle democracy and enable authoritarianism. But Cadwalladr doesn’t stop at diagnosis; she offers a playbook for digital defiance in an age of mass surveillance, data harvesting, and unchecked corporate power, and reminds us we have more power than we think.
In one universe, for example, the COVID-19 pandemic was a serious health crisis that required collective action and trust in science; in another universe, it was dismissed as a hoax or a scheme, and even basic health measures were seen as conspiratorial control. In one reality, the 2020 election was the most secure in history, affirmed by dozens of court cases and audits; in the other, it was stolen in a massive fraud that somehow left no real evidence, yet millions believe it to this day.
These starkly different worldviews aren’t just academic—they led to real-world consequences, from people rejecting lifesaving vaccines to angry mobs storming the Capitol. How did we get here? Technology and social media have played a pivotal role in demolishing the trusted gatekeepers of information and elevating fringe voices.
It’s true that skepticism of the media and institutions predates social media, but never has it been so easy for false “alternative facts” to gain equal footing with verified truth. On platforms like Facebook or Twitter, a fabricated story from a fringe website can appear in your feed right alongside a report from a reputable news outlet, carrying similar visual weight. If that fake story aligns with your beliefs, you have no immediate reason to doubt it—especially if you’ve already been conditioned to think mainstream media lies.
The term “fake news” was ironically popularized as a weapon by those spreading falsehoods: legitimate journalists reporting uncomfortable truths were branded as fake, while actual fake news was presented as real. This gaslighting strategy has severely damaged trust in once-respected sources of information.
We now see poll after poll showing trust in traditional media at record lows. Only a small fraction of Americans say they have a “great deal of trust” in the press. This collapse is particularly acute among certain political groups—many conservatives, for instance, have been convinced that all major news organizations are hopelessly biased or even engaged in conspiracies against them. As a result, they turn exclusively to partisan media or echo chambers online that only reinforce their perspective.
Likewise, some on the left might dismiss any information that comes from a right-leaning outlet. The fragmentation is complete: there is no referee everyone accepts, no common narrative everyone acknowledges.
This is a triumph for propagandists and a tragedy for democracy. Without shared truth, we can’t have constructive debate. It’s like we’re arguing about what to do in a burning house while one group insists the house isn’t on fire at all. A democracy where a large segment of the population fundamentally disbelieves in the fair reporting of events or the honesty of elections is a democracy on the brink.
Public trust is the glue that holds together our social contract; losing that trust means losing the “unum” in E pluribus unum – out of many, one. Instead, we risk splintering into tribes where each tribe has its own “truth” and won’t hear anything else.
Journalism itself has been under financial and political assault, further weakening our defenses. Social media siphoned away the advertising revenue that once supported local news and quality reporting, leading to newsroom layoffs and the closure of many local papers. In the void, social media and talk radio personalities (sometimes with extreme agendas) filled the gap. It is not an exaggeration to say that we are experiencing a crisis of truth. Americans are asking: Who can I trust? What is real?
When conspiracy theories and propaganda become as prevalent as fact-based news, many people just throw up their hands and assume everything is a lie or, conversely, they latch onto one false narrative as the gospel because it feels right emotionally. Both reactions are dangerous. Cynicism (“you can’t believe anything, so why even try to know the truth”) breeds disengagement from civic duties. Blind faith in a preferred narrative (“only my side is telling the truth, the other side always lies”) breeds fanaticism and tears communities apart.
We have to rebuild trust and a sense of shared reality, but technology’s current business model isn’t helping. In fact, it continues to profit from this chaotic information environment. The collapse of shared truth did not happen spontaneously; it was accelerated and monetized by digital platforms that value attention over accuracy. As we will discuss next, those companies have become powerful gatekeepers with little accountability, further complicating the quest to restore a healthy democratic discourse.
Big Tech’s Power and Profits vs. Democratic Principles
At the center of this story sits Big Tech – the handful of corporations that design and control the digital public square. Companies like Facebook (Meta), Google (which owns YouTube), Twitter (X), and others have accumulated an astonishing degree of power over how information flows in our society.
In many ways, their executives in Silicon Valley have more influence over what Americans see and believe than any editor of The New York Times or any government regulator. But unlike traditional media, which had at least some ethical standards and editorial oversight, Big Tech largely denies being a “media” entity at all. They like to claim they’re neutral platforms or just infrastructure, even as their algorithms and policies actively shape our national conversation.
The fundamental problem is that the financial incentives of these companies are misaligned with the needs of democracy. Their profits come from advertising and data harvesting, which means they profit more when we stay online longer, share more about ourselves, and engage more fervently with content.
As we’ve explored, that often translates into promoting whatever content will keep us hooked – not what informs us best. More engagement also means more personal data to collect, which in turn means more targeting opportunities to sell to advertisers (or political campaigns). It’s a self-reinforcing cycle, and it has made these companies fabulously wealthy. Facebook’s empire, for instance, has generated tens of billions of dollars in annual profit, while Google’s parent company and others rake in similarly huge sums. Democracy, unfortunately, doesn’t show up on their quarterly earnings reports.
When push comes to shove, time and again these firms have prioritized growth and engagement over safeguarding democratic principles. There have been moments of reckoning – after 2016, after the Cambridge Analytica scandal, after genocidal propaganda spread on Facebook in Myanmar, after January 6 – where public pressure forced Big Tech to promise changes.
They have taken some actions, it’s true: for example, Twitter and Facebook did remove a lot of fake accounts linked to foreign influence; Facebook hired third-party fact-checkers; YouTube said it would demote some conspiracy content; and after the Capitol riot, several platforms banned or suspended accounts that spread violent lies (including President Trump’s accounts, at least for a time). But these measures have often been too little, too late – and in many cases, temporary.
In recent years, rather than tightening safeguards, some major platforms have loosened them. For instance, YouTube in 2023 reversed its policy that had banned content falsely claiming past U.S. elections were stolen, opening the door again to election denial videos. Twitter, under new ownership as “X,” slashed its moderation teams and welcomed back a number of previously banned agitators, all under a banner of “free speech” absolutism that conveniently also lowers costs and raises engagement.
Even Meta (Facebook’s parent) scaled back some of its content moderation efforts and the promotion of trustworthy news in favor of more “entertaining” content to compete with TikTok. What we see is a backslide: when public scrutiny fades, the old habits return. After all, less moderation and more outrageous content can mean more users clicking and sharing – which means more profit.
It’s not just about content decisions either. Big Tech wields enormous lobbying power in Washington, D.C., and state capitals. These companies have spent fortunes to influence lawmakers and avoid stringent regulations. They fund think tanks, sponsor academic research, and hire former government officials, all to protect a relatively deregulated environment that has allowed them to grow without being held accountable for the societal harms their platforms are causing.
So far, this strategy has been very effective. Unlike, say, the banking industry after the financial crisis or the auto industry with safety standards, Big Tech has not been meaningfully reined in by U.S. lawmakers.
One stark example of profit-over-principle was the decision by Facebook (guided by policy executives influenced by partisan concerns) to exempt political advertisements from fact-checking. This meant politicians could pay Facebook to push out ads containing lies, and Facebook would not stop them, even if the lies were blatant.
The rationale was that the platform shouldn’t censor politicians – but the effect was to give a green light for disinformation campaigns if you had the money to buy ads. That’s a perversion of the democratic advertising model, and it shows how money and tech intersect to harm truth. If a false ad ran on TV or in a newspaper, it would likely face public scrutiny and potential removal; on social media, it could fly under the radar, microtargeted to those most likely to believe it, and protected by company policy.
In short, Big Tech has become a power center unto itself, often acting above the norms and restraints that other democratic institutions follow. They have connected the world in miraculous ways, yes, but they have also allowed their platforms to be used as tools of manipulation and hate, largely shrugging off responsibility by citing free speech or technical neutrality.
Let’s be clear: a private company’s algorithm is not neutral when it selectively amplifies certain content. These companies have effectively written a new rulebook for mass communication but have refused to fully acknowledge that with great power comes great responsibility. Until they align their operations with democratic values – or are compelled to by law – we will continue to see their immense power and profit motives undermining the very system that allowed them to thrive.
Political Exploitation and Regulatory Failure
One might ask: where are our leaders and lawmakers in all of this? Shouldn’t the government be protecting the public and the democratic process from these kinds of manipulations and threats?
In an ideal world, yes. In practice, political actors themselves have often exploited this Wild West environment for their own gain, and efforts at meaningful regulation have faltered in the face of partisanship and Big Tech’s influence.
Politicians across the spectrum have learned to use social media to their advantage, sometimes in positive ways (mobilizing supporters, engaging voters) but often in deeply troubling ones. Consider how readily some candidates and officeholders spread misinformation online.
Falsehoods that would once be relegated to fringe pamphlets can now come straight from a lawmaker’s Twitter account or campaign Facebook page, instantly reaching millions, and often with no one able to effectively hold them to account. When a lie is called out, the damage is usually already done, and the lie’s originator can retreat into claiming they’re being “censored” or “attacked by the media.”
I’ve seen elected officials share doctored videos, push baseless conspiracy theories, and even openly coordinate with extremist online communities to rally their base with disinformation. In short, some of the foxes are not just guarding the henhouse – they’re actively raiding it, using the chaos of the digital information sphere to seize or maintain power.
This dynamic has made passing regulations extremely challenging. The issue of social media and disinformation has become polarized like everything else. One party often emphasizes the threat of lies and foreign interference (pointing to things like COVID misinformation or election lies that have come predominantly from the far-right), calling for more moderation and oversight. The other party often emphasizes instances where they feel social media companies have unfairly removed or downplayed content from conservatives, framing the issue as one of censorship and bias by “Big Tech liberals.”
The result is a stalemate: one side calls for action, the other cries foul that any action will just be used against them. Meanwhile, the tech companies play both sides – promising to police content better but also quietly encouraging the narrative that they shouldn’t be “speech police” at all.
Legislatively, there have been attempts to address pieces of the puzzle. Some lawmakers have proposed stronger privacy laws (to limit data harvesting and microtargeting), others have looked at reforming Section 230 of the Communications Decency Act (which gives platforms immunity from liability for user content), and still others have floated regulations specifically around political advertising transparency or even bans on known disinformation. But up to now, Congress has not passed comprehensive legislation to tackle these issues.
Compare this with Europe, where laws like the GDPR (data protection) and new rules in the EU’s Digital Services Act are holding tech companies to higher standards of privacy and content oversight. The United States has moved much more slowly. Part of it is ideological – Americans value free speech highly and worry (rightly) about government overreach. But part of it is also that powerful interests don’t want new rules. The tech industry’s lobbying might, combined with partisan gridlock, has so far preserved the status quo that allows so much digital manipulation to continue unabated.
It doesn’t help that some of the very individuals who would need to regulate disinformation have benefited from it. If disinformation or extreme online movements helped boost certain candidates into office, those candidates are less likely to crack down on the tactics that aided them.
It becomes a vicious cycle: disinformation helps elect people who then block reforms to stop disinformation. Moreover, the complexities of social media and AI-driven content make it easy for legislators who oppose regulation to claim that nothing can be done without infringing on freedoms. They can throw up their hands and say “you can’t regulate the internet” – which isn’t true (smart regulations can target abuses while safeguarding free expression), but it serves to justify inaction.
There have been some narrow moves at the state level or in specific areas. For example, a few states have looked at laws against deepfakes in election contexts (requiring disclosures if an image or video is artificially generated). But these are piecemeal and sometimes run into First Amendment challenges.
Meanwhile, official efforts to counter foreign disinformation – like government task forces working with social media companies – have themselves become politicized. In one recent instance, a court case has restricted how government agencies can even talk to social media firms about moderating content, under claims of censorship. We are essentially tying one hand behind our back in this fight.
The big picture is grim: the regulatory framework has not caught up to the digital reality of politics. We’re fighting 21st-century information wars with 20th-century (or even 18th-century) tools and laws. Political actors who see short-term advantage in the chaos are blocking collective action that would protect our long-term democratic health.
As citizens, this should alarm us. The people we elected to represent our interests are often a step behind the propagandists and sometimes complicit (actively or passively) in allowing the manipulation to continue. This isn’t about left vs right at the end of the day – it’s about truth vs falsehood, and the public’s interest vs a few actors’ interests.
We desperately need leaders with the courage and foresight to strengthen our democratic defenses, even if it might inconvenience their own campaigns or powerful donors. Without that, we’re essentially inviting a repeat or escalation of the turmoil we’ve seen – especially with even more potent tools on the horizon, as we discuss next.
The New Propaganda Frontier: Deepfakes and AI-Generated Lies
Just as we’re grappling with the fallout from one wave of technology, another wave is arriving that could make the challenge even greater. Generative artificial intelligence (AI) – the kind that can create human-like text, convincing images, audio, and videos (so-called “deepfakes”) – is becoming more advanced and accessible. These tools hold great promise in many fields, but in the wrong hands they are a propagandist’s dream and a democracy’s nightmare.
Imagine an AI system that can churn out thousands of fake news articles, social media posts, or even entire websites full of content promoting a political narrative – all at the touch of a button. We’re basically already there.
AI language models can produce text that reads as if a real person wrote it. This means a single operative could deploy an army of bot accounts on Twitter or Facebook, all spouting persuasive messages tailored to different audiences, and it would be harder than ever to tell that it’s not genuine grassroots opinion.
In past years, even when we saw disinformation campaigns, there were often tell-tale signs of inauthenticity (broken English from a foreign troll, or a dozen accounts posting the exact same phrasing). AI can mask those flaws, making the fakes smoother and more believable.
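To see why that matters, consider the kind of naive screening that once exposed troll networks: flagging clusters of accounts posting near-identical text. Here is a minimal sketch (account names and posts invented) using only Python’s standard library; a language model that paraphrases each message differently sails right past a check like this.

```python
# A minimal sketch of naive duplicate-phrasing screening. Account names and
# posts are invented; AI-generated paraphrases defeat exactly this check.

from difflib import SequenceMatcher
from itertools import combinations

posts = {
    "patriot_eagle_76": "The election was rigged and everyone knows it!",
    "real_susan_ohio": "The election was rigged and everyone knows it!!",
    "midwest_mom_22": "Honestly, the ballot lines in my county were long.",
}

def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two posts, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag pairs of accounts posting near-identical text.
for (user1, text1), (user2, text2) in combinations(posts.items(), 2):
    score = similarity(text1, text2)
    if score > 0.9:
        print(f"possible coordination: {user1} / {user2} (similarity {score:.2f})")
```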
Even more alarming is the rise of deepfake videos and audio. We now have the technology to create a video of a real person – say, a candidate for office – appearing to say or do something they never did. The video could look nearly authentic to an untrained eye.
Think about the havoc a single convincing deepfake could wreak: a fake video of a candidate admitting to taking bribes, or a fake audio clip of an official plotting illegal acts, released right before Election Day, could throw the outcome into chaos. By the time experts prove it’s fake (if they even can quickly), the damage is done. People have shared it millions of times, the conspiracy-minded refuse to believe the debunking, and even rational people are left with a seed of doubt (“What if it was true? Where there’s smoke, is there fire?”).
We got a small taste of this in early 2024, when a deepfake audio clip of President Joe Biden’s voice was circulated, telling voters in a primary state to “stay home” – essentially voter suppression by impersonation. It turned out to be a “stunt” by a political consultant to highlight the issue, and it led to legal consequences for him. But what if it hadn’t been caught? It showed that the barrier to creating such fakes is low enough that someone actually tried it in a real election scenario. And that’s just one example.
AI-generated images have also flooded social media – some are obvious jokes, but others can fool people. In international elections in 2024, we saw AI-made images and videos being used to attack candidates (for example, visuals of incidents that never happened). While in 2024 the worst fears of a deepfake catastrophe didn’t fully materialize, experts caution that this is likely “the calm before the storm.” The technology is improving rapidly, and as awareness grows, so does the sophistication of efforts to deploy it maliciously.
Generative AI can also supercharge microtargeting. Picture an AI that not only knows your demographics and interests from data mining, but can also generate a custom political message just for you. Perhaps it composes a fake personal letter that sounds like it’s from a neighbor or a fellow churchgoer, sharing a heartfelt (but false) story about a candidate – knowing that you, specifically, are likely to be moved by it. This could happen on a mass scale, with different false stories sent to different people, all AI-generated and all fake, yet crafted to resonate deeply. That level of manipulation, targeted to the individual psyche, is unprecedented.
Furthermore, the mere existence of deepfakes has a troubling side effect: it allows real politicians caught in real scandals on video to claim “that video is a deepfake” and sow doubt about authentic evidence. This is sometimes called the “liar’s dividend” – when any inconvenient truth can be dismissed as fake, the liars benefit.
We saw a hint of this dynamic when some people, primed by disinformation, refused to believe clear video evidence of events (like some Jan 6 rioters claiming videos were staged). With deepfakes, that denialism becomes easier to spread. If nothing is reliable, everything is permissible – that’s the cynic’s conclusion, and it’s poison for accountability and truth.
The upcoming 2026 and 2028 elections will almost certainly see more AI involvement. On the positive side, campaigns might use AI for benign purposes (like quickly responding to constituents or creating harmless content). But on the negative side, we have to brace for waves of AI-created lies and counterfeit media. We need detection tools, legal guardrails, and public awareness to catch up – and right now, they aren’t fully there.
This new frontier of propaganda could amplify all the existing issues we’ve discussed: more data misuse (AI scraping the internet for personal info), more algorithmic spread (fake content created faster than it can be removed), more psychological tricks (AI crafting messages that hit emotional hot buttons), and even more erosion of trust (as people doubt everything or believe the wrong things). It’s a daunting challenge that we must meet head on if we have any hope of keeping our elections fair and grounded in reality.
A Global Playbook: Exporting Tactics and Importing Chaos
It’s important to understand that what we’re experiencing in America is part of a global trend, a feedback loop of tactics and effects that bounce from one country to another. In the age of social media, no democracy is an island. Techniques developed to manipulate one populace quickly get exported to others. In turn, the success or failure of these tactics abroad can influence what happens back here in the United States.
Take the example of Cambridge Analytica again. That firm didn’t only work on Brexit and the Trump campaign; it boasted of involvement in political campaigns across several continents, from Africa to Asia to Latin America. They took the microtargeting playbook wherever they could sell it. While that particular firm infamously collapsed, the approach is now standard in political consulting globally.
Likewise, social media propaganda wasn’t invented solely for U.S. elections – Russia had tested disinformation on its neighbors and across Eastern Europe long before trying it on Americans. What they learned elsewhere, they applied in 2016. And seeing what Russia pulled off, other regimes and political factions around the world said, essentially, “we’ll have what they’re having.”
In authoritarian countries or those sliding into authoritarianism, we’ve seen social media manipulation used to quash dissent and manufacture consent. The Philippines offers a cautionary tale: under President Rodrigo Duterte, armies of paid online trolls attacked his critics and spread misinformation to the public, turning social media into a fearsome tool of intimidation and narrative control. That atmosphere helped a strongman maintain power, and it later smoothed the path for the Marcos family’s return to power via a campaign that relied heavily on rewriting history through YouTube and Facebook propaganda. Those same techniques – trolling, historical denialism, online personality-cult building – can be found in other places and could easily be adopted by demagogues in the U.S. given the chance.
India is another hotspot: rumor campaigns on WhatsApp (a messaging service) have incited mob violence, and political operatives use WhatsApp and Facebook to spread divisive religious and nationalist propaganda ahead of elections. The ruling party there has been accused of leveraging these methods to sideline truth and energize their base with fear-mongering content about minorities or Pakistan.
If that sounds familiar, it’s because it rhymes with strategies used in Western contexts too. The medium might vary (WhatsApp is huge in India, Facebook in Myanmar, Twitter and Facebook here), but the strategy is essentially the same global playbook: identify societal fault lines and exploit them with relentless misinformation and emotional appeals delivered through social media.
The global export of these tactics also means American lies and conspiracy theories don’t stay in America. The QAnon conspiracy theory, for instance, started in the U.S. online fringe, but then versions of it spread to Europe and elsewhere, piggybacking on the pandemic and existing local narratives.
On the flip side, narratives crafted abroad can find fertile ground among certain American audiences. For example, disinformation about vaccines that started in Russian propaganda channels found its way into American Facebook groups, merging with homegrown conspiracy communities and fueling our own anti-vax movements. It’s a two-way street: our domestic extremists learn from foreign actors and vice versa.
Why does this matter for American democracy’s future? Because the tactics evolve and strengthen through this cross-border exchange. Think of it like a virus that mutates as it passes through different populations: each outbreak teaches the architects of disinformation something new, and they refine their methods. A technique that works to enrage voters in, say, Poland or Brazil might be noticed by an American political strategist and imported into the next U.S. election cycle.
In Brazil, we saw mass disinformation on YouTube and WhatsApp inflame tensions to the point of a mob storming government buildings after a contentious election (a chilling parallel to January 6). Brazilians had been inundated with false claims of election fraud and wild conspiracy theories – a script that felt very familiar to Americans. Those events reinforced that this is a global crisis of democracy, where nations are, in a sense, experiencing variations of the same illness. And just as with a pandemic, if it’s raging anywhere, it’s a threat everywhere.
Moreover, authoritarian powers globally have a shared interest in this trend: it weakens the model of liberal democracy that stands as a challenge to them. When misinformation destabilizes democracies, autocrats point and say, “See? Free societies are chaotic and dysfunctional.”
That narrative then justifies their own tight control over information (they claim they are preventing chaos by censoring, etc.), and it discredits democracy as an ideal in the eyes of some of their citizens and even people elsewhere. If we allow our democracy to be corroded by these tactics, we’re not just harming ourselves; we might inadvertently be validating the propaganda of dictators who argue that truth and freedom can’t be handled by the public.
In this interconnected reality, defending American democracy means engaging in a global effort to set norms and defenses against digital warfare. We should be sharing knowledge with other democracies on how to counter bot networks, how to educate citizens to spot fakes, how to pressure social media companies to act responsibly worldwide. Because if the worst practices continue to circulate internationally, they will come home to roost, again and again. The health of democracy anywhere affects democracy everywhere in the 21st century.
Exploiting Society’s Weaknesses: How Democracy’s Defenses Were Breached
All these technological and tactical elements – data misuse, algorithms, psychology hacking, foreign meddling, etc. – would not be as effective if our society were not already laden with certain sociopolitical and psychological vulnerabilities. It’s as if our democracy had pre-existing conditions, and the virus of digital manipulation took full advantage. To fully understand how technology undermined American democracy, we must also examine what in our society made us so susceptible.
Firstly, the United States has been experiencing deep political polarization for years. We are divided into camps often defined by intense distrust and even dislike of the other side. This polarization provided a perfect target. If Americans had been more united or even just more willing to engage civilly across differences, disinformation would have a harder time taking root.
But propagandists played on polarization like a fiddle: they fed each side content that confirmed their worst suspicions about the other. For conservatives, the feeds were filled with exaggerated or false stories about liberal “plots” and vice versa. In this environment, even outrageous claims found ready audiences primed to accept them because they meshed with the narrative of “the other side is evil.”
Social grievances and prejudices were also cynically exploited. Russia’s troll farm, for example, did not randomly choose content – they specifically targeted racial tensions by creating fake Black activist groups and also fake “Blue Lives Matter” and white nationalist-type groups, hoping to inflame both and heighten the conflict. They understood America’s racial history and knew exactly where wounds were still raw.
Similarly, domestic demagogues have exploited economic anxieties and cultural fears. For instance, as industries change and some communities feel left behind, certain politicians or online agitators feed those hurting people a steady diet of scapegoats to blame – immigrants, ethnic minorities, global elites – using social media to crystallize that anger into radical political action. When people feel ignored or disrespected, they are more vulnerable to messages that validate their pain and direct it at a target. Social media allowed those messages to proliferate widely, often unchecked.
Psychologically, humans yearn for simple explanations, and the modern world is complex and often unsettling. Conspiracy theories and extremist ideologies provide exactly that: a simplistic, if false, explanation (“It’s all a secret plot by X group”). They also provide a sense of community – suddenly you’re “in the know” with others who see the “truth.” Digital platforms connected and nurtured these communities like never before.
A person who might have felt alone in their doubts about official narratives could go online and instantly find thousands of others echoing their thoughts, reinforcing them. It’s empowering in a way, but it can lead them down very dark paths. Those psychological hooks – belonging, identity, clarity in a confusing world – were exploited by those who spread QAnon conspiracies or election lies. They offered people an identity (“digital soldier” was a term QAnon pushed) and an easy blame for their problems. And too many good, decent folks got sucked in, because the propaganda spoke to their fears or resentments.
Our educational and media literacy shortcomings also became apparent. Civics education has been declining; many Americans have a limited understanding of how our government works or the basics of evaluating sources of information. This created fertile ground for manipulation.
If you don’t know, for example, how the vote counting process works, you might be easily misled by a viral video that claims normal counting delays are evidence of fraud. If you’re not taught to spot bias or check multiple sources, you might take that sensational Facebook post at face value.
Moreover, older generations – who didn’t grow up with the internet’s tricks – found themselves especially vulnerable to fake news on social media, forwarding it in chain emails or Facebook shares without realizing it was bogus. Younger people, on the other hand, face a constant onslaught of content and often develop a blanket skepticism toward all information (which can curdle into nihilism or apathy if not addressed).
Another vulnerability is the decline of local communities and local news. Democracy is strongest when rooted in community – when people of different views still interact at the PTA meeting, or read the same town newspaper, sharing local facts. But local newspapers have closed in droves, and community life has shifted online (or fractured entirely). In that void, national partisan narratives often swoop in to fill the identity gap.
People identify more with being part of an online ideological tribe than as neighbors in a town. That makes them more likely to believe wild claims about those who aren’t in their tribe and to support extreme measures “against the other side,” because the personal connection and empathy have eroded.
In summary, American society had some cracks: polarization, social inequalities, cultural conflicts, educational gaps, weakening community bonds. Tech and social media did not create those cracks, but they expertly exploited them and pried them wider. The forces undermining democracy found where we were divided or confused or hurting, and they poured their efforts into those areas, making the divides deeper, the confusion more profound, and the pain more politically volatile.
If we don’t address these underlying vulnerabilities, treating just the tech side might not be enough. We have to heal ourselves even as we fix our information systems. Otherwise, we’ll always be at risk of the next manipulator who finds a new way to twist the knife in our open wounds.
Conclusion: Defending Democracy in the Digital Age
American democracy stands at a crossroads. We have seen how technology and social media, when misused, can act as a wrecking ball against the pillars of truth, trust, and unity that uphold our republic. From 2016’s wake-up call to the harrowing lessons of January 6, and as we look toward the challenges looming in 2026 and 2028, one thing is clear: if we do nothing, the problems will worsen.
The same forces that brought us to this point – the microtargeting of our data, the viral spread of lies, the psychological manipulation, foreign interference, erosion of trusted institutions, unaccountable Big Tech power, complicit political actors, and the emergence of AI-driven propaganda – will continue to evolve and potentially grow stronger. The very nature of truth and consensus in America could fracture beyond repair, leaving future generations in a nation where up is down, where cynical disengagement or blind partisan allegiance replace informed citizenship. That is not a future we should accept.
But we are not helpless. Just as a lawyer builds a case to seek justice, we as citizens can build a case for saving our democracy and act on it. Reflection and action must go hand in hand.
First, we need to collectively reflect on how precious and fragile our democratic institutions are. We cannot take for granted that America will always “muddle through” no matter how much misinformation poisons the well.
The Framers of our Constitution envisioned an informed electorate, enlightened debate, and leaders accountable to facts and reason. Those ideals have been challenged before in our history, but the digital onslaught is a new kind of test. Recognizing the gravity of this moment is itself a crucial step – it means we treat disinformation and tech manipulation not as minor nuisances but as direct threats to government of, by, and for the people.
From that understanding flows action. What can we do? As everyday Americans, we have more power than we might think:
- Demand Accountability and Reform: Use your voice in the civic arena to push for change. Support lawmakers (regardless of party) who acknowledge these problems and propose serious solutions – whether it’s stronger privacy protections, transparency requirements for algorithms and ads, or thoughtful regulation of AI in elections. Let your representatives know that democracy’s integrity is a top priority for you.
Big Tech should no longer get a free pass to profit off lies and division. We should insist on rules that protect our data, punish the most egregious spread of falsehoods, and hold platforms to higher standards when it comes to public safety and truth. If the U.S. leads, it can set a global example.
- Reinforce Our Information Immune System: This means improving education and awareness. Encourage schools to teach media literacy and critical thinking from a young age – our children must learn how to navigate the digital world skeptically and smartly.
In our own lives, we should verify sensational claims before sharing, seek out reliable news sources (and subscribe to them if possible, to support quality journalism), and gently correct misinformation when we see friends or family falling for it. It’s not about partisan arguing; it’s about calmly pointing to facts. Over time, if each of us helps pop one or two misinformation bubbles in our circles, it can make a difference.
- Build Bridges, Not Just Clicks: One antidote to polarization is personal connection. Make an effort to step outside your echo chamber – online and offline. Follow a variety of voices, including some that challenge your perspective (as long as they are respectful and fact-based). More importantly, rekindle community bonds.
Talk to neighbors, engage in local issues, join civic groups. It’s harder for malignant forces to tear us apart if we have strong, real-world relationships with people of different views. When we see each other as fellow Americans rather than faceless avatars or stereotypes, the disinformation that tries to dehumanize “the other side” loses power.
- Support Ethical Technology and Innovation: Technology got us into this mess, and technology can help get us out – but only if guided by human values. Advocate for (or choose to use) platforms that prioritize user well-being and truth over engagement-at-any-cost.
There are alternative social networks emerging with different models (some open-source, some subscription-based) that don’t rely solely on ads and algorithms that exploit us; for these reasons, I’m a big fan of the new Bluesky platform. Encourage the development of tools that can detect deepfakes and flag AI-generated content.
Imagine if the next big tech craze was not a platform that divides us, but one that helps verify information or facilitates constructive debate – that’s possible, but demand matters. If millions of us called for better tech norms, companies would listen or new ones would spring up to meet the need.
- Keep the Faith and Stay Engaged: It’s easy to feel overwhelmed or cynical. That is exactly what the peddlers of chaos want – a demoralized, confused public that either believes their lies or shrugs and stops caring. We must not give them that victory.
Instead, reaffirm your commitment to the democratic process. Vote in every election, because a vote is a counterweight to manipulation. When you vote based on careful consideration rather than fear-baiting memes, democracy wins. Encourage others to vote and participate, especially younger folks inheriting this digital landscape. We need their energy and insight in this fight.
As a society, we also should consider deeper questions: How do we reconcile free speech with the need for truthful discourse? How can we update our laws without endangering the First Amendment? These are hard questions, but not impossible ones.
The Constitution is not a suicide pact; protecting the marketplace of ideas from being willfully flooded with falsehoods is essential to truly free speech, the kind that enlightens and informs. Solutions might include targeted interventions: for example, requiring disclosure when content is generated by a bot or AI, so people know when they’re not dealing with a real human, or mandating that social media companies let users switch to a plain chronological feed, so we are not beholden to engagement-driven ranking.
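For the technically curious, that second ask is more modest than it sounds. Here is a minimal sketch in Python of what a user-controlled feed toggle amounts to; the Post fields and the engagement score are hypothetical stand-ins for illustration, not any platform’s actual code:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    engagement_score: float  # platform's predicted "stickiness" (hypothetical)

def ranked_feed(posts: list[Post]) -> list[Post]:
    """Engagement-driven ranking: shows whatever is likeliest to keep you scrolling."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Reverse-chronological feed: newest first, no curation at all."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def build_feed(posts: list[Post], user_wants_chronological: bool) -> list[Post]:
    # The proposed mandate boils down to this one flag being controlled
    # by the user instead of the platform.
    if user_wants_chronological:
        return chronological_feed(posts)
    return ranked_feed(posts)
```

The point is that the chronological option already lives inside every platform’s systems in some form; the mandate would simply put that switch in the user’s hands rather than the company’s.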
There’s also talk of “circuit breakers” on virality: if a piece of content suddenly explodes in shares, systems could automatically pause its spread for fact-checking before allowing further amplification. These are the kinds of innovative ideas that can be explored, if we insist that our leaders and tech CEOs treat disinformation as the crisis it is.
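For readers who want to see the mechanics, here is a minimal Python sketch of such a circuit breaker, borrowing its logic from the trading halts used on stock exchanges; the ten-minute window and share threshold are arbitrary placeholder numbers, not a recommendation:

```python
import time
from collections import deque

class ViralityCircuitBreaker:
    """Pause further amplification of a post when its share rate spikes,
    until a fact-check review clears it (a hypothetical policy sketch)."""

    def __init__(self, window_seconds: int = 600, max_shares_in_window: int = 5000):
        self.window_seconds = window_seconds              # sliding look-back window
        self.max_shares_in_window = max_shares_in_window  # spike threshold
        self.share_times: deque = deque()
        self.paused = False

    def record_share(self) -> bool:
        """Log one share; returns True if further spread is still allowed."""
        now = time.time()
        self.share_times.append(now)
        # Discard shares that have aged out of the sliding window.
        while self.share_times and self.share_times[0] < now - self.window_seconds:
            self.share_times.popleft()
        if len(self.share_times) > self.max_shares_in_window:
            self.paused = True  # halt amplification pending human fact-check
        return not self.paused

    def clear_after_review(self) -> None:
        """A completed fact-check lifts the pause and resets the counter."""
        self.paused = False
        self.share_times.clear()
```

Nothing in this sketch censors the post itself; it only slows the megaphone while humans check the claim. Whether ten minutes and five thousand shares are the right numbers is exactly the kind of question regulators and engineers should debate together.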
In this fight, we should remember what we are defending. It’s not just an abstract idea of democracy; it’s the very real future of our communities and children. We want a country where debate is vigorous but grounded in reality, where elections are hard-fought but broadly accepted as fair, where technology serves the people instead of secretly manipulating them. We want a future generation that can tell truth from falsehood, that can disagree with each other without viewing fellow citizens as enemies, and that can harness digital tools for creativity and connection rather than hate and deception.
The challenges are immense, but America has faced down grave threats before. Each time, it required clear-eyed acknowledgment of the danger and a united effort to overcome it. This moment is no different.
The tools and battlefields have changed – they are Facebook pages, Twitter threads, TikTok videos, AI-generated avatars – but the essence of the struggle is the same: will we, the people, control our own destiny, or will we be controlled by forces that profit from our division and ignorance? American democracy has always derived its strength from an informed, engaged citizenry. That is our heritage and our best hope.
In closing, let’s channel our urgency into determination. Let’s take the anger or fear we might feel about these threats and turn it into purpose. We each have a role to play in fortifying our nation’s democratic ideals against this onslaught.
The message is as clear and bold as can be: We must reclaim our digital public square for truth and accountability, and we must do it together, now. If we succeed, future generations of Americans will inherit a democracy that is not only intact but reinvigorated – one that harnessed technology for good while rejecting its misuse for destruction. They will look back on these years as a trial we endured and overcame, preserving the torch of liberty and truth for them to carry forward.
Mitch Jackson, Esq.
The prevailing system of governance is outdated and out of line with current cultural priorities (arguably broken), and that systemic breakdown is being exploited by election politics. You point out the complicit cooperation of Big Tech in exchange for financial gain. However, you don’t include capitalism in your conversation about democracy. Capitalism is as vital an institution as democracy, and in America the two are symbiotic. I contend the solution will involve addressing both. In fact, I contend the cornerstone of America’s multifaceted sociopolitical divides is the wealth divide.

The US economy is the source of America’s greatness as a humanitarian democracy. US GDP comes primarily (70%) from consumer spending. Consumers, or shall we say Citizen-Consumers, are the greatest asset of the US economy and, in turn, of America’s greatness as a humanitarian democracy. Citizen-Consumers are the fuel that drives those Big Tech firms, and every other large corporation for that matter.

The US public financial market is a national asset that depends on the purchasing of Citizen-Consumers: rent, mortgage, car payments, groceries, phone bills, Netflix. The market capitalization of all those publicly traded companies depends on Citizen-Consumers, BUT Citizen-Consumers receive no benefit from the value of the publicly traded stock that depends on their purchasing loyalty. If all citizens were shareholders, the political and Big Tech bureaucracy would be a different game. Don’t forget the role of capitalism when you formulate solutions:
https://open.substack.com/pub/findingwe/p/a-novel-simple-and-compelling-economic?r=3ji8n5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
Posted on LinkedIn, and later on Bluesky. They’re the only two social media platforms I use, the latter not as much. Restacked here. Are you in contact with the fair Ms Cadwalladr?