
AI Misuse: The Real-World Harms Unfolding Today
Artificial intelligence (AI) promised a new era of efficiency, insight, and innovation. From automating repetitive tasks to analyzing complex datasets, AI has transformed industries at an astonishing pace.
But alongside its benefits, human misuse of AI has already caused significant harm: hurting individuals, undermining social trust, and creating ethical crises across sectors. These problems are not hypothetical; they are unfolding now in healthcare, employment, digital media, public safety, and personal privacy. AI is a tool, much like a gun: the weapon does not act on its own; people decide how it is used.
1. Deepfakes and Digital Harm: Personal and Public Misinformation
One of the most visible and troubling problems from AI misuse is the explosion of deepfakes—synthetic audio, video, and images that convincingly portray real people saying or doing things they never did. These tools were originally developed for creative and research purposes, but have rapidly become instruments of deception and harassment.
Deepfakes are being used to target individuals with fabricated explicit content that inflicts emotional trauma and reputational damage. In one reported case, a middle school girl was subjected to AI-generated nude images shared by classmates, causing intense distress and even disciplinary consequences at school despite her being a victim (AP News).
AI-generated impersonations are not limited to minors. Health professionals have found their likenesses used in deepfake videos promoting dubious medical products and misinformation on social platforms—leading to confusion and mistrust about legitimate healthcare advice (The Guardian).
Beyond individual harm, deepfakes have been employed in political and financial manipulation. Fake videos of politicians and fabricated incident footage have briefly influenced markets, distorted public opinion, and fueled conspiracy theories (Wikipedia).
The consequence is a fragmented information environment in which digital content cannot be trusted at face value.
2. Bias and Discrimination in Decision-Making Systems
AI systems are only as unbiased as the data they learn from—and much of that data reflects historical social inequities. When developers deploy AI without sufficient safeguards, the results can systematically disadvantage entire groups.
Hiring algorithms, for example, have discriminated on the basis of age, gender, or background. One corporate AI tool rejected older applicants at disproportionate rates, resulting in legal action and settlements (Techopedia).
Likewise, Amazon’s early AI recruiting tool learned to prefer male candidates because it was trained primarily on male-dominated hiring histories (Techopedia).
In health systems, algorithms designed to prioritize care have deprioritized sicker patients from minority communities by using historical spending as a proxy for health needs—leading to unequal care (Medium).
These examples reveal the broader issue: AI can entrench and amplify existing social biases when left unchecked. This isn’t just a technical shortcoming—it has real effects on people’s livelihoods, access to services, and dignity.
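The mechanism behind this kind of bias is easy to demonstrate. The toy sketch below uses entirely synthetic data and a deliberately naive "model" (not any real hiring system) to show how learning from a biased decision history simply reproduces that bias when scoring new, equally qualified candidates.

```python
from collections import defaultdict

# Synthetic historical decisions: group "A" was hired far more often
# than group "B". Each record is (group, hired?).
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

def train(records):
    """Learn the historical hire rate for each group."""
    hires, totals = defaultdict(int), defaultdict(int)
    for group, hired in records:
        hires[group] += hired
        totals[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

model = train(history)

# Two equally qualified candidates get very different scores, because
# the model has memorized past inequity rather than merit.
print(model["A"])  # 0.8
print(model["B"])  # 0.2
```

Real systems are far more complex, but the failure mode is the same: if the training data encodes a historical disparity, a model optimized to fit that data will carry the disparity forward.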
3. Fraud, Scams, and Financial Abuse
AI is being harnessed by criminals to enrich themselves at the expense of unsuspecting victims. Voice-cloning and other generative techniques have been used to impersonate executives and authority figures, duping employees into wiring large sums of money to fraudulent accounts.
For example, one UK deepfake voice scam netted over €220,000, and in a separate incident, AI-driven impersonation enabled a multimillion-dollar bank heist (humanity.org).
These frauds exploit the illusion of authenticity that AI can generate. When a message sounds exactly like a trusted CEO’s voice or looks like a legitimate official communication, even experienced professionals can be deceived.
Such schemes illustrate a broader trend: AI not only automates labor but also automates deception—making scams more believable and harder to detect.
4. Mental Health and Vulnerable Populations
AI misuse can have tragic human costs beyond economic loss. Instances of poorly designed chatbots interacting with vulnerable users have been implicated in encouraging self-harm. In one widely reported episode, a young man in distress received harmful suggestions from an AI companion app, which may have contributed to his suicide (humanity.org).
Properly designed and overseen, supportive AI can help with wellness and engagement; without safeguards, however, such systems can misinterpret emotional nuance and inadvertently harm users in crisis.
5. Privacy Invasion and Surveillance
AI systems often require vast amounts of personal data to function. Misuse can occur when organizations or individuals use this data without consent or deploy AI for intrusive monitoring.
For example, wearable AI and smart glasses, combined with facial recognition, have been shown to identify people and retrieve personal information without their consent (Kaira Software).
Beyond device misuse, governments and corporations increasingly rely on AI-driven surveillance, raising concerns about civil liberties, freedom of movement, and data exploitation (LinkedIn).
The result is a privacy landscape under threat, where individuals lose control over basic information about themselves.
Why These Misuses Happen
Underlying all these harms is one common factor: human choice and incentive misalignment. AI itself is a tool—a set of algorithms and data. What determines whether its impact is positive or negative is how humans design, deploy, regulate, and govern it.
Key reasons misuse emerges include:
Lack of oversight and regulation: Rapid AI deployment often outpaces rule-making. Systems are rolled out before safeguards are robustly in place.
Profit incentives over ethics: Companies may prioritize innovation and revenue over fairness, safety, and accountability.
Unintended consequences of optimization: AI systems that maximize metric performance can inadvertently harm people if the metric doesn’t align with human values.
Public illiteracy about AI risks: Many users and decision-makers lack the expertise to identify or mitigate AI-driven harms.
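The "unintended consequences of optimization" point can be made concrete with a small sketch. Using synthetic data (hypothetical patients, invented numbers), it shows how ranking by a proxy metric such as past spending, rather than actual health need, quietly deprioritizes a sick patient who historically had less access to care:

```python
# Toy illustration of proxy misalignment. All patients and figures are
# synthetic; "past_spending" stands in for the proxy a real system might use.
patients = [
    {"name": "P1", "true_need": 9, "past_spending": 9000},
    {"name": "P2", "true_need": 9, "past_spending": 3000},  # equally sick, less past access to care
    {"name": "P3", "true_need": 4, "past_spending": 5000},
]

# Ranking by the proxy metric pushes P2 below a healthier patient.
by_proxy = sorted(patients, key=lambda p: p["past_spending"], reverse=True)

# Ranking by actual need keeps the two sickest patients on top.
by_need = sorted(patients, key=lambda p: p["true_need"], reverse=True)

print([p["name"] for p in by_proxy])  # ['P1', 'P3', 'P2'] -- P2 is deprioritized
print([p["name"] for p in by_need])   # ['P1', 'P2', 'P3']
```

The system is doing exactly what it was optimized to do; the harm comes from the gap between the metric and the human value it was meant to represent.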
Conclusion
AI has already touched nearly every aspect of modern life, and its potential for good is undeniable. Still, the historical record shows that misuse—deliberate or inadvertent—can cause profound harm. From distorted truth and deepfakes to biased decision-making, fraud, and privacy violations, the problems are real and urgent.
Addressing them will require not just better technology, but ethical frameworks, thoughtful regulation, human oversight, and informed public engagement. Only then can society harness AI to promote human dignity and collective well-being, rather than undermine it.