What is social engineering?

When attackers target human behavior to compromise cybersecurity, the tactic is known as social engineering.

Tyler Moffitt

August 25, 2025

When most people think about cyberattacks, they picture malicious code or technical exploits. But more often than not, attackers bypass firewalls and endpoint protection by going after something far more vulnerable—human behavior. This tactic is known as social engineering.

Social engineering is the art of manipulating people into revealing sensitive information or performing actions that compromise security. It is one of the most effective and widely used tools in the cybercriminal playbook because it targets the person behind the screen rather than the system itself.

Despite investments in technical defenses and decades of awareness campaigns, social engineering remains at the center of most successful breaches. Whether it is an email from a fake CEO asking for a wire transfer or a phone call from someone pretending to be IT, these attacks work because they exploit trust, urgency, and fear.

This article breaks down what social engineering is, how it works, the most common types, and why it continues to be one of the most dangerous threats to organizations of all sizes.

Social engineering explained

Social engineering is a tactic used by attackers to trick individuals into divulging confidential information, granting access, or taking actions that benefit the attacker. Rather than hacking a device or breaking into a network, social engineers focus on manipulating people.

At its core, social engineering relies on psychological manipulation. Attackers may pretend to be someone familiar, create a sense of urgency, or use language that taps into fear or helpfulness. Their goal is to convince someone to bypass security protocols or provide access that would otherwise be blocked.

Social engineering is not a specific attack method. It is a technique that can be applied across many channels and threat types. For example, phishing emails are a form of social engineering. So are fake tech support calls, malicious QR codes, and impersonated vendors.

This makes social engineering highly adaptable and hard to detect with technical tools alone. The attack is often delivered through seemingly normal interactions—email, phone, text, or even face-to-face communication. Because of this, social engineering remains one of the hardest cyber threats to prevent.

Common types of social engineering

Social engineering comes in many forms. Understanding the most common tactics is the first step toward building better awareness and defense.

Phishing
Phishing is the most well-known and widely used social engineering method. Attackers send emails that appear to come from trusted sources—colleagues, service providers, even executives. These messages often include a link or attachment that installs malware or leads to a fake login page.

Variations of phishing include:

  • Spear phishing – Targeted attacks on specific individuals
  • Whaling – Targeted attacks on executives or high-value personnel
  • Smishing – Phishing via SMS text messages
  • Vishing – Voice phishing through phone calls
  • Quishing – QR code phishing that links to malicious websites

Pretexting
Pretexting involves creating a fake identity or scenario to gain trust and extract information. Attackers may pose as HR, tech support, legal counsel, or a vendor, claiming to need credentials or private data to complete a task.

Baiting
Baiting tricks users into downloading malware by offering something enticing, such as free music, software, or access to restricted content. In some cases, physical bait is used—like leaving infected USB drives in employee parking lots.

Quid pro quo
In these attacks, the attacker offers a benefit in exchange for information. A common example is fake tech support offering to fix a device if the user grants remote access.

Tailgating or piggybacking
These are physical forms of social engineering. An attacker follows an authorized person into a secure building or restricted area. This can happen when someone holds a door open for a “delivery driver” or someone claiming to be a contractor.

Impersonation and deepfakes
With advances in AI, attackers are now impersonating executives using cloned voices or synthetic video. These tactics are increasingly being used to bypass identity verification over the phone or during virtual meetings.

Each of these types exploits human psychology in different ways, but they all share the same goal: to trick someone into making a security mistake.

How social engineering works

Social engineering attacks often follow a structured playbook. While individual tactics may vary, most campaigns include four core steps:

1. Research
Attackers begin by gathering information on the target. This might include job titles, organizational charts, email addresses, vendors, or recent activity. Public sources like LinkedIn, company websites, and social media make this step easy.

2. Engagement
Once the attacker knows who to target and how to approach them, they initiate contact. This might be through an email, phone call, text message, or in-person interaction. The goal is to create a believable scenario.

3. Manipulation
This is where social engineering shines. The attacker uses urgency, authority, fear, or familiarity to manipulate the victim. They may pressure someone to act quickly, cite internal policies, or create a sense of crisis that bypasses critical thinking.

4. Execution
Once trust is established, the attacker gets what they need—login credentials, access to a system, permission to install software, or a financial transfer. In some cases, this is only the beginning. The attacker may now have a foothold inside the organization.

This process can happen in minutes or play out over days or weeks. In more advanced campaigns, attackers may use multiple channels and team members to carry out social engineering in layers.

Why social engineering works

Even well-trained employees can fall for a social engineering attack. That is because these tactics exploit natural human instincts.

Trust
Most people want to believe others are acting in good faith, especially when communication appears to come from a known source.

Authority
If a message or request appears to come from a superior, such as a CEO or manager, employees are more likely to comply without questioning.

Urgency
By creating time pressure, attackers reduce the chance of someone double-checking a request. Statements like “this must be done in the next 15 minutes” are red flags.

Fear
Threats of job loss, legal trouble, or financial penalties can force people into compliance.

Social norms
Attackers may rely on politeness or helpfulness. For example, someone might hold a door open rather than confront a stranger.

Add to this the growing number of remote workers, digital distractions, and high-volume communication, and the odds of a successful social engineering attack rise dramatically.

Detection and prevention

Social engineering cannot be stopped with software alone. Because the attack vector is human, defense requires a layered approach that blends technology, training, and culture.

User awareness training
Education remains the most effective defense. Regular phishing simulations, security workshops, and real-world examples help employees spot suspicious behavior. Reinforcing the idea that “trust but verify” is not only allowed but expected can shift culture in the right direction.

Email and communication security
Use secure email gateways that block spoofed domains, scan attachments, and rewrite suspicious links. Implement domain-based message authentication (DMARC, DKIM, SPF) to prevent attackers from impersonating company email addresses.
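
As a quick illustration, the Python sketch below checks whether a domain publishes SPF and DMARC records. It assumes the open-source dnspython package is installed, and uses example.com as a stand-in for your own domain; it is a starting point for an audit, not a complete email-authentication check (DKIM selectors, for instance, are not covered here).

```python
# Sketch: check whether a domain publishes SPF and DMARC TXT records.
# Assumes the third-party "dnspython" package is installed (pip install dnspython).
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return all TXT record strings for a DNS name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

def check_email_auth(domain: str) -> None:
    spf = [r for r in get_txt_records(domain) if r.lower().startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.lower().startswith("v=dmarc1")]
    print(f"SPF record:   {spf[0] if spf else 'MISSING'}")
    print(f"DMARC record: {dmarc[0] if dmarc else 'MISSING'}")

if __name__ == "__main__":
    check_email_auth("example.com")  # replace with your own domain
```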

Access controls and multi-factor authentication
Even if an attacker obtains credentials through social engineering, MFA can stop them from logging in. Applying least privilege access limits what an attacker can do if a user is tricked into granting access.
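
To illustrate why a phished password alone is not enough, here is a minimal Python sketch of a login flow that requires a one-time code in addition to the password. It assumes the open-source pyotp package; the in-memory user record and toy password hashing are placeholders for illustration, not a real implementation.

```python
# Sketch: a login check where a stolen password alone does not grant access.
# Assumes the third-party "pyotp" package is installed (pip install pyotp).
import hashlib
import pyotp

def hash_password(password: str) -> str:
    # Toy hashing for illustration only; a real system would use bcrypt or argon2 with salts.
    return hashlib.sha256(password.encode()).hexdigest()

USERS = {
    "alice": {
        "password_hash": hash_password("correct horse battery staple"),
        "totp_secret": pyotp.random_base32(),  # secret enrolled in the user's authenticator app
    }
}

def login(username: str, password: str, otp_code: str) -> bool:
    user = USERS.get(username)
    if user is None or hash_password(password) != user["password_hash"]:
        return False
    # Even if the password was phished, the attacker still needs the current one-time code.
    return pyotp.TOTP(user["totp_secret"]).verify(otp_code)

if __name__ == "__main__":
    secret = USERS["alice"]["totp_secret"]
    valid_code = pyotp.TOTP(secret).now()
    print(login("alice", "correct horse battery staple", valid_code))   # True: password + valid code
    print(login("alice", "correct horse battery staple", "000000"))     # False: MFA blocks credential-only access
```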

Verification culture
Encourage employees to double-check requests, especially financial or access-related ones. This can be as simple as a quick call to verify a wire transfer or confirming a change in banking information with a known contact.

Behavioral monitoring
Security platforms can flag unusual activity like rapid login attempts, off-hours file access, or new devices logging in from unexpected regions. These signals can reveal when a social engineering attack has moved beyond the initial stage.
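
As a simplified illustration, the Python sketch below applies a few of these rules to made-up login events. The event fields, business-hours window, and per-user country baseline are assumptions for the example, not any particular product's logic.

```python
# Sketch: rule-based flagging of suspicious login events.
# The event format, thresholds, and "usual countries" baseline are illustrative assumptions.
from datetime import datetime

BUSINESS_HOURS = range(7, 19)          # 07:00-18:59 local time counts as normal
USUAL_COUNTRIES = {"alice": {"US"}, "bob": {"US", "CA"}}

def flag_login(event: dict) -> list[str]:
    """Return the reasons this login event looks suspicious (empty list if none)."""
    reasons = []
    ts = datetime.fromisoformat(event["timestamp"])
    if ts.hour not in BUSINESS_HOURS:
        reasons.append("off-hours login")
    if event["country"] not in USUAL_COUNTRIES.get(event["user"], set()):
        reasons.append(f"login from unusual country: {event['country']}")
    if event.get("new_device", False):
        reasons.append("previously unseen device")
    return reasons

events = [
    {"user": "alice", "timestamp": "2025-08-25T10:15:00", "country": "US", "new_device": False},
    {"user": "alice", "timestamp": "2025-08-25T02:47:00", "country": "RO", "new_device": True},
]

for e in events:
    reasons = flag_login(e)
    if reasons:
        print(f"ALERT for {e['user']}: {', '.join(reasons)}")
```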

Incident simulation and tabletop exercises
Run regular drills to simulate social engineering scenarios. These help teams test their response, refine policies, and increase confidence in recognizing real threats when they occur.

Prevention is about more than catching phishing emails. It is about building a workplace that expects deception and is prepared to question things that do not feel right.

AI’s role in social engineering

Artificial intelligence is changing the game for both attackers and defenders. The use of AI has made social engineering more personalized, scalable, and difficult to detect.

Offensive use of AI
Attackers now use AI to craft highly convincing phishing messages. Language models can correct grammar, adjust tone, and mirror the communication style of a target company. Some tools even generate emails based on scraped LinkedIn data or past public messages.

Voice cloning and video deepfakes take this further. Attackers can generate synthetic audio of an executive’s voice and use it to pressure employees into urgent actions. Deepfake videos can mimic live video calls, making impersonation attacks more dangerous.

Social bots can gather reconnaissance at scale. They crawl through social media, news, and dark web forums to build detailed target profiles, automating what once took days of manual research.

Defensive use of AI
Defenders are also using AI to level the field. Machine learning tools analyze email behavior, detect linguistic anomalies, and flag unusual communication patterns. These systems can block threats before a user even sees them.
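
To make the idea concrete, here is a toy Python sketch using scikit-learn to score messages for phishing-style language. The handful of hand-written training examples is purely illustrative and nowhere near the volume or variety a production email-security model would learn from.

```python
# Sketch: a toy text classifier that scores phishing-style language.
# Assumes scikit-learn is installed; the tiny hand-made training set is for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account now or it will be suspended",
    "Wire transfer needed immediately, keep this confidential",
    "Your password expires today, click here to reset",
    "Attached is the agenda for Thursday's project meeting",
    "Lunch is on me Friday if the release goes out on time",
    "Here are the Q3 budget numbers we discussed",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-style, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

new_message = "Immediate action required: confirm your login credentials"
score = model.predict_proba([new_message])[0][1]
print(f"Phishing likelihood: {score:.2f}")
```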

AI-driven identity verification is improving. Some companies now use biometric patterns or behavioral signals to verify voice calls and detect deepfakes. Others deploy AI to detect and respond to early-stage manipulation attempts by correlating user activity across systems.

The arms race between attackers and defenders is accelerating. The organizations that embrace AI in their defense strategy will be better positioned to adapt as social engineering tactics evolve.

Real-world case studies

Twitter internal access breach (2020)
A coordinated social engineering campaign tricked Twitter employees into providing access to internal tools. Attackers used phone-based vishing tactics, posing as IT staff. They gained access to high-profile accounts including Elon Musk and Barack Obama, tweeting scam links and collecting cryptocurrency from users.

Scattered Spider group (2023–2024)
This threat group used social engineering to compromise major US organizations, including telecommunications and hospitality companies. They specialized in SIM swapping, phishing help desks, and impersonating employees to gain access. Their tactics demonstrate how far threat actors can go with confidence, patience, and social pressure alone.

BEC financial fraud (ongoing)
Business Email Compromise continues to be one of the most expensive cybercrimes. In many cases, attackers impersonate CEOs or vendors and request urgent wire transfers. According to the FBI’s Internet Crime Complaint Center, BEC attacks have caused billions in losses over the past five years.

Small business invoice scam
A regional construction company received a fake invoice that appeared to come from a trusted subcontractor. The message included updated bank account information for payment. Because the sender’s name and project details looked accurate, the invoice was paid. Only after the real vendor followed up did the fraud become apparent. The funds were unrecoverable.

Deepfake CEO voice incident
An international firm reported receiving a phone call from a person claiming to be the CEO, instructing the finance lead to urgently transfer funds to a new account. The voice matched perfectly. It was only later determined to be a deepfake created from public audio clips of the executive.

These cases show that social engineering is not theoretical. It is happening every day, in companies of all sizes, across every industry.

Conclusion

Social engineering succeeds because it bypasses technology and targets people. It does not matter how advanced your firewall or endpoint protection is if an attacker can convince someone to hand over access voluntarily.

The best defense is awareness. Training, policies, and tools can reduce risk, but the real change happens when employees expect deception and feel empowered to stop and verify. A call to confirm a request or a second look at a strange email could be the moment that prevents a major breach.

As attackers continue to refine their techniques using AI, automation, and human psychology, organizations must respond with equal speed and vigilance. Social engineering will always evolve, but with the right strategy, its impact can be greatly reduced.

Discover how security awareness training combats social engineering

OpenText™ Core Security Awareness Training offers a comprehensive, customizable program to educate employees on the latest cybersecurity threats and best practices.

Learn more

Tyler Moffitt

Tyler Moffitt is a senior threat research analyst who stays deeply immersed within the world of malware and antimalware. He is focused on improving the customer experience through his work directly with malware samples, creating antimalware intelligence, writing blogs, and testing in-house tools.