# Digital Technologies and Intimate Partner Violence: A New Threat Model
**Video Category:** Technology Research / Human-Computer Interaction / Cybersecurity
## 0. Video Metadata
**Video Title:** Human-Computer Interaction Seminar: Digital Technologies and Intimate Partner Violence
**YouTube Channel:** Stanford Center for Professional Development (scpd.stanford.edu)
**Publication Date:** February 16, 2018
**Video Duration:** ~1 hour
## 1. Core Summary (TL;DR)
This presentation explores how digital technologies are weaponized in cases of Intimate Partner Violence (IPV). The core problem is that current cybersecurity models are designed to protect against anonymous, remote hackers, leaving systems highly vulnerable to abusers who possess intimate knowledge, physical access, and coercive power over their victims. The opportunity lies in developing "adversary-aware" Human-Computer Interaction (HCI) to redesign interfaces, authentication methods, and reporting mechanisms to protect vulnerable individuals from the people closest to them.
## 2. Core Concepts & Frameworks
* **Concept:** Intimate Partner Violence (IPV) -> **Meaning:** Also known as domestic violence; encompasses rape, physical violence, and/or stalking by a current or former intimate partner. -> **Application:** Understanding the real-world context where technology abuse occurs, recognizing it affects ~25% of women and 11% of men.
* **Concept:** The UI-Bound Adversary -> **Meaning:** A threat model where the attacker is an authenticated user exploiting standard User Interface (UI) features rather than utilizing sophisticated technical hacks (like SQL injection). They gain access through guessed passwords, shared devices, or physical coercion. -> **Application:** Designing security systems that account for attackers who legitimately log into an account but act with malicious intent toward the true owner.
* **Concept:** Adversary-Aware HCI -> **Meaning:** An approach to Human-Computer Interaction that proactively evaluates how UI designs might be abused in an IPV context and seeks to intentionally degrade that "abusability." -> **Application:** Designing features like location sharing or password recovery with built-in safeguards to prevent silent monitoring or account lockouts by an intimate partner.
* **Concept:** Magical Hacking -> **Meaning:** A term used to describe the phenomenon where victims and support professionals incorrectly attribute advanced technical skills to an abuser who is merely exploiting basic UI features or shared passwords. -> **Application:** Training support workers to demystify tech abuse, shifting the focus from impossible "hacking" to practical account security and device management.
* **Concept:** Dual-Use Applications -> **Meaning:** Legitimate software or features (e.g., family location tracking, cloud syncing) that are repurposed by abusers for surveillance and control. -> **Application:** Identifying which standard app features require stricter consent and visibility controls to prevent covert monitoring.
## 3. Evidence & Examples (Hyper-Specific Details)
* **Prevalence of IPV:** The National Intimate Partner and Sexual Violence Survey (2012) indicates that 25% of women and 11% of men suffer IPV. Applied to tech platforms, this represents roughly 360 million Facebook users and 252 million Android users potentially impacted.
* **Research Methodology:** The study partnered with the NYC Mayor's Office to Combat Domestic Violence (OCDV). Researchers conducted 11 focus groups with 39 survivors (ages 18-65) from 15 countries of origin, plus semi-structured interviews with 50 professionals (NYPD officers, case managers, attorneys) across all 5 boroughs of NYC, producing over 1,000 pages of transcripts.
* **Client Story 1 - Shared Ownership & Humiliation:** An abuser installed spyware on a shared family computer. Leveraging his background in programming, he obtained the victim's Facebook and email passwords. He then stole non-consensual naked photos and sent them to her bosses, friends, and family via Facebook messages and email, causing severe public humiliation.
* **Client Story 2 - Account Compromise & iCloud Syncing:** An abuser demanded the victim's passwords under threat of ending the relationship. He used iCloud syncing to automatically read her iMessages on his own Apple device. When she left him, he compromised her accounts, changed her recovery security questions, used "Find My iPhone" to mark her device as lost and remotely erase it, and routed all password-reset texts to his own phone.
* **Proxy Harassment:** Abusers circumvent blocking tools by recruiting third parties, such as a new partner or deceived friends, to harass the victim on their behalf.
* **Coded Threats (Detection Failure):** An attorney reported that abusers post threats on Facebook using "code language" (e.g., referencing a shared child or specific location). While terrifying to the victim, standard platform moderators fail to recognize these as violations of community guidelines.
* **Flawed Professional Advice:** A social worker advised a victim to "Delete your Facebook completely... throw away your phone and get a new phone." This advice was unworkable because victims often rely on these tools for safety, legally mandated communication regarding child custody, and support from overseas family.
* **Escalation Risk:** A social worker noted that when victims follow advice to shut off contact and change numbers, the abuser loses their mechanism of control and is highly likely to escalate their behavior to physical stalking or violence.
* **Fake Tinder Account (Revenge Porn):** An abuser created a fake Tinder profile using the victim's address and explicit photos, soliciting sex. This resulted in 25 strangers ringing her doorbell at night. Because the abuser legally paid for promotional features on the app, support services found it incredibly difficult to get the content removed.
* **The "Come Rape Me" Ad:** A police officer described an abuser placing an explicit ad mimicking the victim. The officer noted extreme difficulty in getting the ad removed by the platform because it appeared to be a standard, paid advertisement rather than an explicit terms-of-service violation.
## 4. Actionable Takeaways (Implementation Rules)
* **Rule 1: Redefine the Threat Model for Consumer Tech** - Security teams must expand their threat models beyond the "stranger on the internet." Design authentication and recovery flows assuming the attacker may have physical access to the device, know the answers to security questions, and share a home IP address.
* **Rule 2: Implement Adversary-Aware HCI Design** - Systematically evaluate how new features could be weaponized for surveillance or harassment. For example, ensure "dual-use" features like location tracking provide persistent, visible indicators to the user that they are being monitored.
* **Rule 3: Stop Recommending "Digital Isolation"** - Support professionals must stop advising victims to simply throw away their devices or delete accounts. Instead, provide actionable steps for securing existing accounts, recognizing that digital access is crucial for the victim's safety and support network.
* **Rule 4: Build Context-Aware Abuse Reporting** - Technology platforms must develop reporting mechanisms that allow victims to provide context for "coded threats" or proxy harassment. Standard automated moderation fails to catch the nuanced abuse prevalent in IPV.
* **Rule 5: Establish Expedited Takedown Channels for IPV** - Platforms must create specialized, responsive channels for IPV advocates and law enforcement to quickly remove fake accounts, spoofed profiles, and non-consensual explicit imagery, rather than relying on generic customer support queues.
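Rule 2's "persistent, visible indicator" idea can be made concrete with a small sketch. Everything below (the `LocationShare` type, the seven-day reminder interval, the `notify` callback) is a hypothetical illustration of the design principle, not a feature of any platform discussed in the talk:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, List, Optional

@dataclass
class LocationShare:
    """A dual-use location-sharing grant between two accounts."""
    owner: str                       # the person being tracked
    viewer: str                      # the person who can see the location
    granted_at: datetime
    last_reminder: Optional[datetime] = None

def due_for_reminder(share: LocationShare, now: datetime,
                     interval: timedelta = timedelta(days=7)) -> bool:
    """Adversary-aware rule: re-notify the tracked user periodically,
    so an old consent grant can never turn into silent monitoring."""
    anchor = share.last_reminder or share.granted_at
    return now - anchor >= interval

def run_reminders(shares: List[LocationShare], now: datetime,
                  notify: Callable[[str, str], None]) -> int:
    """Send '<viewer> can still see your location' reminders; return count sent."""
    sent = 0
    for share in shares:
        if due_for_reminder(share, now):
            notify(share.owner, f"{share.viewer} can still see your location.")
            share.last_reminder = now
            sent += 1
    return sent
```

The design choice worth noting: silence is never a stable state. Unless the owner explicitly revokes the grant, they keep being reminded that it exists, which is exactly what defeats the covert-surveillance use of dual-use features.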
## 5. Pitfalls & Limitations (Anti-Patterns)
* **Pitfall:** Relying on Two-Factor Authentication (2FA) as a cure-all. -> **Why it fails:** In IPV, the abuser often controls the devices or sets up the accounts, routing the 2FA SMS or email to their own device. -> **Warning sign:** The victim attempts a password reset but the confirmation message goes to an unknown or abuser-controlled number.
* **Pitfall:** Assuming abuse requires sophisticated hacking. -> **Why it fails:** It leads to "Magical Hacking" syndrome, where professionals throw up their hands in defeat, missing simple solutions like logging out of active web sessions or revoking shared app permissions. -> **Warning sign:** Support workers dismiss a victim's concerns by stating "he must be a hacker" instead of auditing standard account settings.
* **Pitfall:** Providing high-level, generic tech safety advice. -> **Why it fails:** Advice like "check your privacy settings" is not actionable for a user experiencing trauma and unfamiliar with complex, frequently changing platform UIs. -> **Warning sign:** Victims repeatedly ask caseworkers *how* to access specific settings, indicating the provided tip sheets are insufficient.
* **Pitfall:** Depending on standard terms-of-service reporting. -> **Why it fails:** Abusers exploit the rules, using fake names or coded language that doesn't trigger automated filters, leaving the burden of proof entirely on the traumatized victim. -> **Warning sign:** A victim reports a clearly abusive fake profile, but receives an automated response that it "does not violate community standards."
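The remedy for the "Magical Hacking" pitfall, auditing standard account settings rather than hunting for exotic spyware, can be sketched as a simple checklist evaluator. The `snapshot` fields and checks below are illustrative assumptions, not any real platform's API:

```python
def audit_account(snapshot: dict) -> list:
    """Walk the low-tech attack surface that 'magical hacking' thinking overlooks.

    `snapshot` is a hypothetical export of one account's settings;
    the key names are illustrative only.
    """
    findings = []
    trusted = set(snapshot.get("trusted_devices", []))
    # Active sessions: an abuser who once knew the password may still be logged in.
    for session in snapshot.get("active_sessions", []):
        if session["device"] not in trusted:
            findings.append("Unrecognized active session: " + session["device"])
    # Recovery channel: reset texts routed to another phone can lock the owner out.
    if snapshot.get("recovery_phone") != snapshot.get("own_phone"):
        findings.append("Password-reset messages go to a different phone number.")
    # Connected apps: dual-use permission grants enable covert monitoring.
    for app in snapshot.get("connected_apps", []):
        if "location" in app.get("permissions", []):
            findings.append("Connected app '%s' can read location." % app["name"])
    return findings
```

Each finding maps to a concrete remediation a caseworker can walk through (log out the session, repoint the recovery number, revoke the app), which is precisely the actionable guidance the generic "check your privacy settings" advice lacks.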
## 6. Key Quote / Core Insight
The relational and social context of intimate partner violence completely undermines our conventional approaches to digital security. We build systems to protect users from anonymous strangers on the internet, but we fail to protect them when the most dangerous adversary is the person they share a home, a data plan, and a life with.
## 7. Additional Resources & References
* **Resource:** Freed et al., "Digital Technologies and Intimate Partner Violence: A Qualitative Analysis with Multiple Stakeholders", CSCW 2018. - **Type:** Academic Paper - **Relevance:** Foundational qualitative study mapping the ecosystem of tech abuse in NYC.
* **Resource:** Freed et al., "A Stalker's Paradise: How Intimate Partner Abusers Exploit Technology", CHI 2018. - **Type:** Academic Paper (Best Paper Award) - **Relevance:** Details the specific mechanisms and UI features abusers exploit.
* **Resource:** Chatterjee et al., "The Spyware Used in Intimate Partner Violence", IEEE Symposium on Security and Privacy, Oakland 2018. - **Type:** Academic Paper - **Relevance:** A measurement study analyzing the prevalence and types of spyware used in IPV contexts.
* **Resource:** National Network to End Domestic Violence (NNEDV) Safety Net Project - **Type:** Organization - **Relevance:** Mentioned as a primary source for tech safety tip sheets used by professionals, though noted as needing more actionable detail.
* **Resource:** New York City Mayor's Office to Combat Domestic Violence (OCDV) / Family Justice Centers - **Type:** Government Initiative - **Relevance:** The partner organization providing the physical locations and ecosystem where the research was conducted.