Internet safety


Internet safety, also known as online safety, cyber safety, and digital safety, is the science and practice of reducing harms that occur through the (mis)use of technology.[1] It is a multidisciplinary, multi-stakeholder field that encompasses the design and delivery of policies, practices, technologies, and educational initiatives. Its purpose is to protect users (and especially vulnerable users) while preserving the benefits of digital participation.

Internet safety takes a human-centered approach that addresses the complex interplay of technology, behavior, and social context in digital spaces. The field has evolved from a primary focus on reactive threat mitigation toward proactive harm prevention and the promotion of positive digital citizenship.

Relationship to other online harm reduction disciplines


Internet safety operates alongside several related disciplines. Cybersecurity primarily focuses on technical threats to systems and data infrastructure. Trust and safety typically refers to platform-specific functions for content moderation and user protection within individual services. Cybercrime enforcement addresses criminal activities in digital spaces through law enforcement and judicial processes. Privacy and data protection focuses on safeguarding personal information and ensuring individuals have control over their data. In contrast, internet safety takes a human-centered approach that addresses how technology, behavior, and social context interact across these domains.

Types of online harm experienced


Internet safety addresses what are commonly referred to as online harms: the various ways that technology can be misused to damage individuals, communities, and society. These harms can be categorized into several interconnected types. They may occur immediately through direct actions, or take effect over time through gradual manipulation or the erosion of autonomy and perceptions of safety and security.[2]

Psychological Harm: Damage to mental health and wellbeing through experiences of cyberbullying, harassment, exposure to disturbing content, addiction-like behaviors, and the erosion of self-esteem through social comparison. This category also includes grooming, manipulation, and other forms of psychological abuse facilitated by digital platforms.

Financial Harm: Economic damage through fraud, scams, identity theft, unauthorized transactions, and other forms of financial exploitation. This includes both direct monetary losses and longer-term economic consequences such as damaged credit or compromised financial accounts.

Physical Harm: Threats to physical safety that originate online, including stalking that moves offline, sharing of location data that enables real-world harassment, encouragement of self-harm or dangerous behaviors, and coordination of offline violence or abuse.

Societal Harm: Damage to democratic processes, social cohesion, and public discourse through misinformation, hate speech, extremist recruitment, election interference, and the amplification of harmful conspiracy theories. This category includes threats to institutional trust and social stability.

These categories often overlap and interact with each other. For example, financial scams may cause both economic and psychological harm, while misinformation campaigns can lead to both societal damage and individual psychological distress. The interconnected nature of these harms requires comprehensive approaches that address multiple dimensions simultaneously.

Harmful activities


The activities and behaviors that give rise to these harms are commonly categorized using the "4 C's" framework: Content, Contact, Conduct, and Commercial risks.[3]

Content Risks: Harms arising from exposure to problematic material online. This includes violent or disturbing imagery, hate speech, misinformation, content promoting self-harm or suicide, developmentally inappropriate material such as pornography accessible to children, and extremist content that promotes dangerous ideologies or activities.

Contact Risks: Harms occurring through direct interaction with others online. This encompasses cyberbullying and harassment, grooming for sexual exploitation, unwanted contact from strangers, stalking and persistent unwanted communication, and recruitment for harmful activities including extremist groups or criminal enterprises.

Conduct Risks: Harms resulting from the individual's own online behavior, often influenced by digital environments. This includes sharing personal information inappropriately, engaging in risky behaviors encouraged online, participating in harmful challenges or trends, excessive screen time affecting wellbeing, and creating or sharing harmful content that may later cause regret or consequences.

Commercial Risks: Harms arising from exploitative commercial practices and inappropriate transactional relationships online. This includes fraud and financial scams, identity theft for economic gain, exploitative marketing practices targeting vulnerable users, inappropriate collection and use of personal data for commercial purposes, and predatory monetization of user engagement or addiction-like behaviors.

These categories recognize that harmful activities often involve complex interactions between platform design, user behavior, and external actors with malicious intent. The 4 C's framework focuses primarily on individual-level activities and risks. While this captures many important dimensions of online safety, some harms manifest at the societal level through systemic effects that may not be reducible to individual experiences, such as the erosion of democratic discourse, institutional trust, or social cohesion through coordinated manipulation of information ecosystems.

Multidisciplinary foundations


Internet safety draws from a wide range of academic disciplines and professional fields, each contributing distinct perspectives, methodologies, and expertise to understanding and addressing online harms. This multidisciplinary approach reflects the complex nature of technology-mediated risks, which cannot be adequately addressed through any single lens or domain of knowledge.

Computer Science and Engineering: Technical safety measures, content moderation systems, privacy-preserving technologies, and platform design principles.

Criminology and Sociology: Understanding online communities, digital inequality, the social factors that contribute to harmful behaviors, and the criminological aspects of online harm.

Education and Digital Literacy: Developing critical thinking skills, media literacy, and safe online practices through formal and informal learning.

Law and Policy: Legal frameworks for online harm, regulatory approaches, human rights considerations, and international governance mechanisms.

Media Studies and Communication: Information ecosystem health, misinformation spread, and platform governance.

Psychology and Behavioral Science: Understanding the psychological impacts of online experiences, digital addiction, cyberbullying effects, and user behavior patterns.

Public Health: Population-level approaches to digital wellness, measuring harm at scale, and prevention strategies.

Multistakeholder approach


The multidisciplinary nature of internet safety challenges necessitates a multistakeholder approach, bringing together different sectors with complementary expertise, responsibilities, and capabilities. No single organization or sector has the knowledge, authority, or resources to address the full spectrum of online harms effectively. This collaborative model recognizes that sustainable solutions require coordination across government, industry, civil society, academia, and user communities.

Government and Regulators play a crucial role in developing legal frameworks and enforcement mechanisms that establish baseline safety standards. They set compliance requirements for platforms and services, fund research initiatives and public awareness campaigns, and facilitate international cooperation on cross-border issues. Regulatory bodies also provide oversight and accountability mechanisms that ensure other stakeholders fulfill their responsibilities.

Technology Companies and Platforms are responsible for implementing safety-by-design principles in their products and services. This includes developing and maintaining content moderation systems, community management processes, and user empowerment tools that give individuals control over their online experiences. Companies also contribute through transparency reporting, external audits, and collaboration with other stakeholders on emerging challenges.

Civil Society and NGOs advocate for user rights and the protection of vulnerable populations while providing digital literacy education and training programs. These organizations conduct independent research, develop policy recommendations, and support victims of online harm through direct services. They also serve as important bridges between affected communities and other stakeholders, ensuring that policy discussions reflect real-world impacts.

Academic and Research Institutions provide the evidence base for understanding online harms and evaluating the effectiveness of interventions. They develop new safety technologies and approaches, train professionals in the field, and conduct independent research that informs policy and practice. Universities also serve as neutral spaces for multistakeholder dialogue and collaboration.

Users and Communities practice digital citizenship and provide peer support within online spaces. They report harmful content and behaviors, participate in safety education initiatives, and advocate for safer online environments. User communities also contribute valuable insights about emerging risks and the real-world effectiveness of safety measures through their lived experiences.

Approaches to online safety


Internet safety employs both proactive and reactive approaches to address online harms. Proactive measures focus on preventing harms before they occur through thoughtful design, education, and regulation, and by building both system-level and individual user resilience. Reactive measures address harms that have already manifested, providing response mechanisms and support for those affected.

Proactive safety measures


Safety by Design incorporates safety considerations into technology development from the earliest stages, including user interface design, algorithmic systems that minimize harmful content amplification, and platform architectures that protect user privacy and autonomy.

Digital Literacy and Education builds users' capacity to navigate online spaces safely, recognize risks, critically evaluate information, and develop healthy relationships with technology through schools, community programs, and public awareness campaigns.

Regulation establishes legal frameworks, safety standards, and compliance requirements that platforms and services must meet. This includes laws governing content moderation, data protection, child safety, and transparency reporting obligations.[4][5]

Positive Digital Citizenship promotes respectful and constructive online behaviors through community building, social norm development, and programs that encourage empathy and ethical reasoning in digital contexts.

Empowerment Tools provide users with controls over their online experience, including content filtering, privacy settings, blocking mechanisms, and tools to manage their digital footprint according to their preferences and risk tolerance.
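
As a rough illustration of how such user-facing controls can work, the following sketch applies a per-user blocklist and muted-keyword filter to incoming messages. The function names, settings structure, and sample data are assumptions made for illustration, not any platform's actual API.

```python
# Hypothetical sketch of user-configurable filtering controls (blocklist +
# muted keywords). Names and data are illustrative, not a real platform API.
from dataclasses import dataclass, field

@dataclass
class SafetySettings:
    blocked_users: set[str] = field(default_factory=set)
    muted_keywords: set[str] = field(default_factory=set)

def filter_messages(messages: list[dict], settings: SafetySettings) -> list[dict]:
    """Return only the messages the user has chosen to see."""
    visible = []
    for msg in messages:
        if msg["sender"] in settings.blocked_users:
            continue  # drop messages from blocked accounts
        if any(word in msg["text"].lower() for word in settings.muted_keywords):
            continue  # drop messages containing muted keywords
        visible.append(msg)
    return visible

settings = SafetySettings(blocked_users={"spam_account"}, muted_keywords={"giveaway"})
inbox = [
    {"sender": "friend", "text": "See you tomorrow"},
    {"sender": "spam_account", "text": "Click here"},
    {"sender": "stranger", "text": "Free GIVEAWAY, act now"},
]
print(filter_messages(inbox, settings))  # only the message from "friend" remains
```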

Reactive safety measures


Content moderation involves systems for identifying, reviewing, and addressing harmful content through both automated detection and human review processes, balancing harm removal with protection of legitimate expression.
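
The hybrid of automated detection and human review described above can be sketched as a simple triage pipeline. The thresholds, toy classifier, and action labels below are hypothetical assumptions chosen for illustration, not a description of any specific platform's system.

```python
# Illustrative moderation triage: automated scoring with human escalation.
# The classifier, thresholds, and labels are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "remove", "human_review", or "allow"
    score: float

def classify(text: str) -> float:
    """Placeholder for an ML classifier returning a harm score in [0, 1]."""
    blocklist = {"scam", "threat"}            # toy heuristic, not a real model
    hits = sum(word in text.lower() for word in blocklist)
    return min(1.0, 0.5 * hits)

def moderate(text: str, remove_at: float = 0.9, review_at: float = 0.5) -> Decision:
    score = classify(text)
    if score >= remove_at:
        return Decision("remove", score)          # high confidence: automated action
    if score >= review_at:
        return Decision("human_review", score)    # uncertain: escalate to a person
    return Decision("allow", score)

print(moderate("limited time scam, act now or face a threat"))
# Decision(action='remove', score=1.0)
```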

Enforcement implements regulatory penalties and legal consequences when safety standards are violated, including fines, sanctions, and coordination with law enforcement for criminal activities.

Incident Response provides rapid protocols for addressing acute safety threats, ensuring serious harms are escalated appropriately and victims receive timely protection and support.

Victim Support offers resources for those who have experienced online harm, including counseling services, legal aid, and technical assistance to help individuals regain control of their digital presence.

Harm Mitigation reduces the spread and impact of harmful content through content labeling, reduced distribution, account restrictions, and other measures that limit reach while preserving evidence for investigations.

Emerging approaches


The field continues to evolve as technology and online behavior change. Key areas of development include algorithmic accountability, to ensure recommendation and content moderation systems operate fairly and transparently; privacy-preserving safety measures that protect user privacy while preventing harm; enhanced global governance mechanisms for addressing cross-border online harms; inclusion and equity initiatives to ensure safety measures protect all users, particularly marginalized communities; and mental health integration that better incorporates digital wellness considerations into safety frameworks.

Global frameworks and governance


The complex, cross-border nature of online harms has catalyzed the development of new governance models that reflect internet safety's multidisciplinary and multistakeholder foundations. These emerging frameworks move beyond traditional regulatory approaches to embrace collaborative models that bring together governments, technology companies, civil society, and academic institutions.

Regional Legislative Frameworks represent coordinated attempts to establish comprehensive safety standards. The European Union's Digital Services Act creates binding obligations for platforms while establishing new oversight bodies that work across multiple member states.[6] The United Kingdom's Online Safety Act 2023 introduces a risk-based regulatory approach that requires platforms to assess and mitigate harms specific to their services.[7] Similar legislative initiatives across jurisdictions reflect growing recognition that effective governance requires both technical expertise and democratic accountability.

Multistakeholder Initiatives demonstrate how different sectors can collaborate on shared challenges. The Global Internet Forum to Counter Terrorism brings together major technology companies to share technical solutions and threat intelligence.[8] The Global Partnership to End Violence Against Children coordinates efforts across governments, civil society, and private sector actors to address online child exploitation. These initiatives show how complex problems require diverse expertise and shared responsibility.

International Cooperation Mechanisms facilitate coordination across borders and sectors. The Christchurch Call unites governments and technology companies in addressing terrorist and violent extremist content online. Regional bodies like the Council of Europe develop binding instruments such as the Convention on Cybercrime that create shared legal frameworks while respecting different constitutional traditions.

Human Rights Frameworks provide foundational principles that guide internet safety efforts across different contexts. The UN Guiding Principles on Business and Human Rights establish corporate responsibilities for preventing and addressing human rights impacts online. UNESCO's initiatives on information integrity and media literacy demonstrate how international organizations can convene diverse stakeholders around shared educational and normative goals.

Technical Standards and Industry Collaboration enable practical cooperation on safety challenges. The Global Alliance for Responsible Media coordinates advertisers, agencies, and platforms to address brand safety concerns while supporting independent journalism. Industry-led initiatives like the Shared Industry Hash Database allow companies to collaborate on identifying harmful content while preserving competitive dynamics and user privacy.
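
At a technical level, hash-sharing arrangements like the one mentioned above rest on a simple matching flow: each participating platform fingerprints uploaded media and checks the fingerprint against a jointly maintained set of known-harmful hashes. The sketch below shows only that flow; real deployments generally use perceptual hashes (such as PDQ or PhotoDNA) that tolerate re-encoding and small edits, rather than the plain SHA-256 digest and made-up data used here.

```python
# Illustrative sketch of hash matching against a shared database of known
# harmful content. All data below is invented for the example.
import hashlib

def fingerprint(data: bytes) -> str:
    """Digest used as the shared identifier for a piece of media."""
    return hashlib.sha256(data).hexdigest()

# Hashes contributed by participating platforms (hypothetical sample entry).
known_media = b"previously identified harmful media bytes"
shared_hash_db = {fingerprint(known_media)}

def is_known_harmful(data: bytes) -> bool:
    return fingerprint(data) in shared_hash_db

# An upload matching a shared hash can be blocked before it spreads further.
print(is_known_harmful(known_media))                # True  -> block and escalate
print(is_known_harmful(b"unrelated upload bytes"))  # False -> continue normal handling
```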

These governance approaches reflect growing understanding that internet safety challenges require institutional innovations that transcend traditional sectoral and jurisdictional boundaries. While still evolving, they point toward more collaborative and adaptive forms of governance suited to the global, interconnected nature of digital technologies.

Research and evidence base


The field of internet safety is supported by growing research evidence across multiple domains, providing the empirical foundation for understanding online harms and developing effective interventions.

Prevalence Studies: Large-scale surveys measuring the extent and nature of online harm across different populations and platforms. Notable examples include the EU Kids Online network, which has conducted comprehensive surveys across 19 European countries, surveying over 25,000 children and revealing that exposure to various online risks varies significantly by country and demographic factors.[9] Similarly, the Global Kids Online initiative, led by UNICEF and LSE, has extended this research globally, surveying over 14,000 internet-using children across multiple countries to understand digital experiences in diverse cultural contexts.[10] Pew Research Center studies represent another significant contribution, showing that 46% of U.S. teens have experienced online bullying or harassment, with documented demographic variations in both platform usage and risk exposure.[11]

Impact Research: Studies documenting the psychological, social, and economic effects of online experiences on individuals and communities. Key examples include work by the Cyberbullying Research Center, where studies by Hinduja and Patchin have demonstrated significant connections between cyberbullying experiences and increased rates of suicidal ideation among adolescents, with both victimization and perpetration linked to mental health impacts.[12] Research by the Young and Resilient Research Centre at Western Sydney University provides another example, exploring how digital participation affects youth resilience and wellbeing, particularly among marginalized communities, with studies examining over 8,000 children and young people from more than 80 countries.[13]

Intervention Effectiveness: Randomized controlled trials and other rigorous evaluations of safety measures and educational programs. Examples include assessments of digital literacy curricula, evaluations of content moderation techniques, and studies measuring the effectiveness of bystander intervention programs in reducing online harassment. For instance, the Young and Resilient Research Centre has developed and evaluated youth-centered approaches to online safety education, demonstrating the importance of including young people's voices in designing interventions.[14]

Technology Evaluation: Research on the effectiveness and unintended consequences of content moderation systems, recommendation algorithms, and other safety technologies. Studies examine accuracy rates of automated content detection, potential biases in algorithmic decision-making, and the broader impacts of platform design choices on user behavior and wellbeing. One example of collaborative research in this area is the work of the Global Internet Forum to Counter Terrorism, which has contributed research on hash-sharing databases and collaborative technical approaches to identifying harmful content across platforms.[15]
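
Accuracy studies of the kind described above typically compare automated decisions against human-labelled ground truth using metrics such as precision and recall. The following minimal sketch shows that calculation on hypothetical example data; it is an illustration of the metrics, not any particular study's methodology.

```python
# Illustrative evaluation of an automated content classifier against
# human-labelled ground truth. The example labels are hypothetical.
def evaluate(predictions: list[bool], ground_truth: list[bool]) -> dict[str, float]:
    tp = sum(p and g for p, g in zip(predictions, ground_truth))
    fp = sum(p and not g for p, g in zip(predictions, ground_truth))
    fn = sum(not p and g for p, g in zip(predictions, ground_truth))
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of flagged items truly harmful
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of harmful items actually flagged
    return {"precision": precision, "recall": recall}

# Toy evaluation: 3 of 4 flags correct, 1 harmful item missed.
flags = [True, True, True, True, False, False]
truth = [True, True, True, False, True, False]
print(evaluate(flags, truth))  # {'precision': 0.75, 'recall': 0.75}
```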

Current challenges


Despite significant advances in understanding and addressing online harms, the field of internet safety continues to face several persistent and emerging challenges that require ongoing attention and innovative solutions.

Guided Autonomy vs Developmental Mismatch: Child development requires gradual exposure to risk and complexity with appropriate support. However, digital environments often bypass this developmental scaffolding, exposing children to content and interactions before they have the capacity to handle them safely.

Scale and Automation: The volume of online content and interactions makes it difficult to identify and address harmful behavior at scale, leading to reliance on automated systems that may lack nuance.

Cross-Platform Coordination: Harmful actors often operate across multiple platforms, requiring coordination between companies that may compete with each other.

Cultural and Linguistic Diversity: Safety approaches developed in one cultural context may not translate effectively to others, requiring localized solutions.

Emerging Technologies: New technologies such as artificial intelligence, virtual reality, and blockchain create novel safety challenges that existing frameworks may not address.

Balancing Safety and Rights: Ensuring that safety measures do not disproportionately restrict freedom of expression, privacy, or other fundamental rights.


References

  1. ^ "What's needed to tackle online harm". World Economic Forum. 2023-08-28. Retrieved 2024-01-15.
  2. ^ Smuha, Nathalie A. (2021-09-30). "Beyond the Individual: Governing AI's Societal Harm". Internet Policy Review. 10 (3). doi:10.14763/2021.3.1574.
  3. ^ Livingstone, Sonia; Stoilova, Mariya (2021-03-08). "The 4Cs: Classifying Online Risk to Children". CO:RE Short Report Series on Key Topics. Leibniz-Institut für Medienforschung. doi:10.21241/ssoar.71817.
  4. ^ "The Digital Services Act package". European Commission. Retrieved 2024-01-15.
  5. ^ "Online Safety Act 2023". UK Parliament. 2023-10-26. Retrieved 2024-01-15.
  6. ^ "Regulation (EU) 2022/2065 on a Single Market For Digital Services (Digital Services Act)". Official Journal of the European Union. 2022-10-27. Retrieved 2024-01-15.
  7. ^ "Online Safety Act: explainer". UK Government. 2023-10-26. Retrieved 2024-01-15.
  8. ^ "About GIFCT". Global Internet Forum to Counter Terrorism. Retrieved 2024-01-15.
  9. ^ Smahel, David; Machackova, Hana; Mascheroni, Giovanna (2020). "EU Kids Online 2020: Survey results from 19 countries". EU Kids Online. Retrieved 2024-01-15.
  10. ^ "Global Kids Online: Growing up in a connected world". UNICEF Innocenti. Retrieved 2024-01-15.
  11. ^ "Teens and social media: Key findings from Pew Research Center surveys". Pew Research Center. 2023-04-24. Retrieved 2024-01-15.
  12. ^ Hinduja, Sameer; Patchin, Justin W. (2010). "Bullying, cyberbullying, and suicide". Archives of Suicide Research. 14 (3): 206–221. doi:10.1080/13811118.2010.494133. PMID 20658375.
  13. ^ "Young and Resilient Research Centre". Western Sydney University. Retrieved 2024-01-15.
  14. ^ "New report re-imagines young people's online safety education". Western Sydney University. 2023. Retrieved 2024-01-15.
  15. ^ "Global Internet Forum to Counter Terrorism". Retrieved 2024-01-15.