Child Safety Policy

1. Introduction

Enso Webworks Private Limited (hereinafter referred to as "we," "our," "us," or "the company"), incorporated as a private limited company under the Companies Act, 1956, governed by the Companies Act, 2013 and the rules and regulations made thereunder, and operating under the trade name "InfoProfile", is dedicated to ensuring a safe and supportive digital space for children to learn, connect, and express themselves. This Child Safety Standards Policy sets out our commitment to safeguarding children online by addressing the risks of exploitation, exposure to inappropriate content, and misuse of personal data.

2. Purpose

This policy sets out the measures we take to protect children's privacy and well-being and to provide a safe, secure, and enriching experience on our platform. Its aims are to prevent exploitation, protect personal data, and promote safe digital engagement for children.

3. Scope

This Child Safety Standards Policy applies to all content, activities, and stakeholders on the InfoProfile platform, including users, content creators, moderators, and third-party partners, with a focus on safeguarding children from harm and promoting a safe digital environment. It sets out measures to prevent exploitation, exposure to inappropriate content, and other risks associated with online interactions. The policy governs platform features, user behavior, and content moderation while aligning with global child protection standards.

4. Definitions

  1. Child/Children: Individuals below the age of majority prescribed by applicable local legislation. For example, individuals under the age of 18 are considered children/minors in India, Russia, France, and many other jurisdictions.

  2. Harmful Content: Any content deemed inappropriate, unsafe, or exploitative for children, including cyberbullying, sexually explicit material, violent content, or material promoting self-harm or other dangerous activities.

  3. Parental Consent: Verifiable consent provided by a parent or guardian for the collection, use, or processing of a child's personal data or account creation.

  4. Content Moderation: The process of monitoring, reviewing, and managing user-generated content to ensure compliance with the platform's policies and prevent harm to children.

  5. Child Sexual Abuse Material (CSAM): Any visual depiction of sexually explicit conduct involving a child, which is strictly prohibited under Indian law and international standards.

  6. Cyberbullying: The use of electronic communication to bully a person, typically by sending messages of an intimidating or threatening nature. This includes online harassment, stalking, and the dissemination of private information.

5. Account Creation

Children under the age of 13 are strictly prohibited from creating an account or using the InfoProfile platform in any manner.

6. Parental Consent

Accounts for users who are at least 13 years of age but still fall within the definition of a child will require verifiable parental consent.

Parents or guardians will have access to manage their child's account, including monitoring privacy settings and approving platform usage.
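
For teams building or integrating with the sign-up flow, the rules in Sections 5 and 6 reduce to a simple age gate. The sketch below is illustrative only; the thresholds of 13 and 18 (the age of majority in India; it varies by jurisdiction, see Section 4.1) and all function and field names are our assumptions, not a published InfoProfile API.

```python
# Illustrative sketch only -- not InfoProfile's actual implementation.
# Assumes a minimum account age of 13 (Section 5) and an age of majority of 18,
# as applies in India; the age of majority varies by jurisdiction (Section 4.1).
from dataclasses import dataclass

MINIMUM_ACCOUNT_AGE = 13   # below this age, account creation is refused outright
AGE_OF_MAJORITY = 18       # below this age, verifiable parental consent is required

@dataclass
class SignupRequest:
    age: int
    has_verified_parental_consent: bool = False

def evaluate_signup(request: SignupRequest) -> str:
    """Apply the age-gating rules described in Sections 5 and 6."""
    if request.age < MINIMUM_ACCOUNT_AGE:
        return "rejected: users under 13 may not create an account"
    if request.age < AGE_OF_MAJORITY and not request.has_verified_parental_consent:
        return "pending: verifiable parental consent required before activation"
    return "approved"
```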

7. Content Moderation and Community Guidelines

At InfoProfile, we prioritize creating a safe and respectful environment for all users, especially children. This section outlines the prohibited content, the tools and techniques used to moderate content, and the steps we take to ensure compliance with this policy.

7.1 Prohibited Content

The following types of content are strictly prohibited on our platform. Violations will result in content removal, account suspension, and potential legal action:

  1. Child Exploitation and Abuse:

    • Any material depicting or promoting child sexual abuse, exploitation, or nudity of minors.
    • Images, videos, or text glorifying or normalizing child pornography.
    • Sextortion, solicitation of minors, or any behavior that facilitates exploitation.
  2. Cyberbullying and Harassment:

    • Threats, harassment, or intimidation directed at children, such as:
      • Sending offensive messages or images.
      • Encouraging exclusion or public shaming of a child.
      • Using fake profiles to impersonate, bully, or humiliate a child.
  3. Hate Speech and Discrimination:

    • Content promoting hatred, violence, or discrimination based on race, religion, gender, disability, or other identity factors, especially targeting children.
  4. Violent and Graphic Content:

    • Depictions of extreme violence, self-harm, or cruelty that are inappropriate for children.
    • Videos or images promoting weapons, gore, or harmful practices.
  5. Content Encouraging Self-Harm or Suicide:

    • Posts or messages encouraging children to harm themselves or glorifying suicidal behavior.
    • Challenges or trends that may pose a physical or psychological risk to children.
  6. Inappropriate Content:

    • Pornography, sexually explicit material, or suggestive content that is not age-appropriate.
    • Misleading content targeting children but containing adult themes.
  7. Misinformation and Dangerous Trends:

    • Fake news, hoaxes, or misinformation that could lead to harm or panic among children.
    • Dangerous viral trends or challenges, such as those promoting risky or unlawful activities.
  8. Illegal Activities:

    • Content involving the promotion or encouragement of illegal activities such as drug use, illegal hacking, gambling, or the sale of prohibited substances.

7.2 Content Moderation Process

To ensure adherence to the above guidelines, InfoProfile employs a multi-layered content moderation strategy combining advanced technology and human oversight.

7.2.1 Proactive Content Detection

  1. AI-Based Monitoring:

    • AI algorithms analyze user-generated content (text, images, audio, and video) in real-time to detect:
      • Nudity, explicit language, and graphic violence.
      • Indicators of cyberbullying, harassment, or grooming behavior.
    • Machine learning models are regularly updated to recognize evolving trends and harmful behavior patterns.
  2. Keyword and Phrase Filtering:

    • Specific keywords, phrases, and hashtags linked to harmful content are flagged automatically for review.
    • Filters are customized to detect content in multiple languages.
  3. Image and Video Recognition Tools:

    • Advanced tools, such as hash-matching technology, are used to identify known abusive material (a simplified sketch of keyword filtering and hash-matching follows this list).
    • AI-based moderation systems blur or block harmful visuals until reviewed by moderators.
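
As a purely illustrative sketch of the keyword filtering and hash-matching steps above: the term list, hash set, and function names below are hypothetical, and production systems typically rely on perceptual hashing and trusted hash-sharing programs rather than exact digests.

```python
# Illustrative sketch only -- a simplified stand-in for the proactive detection
# described above, not the production system.
import hashlib

FLAGGED_TERMS = {"example_banned_phrase", "#example_banned_hashtag"}  # placeholder terms
KNOWN_ABUSIVE_HASHES: set[str] = set()  # in practice, sourced from trusted hash-sharing programs

def flag_text_for_review(post_text: str) -> bool:
    """Keyword/phrase filtering: flag a post for human review if it contains a listed term."""
    lowered = post_text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

def matches_known_abusive_material(image_bytes: bytes) -> bool:
    """Hash-matching: compare an upload's digest against hashes of known abusive material.
    Real systems use perceptual hashes so re-encoded or slightly altered copies still match."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_ABUSIVE_HASHES
```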

7.2.2 Human Moderation

Trained Moderators:

  1. A team of trained moderators reviews flagged content for context and adherence to community guidelines.
  2. Moderators are trained in:
    • Recognizing harmful behavior or content specific to children.
    • Handling sensitive issues with care and confidentiality.

7.2.3 Age-Appropriate Content Filters

  • Child accounts have stricter content filters to prevent exposure to mature or inappropriate material.
  • Search functionalities and content recommendations are tailored to display age-appropriate material only (see the illustrative sketch below).
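
A minimal sketch of how such age-based filtering might be expressed in code; the rating tiers, field names, and defaults here are assumptions for illustration, not InfoProfile's actual configuration.

```python
# Illustrative sketch only -- rating tiers and field names are assumed for illustration.
CONTENT_RATINGS = {"all_ages": 0, "teen": 1, "mature": 2}

def max_allowed_rating(is_child_account: bool) -> int:
    # Child accounts are held to the stricter tier; adult accounts may see mature content.
    return CONTENT_RATINGS["teen"] if is_child_account else CONTENT_RATINGS["mature"]

def filter_feed(items: list[dict], is_child_account: bool) -> list[dict]:
    """Drop items rated above the account's tier before search results or recommendations."""
    ceiling = max_allowed_rating(is_child_account)
    # Unrated items are treated as "mature" so they are never shown to child accounts.
    return [item for item in items if CONTENT_RATINGS.get(item.get("rating"), 2) <= ceiling]
```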

8. Reporting Mechanisms and Response Protocol

8.1 Reporting Tools

  • A dedicated "Report Abuse" feature will allow users to report harmful or inappropriate content.
  • Reports involving children will be prioritized and addressed within 24 hours (an illustrative triage sketch follows).
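
To illustrate how the 24-hour priority for child-related reports could be enforced in a review queue: the field names and the 72-hour default for other reports below are assumptions, not stated policy.

```python
# Illustrative sketch only -- field names and the non-child SLA are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass(order=True)
class AbuseReport:
    sort_index: int = field(init=False)          # child-related reports sort first
    involves_child: bool = field(compare=False)
    created_at: datetime = field(compare=False)
    description: str = field(compare=False)

    def __post_init__(self):
        self.sort_index = 0 if self.involves_child else 1

    def review_deadline(self) -> datetime:
        # Child-related reports carry the 24-hour target from Section 8.1;
        # 72 hours for other reports is an assumed default, not stated in this policy.
        return self.created_at + timedelta(hours=24 if self.involves_child else 72)
```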

8.2 Collaboration with Law Enforcement

  • Cases involving CSAM or child exploitation will be promptly reported to law enforcement agencies.

9. Enforcement and Accountability

9.1 Policy Violations

  • Violations of this policy will result in content removal, account suspension, and, where applicable, reporting to authorities.

9.2 Audits

  • Internal and external audits will ensure continuous improvement in safety measures.

9.3 User Accountability

  • Users who violate the platform's child safety standards may face permanent bans and legal action.

10. Contact Information

For concerns regarding child safety or to report violations, please contact:

11. Policy Updates

This policy will be reviewed and updated periodically to address new challenges and comply with evolving laws and regulations. Significant changes will be communicated to all users via email or in-app notifications.