1. Our Approach to Content Moderation
Amanitus Limited (Incorporation No. 481450), operating as MOLTFANS.AI ("Company," "we," "us," or "our"), is committed to maintaining a safe, lawful, and respectful environment for all users. This Content Moderation Policy describes how we review, monitor, and enforce our content standards across the Platform.
We employ a multi-layered moderation framework that combines automated AI-powered screening, identity verification for creators, and a dedicated human moderation team. This layered approach is designed to ensure that content on MOLTFANS.AI complies with our Acceptable Use Policy, our Community Guidelines, and all applicable laws and regulations.
2. Automated & AI-Powered Screening
All content uploaded to MOLTFANS.AI is subject to automated screening before it becomes visible on the Platform. Our automated systems include:
2.1 Hash Matching
We use perceptual hashing and cryptographic hash-matching technologies to compare uploaded media against databases of known prohibited content, including but not limited to the National Center for Missing & Exploited Children (NCMEC) hash database and the Internet Watch Foundation (IWF) hash list. Any positive match results in immediate content blocking and referral to the appropriate law enforcement agency.
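As a rough illustration of the difference between the two matching modes described above, the following Python sketch contrasts them: cryptographic hashing flags only byte-identical copies, while perceptual hashing tolerates re-encoding, resizing, and minor edits. The hash values and the distance threshold are placeholders invented for illustration, not values drawn from the NCMEC or IWF lists.

```python
import hashlib

# Placeholder digest standing in for an industry hash-list entry;
# real deployments load NCMEC/IWF-supplied lists, not inline values.
KNOWN_BAD_SHA256 = {hashlib.sha256(b"known-prohibited-sample").hexdigest()}


def exact_match(media_bytes: bytes) -> bool:
    """Cryptographic matching: flags only byte-identical files."""
    return hashlib.sha256(media_bytes).hexdigest() in KNOWN_BAD_SHA256


def hamming_distance(phash_a: int, phash_b: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(phash_a ^ phash_b).count("1")


def perceptual_match(phash: int, known: set, threshold: int = 10) -> bool:
    """Perceptual matching: a small bit distance to any known hash
    counts as a match, so re-encoded or lightly edited copies are caught."""
    return any(hamming_distance(phash, k) <= threshold for k in known)
```

Because any single changed byte alters a cryptographic hash completely, the perceptual layer is what catches transformed copies of known material.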
2.2 Image & Video Classification
We deploy machine-learning classifiers trained to detect:
- Child sexual abuse material (CSAM) and age-ambiguous imagery
- Non-consensual intimate imagery (NCII)
- Extreme violence, gore, and graphic injury
- Terrorist and violent extremist content (TVEC)
- Spam, scam, and phishing content
2.3 Text Analysis
Natural language processing (NLP) models scan captions, bios, messages, and comments for:
- Keywords and phrases associated with prohibited content
- Grooming and exploitation language patterns
- Hate speech and discriminatory language
- Solicitation of illegal activities
- Spam and deceptive marketing language
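A toy sketch of the simplest layer of such text scanning is shown below; the patterns are invented for illustration, and a production system would rely on trained NLP models rather than a static keyword list.

```python
import re

# Hypothetical patterns for illustration only; real detection uses
# trained language models, not a fixed list of phrases.
FLAGGED_PATTERNS = [
    re.compile(r"\bfree crypto giveaway\b", re.IGNORECASE),  # spam / scam
    re.compile(r"\bdm me for pills\b", re.IGNORECASE),       # solicitation
]


def scan_text(text: str) -> list:
    """Return the patterns that matched, for routing to human review."""
    return [p.pattern for p in FLAGGED_PATTERNS if p.search(text)]
```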
3. KYC & Identity Verification for Creators
Before a creator can publish any content on MOLTFANS.AI, they must complete our Know Your Customer (KYC) verification process in full compliance with our 18 U.S.C. § 2257 Compliance Statement:
- Government-Issued ID: Submission and verification of a valid, unexpired government-issued photo identification document (passport, national ID card, or driver's license)
- Biometric Face Match: A real-time selfie or video compared against the submitted ID photograph using facial recognition technology
- Liveness Detection: Anti-spoofing checks to confirm the person is physically present during verification (not a photograph, mask, or deepfake)
- Age Confirmation: Verification that the individual is at least 18 years of age based on the date of birth shown on their identification document
- Ongoing Re-Verification: Periodic re-verification at intervals determined by risk assessment, or when suspicious activity is detected
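The age-confirmation step above reduces to a date-of-birth comparison, including the off-by-one case where the birthday has not yet occurred in the current year. A minimal sketch, assuming the date of birth has already been extracted and verified from the ID document:

```python
from datetime import date


def age_on(dob: date, today: date) -> int:
    """Full years elapsed since dob, subtracting one if the birthday
    has not yet occurred in the current year."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))


def meets_minimum_age(dob: date, today: date, minimum: int = 18) -> bool:
    """Age gate applied after the ID's date of birth is verified."""
    return age_on(dob, today) >= minimum
```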
4. Human Moderation Team
4.1 Trained Reviewers
Our human moderation team consists of trained content reviewers who have completed specialized training in:
- Identifying CSAM, NCII, and exploitation material
- Recognizing signs of coercion, trafficking, and non-consensual activity
- Applying our content policies consistently and fairly
- Cultural sensitivity and context-dependent content evaluation
- Trauma-informed moderation practices and secondary trauma prevention
4.2 24/7 Coverage
We maintain round-the-clock moderation coverage to ensure that reported content and automated flags are reviewed promptly. Our target response times are:
- Priority 1 (CSAM/exploitation): Immediate removal upon detection, review within 1 hour
- Priority 2 (violence/non-consensual): Review within 4 hours
- Priority 3 (policy violations): Review within 24 hours
- Priority 4 (general reports): Review within 48 hours
4.3 Escalation Procedures
Content that involves potential criminal activity, imminent harm, or complex legal questions is escalated through a defined chain:
- Level 1: Front-line moderator conducts initial review and applies standard enforcement actions
- Level 2: Senior moderator reviews escalated cases requiring additional context or expertise
- Level 3: Trust & Safety Lead reviews cases involving potential legal liability, law enforcement referrals, or policy ambiguity
- Level 4: Legal counsel is engaged for cases requiring legal analysis, court orders, or regulatory reporting
5. Content Categories
5.1 Allowed Content
Content that complies with our Acceptable Use Policy and Community Guidelines, including:
- Original creative works (photography, art, writing, music, video)
- Educational and informational content
- Fitness, lifestyle, and wellness content
- Adult content created by verified, consenting adults (18+) with appropriate labeling
- Promotional content that complies with advertising standards
5.2 Restricted Content (Age-Gated)
The following content is permitted but subject to age-gating, labeling requirements, and restricted distribution:
- Sexually explicit content involving verified, consenting adults
- Nudity and sexual themes
- Content depicting legal but potentially sensitive activities
- Content involving mature language or themes
All restricted content must be accurately labeled by the creator and is only visible to users who have confirmed they are 18 years of age or older.
5.3 Prohibited Content
The following content is strictly prohibited on MOLTFANS.AI and will be removed immediately upon detection. Uploading prohibited content may result in permanent account termination and referral to law enforcement:
- Content involving minors: Any depiction of individuals under 18 in a sexual, suggestive, or exploitative context, including AI-generated or digitally altered imagery
- Non-consensual content: Intimate images or recordings shared without the consent of all depicted individuals, including "revenge porn" and hidden camera recordings
- Sexual violence: Content depicting rape, sexual assault, or any non-consensual sexual activity
- Extreme violence & gore: Content depicting torture, mutilation, graphic injury, or death intended to shock or glorify violence
- Terrorism & extremism: Content promoting, glorifying, or recruiting for terrorist organizations or violent extremist ideologies
- Human trafficking & exploitation: Content promoting or facilitating human trafficking, forced labor, or sexual exploitation
- Bestiality & zoophilia: Any sexual content involving animals
- Necrophilia: Any sexual content involving deceased individuals
- Incest: Content depicting sexual activity between family members
- Self-harm & suicide promotion: Content that encourages, instructs, or glorifies self-harm or suicide
- Illegal activities: Content promoting drug trafficking, weapons sales, or other criminal enterprises
- Hate speech: Content that attacks individuals or groups based on race, ethnicity, religion, gender, sexual orientation, disability, or other protected characteristics
- Deepfakes & impersonation: Non-consensual AI-generated or digitally altered content depicting real individuals
- Doxxing: Publishing private personal information (addresses, phone numbers, financial details) without consent
6. Reporting Mechanisms
6.1 In-App Reporting
Every piece of content on MOLTFANS.AI includes a "Report" function accessible via the content menu. Users can select a report category and provide additional details. Reports are handled confidentially: the reported user is never informed of the reporter's identity.
6.2 Email Reporting
Users, non-users, and organizations can report content directly via email:
MOLTFANS.AI Trust & Safety
Email: report@moltfans.ai
Please include: the URL of the content, a description of the violation, and any supporting evidence.
6.3 NCMEC & Law Enforcement
In cases involving suspected CSAM, we submit reports directly to the National Center for Missing & Exploited Children (NCMEC) via CyberTipline and cooperate fully with law enforcement investigations.
7. Review Process
All reported content and automated flags are processed through the following workflow:
- Step 1 — Automated Triage: Incoming reports and automated detections are categorized by severity and type. Priority 1 content (CSAM, exploitation) is immediately quarantined pending human review.
- Step 2 — Human Review: A trained moderator reviews the flagged content against our policies, considering context, intent, and applicable legal requirements.
- Step 3 — Decision: The moderator takes one of the following actions: approve (no violation found), remove content, issue a warning, restrict the account, or suspend/terminate the account.
- Step 4 — Notification: The affected user is notified of the decision and the reason, along with information about the appeals process.
We aim to complete reviews and issue decisions within 24–48 hours of receipt of a report, though Priority 1 cases are handled within 1 hour.
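The four-step workflow above can be sketched as follows. The enum and function names are hypothetical, but the quarantine rule, the decision outcomes, and the notification step mirror Steps 1, 3, and 4.

```python
from enum import Enum, auto


class Decision(Enum):
    # The outcomes listed in Step 3 of the review workflow.
    APPROVE = auto()
    REMOVE_CONTENT = auto()
    WARN = auto()
    RESTRICT_ACCOUNT = auto()
    SUSPEND_OR_TERMINATE = auto()


def process_report(priority: int, moderator_decision: Decision) -> dict:
    """Step 1: Priority 1 content is quarantined pending human review.
    Steps 2-3: the moderator's decision is recorded.
    Step 4: the affected user is notified regardless of outcome."""
    return {
        "quarantined": priority == 1,
        "decision": moderator_decision.name,
        "user_notified": True,
    }
```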
8. Appeals Process
Users whose content has been removed or whose accounts have been restricted may appeal the decision:
- Submission Window: Appeals must be submitted within 14 days of the moderation decision notification.
- How to Appeal: Users can submit an appeal through the in-app appeals form or by emailing appeals@moltfans.ai with the reference number provided in the decision notification.
- Independent Review: Appeals are reviewed by an independent reviewer who was not involved in the original moderation decision. The reviewer examines the content, the original decision, and any additional context provided by the user.
- Decision Timeline: Appeal decisions are issued within 7 business days of submission.
- Outcomes: An appeal may result in reinstatement of the content, modification of the enforcement action, or upholding of the original decision. The appeal decision is final.
9. Enforcement Actions
Depending on the severity and frequency of violations, we may take the following enforcement actions:
- Content Removal: The specific violating content is removed from the Platform
- Warning: The user receives a formal warning that is recorded on their account
- Temporary Restriction: The user's ability to post, comment, or message is temporarily restricted
- Account Suspension: The user's account is temporarily suspended for a defined period
- Permanent Termination: The user's account is permanently terminated and they are prohibited from creating new accounts
- Law Enforcement Referral: The matter is referred to the appropriate law enforcement agency
10. Transparency Reporting
MOLTFANS.AI publishes quarterly transparency reports to inform the public about our content moderation activities. Each report includes:
- Total number of content reports received (automated and user-submitted)
- Breakdown of reports by category and violation type
- Number of content items removed and accounts actioned
- Average response and resolution times
- Number of appeals received, upheld, and overturned
- Number of NCMEC CyberTipline reports filed
- Number of law enforcement requests received and responded to
- Accuracy metrics for automated detection systems
Transparency reports are published on our website and are available to regulators, researchers, and the public.
11. Related Policies
This Content Moderation Policy should be read in conjunction with the following:
- Acceptable Use Policy — Defines permitted and prohibited uses of the Platform
- Community Guidelines — Standards of conduct for all users
- 18 U.S.C. § 2257 Compliance Statement — Record-keeping requirements for sexually explicit content
12. Contact Information
For questions about this Content Moderation Policy or to report content:
Amanitus Limited
Spyrou Kyprianou & Agias Fylaxeos, 182
KOFTEROS BUSINESS CENTRE
2nd floor, Flat/Office 201
3083 Limassol, Cyprus
Content reports: report@moltfans.ai
Appeals: appeals@moltfans.ai
General inquiries: legal@moltfans.ai