Community Guidelines
Last Updated: 2 February 2026
Effective Date: 2 February 2026
These Community Guidelines ("Guidelines") govern acceptable behavior, content standards, moderation practices, and enforcement actions on Vibeshared ("Platform", "we", "us").
⚠️ Important Notice
These Guidelines form an integral part of the Platform's Terms & Conditions. Violation of these Guidelines may result in enforcement actions, including content removal, account suspension, or permanent termination.
1. Purpose and Philosophy
1.1 Why These Guidelines Exist
The Platform exists to enable: Expression, Interaction, Creativity, and Community building. At the same time, we are committed to: User safety, Legal compliance, Platform integrity, and Responsible expression. These Guidelines define clear boundaries to protect users and the Platform.
1.2 Balance Between Expression and Safety
We aim to balance freedom of expression with protection from harm, abuse, and illegal activity. Not all harmful behavior is illegal, and not all legal behavior is acceptable on the Platform.
1.3 Scope of Application
These Guidelines apply to:
- Posts and comments
- Images, videos, and audio
- Usernames and bios
- Messages (where applicable)
- Profile behavior
- Off-platform behavior that directly and demonstrably impacts Platform safety
2. General Community Standards
All users must adhere to the following baseline standards.
2.1 Respect and Dignity
Users must treat others with respect, avoid harassment or humiliation, and engage in good faith interactions.
We do not tolerate behavior intended to: Degrade, Dehumanize, Intimidate, or Silence others.
2.2 Authenticity
Users must not:
- Impersonate individuals or organizations
- Misrepresent identity, affiliation, or credentials
- Operate fake or deceptive accounts
Authenticity is critical to trust and safety.
2.3 Lawful Use
Users must not use the Platform to:
- Violate any applicable law
- Evade law enforcement
- Promote illegal activity
Illegal content will be removed and may be reported to authorities where required.
3. Prohibited Content Categories
The following content is strictly prohibited.
3.1 Hate Speech and Harassment
Content is prohibited if it includes:
- Attacks based on race, caste, religion, gender, sexual orientation, disability, nationality, or similar protected characteristics
- Slurs, demeaning language, or dehumanizing expressions
- Calls for exclusion, segregation, or discrimination
This includes: Direct attacks, Indirect coded language, Memes or symbols used to harass.
3.2 Threats, Violence, and Harm
Prohibited content includes:
- Threats of physical harm
- Calls for violence
- Celebration or glorification of violence
- Instructions to harm oneself or others
Credible threats may result in immediate account suspension and escalation.
3.3 Sexual Exploitation and Abuse
We strictly prohibit:
- Sexual exploitation
- Non-consensual sexual content
- Sexual violence or coercion
- Pornographic content involving minors (zero tolerance)
Any content involving child sexual exploitation will be: Removed immediately and Reported to appropriate authorities.
3.4 Terrorism and Extremism
Prohibited content includes:
- Promotion of terrorist organizations
- Praise or advocacy of extremist ideology
- Recruitment or fundraising for such groups
Educational or journalistic discussion may be allowed in limited contexts.
3.5 Misinformation Causing Harm
We may remove content that:
- Intentionally spreads false information
- Causes public harm (health, safety, civic processes)
- Is part of coordinated misinformation campaigns
Context, intent, and impact are considered.
4. Abuse, Spam, and Manipulation
4.1 Harassment and Bullying
Harassment includes:
- Repeated unwanted contact
- Targeted insults
- Coordinated attacks
- Doxxing or threats of exposure
Harassment severity and repetition affect enforcement level.
4.2 Spam and Platform Abuse
Prohibited spam behaviors include:
- Mass posting or commenting
- Scam links
- Fake engagement
- Automated or bot activity
- Engagement manipulation
5. Impersonation
Impersonation includes:
- Using another person's name or image deceptively
- Pretending to represent organizations or officials
- Fake verified-style accounts
Impersonation may lead to immediate removal.
6. Intellectual Property
Users must respect intellectual property rights. Do not post content that infringes on copyrights, trademarks, patents, or other proprietary rights without proper authorization or legal basis (such as fair use).
7. Child Safety
We have zero tolerance for content that exploits, endangers, or sexualizes minors. Any such content will be removed immediately and reported to relevant authorities. Users are encouraged to report any suspected child exploitation content.
8. Reporting Content and User Behavior
8.1 Who Can Report
Any User may report Content, Accounts, Messages (where applicable), or Behavior that violates these Guidelines or applicable law.
Reporting is available to: Affected users, Bystanders, and Rights holders (e.g., IP owners).
8.2 What Can Be Reported
Reportable violations include, but are not limited to:
- Hate speech or harassment
- Threats or violence
- Sexual exploitation or abuse
- Child safety violations
- Spam, scams, or fraud
- Impersonation
- Intellectual property infringement
- Terrorism or extremist content
- Privacy violations
8.3 How to Submit a Report
Reports may be submitted via:
- In-platform reporting tools
- Designated reporting forms
- Email or other official communication channels (for legal notices)
Reports should include:
- Identification of the content or account
- Description of the issue
- Supporting context or evidence (if available)
Incomplete reports may delay review.
8.4 Anonymous Reporting
Where supported, reports may be submitted anonymously. However: Anonymous reports may limit follow-up, and false or malicious reports remain prohibited.
8.5 Report Review Process
Upon receiving a report, the Platform may:
- Perform automated checks
- Queue the report for human review
- Take temporary precautionary action (e.g., visibility restriction)
8.6 Review Criteria
Reported content or behavior is assessed based on:
- These Community Guidelines
- Applicable laws
- Context and intent
- Severity and potential harm
- User history and patterns
8.7 Review Timelines
We aim to review reports within a reasonable timeframe. Timelines may vary depending on: Volume of reports, Complexity of the case, Severity and risk level. Emergency cases may be prioritized.
8.8 Outcomes of Reports
After review, we may:
- Take no action
- Remove or restrict content
- Apply warnings or strikes
- Suspend or terminate accounts
- Escalate to authorities where required
8.9 Notification
Where appropriate, we may notify: The reporting User, The affected User. Notifications may be limited to protect privacy, safety, or investigation integrity.
9. Enforcement Actions
9.1 Enforcement Philosophy
Enforcement exists to:
- Protect users from harm
- Maintain trust in the Platform
- Prevent misuse and abuse
- Comply with legal obligations
Enforcement is not punitive by default. It is protective and corrective, escalating only when necessary.
9.2 Enforcement Action Types
Content-Level Actions (applied to individual posts, comments, or media):
- Content removal
- Content visibility restriction
- Warning labels or notices
- Temporary blocking of interactions
- Demonetization of specific content
Account-Level Actions (applied to user accounts):
- Warning notices
- Feature restrictions (posting, commenting, messaging)
- Temporary suspension
- Permanent account termination
- Device, IP, or identity-based restrictions in cases of abuse, evasion, or repeat violations
Emergency Actions (in cases involving):
- Credible threats
- Child safety risks
- Terrorism or extremism
- Court or government orders
We may take immediate action without prior notice.
9.3 Severity Tiers
Tier 1 – Low Severity Violations
Examples: Minor spam, Off-topic posting, Accidental guideline breaches, Non-malicious policy misunderstandings
Typical Enforcement: Educational warning, Content removal, No strike or limited impact
Tier 2 – Medium Severity Violations
Examples: Repeated harassment, Hate-adjacent language, Spam with intent, Impersonation without widespread harm
Typical Enforcement: Content removal, Formal warning, Temporary feature restrictions, Strike added to account
Tier 3 – High Severity Violations
Examples: Hate speech, Threats of violence, Coordinated harassment, Scams or fraud, Non-consensual sexual content
Typical Enforcement: Immediate content removal, Temporary suspension, Multiple strikes, Monetization removal
Tier 4 – Zero-Tolerance Violations
Examples: Child sexual exploitation, Terrorist propaganda, Credible violent threats, Severe criminal activity
Typical Enforcement: Immediate permanent account termination, Reporting to authorities where required, No appeal eligibility in most cases
9.4 Strike System
A "strike" is a formal record of a policy violation. Strikes help identify repeat offenders, escalate enforcement fairly, and protect the Platform community.
Strike Accumulation
Strikes may accumulate based on severity, frequency, and pattern of behavior.
Example escalation:
- 1 strike → Warning / limited restriction
- 2 strikes → Temporary suspension
- 3 strikes → Long suspension or termination
Strike Expiry
Strikes may expire after a defined period, be reduced for good behavior, or remain permanent for severe violations. Strike expiry is not guaranteed.
9.5 Enforcement Matrix
| Violation Type | Severity | Typical Action |
|---|---|---|
| Minor spam | Tier 1 | Warning + removal |
| Harassment (repeated) | Tier 2 | Strike + restriction |
| Hate speech | Tier 3 | Suspension |
| Impersonation | Tier 2–3 | Removal + suspension |
| Scam/fraud | Tier 3 | Suspension / ban |
| Child exploitation | Tier 4 | Permanent ban |
This table is illustrative, not exhaustive.
9.6 Factors Considered in Enforcement
When deciding enforcement actions, we may consider:
- Intent (accidental vs malicious)
- Context and surrounding content
- Targeted vs general behavior
- Prior violation history
- User cooperation
- Potential or actual harm
9.7 No Guarantee of Warnings
We may skip warnings and proceed directly to suspension or termination where: Harm is severe, Risk is imminent, or Law requires immediate action.
9.8 Circumvention and Evasion
Users must not attempt to evade enforcement by:
- Creating new accounts
- Using alternate identities
- Using VPNs or proxies to bypass bans
- Transferring control to others
Circumvention is treated as a separate violation.
10. Appeals Process
10.1 Who Can Appeal
Users may appeal: Content removal, Account suspension, Feature restrictions, Demonetization decisions.
Appeals are generally available to the affected account holder.
10.2 How to Submit an Appeal
Appeals must be submitted through: In-platform appeal tools or Designated support channels.
Appeals should include:
- Reference to the enforcement action
- Explanation or context
- Any relevant supporting information
10.3 Appeal Review
Appeals may be reviewed by: Human moderators, Internal compliance teams, or Automated systems (where appropriate).
Not all appeals are guaranteed human review unless required by law.
10.4 Appeal Outcomes
Appeal decisions may result in:
- Reversal of enforcement
- Modification of enforcement
- Confirmation of enforcement
Decisions are final, subject to statutory rights.
10.5 Reinstatement Eligibility
Temporary Suspensions: Users suspended temporarily may be reinstated after suspension period ends and required actions are completed (if any).
Permanent Terminations: Generally not eligible for reinstatement. Exceptions may apply in rare cases involving clear error, identity theft, or successful appeal based on new evidence.
No Guaranteed Reinstatement: Reinstatement is discretionary and not guaranteed, even after appeal.
10.6 Abuse of Reporting or Appeals
Submitting reports that are knowingly false, misleading, or intended to harass or silence others may result in enforcement action against the reporting User.
Repeated, frivolous, or bad-faith appeals may result in appeal restrictions or additional enforcement actions.
10.7 Grievance Redressal Mechanism
In compliance with the Information Technology Act, 2000 and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the Platform has appointed a Grievance Officer to address user complaints and concerns related to content moderation, enforcement actions, and policy violations.
- Name: Grievance Officer – Vibeshared
- Designation: Grievance Officer
- Email: support@vibeshared.com
- Address: India (Correspondence via email only)
- Response Timeline: Acknowledgment within 24 hours and resolution within 15 days
11. Special Content Categories
Certain types of content require context-sensitive moderation. These categories are not automatically violations but are reviewed with additional care.
11.1 News, Journalism, and Public Interest Content
Content shared for news reporting, documentary purposes, academic research, or public awareness may include references to violence, extremism, hate speech, or criminal activity.
Such content may be allowed where: The intent is informational or critical, There is no promotion or endorsement, and Context is clear and not misleading.
We may apply reduced distribution, warning labels, or context notices.
11.2 Satire, Parody, and Artistic Expression
Satirical or artistic content may reference public figures, social issues, or controversial topics.
However, satire does not exempt content from enforcement where it targets protected groups with hate, encourages violence, causes real-world harm, or is used as a cover for abuse.
Context, audience, and history are considered.
11.3 Educational and Historical Content
Educational material discussing historical events, extremist ideologies, or criminal cases may be permitted where the purpose is clearly educational, there is no glorification or advocacy, and presentation is responsible.
11.4 Political and Civic Content
Political expression is generally allowed, including opinions, criticism of public policy, and civic discussion.
However, prohibited behavior includes: Voter suppression, Election interference, Coordinated political misinformation, Paid political influence without disclosure (where applicable).
12. Off-Platform Behavior and Platform Safety
12.1 When Off-Platform Behavior Matters
We may consider off-platform behavior where it:
- Poses a credible risk to users
- Demonstrates intent to cause harm on the Platform
- Is directly linked to Platform abuse
- Involves severe criminal activity
This authority is exercised sparingly and cautiously.
12.2 Evidence and Verification
Off-platform enforcement may require credible evidence, clear linkage to Platform activity, and internal review.
13. User Education and Prevention
13.1 Preventive Measures
We may use educational prompts, warnings before posting, safety reminders, and policy explanations.
These measures aim to reduce accidental violations and encourage responsible behavior.
13.2 No Guarantee of Prevention
Preventive tools do not guarantee a violation-free experience or immunity from enforcement. Users remain responsible for compliance.
14. Updates to Community Guidelines
14.1 Right to Update
We may update these Guidelines to reflect legal or regulatory changes, new safety risks, platform feature changes, or community feedback.
14.2 Notification of Changes
Material updates may be communicated via platform notices, an updated "Effective Date", or in-app messages.
Continued use of the Platform after updates may constitute acceptance, to the extent permitted by applicable law.
15. Relationship to Other Policies
These Community Guidelines must be read together with: Terms & Conditions, Privacy Policy, and Additional feature-specific policies.
In case of conflict, Terms & Conditions shall prevail.
16. User Acknowledgement
By accessing or using the Platform, you acknowledge that:
- You have read and understood these Community Guidelines
- You agree to comply with them
- You understand that violations may result in enforcement actions
Related Policies and Incorporation by Reference
These Guidelines form an integral part of the Platform's Terms & Conditions and are incorporated therein by reference, together with the Privacy Policy and any additional feature-specific policies. By accessing or using the Platform, Users agree to be bound by all applicable policies.
In the event of any conflict between these Guidelines and the Terms & Conditions, the Terms & Conditions shall prevail unless expressly stated otherwise.
Note: These Guidelines serve as behavioral standards, not exhaustive rules. They do not create contractual rights beyond the Terms, do not guarantee specific enforcement outcomes, and do not limit the Platform's discretion.
If any provision of these Guidelines is held invalid or unenforceable, that provision shall be severed or limited, and the remaining provisions shall remain in full force.
