Report Abuse
A comprehensive abuse reporting system with categorization, validation, and privacy-focused design
Overview
The Report Abuse block provides platforms with a professional content moderation interface. Essential for platforms like Mintlify and Hashnode that host user-generated content, this component lets visitors report inappropriate content while maintaining user privacy and discouraging false reports. It features a clean dialog interface with comprehensive form validation and a clear submission flow.
Installation
```bash
npx @vercel/platforms@latest add report-abuse
```

Features
- Categorized reporting: Customizable abuse categories for accurate classification
- Optional URL field: Users can specify the exact content location
- Detailed descriptions: Textarea for comprehensive issue reporting
- Email validation: Collects reporter contact for follow-up if needed
- Privacy notice: Built-in privacy disclosure for transparency
- Form validation: Required fields and input validation
- Clean dialog UI: Non-intrusive modal interface
- Responsive design: Works seamlessly across all devices
- Visual indicators: Icons and colors enhance user understanding
Usage
Basic Implementation
```tsx
import { ReportAbuse } from '@/components/blocks/report-abuse'

export default function ContentPage() {
  const abuseTypes = [
    { value: "spam", label: "Spam or Advertising" },
    { value: "inappropriate", label: "Inappropriate Content" },
    { value: "copyright", label: "Copyright Violation" },
    { value: "misinformation", label: "False Information" },
    { value: "harassment", label: "Harassment or Hate Speech" },
    { value: "other", label: "Other" }
  ]

  return (
    <div className="content-footer">
      <ReportAbuse types={abuseTypes} />
    </div>
  )
}
```

With Server Action
```tsx
import { ReportAbuse } from '@/components/blocks/report-abuse'
import { reportContent } from '@/actions/moderation'

export default function BlogPost({ postId }) {
  const handleReport = async (data) => {
    await reportContent({
      ...data,
      postId,
      reportedAt: new Date()
    })
  }

  const types = [
    { value: "spam", label: "Spam" },
    { value: "inappropriate", label: "Inappropriate" },
    { value: "copyright", label: "Copyright" }
  ]

  return (
    <>
      <article>...</article>
      <ReportAbuse
        types={types}
        onSubmit={handleReport}
      />
    </>
  )
}
```

Props
The examples on this page use the following props:

| Prop | Type | Description |
| --- | --- | --- |
| `types` | `Array<{ value: string; label: string }>` | Abuse categories shown in the category dropdown |
| `onSubmit` | `(data) => Promise<void>` | Handler called with the submitted report data |
| `privacyNotice` | `ReactNode` | Custom privacy notice rendered inside the dialog |
Form Fields
Category Selection
- Required: Yes
- Type: Select dropdown
- Purpose: Classifies the type of abuse for proper routing
Content URL
- Required: No
- Type: URL input
- Purpose: Links directly to problematic content
- Validation: Must be valid URL format
Description
- Required: Yes
- Type: Textarea
- Purpose: Detailed explanation of the issue
- Min length: Recommended 20+ characters
Reporter Email
- Required: Yes
- Type: Email input
- Purpose: Enables follow-up communication
- Validation: Standard email format
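These constraints can also be enforced on the server before a report is processed. Below is a minimal sketch using zod; the schema shape, field names, and the 20-character minimum are assumptions drawn from the field descriptions above, not the component's built-in validation:

```ts
import { z } from 'zod'

// Hypothetical server-side schema mirroring the form fields above.
export const reportSchema = z.object({
  category: z.string().min(1, 'Please select a category'),
  contentUrl: z.string().url('Must be a valid URL').optional(),
  description: z.string().min(20, 'Please describe the issue in at least 20 characters'),
  email: z.string().email('Must be a valid email address')
})

export type ReportData = z.infer<typeof reportSchema>
```

Parsing incoming data with `reportSchema.parse()` before creating a moderation ticket keeps the client-side and server-side rules in sync.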
Advanced Examples
Platform-Specific Categories
```ts
// Documentation platform
const docAbuseTypes = [
  { value: "outdated", label: "Outdated Information" },
  { value: "broken-code", label: "Non-functional Code Examples" },
  { value: "security-issue", label: "Security Vulnerability" },
  { value: "plagiarism", label: "Plagiarized Content" },
  { value: "spam", label: "Spam or Promotional" }
]

// Blog platform
const blogAbuseTypes = [
  { value: "hate-speech", label: "Hate Speech" },
  { value: "misinformation", label: "Misinformation" },
  { value: "adult-content", label: "Adult Content" },
  { value: "harassment", label: "Harassment" },
  { value: "copyright", label: "Copyright Infringement" }
]

// E-commerce platform
const shopAbuseTypes = [
  { value: "counterfeit", label: "Counterfeit Product" },
  { value: "misleading", label: "Misleading Description" },
  { value: "prohibited", label: "Prohibited Item" },
  { value: "scam", label: "Potential Scam" }
]
```

Integration with Moderation System
```tsx
import { ReportAbuse } from '@/components/blocks/report-abuse'
import { createModerationTicket } from '@/lib/moderation'
import { sendAlertEmail } from '@/lib/email'

export function ModeratedContent({ content }) {
  const handleAbuseReport = async (report) => {
    // Create moderation ticket
    const ticket = await createModerationTicket({
      contentId: content.id,
      reportType: report.category,
      description: report.description,
      reporterEmail: report.email,
      contentUrl: report.contentUrl || content.url,
      status: 'pending'
    })

    // Send alert for high-priority categories
    const highPriority = ['hate-speech', 'illegal', 'csam']
    if (highPriority.includes(report.category)) {
      await sendAlertEmail({
        to: 'moderation@platform.com',
        subject: `Urgent: ${report.category} reported`,
        ticketId: ticket.id
      })
    }

    return { success: true, ticketId: ticket.id }
  }

  const types = [
    { value: "spam", label: "Spam" },
    { value: "hate-speech", label: "Hate Speech" },
    { value: "illegal", label: "Illegal Content" }
  ]

  return (
    <div>
      {content.body}
      <ReportAbuse
        types={types}
        onSubmit={handleAbuseReport}
      />
    </div>
  )
}
```

With Rate Limiting
```tsx
import { ReportAbuse } from '@/components/blocks/report-abuse'
import { rateLimit } from '@/lib/rate-limit'

const limiter = rateLimit({
  interval: 60 * 1000, // 1 minute
  uniqueTokenPerInterval: 500,
  maxReports: 3
})

const types = [
  { value: "spam", label: "Spam" },
  { value: "inappropriate", label: "Inappropriate" },
  { value: "other", label: "Other" }
]

export function RateLimitedReporting() {
  const handleSubmit = async (data) => {
    try {
      await limiter.check(data.email, 3)
      // Process report
    } catch {
      throw new Error('Too many reports. Please try again later.')
    }
  }

  return <ReportAbuse types={types} onSubmit={handleSubmit} />
}
```

Customization
Styling the Dialog
```css
/* Custom dialog styles */
[data-report-abuse-dialog] {
  @apply bg-background/95 backdrop-blur-md;
}

/* Custom button styling */
[data-report-abuse-trigger] {
  @apply bg-red-600 hover:bg-red-700;
}
```

Custom Privacy Notice
```tsx
const CustomReportAbuse = ({ types }) => {
  return (
    <ReportAbuse
      types={types}
      privacyNotice={
        <div className="text-sm text-muted-foreground">
          <p>Your report is confidential and will be reviewed within 24 hours.</p>
          <a href="/privacy" className="underline">
            Learn more about our moderation process
          </a>
        </div>
      }
    />
  )
}
```

Security Considerations
- Rate limiting: Implement rate limits to prevent spam reports
- Authentication: Consider requiring login for reporting
- IP logging: Log IP addresses for abuse prevention
- Validation: Sanitize all inputs before processing (see the sketch after this list)
- CAPTCHA: Add CAPTCHA for anonymous reporting
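A minimal sketch of a server action that applies several of these checks before accepting a report. It assumes the `rateLimit` helper from `@/lib/rate-limit` and `createModerationTicket` from `@/lib/moderation` used in the examples above, plus the `reportSchema` sketch from the Form Fields section; the `submitReport` name, the `@/lib/validation` path, and the `x-forwarded-for` lookup are illustrative:

```ts
'use server'

import { headers } from 'next/headers'
import { rateLimit } from '@/lib/rate-limit'
import { createModerationTicket } from '@/lib/moderation'
import { reportSchema } from '@/lib/validation' // hypothetical home for the zod sketch above

const limiter = rateLimit({
  interval: 60 * 1000,
  uniqueTokenPerInterval: 500,
  maxReports: 3
})

export async function submitReport(input: unknown) {
  // Validate and normalize the payload before touching the database.
  const report = reportSchema.parse(input)

  // Log the caller's IP and rate limit by it, so anonymous spam is also throttled.
  const ip = (await headers()).get('x-forwarded-for') ?? 'unknown'
  await limiter.check(ip, 3)

  const ticket = await createModerationTicket({
    reportType: report.category,
    description: report.description,
    reporterEmail: report.email,
    contentUrl: report.contentUrl,
    reporterIp: ip, // logged for abuse prevention
    status: 'pending'
  })

  return { success: true, ticketId: ticket.id }
}
```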
Best Practices
- Clear categories: Use specific, non-overlapping categories
- Quick response: Acknowledge reports immediately
- Follow-up: Send confirmation emails when appropriate (see the sketch after this list)
- Transparency: Publish moderation guidelines
- Appeals process: Provide a way to contest false reports
- Analytics: Track report patterns to identify issues
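For the acknowledgment and follow-up items above, a small sketch; the `sendEmail` helper and its `(to, message)` signature are assumptions borrowed from the workflow example below:

```ts
import { sendEmail } from '@/lib/email' // assumed helper; adjust to your email utility

// Acknowledge a report as soon as its ticket is created,
// so reporters know their submission was received.
export async function acknowledgeReport(reporterEmail, ticketId) {
  await sendEmail(
    reporterEmail,
    `Thanks for your report (ticket ${ticketId}). Our moderation team will review it and follow up if needed.`
  )
}
```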
Moderation Workflow
```ts
// Example moderation workflow
const ModerationWorkflow = {
  // 1. Receive report
  receiveReport: async (report) => {
    const ticket = await createTicket(report)
    await notifyModerators(ticket)
    return ticket.id
  },

  // 2. Review content
  reviewContent: async (ticketId) => {
    const ticket = await getTicket(ticketId)
    const content = await getContent(ticket.contentId)
    return { ticket, content }
  },

  // 3. Take action
  takeAction: async (ticketId, action) => {
    switch (action) {
      case 'remove':
        await removeContent(ticketId)
        break
      case 'warning':
        await sendWarning(ticketId)
        break
      case 'dismiss':
        await dismissReport(ticketId)
        break
    }
  },

  // 4. Notify reporter
  notifyReporter: async (ticketId, outcome) => {
    const ticket = await getTicket(ticketId)
    await sendEmail(ticket.reporterEmail, outcome)
  }
}
```
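Wiring the steps together for a single report might look like this; the `'content-removed'` outcome string is illustrative:

```ts
// End-to-end handling of one report using the workflow above.
const ticketId = await ModerationWorkflow.receiveReport(report)
const { ticket, content } = await ModerationWorkflow.reviewContent(ticketId)
await ModerationWorkflow.takeAction(ticketId, 'remove')
await ModerationWorkflow.notifyReporter(ticketId, 'content-removed')
```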