Online mental health community platforms have become increasingly important for
fostering peer and professional interaction and providing emotional support. However,
toxic content, including hate speech, cyberbullying, vulgar language, and posts that
encourage self-harm, can compromise user safety on social networking sites. To ensure
a safe environment for users discussing mental health, this thesis focuses on
developing an automated content moderation system for the microblogging platform
SAHARI. Using DeepSeek AI and Natural Language Processing (NLP), the system analyzes
posts and comments in real time to detect and manage harmful content, including hate
speech, cyberbullying, and self-harm. The moderation system promptly detects and
removes offensive material to preserve a positive user experience on the platform.
Testing demonstrated that the system detects violations effectively, substantially
enhancing platform safety. By supporting AI-based content moderation on mental health
platforms, this study improves online safety and fosters positive interactions.