
Filtering Unwanted Messages in Online Social Networking User Walls
ABSTRACT:
With the rapid growth of Online Social Networking (OSN) platforms, managing and filtering unwanted messages has become a significant challenge. Many users exploit these platforms to spread inappropriate, offensive, or illegal content, leading to a negative user experience and potential security risks. To address this issue, we have developed a web-based application that effectively filters unwanted messages in OSN user walls. This system ensures a safer and more controlled online environment by preventing the posting of offensive content.
The necessity for this system arises from the increasing misuse of social networking platforms, where harmful messages can impact individuals and society. Existing platforms implement basic content moderation, but many lack a structured mechanism for real-time message filtering before posting. The proposed system fills this gap by allowing administrators to define and regulate unwanted messages based on three categories: Bad Words, Illegal Words, and Restricted Words. By implementing a proactive filtering mechanism, our system prevents the dissemination of harmful content while also ensuring that repeat offenders are restricted from accessing the platform.
The system is developed using Java as the coding language, with JSP, CSS, and JavaScript for the frontend and MySQL as the database. It consists of two primary entities: Admin and Users. The Admin is responsible for managing the list of unwanted messages by adding, updating, and categorizing them. Additionally, the admin can monitor user activity, view blocked users, and analyze message trends using a bar chart that compares the total number of messages against unwanted messages posted.
On the other hand, Users can register, log in, find and add friends, accept friend requests, and post messages on the public wall. However, if a user attempts to post a message containing words flagged as unwanted by the admin, the system automatically blocks the message and restricts the user from further access. This ensures that offensive content is not displayed publicly, maintaining the integrity of the platform.
Unlike machine learning-based content moderation systems, this project employs a rule-based filtering mechanism that directly compares user input with the predefined list of unwanted words stored in the database. This approach ensures quick and efficient detection of inappropriate content without requiring extensive training datasets or complex models. By integrating these features, our system provides an effective solution for moderating online social networking environments, enhancing user safety, and promoting responsible content sharing.
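The rule-based comparison described above can be sketched in a few lines. This is a minimal illustration only: the class and method names are hypothetical, and it assumes the unwanted-word list has already been loaded from the MySQL table into memory.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

// Minimal sketch of the rule-based check: the message is split into words
// and each word is compared against the unwanted-word list. In the real
// system the list would be loaded from the database; names here are
// illustrative, not the project's actual identifiers.
public class MessageFilter {

    private final List<String> unwantedWords;

    public MessageFilter(List<String> unwantedWords) {
        this.unwantedWords = unwantedWords;
    }

    /** Returns true if the message contains any unwanted word. */
    public boolean containsUnwantedWord(String message) {
        for (String word : message.toLowerCase(Locale.ROOT).split("\\W+")) {
            if (unwantedWords.contains(word)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        MessageFilter filter =
                new MessageFilter(Arrays.asList("badword", "illegalword"));
        System.out.println(filter.containsUnwantedWord("This has a badword here")); // true
        System.out.println(filter.containsUnwantedWord("A perfectly clean post"));  // false
    }
}
```

Because the check runs before the post is stored, a flagged message never reaches the public wall.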
EXISTING SYSTEM:
- Existing Online Social Networking (OSN) systems provided a platform for users to connect, communicate, and share content with others. These systems allowed users to register, create profiles, add friends, send friend requests, and interact through public or private messaging features. Users could post messages, images, and other content on their personal walls or group discussions, enabling seamless social interactions.
- Content moderation in the existing OSN platforms was typically managed through basic filtering mechanisms. Some systems implemented predefined keyword-based filtering to prevent the posting of specific words or phrases. Others provided manual moderation, where administrators or community moderators reviewed reported content and took appropriate actions such as content removal or user restrictions. Additionally, many platforms included privacy settings that allowed users to control who could view or comment on their posts.
- In some cases, OSN platforms integrated basic rule-based filtering techniques that relied on word matching or user-reported content flagging. These systems ensured that users had a structured way to engage with others while maintaining a certain level of content moderation. However, the moderation process often depended on post-publication detection, where inappropriate content was removed after being posted rather than being prevented in real-time.
- Overall, the existing OSN systems were designed to facilitate social interactions, content sharing, and user engagement, incorporating fundamental content control mechanisms to maintain a secure and interactive environment.
DISADVANTAGES OF EXISTING SYSTEM:
- Despite the fundamental content moderation mechanisms in the existing Online Social Networking (OSN) systems, several limitations were observed in effectively preventing unwanted messages and maintaining a safe online environment.
- Lack of Real-time Filtering – Many existing systems relied on post-publication moderation, meaning that inappropriate content was detected and removed only after it had already been posted. This delay allowed offensive messages to be visible for some time before action was taken.
- Dependency on Manual Moderation – Some platforms required administrators or community moderators to review and remove inappropriate content manually. This approach was time-consuming, inefficient, and often impractical for handling large volumes of user-generated content.
- Limited Keyword-based Filtering – The earlier systems primarily relied on basic word-matching techniques, which were not always effective. Users could bypass filters by slightly modifying words, using symbols, or employing alternate spellings, making it challenging to accurately detect unwanted messages.
- Lack of User Restrictions – Even if a user posted offensive content, many systems only removed the message without imposing strict penalties on the user. This allowed repeat offenders to continue posting inappropriate content without significant consequences.
- No Categorization of Unwanted Messages – Traditional filtering mechanisms did not distinguish between different types of offensive content, such as bad words, illegal words, or restricted words. This lack of classification reduced the effectiveness of filtering and enforcement actions.
- Limited Insights and Monitoring – The earlier systems lacked comprehensive analytics and reporting tools to track trends in unwanted messages. There was no way for administrators to analyze how frequently inappropriate content was being posted or identify patterns of misuse effectively.
- Due to these limitations, there was a growing need for a more robust system that could prevent the posting of unwanted messages in real time, categorize offensive content effectively, and impose stricter actions on users violating platform guidelines.
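The keyword-bypass weakness noted above can be demonstrated concretely: exact word matching misses trivially obfuscated spellings, while even a small normalization step (undoing common symbol substitutions) catches some of them. The names and the substitution map below are illustrative only, not part of any existing platform.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

// Demonstrates the evasion problem: "b@dword" slips past an exact match,
// but a simple normalization pass that maps common symbol substitutions
// back to letters detects it. The substitution map is a hypothetical example.
public class NaiveMatchDemo {

    static final List<String> BLOCKED = Arrays.asList("badword");

    static boolean exactMatch(String word) {
        return BLOCKED.contains(word.toLowerCase(Locale.ROOT));
    }

    static boolean normalizedMatch(String word) {
        String normalized = word.toLowerCase(Locale.ROOT)
                .replace('0', 'o').replace('@', 'a')
                .replace('1', 'l').replace('$', 's');
        return BLOCKED.contains(normalized);
    }

    public static void main(String[] args) {
        System.out.println(exactMatch("b@dword"));      // false: filter bypassed
        System.out.println(normalizedMatch("b@dword")); // true: substitution undone
    }
}
```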
PROPOSED SYSTEM:
- The proposed system is designed to provide an advanced approach to filtering unwanted messages in Online Social Networking (OSN) user walls. It introduces a rule-based filtering mechanism that prevents the posting of inappropriate content in real time while maintaining a structured and secure online environment. The system ensures that offensive, illegal, or restricted messages are identified and blocked before they appear on the public wall.
- The system is developed as a web-based application using Java as the programming language, with JSP, CSS, and JavaScript for the frontend and MySQL as the database. It consists of two primary entities: Admin and Users.
- Admin Role: The admin has complete control over managing unwanted messages. The admin can categorize and add words under three predefined categories:
- Bad Words – Includes offensive or inappropriate language.
- Illegal Words – Covers terms related to prohibited activities.
- Restricted Words – Encompasses words that are not allowed based on platform policies.
- Additionally, the admin can update unwanted words, view the list of registered users, track blocked users, and analyze system activity through a bar chart representation comparing the total number of messages against unwanted messages.
- User Role: Users can register, log in, find and add friends, send and accept friend requests, and share posts on the public wall. The system incorporates standard social networking features, ensuring a user-friendly experience. However, if a user attempts to post a message containing unwanted words, the system immediately detects and blocks the post from being published. Furthermore, the system restricts the user’s account, preventing further access to maintain platform integrity.
- Unlike traditional content moderation techniques, the proposed system does not rely on machine learning models but instead uses a predefined rule-based filtering approach. This ensures a straightforward and efficient detection process without requiring extensive datasets or training models. By implementing real-time content filtering and user restriction mechanisms, the system enhances the security and reliability of online social networking interactions.
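The three-category lookup the admin maintains could be sketched as follows. In the actual system the word-to-category mapping lives in the MySQL database; here it is held in an in-memory map purely for illustration, and all identifiers are hypothetical.

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Sketch of the categorized check: each unwanted word is stored with one of
// the three admin-defined categories, and a post is classified by the first
// flagged word it contains. Names and storage are illustrative only.
public class CategorizedFilter {

    enum Category { BAD, ILLEGAL, RESTRICTED }

    private final Map<String, Category> wordCategories = new HashMap<>();

    void addWord(String word, Category category) {
        wordCategories.put(word.toLowerCase(Locale.ROOT), category);
    }

    /** Returns the category of the first flagged word, or null if clean. */
    Category classify(String message) {
        for (String word : message.toLowerCase(Locale.ROOT).split("\\W+")) {
            Category c = wordCategories.get(word);
            if (c != null) {
                return c;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        CategorizedFilter filter = new CategorizedFilter();
        filter.addWord("scam", Category.ILLEGAL);
        System.out.println(filter.classify("this looks like a scam")); // ILLEGAL
        System.out.println(filter.classify("hello friends"));          // null
    }
}
```

Returning the matched category (rather than a plain boolean) is what lets the system apply category-specific enforcement, such as blocking the user's account.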
ADVANTAGES OF PROPOSED SYSTEM:
- The proposed system offers several key advantages that enhance content moderation and security in Online Social Networking (OSN) platforms. By implementing a real-time rule-based filtering mechanism, the system ensures a safer and more controlled environment for users.
- Real-time Filtering of Unwanted Messages – The system immediately detects and blocks posts containing offensive, illegal, or restricted words before they are published on the public wall, preventing the spread of inappropriate content.
- Automated User Restriction – If a user attempts to post unwanted content, the system not only blocks the message but also restricts the user from further accessing the platform, ensuring strict enforcement of content policies.
- Categorization of Unwanted Words – The system allows the admin to categorize unwanted words into Bad Words, Illegal Words, and Restricted Words, providing a structured and efficient approach to content moderation.
- Admin Control and Monitoring – The admin has full control over managing unwanted words, updating them as needed, and monitoring user activities. Additionally, the system provides a bar chart representation to analyze trends in unwanted messages, helping administrators make informed decisions.
- User-friendly Social Networking Features – The system includes essential social networking functionalities such as user registration, friend requests, and post sharing, ensuring a seamless user experience while maintaining a secure platform.
- Efficient and Lightweight Approach – Unlike machine learning-based moderation systems that require extensive training datasets and computational resources, this system uses a simple yet effective rule-based filtering mechanism, making it faster and more efficient.
- Enhanced Security and Platform Integrity – By proactively blocking offensive messages and restricting violators, the system fosters a safer online community, reducing the risks associated with inappropriate content.
- Scalability and Flexibility – The system allows admins to easily add, update, or modify the list of unwanted words, making it adaptable to evolving content moderation needs.
- By integrating these advantages, the proposed system ensures a secure, structured, and effective approach to filtering unwanted messages in online social networking environments.
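The aggregation behind the admin's bar chart amounts to two counts: total messages and messages flagged as unwanted. In the project these counts would come from MySQL queries; the in-memory `Post` record below is a hypothetical stand-in used only to illustrate the computation.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the bar-chart data: total messages versus flagged messages.
// The Post record and its fields are illustrative; the real system would
// derive both counts from the database.
public class MessageStats {

    record Post(String text, boolean unwanted) {}

    static int totalMessages(List<Post> posts) {
        return posts.size();
    }

    static long unwantedMessages(List<Post> posts) {
        return posts.stream().filter(Post::unwanted).count();
    }

    public static void main(String[] args) {
        List<Post> posts = Arrays.asList(
                new Post("hello", false),
                new Post("blocked text", true),
                new Post("good morning", false));
        System.out.println("total = " + totalMessages(posts));       // total = 3
        System.out.println("unwanted = " + unwantedMessages(posts)); // unwanted = 1
    }
}
```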
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
- System : Intel Core i3 processor.
- Hard Disk : 500 GB.
- Monitor : 15" LED.
- Input Devices : Keyboard, Mouse.
- RAM : 4 GB.
SOFTWARE REQUIREMENTS:
- Operating system : Windows 10/11.
- Coding Language : Java.
- Frontend : JSP, CSS, JavaScript.
- JDK Version : JDK 23.0.1.
- IDE Tool : Apache Netbeans IDE 24.
- Tomcat Server Version : Apache Tomcat 9.0.84.
- Database : MySQL.