Sharing images across platforms and applications has become common practice, but so has concern about inappropriate content such as adult-themed, racist, or violent images. To address this, we have implemented automatic blocking of inappropriate images on the Botmaker platform. Leveraging Google Cloud Vision technology, we analyze the content of images sent by users in real time and determine whether they contain inappropriate content, helping ensure a safe environment.
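The decision described above can be sketched in code using the likelihood categories that Google Cloud Vision's SafeSearch feature returns. The threshold and the `should_block` helper below are illustrative assumptions, not Botmaker's actual configuration:

```python
from enum import IntEnum

class Likelihood(IntEnum):
    """Mirrors the likelihood scale used by Cloud Vision SafeSearch."""
    UNKNOWN = 0
    VERY_UNLIKELY = 1
    UNLIKELY = 2
    POSSIBLE = 3
    LIKELY = 4
    VERY_LIKELY = 5

# Hypothetical policy: block when any category reaches LIKELY or above.
BLOCK_THRESHOLD = Likelihood.LIKELY

def should_block(adult: Likelihood, violence: Likelihood, racy: Likelihood) -> bool:
    """Return True if the image should be blocked based on SafeSearch scores.

    In a real integration, these scores would come from the
    safe_search_annotation of a Cloud Vision API response.
    """
    return max(adult, violence, racy) >= BLOCK_THRESHOLD

# A clearly safe image passes; a likely adult image is blocked.
print(should_block(Likelihood.VERY_UNLIKELY, Likelihood.UNLIKELY, Likelihood.POSSIBLE))  # False
print(should_block(Likelihood.VERY_LIKELY, Likelihood.UNLIKELY, Likelihood.UNLIKELY))    # True
```

The threshold is a design choice: a stricter deployment might block at `POSSIBLE`, trading more false positives for a safer channel.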
What does the functionality entail?
The functionality automatically blocks inappropriate images sent by users.
How is it configured on the platform?
In Menu > Settings > Security (https://go.botmaker.com/#/configuration/security), enable the functionality by sliding the toggle to the "On" position, or disable it by sliding the toggle to "Off".
Each image received and analyzed incurs a cost, detailed in the Premium Actions section. Once you have enabled the functionality in the settings section, blocked images can be viewed on the chat screen in any conversation.
Note: If you still wish to view an image after receiving the inappropriate-content notification, you can do so by clicking directly on it.
Enable automatic blocking of inappropriate images to offer your customers a safe environment free from offensive content, promoting a positive and risk-free experience.
Remember to visit our Help Center for more information.
Written By: Botmaker Team
Updated: 07/07/2024