Amazon Web Services AWS Certified AI Practitioner AIF-C01 Question # 23 Topic 3 Discussion
Question #: 23
Topic #: 3
A company wants to identify harmful language in the comments section of social media posts by using an ML model. The company will not use labeled data to train the model. Which strategy should the company use to identify harmful language?
A.
Use Amazon Rekognition moderation.
B.
Use Amazon Comprehend toxicity detection.
C.
Use Amazon SageMaker AI built-in algorithms to train the model.
D.
Use Amazon Polly.
Amazon Comprehend toxicity detection is a managed NLP capability that analyzes text for harmful or toxic language using pre-trained models, so no labeled data or custom training is required.
B is correct: Comprehend's toxicity detection API is designed for exactly this use case, works out of the box, and requires no data labeling or model training.
A (Rekognition) is for image and video content moderation.
C would require labeled data for training.
D (Polly) is for text-to-speech, not content moderation.
“Amazon Comprehend can detect toxicity in text with pre-trained models, requiring no labeled training data.”
(Reference: Amazon Comprehend Toxicity Detection, AWS AI Practitioner Official Guide)
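As a quick illustration of the correct answer, the sketch below shows the shape of a `detect_toxic_content` call via boto3 and a small helper that flags segments above a toxicity threshold. The live call (commented out) needs AWS credentials and a supported region; the `sample_response` dict and its scores are invented for illustration, not real service output.

```python
# Sketch of using Amazon Comprehend toxicity detection (boto3).
# The real detect_toxic_content call needs AWS credentials, so it is shown
# commented out; the helper below only parses a response of that shape.

def flag_toxic_segments(response: dict, threshold: float = 0.5) -> list[int]:
    """Return the indexes of text segments whose overall Toxicity score
    meets or exceeds the threshold."""
    return [
        i for i, result in enumerate(response.get("ResultList", []))
        if result.get("Toxicity", 0.0) >= threshold
    ]

if __name__ == "__main__":
    # Live call (requires credentials; comments is a list of strings):
    # import boto3
    # comprehend = boto3.client("comprehend", region_name="us-east-1")
    # response = comprehend.detect_toxic_content(
    #     TextSegments=[{"Text": c} for c in comments],
    #     LanguageCode="en",
    # )

    # Illustrative response in the DetectToxicContent shape (scores invented):
    sample_response = {
        "ResultList": [
            {"Labels": [{"Name": "INSULT", "Score": 0.92}], "Toxicity": 0.88},
            {"Labels": [{"Name": "INSULT", "Score": 0.03}], "Toxicity": 0.04},
        ]
    }
    print(flag_toxic_segments(sample_response))  # prints [0]
```

No labeled data or training step appears anywhere in this flow, which is what makes option B the fit for the question's constraint.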