
Exam Databricks-Generative-AI-Engineer-Associate All Questions

Databricks-Generative-AI-Engineer-Associate Exam Topic 1 Question 6 Discussion:
Question #: 6
Topic #: 1

A Generative AI Engineer has developed an LLM application to answer questions about internal company policies. The Generative AI Engineer must ensure that the application doesn't hallucinate or leak confidential data.

Which approach should NOT be used to mitigate hallucination or confidential data leakage?


A.

Add guardrails to filter outputs from the LLM before they are shown to the user


B.

Fine-tune the model on your data, hoping it will learn what is appropriate and what is not


C.

Limit the data available based on the user’s access level


D.

Use a strong system prompt to ensure the model aligns with your needs
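
To make option A concrete, here is a minimal sketch of an output guardrail that filters an LLM's response before it reaches the user. The pattern list and the refusal message are illustrative assumptions; a production system would typically combine pattern rules with a trained classifier or a dedicated guardrail framework.

```python
import re

# Hypothetical blocklist of confidential-data patterns (illustrative only).
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b(?:salary|ssn|social security)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like number format
]

def guard_output(llm_response: str) -> str:
    """Return the LLM response, or a refusal if it matches a blocked pattern."""
    for pattern in CONFIDENTIAL_PATTERNS:
        if pattern.search(llm_response):
            return "I'm sorry, I can't share that information."
    return llm_response
```

Note that this filtering happens post-generation, which is why option A mitigates leakage regardless of what the model produced, whereas option B (fine-tuning and hoping) gives no such guarantee.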

