
Exam Databricks-Generative-AI-Engineer-Associate All Questions

Databricks Generative AI Engineer Databricks-Generative-AI-Engineer-Associate Question # 8 Topic 1 Discussion


A Generative AI Engineer is creating an LLM-based application. The documents for its retriever have been chunked to a maximum of 512 tokens each. The Generative AI Engineer knows that cost and latency are more important than quality for this application. They have several context length levels to choose from.

Which configuration will fulfill their need?


A. Context length 514: smallest model is 0.44 GB and embedding dimension 768

B. Context length 2048: smallest model is 11 GB and embedding dimension 2560

C. Context length 32768: smallest model is 14 GB and embedding dimension 4096

D. Context length 512: smallest model is 0.13 GB and embedding dimension 384
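The tradeoff can be sketched numerically: since every chunk is at most 512 tokens, a 512-token context window is sufficient, and a smaller embedding dimension shrinks both the vector index and per-query compute. The sketch below uses illustrative figures only; the corpus size of 100,000 chunks and float32 storage are assumptions, not part of the question.

```python
def index_size_bytes(n_chunks: int, dim: int, bytes_per_float: int = 4) -> int:
    """Storage needed for a dense vector index of n_chunks embeddings."""
    return n_chunks * dim * bytes_per_float

# Compare index size for the embedding dimensions in the answer choices,
# assuming a hypothetical corpus of 100,000 chunks stored as float32.
for dim in (384, 768, 2560, 4096):
    mb = index_size_bytes(100_000, dim) / 2**20
    print(f"embedding dim {dim:>4}: {mb:8.1f} MB")
```

The 384-dimension option needs roughly a tenth of the index storage (and proportionally less similarity-search compute) of the 4096-dimension option, which is why the smallest model that still covers the 512-token chunks minimizes cost and latency.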


