Amazon Web Services AWS Certified AI Practitioner AIF-C01 Exam Topic 1 Question #6 Discussion
A company wants to assess the costs that are associated with using a large language model (LLM) to generate inferences. The company wants to use Amazon Bedrock to build generative AI applications.
Which factor will drive the inference costs?
A. Number of tokens consumed
B. Temperature value
C. Amount of data used to train the LLM
D. Total training time
In generative AI models, such as those built on Amazon Bedrock, inference costs are driven by the number of tokens processed. A token can be as short as one character or as long as one word, and the more tokens consumed during the inference process, the higher the cost.
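As a rough illustration, per-call cost works out to (input tokens × input price per token) + (output tokens × output price per token). Here is a minimal Python sketch of that arithmetic; the per-1,000-token prices are hypothetical placeholders, since actual Amazon Bedrock pricing varies by model and region.

```python
# Minimal sketch: estimating LLM inference cost from token counts.
# The prices below are hypothetical placeholders; real Amazon Bedrock
# pricing varies by model and region.
INPUT_PRICE_PER_1K = 0.003   # hypothetical USD per 1,000 input tokens
OUTPUT_PRICE_PER_1K = 0.015  # hypothetical USD per 1,000 output tokens

def estimate_inference_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single inference call."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 500-token prompt that produces a 200-token response.
print(f"${estimate_inference_cost(500, 200):.4f}")  # -> $0.0045
```

Note that both the prompt you send and the response the model generates count toward the bill, which is why verbose prompts and long completions raise cost even when the question itself is simple.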
Option A (Correct): "Number of tokens consumed": Inference cost is directly tied to the number of tokens the model processes, covering both the input prompt and the generated output.
Option B: "Temperature value" is incorrect because temperature affects the randomness of the model's output, not the cost.
Option C: "Amount of data used to train the LLM" is incorrect because training data size affects training costs, not inference costs.
Option D: "Total training time" is incorrect because it relates to the cost of training the model, not the cost of inference.
AWS AI Practitioner References:
Understanding Inference Costs on AWS: AWS documentation highlights that inference costs for generative models are largely based on the number of tokens processed.
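In practice, you can see the exact token counts a call consumed in the response metadata. The sketch below uses the Bedrock Runtime Converse API via boto3 and reads its usage block; the model ID and region are example values, so substitute whatever model your application actually uses.

```python
import boto3

# Minimal sketch: invoke a Bedrock model and inspect the token usage
# that drives inference cost. Model ID and region are example values.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Explain what a token is."}]}],
)

# The usage block reports the token counts billed for this call.
usage = response["usage"]
print(f"Input tokens:  {usage['inputTokens']}")
print(f"Output tokens: {usage['outputTokens']}")
print(f"Total tokens:  {usage['totalTokens']}")
```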