In the context of developing an AI application using NVIDIA’s NGC containers, how does the use of containerized environments enhance the reproducibility of LLM training and deployment workflows?
Which technique is used in prompt engineering to guide LLMs in generating more accurate and contextually appropriate responses?
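For context, one widely used prompt-engineering technique is few-shot prompting: prepending worked examples so the model infers the task format. The sketch below is illustrative; the task, examples, and template are assumptions, not part of the question.

```python
# Sketch of few-shot prompting: worked examples are prepended so the
# model infers the task format before answering the final query.
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

# Illustrative sentiment-classification examples.
examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A forgettable film.")
```

The resulting prompt ends with an open `Sentiment:` slot, steering the model toward the demonstrated label format.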
What distinguishes BLEU scores from ROUGE scores when evaluating natural language processing models?
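The core distinction can be shown with unigram overlap: BLEU-style scores are precision-oriented (matches divided by candidate length), while ROUGE-style scores are recall-oriented (matches divided by reference length). A minimal sketch, with illustrative sentences:

```python
from collections import Counter

def ngram_overlap(candidate, reference, n=1):
    """Clipped n-gram matches plus candidate/reference n-gram totals."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    matches = sum(min(count, ref[g]) for g, count in cand.items())
    return matches, sum(cand.values()), sum(ref.values())

cand = "the cat sat".split()
ref = "the cat sat on the mat".split()
m, cand_total, ref_total = ngram_overlap(cand, ref)
precision = m / cand_total  # BLEU-style: rewards not adding wrong words
recall = m / ref_total      # ROUGE-style: rewards covering the reference
```

Here the short candidate gets perfect precision (1.0) but only 0.5 recall, which is why BLEU suits translation (penalizing spurious output, with a separate brevity penalty) while ROUGE suits summarization (rewarding coverage).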
Why do we need positional encoding in transformer-based models?
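Since self-attention is permutation-invariant, transformers inject token order explicitly. A minimal sketch of the sinusoidal positional encodings from "Attention Is All You Need":

```python
import math

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings: sin on even dims, cos on odd dims."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

pe = positional_encoding(seq_len=4, d_model=8)
```

Each position receives a distinct vector (position 0 is [0, 1, 0, 1, ...]), so identical tokens at different positions get different inputs after the encoding is added to their embeddings.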
What is the fundamental role of LangChain in an LLM workflow?
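LangChain's core role is orchestration: composing prompt templates, model calls, and output parsers into reusable chains. The plain-Python analogy below illustrates that idea only; it is NOT the actual LangChain API, and the model and parser are stand-ins.

```python
# Plain-Python analogy of an LLM orchestration chain: compose a prompt
# template, a model call, and an output parser into one callable.
# Illustrative sketch only -- not the real LangChain API.
def make_chain(template, model, parser):
    def run(**kwargs):
        prompt = template.format(**kwargs)
        return parser(model(prompt))
    return run

# Stand-ins for a real model client and parser (assumptions).
fake_model = lambda prompt: f"ANSWER: {len(prompt.split())} words seen"
parse = lambda text: text.removeprefix("ANSWER: ").strip()

chain = make_chain("Summarize {topic} in one line.", fake_model, parse)
result = chain(topic="RAG")
```

Swapping any stage (a different model, a stricter parser) leaves the rest of the chain unchanged, which is the design benefit such frameworks provide.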
When fine-tuning an LLM for a specific application, why is it essential to perform exploratory data analysis (EDA) on the new training dataset?
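A concrete reason: EDA surfaces label imbalance, length outliers, and formatting drift before they skew fine-tuning. A toy pass over an illustrative (made-up) dataset:

```python
from collections import Counter

# Toy EDA over a fine-tuning dataset: label balance and text lengths.
# The dataset below is illustrative, not from any real corpus.
dataset = [
    {"text": "Translate: hello", "label": "translation"},
    {"text": "Summarize the article below", "label": "summarization"},
    {"text": "Translate: goodbye", "label": "translation"},
]
label_counts = Counter(ex["label"] for ex in dataset)
lengths = [len(ex["text"].split()) for ex in dataset]
avg_len = sum(lengths) / len(lengths)
```

Even this tiny check reveals a 2:1 skew toward translation examples, the kind of imbalance one would rebalance or reweight before training.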
What is Retrieval Augmented Generation (RAG)?
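RAG's retrieve-then-generate pattern can be sketched in a few lines: fetch the most relevant documents for a query, then splice them into the prompt as grounding context. The corpus, word-overlap scorer, and prompt template below are illustrative assumptions (real systems use embedding similarity over a vector store).

```python
# Minimal RAG sketch: retrieve relevant context, then build an
# augmented prompt for the generator. Scoring here is naive word
# overlap, standing in for embedding similarity.
def retrieve(query, corpus, k=1):
    """Rank documents by word overlap with the query; return top k."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query, corpus):
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "NGC containers bundle CUDA, drivers hooks, and frameworks together.",
    "BLEU measures n-gram precision against reference translations.",
]
prompt = build_rag_prompt("What do NGC containers bundle together?", corpus)
```

Only the relevant document lands in the prompt, which is how RAG grounds generation in external knowledge without retraining the model.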
In neural networks, what does the vanishing gradient problem refer to?
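The effect is easy to demonstrate numerically: backpropagation through a deep chain multiplies each layer's local derivative, and the sigmoid's derivative is at most 0.25, so the product shrinks geometrically with depth. A minimal sketch (the toy chain of identical sigmoid layers is an illustrative assumption):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def chain_gradient(depth, x=0.0, w=1.0):
    """Gradient through a chain of sigmoid layers via the chain rule."""
    grad, a = 1.0, x
    for _ in range(depth):
        s = sigmoid(w * a)
        grad *= s * (1 - s) * w  # local derivative of each layer
        a = s
    return grad

shallow = chain_gradient(depth=2)
deep = chain_gradient(depth=20)
```

The 20-layer gradient is vanishingly small compared with the 2-layer one, which is why early layers in deep sigmoid networks barely learn; ReLU activations, residual connections, and careful initialization are the standard mitigations.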
What are the main advantages of instruction-tuned large language models over traditional small language models (< 300M parameters)? (Pick the 2 correct responses)
Which metric is commonly used to evaluate machine-translation models?