Generative AI E1 | Competency Id 6618 | Quiz Answers

Sunday, June 2, 2024
~ 6 min read
TCS iEvolve Generative AI competency, E1 competency, quiz, final assessment answers



Q1: What happens if the token limit is exceeded?

ā—‹ The model refuses the request

ā—‹ The model processes the request partially

ā—‹ The model ignores the token limit

ā—‹ The model crashes


Answer: The model refuses the request



Q2: What is a one-shot prompt?

ā—‹ A prompt without any examples

ā—‹ A prompt with multiple examples

ā—‹ A prompt with only one example

ā—‹ A prompt with negative examples


Answer: A prompt with only one example
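
For context, a one-shot prompt just puts a single worked example in front of the real task. A minimal sketch in Python (the review/sentiment example is invented for illustration):

# One-shot prompt: exactly one example precedes the actual request.
one_shot_prompt = (
    "Classify the sentiment of the review as Positive or Negative.\n\n"
    "Review: The battery lasts all day.\n"
    "Sentiment: Positive\n\n"
    "Review: The screen cracked within a week.\n"
    "Sentiment:"
)
print(one_shot_prompt)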



Q3: How can you influence GitHub Copilot's suggestions?

ā—‹ By providing extensive comments

ā—‹ By using different function and class names

ā—‹ By adding types to your code

ā—‹ All of the above


Answer: All of the above
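
All three signals can be combined in ordinary source code. A hypothetical Python snippet of the kind Copilot picks up on (the function, comment, and types are invented for illustration):

# A descriptive comment, a meaningful function name, and type hints
# all give Copilot extra context for its suggestions.

# Calculate the total price of a cart after applying a percentage discount.
def calculate_discounted_total(prices: list[float], discount_percent: float) -> float:
    subtotal = sum(prices)
    return subtotal * (1 - discount_percent / 100)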



Q4: What is chunking?

ā—‹ Breaking up a large piece of text into smaller chunks

ā—‹ Combining multiple texts into a single chunk

ā—‹ Removing unnecessary tokens from a text

ā—‹ Rearranging the tokens in a text


Answer: Breaking up a large piece of text into smaller chunks
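
As a rough sketch, chunking can be as simple as splitting text on a fixed word count with a small overlap (real pipelines usually split on tokens or sentences; the sizes below are arbitrary):

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 20) -> list[str]:
    """Split text into word-based chunks that overlap slightly."""
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), step)]

chunks = chunk_text("some long document " * 500)
print(len(chunks), "chunks produced")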



Q5: How can token limits be avoided?

ā—‹ By restructuring the initial prompt

ā—‹ By using chunking techniques

ā—‹ By summarizing parts of the text

ā—‹ All of the above


Answer: All of the above
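
Putting those together, a common pattern is to chunk a long document, summarize each chunk, and send only the combined summaries in the final prompt. A hedged sketch, where summarize() is a placeholder rather than a real API call:

def summarize(text: str) -> str:
    # Placeholder: in practice this would call a language model.
    return text[:100]

def condense_document(document: str, chunk_size: int = 200) -> str:
    words = document.split()
    chunks = [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]
    return "\n".join(summarize(chunk) for chunk in chunks)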



Q6: What is the current credit amount given for API keys?

ā—‹ 3

ā—‹ 18

ā—‹ 20

ā—‹ 5


Answer: 20



Q7: What is the purpose of retry logic in prompt engineering?

ā—‹ To prevent hallucinations in the AI's response

ā—‹ To ensure consistent formatting of the response

ā—‹ To handle cases where the AI fails to provide the expected format

ā—‹ To improve the reliability of the AI's responses


Answer: To handle cases where the AI fails to provide the expected format
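
A minimal retry loop looks like this: ask for a strict format (JSON here) and re-prompt when the reply does not parse. ask_model() is a stand-in for whatever model call is actually used:

import json

def ask_model(prompt: str) -> str:
    # Placeholder for a real model call (e.g. the ChatGPT API).
    return '{"answer": "42"}'

def ask_with_retries(prompt: str, max_attempts: int = 3) -> dict:
    for attempt in range(max_attempts):
        reply = ask_model(prompt)
        try:
            return json.loads(reply)  # expected format: valid JSON
        except json.JSONDecodeError:
            prompt += "\nRespond with valid JSON only."  # nudge the model and retry
    raise ValueError("Model never returned the expected format")

print(ask_with_retries('Return {"answer": <value>} for: 6 * 7'))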



Q8: How can you access GitHub Copilot?

ā—‹ Through a web browser

ā—‹ Through Visual Studio Code

ā—‹ Through GitHub's website

ā—‹ Through a mobile app


Answer: Through Visual Studio Code



Q9: What is the role of human evaluation in prompt engineering?

ā—‹ To determine the reliability of the prompt

ā—‹ To fine-tune the model for specific use cases

ā—‹ To eliminate the need for prompt engineering

ā—‹ To prevent prompt injection attacks


Answer: To determine the reliability of the prompt



Q10: How does prompt engineering contribute to the reliability of AI models?

ā—‹ It decreases the reliability of AI output

ā—‹ It has no impact on the reliability of AI output

ā—‹ It increases the reliability of AI output

ā—‹ It depends on the complexity of the prompt


Answer: It increases the reliability of AI output



Q11: What is the importance of context length?

ā—‹ It affects the cost of the request

ā—‹ It determines the model's output

ā—‹ It affects the token limit

ā—‹ It determines the model's accuracy


Answer: It affects the token limit
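
Because cost and limits are both counted in tokens, it helps to measure a prompt before sending it. A sketch using the tiktoken library (assuming it is installed; the model name is only an example):

import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
prompt = "Summarize the following report in three bullet points ..."
print(len(encoding.encode(prompt)), "tokens in the prompt")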



Q12: What can be specified to change the style of an image generated by the AI?

ā—‹ The type of style needed

ā—‹ The format of the response

ā—‹ The number of retry attempts

ā—‹ The prompt engineering technique


Answer: The type of style needed
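
In practice the style is simply stated in the prompt text. A hedged sketch against the OpenAI image generation endpoint (the model name, size, and key handling are assumptions, not part of the quiz):

import os
import requests

payload = {
    "model": "dall-e-3",
    "prompt": "A lighthouse at dusk, in the style of a watercolor painting",  # style named in the prompt
    "n": 1,
    "size": "1024x1024",
}
response = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json=payload,
)
print(response.json())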



Q13: How can you toggle through multiple suggestions in GitHub Copilot?

ā—‹ By pressing the Tab key

ā—‹ By pressing the Enter key

ā—‹ By pressing the Esc key

ā—‹ By pressing the Shift key


Answer: By pressing the Tab key



Q14: What does Copilot do when you define a complete function?

ā—‹ It generates the documentation for the function

ā—‹ It copies some of the original code from the if statement

ā—‹ It understands and learns your coding patterns

ā—‹ It scans your entire project directory


Answer: It understands and learns your coding patterns



Q15: When should prompt engineering be used?

ā—‹ When creating a product name for personal use

ā—‹ When rigorously testing a prompt for development and subsequently production

ā—‹ When evaluating image models

ā—‹ When using toxic words in a prompt


Answer: When rigorously testing a prompt for development and subsequently production



Q16: What is the recommended approach when providing examples in prompt engineering?

ā—‹ Give two similar examples to constrain the creative space

ā—‹ Give multiple examples for better results

ā—‹ Give negative examples to limit AI output

ā—‹ Give no examples for better results


Answer: Give two similar examples to constrain the creative space
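
Concretely, a prompt with two similar examples keeps the output format and tone tightly constrained. The examples below are invented for illustration:

# Two similar examples narrow the creative space the model can wander into.
few_shot_prompt = (
    "Rewrite each sentence in a formal tone.\n\n"
    "Input: gonna be late, sorry\n"
    "Output: I apologize, but I will be arriving late.\n\n"
    "Input: can't make the meeting today\n"
    "Output: Unfortunately, I am unable to attend today's meeting.\n\n"
    "Input: need that report asap\n"
    "Output:"
)
print(few_shot_prompt)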



Q17: How can hallucinations be avoided when using language models?

ā—‹ By providing factual answers

ā—‹ By adding extra prompts

ā—‹ By training the models on complete information

ā—‹ By avoiding biases in the training data


Answer: By adding extra prompts



Q18: What is the correct order of steps to follow when making a request to the ChatGPT API?

ā—‹ Select POST method, paste URL, select body, select raw, select JSON, provide model and messages

ā—‹ Select GET method, paste URL, select body, select raw, select JSON, provide model and messages

ā—‹ Select POST method, paste URL, select body, select raw, select text, provide model and messages

ā—‹ Select GET method, paste URL, select body, select raw, select text, provide model and messages


Answer: Select POST method, paste URL, select body, select raw, select JSON, provide model and messages
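
The same steps map directly onto code: a POST request to the chat completions URL with a raw JSON body carrying the model and messages. A sketch using Python's requests library (the model name and key handling are assumptions):

import os
import requests

url = "https://api.openai.com/v1/chat/completions"          # paste URL
headers = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "Content-Type": "application/json",                     # raw JSON body
}
body = {
    "model": "gpt-3.5-turbo",                               # provide model
    "messages": [{"role": "user", "content": "Hello!"}],    # and messages
}
response = requests.post(url, headers=headers, json=body)   # POST method
print(response.json())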



Q19: What can you adjust in chat mode to increase variability?

ā—‹ Maximum length

ā—‹ Model parameters

ā—‹ Temperature

ā—‹ Instructions


Answer: Temperature
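
Temperature is just another field in that same request body; higher values make sampling more varied, lower values make it more deterministic. A short fragment building on the sketch above:

body = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Suggest a product name for a smart kettle."}],
    "temperature": 1.2,  # higher temperature -> more variable responses
}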



Q20: What are neural networks?

ā—‹ Algorithms that analyze big data

ā—‹ Algorithms that simulate the human brain

ā—‹ Algorithms that create new datasets

ā—‹ Algorithms that solve complex equations


Answer: Algorithms that simulate the human brain
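
To make the "simulated neurons" idea concrete, here is a single artificial neuron as a weighted sum passed through an activation function, using NumPy (the numbers are toy values, purely illustrative):

import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One artificial neuron: weighted sum of inputs followed by a sigmoid."""
    z = np.dot(inputs, weights) + bias
    return 1 / (1 + np.exp(-z))  # sigmoid activation

print(neuron(np.array([0.5, 0.8]), np.array([0.4, -0.6]), bias=0.1))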
