AWS Certified AI Practitioner Online Practice
Last updated: February 14, 2026
These online practice questions let you gauge your knowledge of the Amazon AIF-C01 exam before deciding whether to register for it.
If you hope to pass the exam and cut your preparation time by 35%, you can choose the AIF-C01 dumps (the latest real exam questions), which currently include 87 exam questions and answers.

Answer:
Explanation:
Step 1: Store meeting audio recordings in an Amazon S3 bucket.
Step 2: Convert meeting audio recordings to meeting text files by using Amazon Transcribe.
Step 3: Summarize meeting text files by using Amazon Bedrock.
The company wants to create an application to summarize meeting audio recordings, which requires a sequence of steps involving storage, speech-to-text conversion, and text summarization. Amazon S3 is the recommended storage service for audio files, Amazon Transcribe converts audio to text, and Amazon Bedrock provides generative AI capabilities for summarization. These three steps, in this order, create an efficient workflow for the application.
Exact Extract from AWS AI Documents:
From the Amazon Transcribe Developer Guide:
"Amazon Transcribe uses deep learning to convert audio files into text, supporting applications such as meeting transcription. Audio files can be stored in Amazon S3, and Transcribe can process them directly from an S3 bucket."
From the AWS Bedrock User Guide:
"Amazon Bedrock provides foundation models that can perform text summarization, enabling developers to build applications that generate concise summaries from text data, such as meeting transcripts."
(Source: Amazon Transcribe Developer Guide, Introduction to Amazon Transcribe; AWS Bedrock User Guide, Text Generation and Summarization)
Detailed Analysis:
Step 1: Store meeting audio recordings in an Amazon S3 bucket. Amazon S3 is the standard storage service for audio files in AWS workflows, especially for integration with services like Amazon Transcribe. Storing the recordings in S3 allows Transcribe to access and process them efficiently. This is the first logical step.
Step 2: Convert meeting audio recordings to meeting text files by using Amazon Transcribe. Amazon Transcribe is designed for automatic speech recognition (ASR), converting audio files (stored in S3) into text. This step is necessary to transform the meeting recordings into a format that can be summarized.
Step 3: Summarize meeting text files by using Amazon Bedrock. Amazon Bedrock provides foundation models capable of generative AI tasks like text summarization. Once the audio is converted to text, Bedrock can summarize the meeting transcripts, completing the application’s requirements.
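The three-step flow above can be sketched as plain Python stubs. The function bodies below merely stand in for the real AWS calls (an S3 upload, a Transcribe job, a Bedrock model invocation); the bucket name, key, and return values are illustrative placeholders, not actual service output.

```python
# Conceptual sketch of the store -> transcribe -> summarize pipeline.
# Each stub notes the real AWS call it represents.

def store_recording(bucket: str, key: str, audio: bytes) -> str:
    """Step 1: persist the recording (in practice, s3.put_object)."""
    return f"s3://{bucket}/{key}"

def transcribe_audio(s3_uri: str) -> str:
    """Step 2: speech-to-text (in practice, transcribe.start_transcription_job)."""
    return f"transcript of {s3_uri}"

def summarize_text(transcript: str) -> str:
    """Step 3: summarization (in practice, bedrock_runtime.invoke_model)."""
    return f"summary: {transcript[:40]}"

def summarize_meeting(bucket: str, key: str, audio: bytes) -> str:
    # The order matters: store first, then transcribe, then summarize.
    uri = store_recording(bucket, key, audio)
    text = transcribe_audio(uri)
    return summarize_text(text)
```

The point of the sketch is the ordering: Transcribe reads from S3, and Bedrock needs text, so neither later step can run before the earlier one.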
Unused Options Analysis:
Convert meeting audio recordings to meeting text files by using Amazon Polly. Amazon Polly is a text-to-speech service, not for converting audio to text. This option is incorrect and not used.
Store meeting audio recordings in an Amazon Elastic Block Store (Amazon EBS) volume. Amazon EBS is for block storage, typically used for compute instances, not for storing files for processing by services like Transcribe. S3 is the better choice, so this option is not used.
Summarize meeting text files by using Amazon Lex. Amazon Lex is for building conversational interfaces (chatbots), not for text summarization. Bedrock is the appropriate service for summarization, so this option is not used.
Hotspot Selection Analysis:
The task requires selecting and ordering three steps from the list, with each step used exactly once or not at all. The selected steps (storing in S3, converting with Transcribe, and summarizing with Bedrock) form a complete and logical workflow for the application.
Reference: Amazon Transcribe Developer Guide: Introduction to Amazon Transcribe (https://docs.aws.amazon.com/transcribe/latest/dg/what-is.html)
AWS Bedrock User Guide: Text Generation and Summarization (https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html)
AWS AI Practitioner Learning Path: Module on Speech-to-Text and Generative AI
Amazon S3 User Guide: Storing Data for Processing (https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html)
Answer:
Explanation:
The company needs to automatically group similar customers and products based on their characteristics, which is a clustering task. Unsupervised learning is the ML strategy for grouping data without labeled outcomes, making it ideal for this requirement.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"Unsupervised learning is used to identify patterns or groupings in data without labeled outcomes. Common applications include clustering, such as grouping similar customers or products based on their characteristics, using algorithms like K-means or hierarchical clustering."
(Source: AWS AI Practitioner Learning Path, Module on Machine Learning Strategies)
Detailed Analysis:
Option A: Unsupervised learning This is the correct answer. Unsupervised learning, specifically clustering, is designed to group similar entities (e.g., customers or products) based on their characteristics without requiring labeled data.
Option B: Supervised learning Supervised learning requires labeled data to train a model for prediction or classification, which is not applicable here since the task involves grouping without predefined labels.
Option C: Reinforcement learning Reinforcement learning involves training an agent to make decisions through rewards and penalties, not for grouping data. This option is irrelevant.
Option D: Semi-supervised learning Semi-supervised learning uses a mix of labeled and unlabeled data, but the task here does not involve any labeled data, making unsupervised learning more appropriate.
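A toy one-dimensional k-means pass (not SageMaker's implementation) shows what "grouping without labels" means in practice: the points below carry no labels at all, yet proximity alone separates them into two clusters. The spend figures and initial centroids are invented.

```python
# Minimal k-means sketch (k=2): assignment step, then update step,
# repeated until the centroids settle.

def kmeans_1d(points, centroids, iters=10):
    clusters = [[], []]
    for _ in range(iters):
        # Assignment: each point joins its nearest centroid.
        clusters = [[], []]
        for p in points:
            idx = 0 if abs(p - centroids[0]) <= abs(p - centroids[1]) else 1
            clusters[idx].append(p)
        # Update: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious groups: low spenders and high spenders, unlabeled.
spend = [1.0, 1.5, 2.0, 9.0, 9.5, 10.0]
centers, groups = kmeans_1d(spend, centroids=[0.0, 5.0])
```

With supervised learning, each point would need a predefined group label up front; here the groups emerge from the data itself, which is exactly what the question describes.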
Reference: AWS AI Practitioner Learning Path: Module on Machine Learning Strategies
Amazon SageMaker Developer Guide: Unsupervised Learning Algorithms (https://docs.aws.amazon.com/sagemaker/latest/dg/algos.html)
AWS Documentation: Introduction to Unsupervised Learning (https://aws.amazon.com/machine-learning/)
Answer:
Explanation:
The company wants an AI application to help employees check open customer claims, identify claim details, and access related documents. Agents for Amazon Bedrock can automate tasks by interacting
with external systems, while Amazon Bedrock knowledge bases provide a repository of information (e.g., claim details and documents) that the agent can query to respond to employee requests, making this the best solution.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Agents for Amazon Bedrock enable developers to build applications that can perform tasks by interacting with external systems and data sources. When paired with Amazon Bedrock knowledge bases, agents can access structured and unstructured data, such as documents or databases, to provide detailed responses for use cases like customer service or claims management."
(Source: AWS Bedrock User Guide, Agents and Knowledge Bases)
Detailed Analysis:
Option A: Use Agents for Amazon Bedrock with Amazon Fraud Detector to build the application. Amazon Fraud Detector is for detecting fraudulent activities, not for managing customer claims or accessing documents. This option is irrelevant.
Option B: Use Agents for Amazon Bedrock with Amazon Bedrock knowledge bases to build the application. This is the correct answer. Agents for Amazon Bedrock can interact with knowledge bases to retrieve claim details and documents, enabling employees to check open claims and access relevant information.
Option C: Use Amazon Personalize with Amazon Bedrock knowledge bases to build the application. Amazon Personalize is for building recommendation systems, not for retrieving claim details or documents. This option does not meet the requirements.
Option D: Use Amazon SageMaker AI to build the application by training a new ML model. Training a new ML model on SageMaker is unnecessary and complex for this use case, as the task can be efficiently handled by Agents and knowledge bases on Amazon Bedrock.
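The retrieval step behind an agent-plus-knowledge-base design can be illustrated with a toy lookup: the agent first fetches the claim records relevant to the employee's question, then hands them to the foundation model to draft a response. The claim IDs, fields, and matching logic below are all invented for illustration; a real Bedrock knowledge base uses vector search over ingested documents, not substring matching.

```python
# Toy claims store standing in for a knowledge base.
CLAIMS = [
    {"claim_id": "C-1001", "status": "open", "summary": "windshield damage"},
    {"claim_id": "C-1002", "status": "closed", "summary": "rear bumper repair"},
    {"claim_id": "C-1003", "status": "open", "summary": "hail damage to roof"},
]

def find_open_claims(query: str):
    """Return open claims whose summary mentions any query term."""
    terms = query.lower().split()
    return [c for c in CLAIMS
            if c["status"] == "open"
            and any(t in c["summary"] for t in terms)]
```

The agent's value is this orchestration: retrieve grounded records first, so the model answers from real claim data rather than from memory.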
Reference: AWS Bedrock User Guide: Agents and Knowledge Bases (https://docs.aws.amazon.com/bedrock/latest/userguide/agents.html)
AWS AI Practitioner Learning Path: Module on Generative AI and Knowledge Bases
Amazon Bedrock Developer Guide: Building AI Applications (https://aws.amazon.com/bedrock/)
Answer:
Explanation:
Overfitting occurs when an ML model learns the training data too well, including noise and patterns that do not generalize to new data. A key cause of overfitting is when the training dataset does not represent all possible input values, leading the model to over-specialize on the limited data it was trained on, failing to generalize to unseen data.
Exact Extract from AWS AI Documents:
From the Amazon SageMaker Developer Guide:
"Overfitting often occurs when the training dataset is not representative of the broader population of possible inputs, causing the model to memorize specific patterns, including noise, rather than learning generalizable features."
(Source: Amazon SageMaker Developer Guide, Model Evaluation and Overfitting)
Detailed Analysis:
Option A: The training dataset does not represent all possible input values. This is the correct answer. If the training dataset lacks diversity and does not cover the range of possible inputs, the model overfits by learning patterns specific to the training data, failing to generalize.
Option B: The model contains a regularization method. Regularization methods (e.g., L2 regularization) are used to prevent overfitting, not cause it. This option is incorrect.
Option C: The model training stops early because of an early stopping criterion. Early stopping is a technique to prevent overfitting by halting training when performance on a validation set degrades. It does not cause overfitting.
Option D: The training dataset contains too many features. While too many features can contribute to overfitting (e.g., by increasing model complexity), this is less directly tied to overfitting than a non-representative dataset. The dataset’s representativeness is the primary cause.
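A tiny 1-nearest-neighbor model makes the failure mode concrete: it memorizes its training points perfectly, so if the training set never covered part of the input range, every input from that region is misclassified. The data below is invented to exaggerate the effect.

```python
# 1-NN "model": predict the label of the closest training point.
def nn_predict(train, x):
    # train: list of (feature, label) pairs.
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Non-representative training set: only low feature values (class 0)
# were ever collected; class 1 inputs are entirely missing.
train = [(1, 0), (2, 0), (3, 0), (4, 0)]

train_acc = sum(nn_predict(train, x) == y for x, y in train) / len(train)

# Unseen inputs come from the region the training data never covered.
test = [(8, 1), (9, 1), (10, 1)]
test_acc = sum(nn_predict(train, x) == y for x, y in test) / len(test)
```

Perfect training accuracy with zero test accuracy is the overfitting pattern: the model learned the limited training data, not the underlying task.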
Reference: Amazon SageMaker Developer Guide: Model Evaluation and Overfitting (https://docs.aws.amazon.com/sagemaker/latest/dg/model-evaluation.html)
AWS AI Practitioner Learning Path: Module on Model Performance and Evaluation
AWS Documentation: Understanding Overfitting (https://aws.amazon.com/machine-learning/)
Answer:
Explanation:
The company needs to address the degradation in model inference quality after 4 months in production and prevent future occurrences by receiving notifications. Retraining the model can address the current degradation, likely caused by data drift (changes in the data distribution over time). Amazon SageMaker Model Monitor is designed to detect and monitor model drift, alerting the company when inference quality degrades, thus meeting both requirements.
Exact Extract from AWS AI Documents:
From the Amazon SageMaker Developer Guide:
"Amazon SageMaker Model Monitor enables you to monitor machine learning models in production for data drift, model performance degradation, and other quality issues. It can detect drift in feature distributions and inference quality, sending notifications when deviations are detected, allowing you to take corrective actions such as retraining the model."
(Source: Amazon SageMaker Developer Guide, Monitoring Models with SageMaker Model Monitor)
Detailed Analysis:
Option A: Retrain the model. Monitor model drift by using Amazon SageMaker Clarify. SageMaker Clarify is used for bias detection and explainability, not for monitoring model drift or inference quality in production. This option does not fully meet the requirements.
Option B: Retrain the model. Monitor model drift by using Amazon SageMaker Model Monitor. This is the correct answer. Retraining addresses the current degradation, and SageMaker Model Monitor can detect future drift in inference quality, sending notifications to prevent recurrence, as required.
Option C: Build a new model. Monitor model drift by using Amazon SageMaker Feature Store. SageMaker Feature Store is for managing and sharing features, not for monitoring model drift or inference quality. Building a new model may not be necessary if retraining can address the issue.
Option D: Build a new model. Monitor model drift by using Amazon SageMaker JumpStart. SageMaker JumpStart provides pre-trained models and solutions for quick deployment, but it does not offer specific tools for monitoring model drift or inference quality in production.
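What a drift monitor checks can be sketched with a deliberately simple heuristic: compare a production feature distribution against the training-time baseline and alert when the mean has shifted too far. SageMaker Model Monitor uses proper statistical baselines and scheduled monitoring jobs rather than this one-number check; the threshold and age values below are invented.

```python
# Toy drift check: relative shift of the production mean vs. the baseline.

def mean(xs):
    return sum(xs) / len(xs)

def drift_alert(baseline, production, threshold=0.2):
    """Alert when the relative mean shift exceeds the threshold."""
    shift = abs(mean(production) - mean(baseline)) / abs(mean(baseline))
    return shift > threshold

baseline_ages = [30, 35, 40, 45]   # feature distribution at training time
recent_ages = [55, 60, 65, 70]     # distribution 4 months into production
```

In the scenario above, an alert like this is the notification the company wants, and retraining on the shifted data is the corrective action.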
Reference: Amazon SageMaker Developer Guide: Monitoring Models with SageMaker Model Monitor (https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html)
AWS AI Practitioner Learning Path: Module on Model Monitoring and Maintenance
AWS Documentation: Addressing Model Drift in Production (https://aws.amazon.com/sagemaker/)
Answer:
Explanation:
The ML model shows 90% recall on training data but only 40% recall on unseen testing data, indicating a significant performance drop. This discrepancy suggests the model has learned the training data too well, including noise and specific patterns that do not generalize to new data, which
is a classic sign of overfitting.
Exact Extract from AWS AI Documents:
From the Amazon SageMaker Developer Guide:
"Overfitting occurs when a model performs well on training data but poorly on unseen test data, as it has learned patterns specific to the training set, including noise, that do not generalize. A large gap between training and testing performance metrics, such as recall, is a common indicator of overfitting."
(Source: Amazon SageMaker Developer Guide, Model Evaluation and Overfitting)
Detailed Analysis:
Option A: The model is overfitting on the training data. This is the correct answer. The significant drop in recall from 90% (training) to 40% (testing) indicates the model is overfitting, as it performs well on training data but fails to generalize to unseen data.
Option B: The model is underfitting on the training data. Underfitting occurs when the model performs poorly on both training and testing data due to insufficient learning. With 90% recall on training data, the model is not underfitting.
Option C: The model has insufficient training data. Insufficient training data could lead to poor performance, but the high recall on training data (90%) suggests the model has learned the training data well, pointing to overfitting rather than a lack of data.
Option D: The model has insufficient testing data. Insufficient testing data might lead to unreliable test metrics, but it does not explain the large performance gap between training and testing, which is more indicative of overfitting.
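Recall is true positives divided by true positives plus false negatives, and the training-versus-testing gap is the overfitting signal. The counts below are invented simply to reproduce the question's 90% and 40% figures.

```python
# Recall = TP / (TP + FN): of all actual positives, how many were found.

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

train_recall = recall(tp=90, fn=10)   # 0.9 on training data
test_recall = recall(tp=40, fn=60)    # 0.4 on unseen test data
gap = train_recall - test_recall      # a large gap suggests overfitting
```

Underfitting would show low recall on both sets; it is the asymmetry (high on training, low on testing) that points to overfitting.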
Reference: Amazon SageMaker Developer Guide: Model Evaluation and Overfitting (https://docs.aws.amazon.com/sagemaker/latest/dg/model-evaluation.html)
AWS AI Practitioner Learning Path: Module on Model Performance and Evaluation
AWS Documentation: Understanding Overfitting and Underfitting (https://aws.amazon.com/machine-learning/)
Answer:
Explanation:
The ecommerce company is deploying a chatbot that needs safeguards to filter harmful content from input prompts and responses. Amazon Bedrock Guardrails provide mechanisms to ensure responsible AI usage by filtering harmful content, such as hate speech, violence, or misinformation, making it the appropriate feature for this requirement.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Amazon Bedrock Guardrails enable developers to implement safeguards for generative AI applications, such as chatbots, by filtering harmful content in input prompts and model responses. Guardrails include content filters, word filters, and denied topics to ensure safe and responsible outputs."
(Source: AWS Bedrock User Guide, Guardrails for Responsible AI)
Detailed Analysis:
Option A: Amazon Bedrock Guardrails This is the correct answer. Amazon Bedrock Guardrails are specifically designed to filter harmful content from chatbot inputs and responses, ensuring safe interactions for users.
Option B: Amazon Bedrock Agents Amazon Bedrock Agents are used to automate tasks and integrate with external tools, not to filter harmful content. This option does not meet the requirement.
Option C: Amazon Bedrock inference APIs Amazon Bedrock inference APIs allow users to invoke foundation models for generating responses, but they do not provide built-in content filtering mechanisms.
Option D: Amazon Bedrock custom models Custom models on Amazon Bedrock allow users to fine-tune models, but they do not inherently include safeguards for filtering harmful content unless explicitly implemented.
Reference: AWS Bedrock User Guide: Guardrails for Responsible AI (https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html)
AWS AI Practitioner Learning Path: Module on Responsible AI and Model Safety
Amazon Bedrock Developer Guide: Building Safe AI Applications (https://aws.amazon.com/bedrock/)
Answer:
Explanation:
The manufacturing company needs to create product descriptions in multiple languages, which requires automated language translation. Amazon Translate is a fully managed service that uses machine learning to provide high-quality translation between languages, making it the ideal solution for this task.
Exact Extract from AWS AI Documents:
From the Amazon Translate Developer Guide:
"Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. It can be used to automatically translate text, such as product descriptions, into multiple languages to reach a global audience."
(Source: Amazon Translate Developer Guide, Introduction to Amazon Translate)
Detailed Analysis:
Option A: Amazon Translate This is the correct answer. Amazon Translate automates the translation of text into multiple languages, directly addressing the company’s need to create product descriptions in different languages.
Option B: Amazon Transcribe Amazon Transcribe converts speech to text, which is unrelated to translating text into multiple languages. This option is incorrect.
Option C: Amazon Kendra Amazon Kendra is an intelligent search service that uses machine learning to provide answers from documents, not for translating text. This option is irrelevant.
Option D: Amazon Polly Amazon Polly is a text-to-speech service that generates spoken audio from text, not for translating text into other languages. This option does not meet the requirements.
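Producing descriptions in several languages amounts to issuing one translation request per target language. The dicts below mirror the fields that boto3's `translate_text` call takes (`Text`, `SourceLanguageCode`, `TargetLanguageCode`); they are built as plain data here so the sketch runs without AWS credentials, and `"auto"` asks the service to detect the source language.

```python
# Build one Amazon Translate request per target language.

def build_translate_requests(text, targets):
    return [
        {
            "Text": text,
            "SourceLanguageCode": "auto",
            "TargetLanguageCode": code,
        }
        for code in targets
    ]

requests = build_translate_requests(
    "Stainless steel water bottle, 750 ml", ["es", "de", "ja"]
)
```

In a real workflow, each dict would be passed to `translate.translate_text(**request)` and the `TranslatedText` field of the response would become the localized product description.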
Reference: Amazon Translate Developer Guide: Introduction to Amazon Translate (https://docs.aws.amazon.com/translate/latest/dg/what-is.html)
AWS AI Practitioner Learning Path: Module on Natural Language Processing Services
AWS Documentation: Language Translation with Amazon Translate (https://aws.amazon.com/translate/)

Answer:
Explanation:
Block input prompts or model responses that contain harmful content such as hate, insults, violence, or misconduct: Content filters
Avoid subjects related to illegal investment advice or legal advice: Denied topics
Detect and block specific offensive terms: Word filters
Detect and filter out information in the model’s responses that is not grounded in the provided source information: Contextual grounding check
The company is using a generative AI model on Amazon Bedrock and needs to mitigate undesirable and potentially harmful content in the model’s responses. Amazon Bedrock provides several guardrail mechanisms, including content filters, denied topics, word filters, and contextual grounding checks, to ensure safe and accurate outputs. Each mitigation action in the hotspot aligns with a specific Bedrock filter policy, and each policy must be used exactly once.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Amazon Bedrock guardrails provide mechanisms to control model outputs, including:
Content filters: Block harmful content such as hate speech, violence, or misconduct.
Denied topics: Prevent the model from generating responses on specific subjects, such as illegal activities or advice.
Word filters: Detect and block specific offensive or inappropriate terms.
Contextual grounding check: Ensure responses are grounded in the provided source information, filtering out ungrounded or hallucinated content."
(Source: AWS Bedrock User Guide, Guardrails for Responsible AI)
Detailed Analysis:
Block input prompts or model responses that contain harmful content such as hate, insults, violence, or misconduct: Content filters Content filters in Amazon Bedrock are designed to detect and block harmful content, such as hate speech, insults, violence, or misconduct, ensuring the model’s outputs are safe and appropriate. This matches the first mitigation action.
Avoid subjects related to illegal investment advice or legal advice: Denied topics Denied topics allow users to specify subjects the model should avoid, such as illegal investment advice or legal advice, which could have regulatory implications. This policy aligns with the second mitigation action.
Detect and block specific offensive terms: Word filters Word filters enable the detection and blocking of specific offensive or inappropriate terms defined by the user, making them ideal for this mitigation action focused on specific terms.
Detect and filter out information in the model’s responses that is not grounded in the provided source information: Contextual grounding check The contextual grounding check ensures that the model’s responses are based on the provided source information, filtering out ungrounded or hallucinated content. This matches the fourth mitigation action.
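The four policies map onto four sections of a guardrail configuration. The dict below sketches that shape as plain data; the field names follow Bedrock's CreateGuardrail API as I understand it, but treat the exact structure, thresholds, and names as illustrative rather than a complete, verified request.

```python
# One guardrail, four policy sections, one per mitigation action above.
guardrail_config = {
    "name": "chatbot-guardrail",  # illustrative name
    # Content filters: block hate, insults, violence, misconduct.
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": t, "inputStrength": "HIGH", "outputStrength": "HIGH"}
            for t in ["HATE", "INSULTS", "VIOLENCE", "MISCONDUCT"]
        ]
    },
    # Denied topics: avoid investment and legal advice.
    "topicPolicyConfig": {
        "topicsConfig": [
            {"name": "InvestmentAdvice", "type": "DENY",
             "definition": "Advice about investments or securities."},
        ]
    },
    # Word filters: block specific offensive terms.
    "wordPolicyConfig": {
        "wordsConfig": [{"text": "offensive-term-1"}]
    },
    # Contextual grounding check: filter ungrounded answers.
    "contextualGroundingPolicyConfig": {
        "filtersConfig": [{"type": "GROUNDING", "threshold": 0.7}]
    },
}
```

Seeing all four sections side by side makes the exam mapping easy to remember: each mitigation action corresponds to exactly one policy block.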
Hotspot Selection Analysis:
The hotspot lists four mitigation actions, each with the same dropdown options: "Select...," "Content filters," "Contextual grounding check," "Denied topics," and "Word filters." The correct selections are:
First action: Content filters
Second action: Denied topics
Third action: Word filters
Fourth action: Contextual grounding check
Each filter policy is used exactly once, as required, and aligns with Amazon Bedrock’s guardrail capabilities.
Reference: AWS Bedrock User Guide: Guardrails for Responsible AI (https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html)
AWS AI Practitioner Learning Path: Module on Responsible AI and Model Safety
Amazon Bedrock Developer Guide: Configuring Guardrails (https://aws.amazon.com/bedrock/)
Answer:
Explanation:
Fine-tuning involves training a pre-trained AI model on a labeled dataset specific to a particular task or domain, adapting it to industry terminology and requirements. This process adjusts the model’s parameters to better fit the target use case, such as understanding specialized vocabulary or meeting domain-specific needs.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Fine-tuning allows you to adapt a pre-trained foundation model to your specific use case by training it on a labeled dataset. This technique is commonly used to customize models for industry-specific terminology, improving their accuracy for specialized tasks."
(Source: AWS Bedrock User Guide, Model Customization)
Detailed Analysis:
Option A: Data augmentation Data augmentation involves generating synthetic data to expand a training dataset, typically for tasks like image or text generation. It does not specifically adapt models to industry terminology or requirements.
Option B: Fine-tuning This is the correct answer. Fine-tuning trains a pre-trained model on a labeled dataset tailored to the target domain, enabling it to learn industry-specific terminology and requirements, as described in the question.
Option C: Model quantization Model quantization reduces the precision of a model’s weights to optimize it for deployment (e.g., on edge devices). It does not involve training on labeled datasets or adapting to industry terminology.
Option D: Continuous pre-training Continuous pre-training extends the initial training of a model on a large, general dataset. While it can improve general performance, it is not specifically tailored to industry requirements using labeled datasets, unlike fine-tuning.
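The "labeled dataset" that fine-tuning requires is typically a JSON Lines file of prompt/completion pairs for Bedrock model customization. The medical-abbreviation examples below are invented to show the shape of such a file, not taken from any real dataset.

```python
import json

# Labeled examples pairing an industry-specific prompt with its
# expected completion (invented for illustration).
examples = [
    {"prompt": "Expand the abbreviation: HTN",
     "completion": "hypertension"},
    {"prompt": "Expand the abbreviation: SOB",
     "completion": "shortness of breath"},
]

# JSON Lines: one JSON object per line.
jsonl = "\n".join(json.dumps(e) for e in examples)
```

This labeled structure is the distinguishing feature versus continuous pre-training, which consumes unlabeled domain text rather than prompt/completion pairs.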
Reference: AWS Bedrock User Guide: Model Customization (https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html)
AWS AI Practitioner Learning Path: Module on Model Training and Customization
Amazon SageMaker Developer Guide: Fine-Tuning Models (https://docs.aws.amazon.com/sagemaker/latest/dg/algos.html)
Answer:
Explanation:
The company wants to improve the accuracy of a generative AI application using a foundation model (FM) on Amazon Bedrock in the most cost-effective way. Prompt engineering involves optimizing the input prompts to guide the FM to produce more accurate responses without modifying the model itself. This approach is cost-effective because it does not require additional computational resources or training, unlike fine-tuning or retraining.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Prompt engineering is a cost-effective technique to improve the performance of foundation models. By crafting precise and context-rich prompts, users can guide the model to generate more accurate and relevant responses without the need for fine-tuning or retraining."
(Source: AWS Bedrock User Guide, Prompt Engineering for Foundation Models)
Detailed Analysis:
Option A: Fine-tune the FM. Fine-tuning involves retraining the FM on a custom dataset, which requires computational resources, time, and cost (e.g., for Amazon Bedrock fine-tuning jobs). It is not the most cost-effective solution.
Option B: Retrain the FM. Retraining an FM from scratch is highly resource-intensive and expensive, as it requires large datasets and significant compute power. This is not cost-effective.
Option C: Train a new FM. Training a new FM is the most expensive option, as it involves building a model from the ground up, requiring extensive data, compute resources, and expertise. This is not cost-effective.
Option D: Use prompt engineering. This is the correct answer. Prompt engineering adjusts the input prompts to improve the FM’s responses without incurring additional compute costs, making it the most cost-effective solution for improving accuracy on Amazon Bedrock.
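Prompt engineering in miniature: the same question, first bare and then enriched with a role, constraints, and context. Only the prompt string changes; the model and its weights stay untouched, which is why no training cost is incurred. The policy text and role below are invented.

```python
# Assemble a context-rich prompt from reusable parts.

def build_prompt(question: str, context: str = "", role: str = "") -> str:
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Use only the following context:\n{context}")
    parts.append(question)
    parts.append("Answer in two sentences and cite the context.")
    return "\n\n".join(parts)

bare = "What is our refund window?"
engineered = build_prompt(
    "What is our refund window?",
    context="Policy v3: refunds accepted within 30 days of purchase.",
    role="a customer-support assistant",
)
```

Either string could be sent to the same Bedrock model via the same inference API; the engineered one typically yields a more accurate, grounded answer.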
Reference: AWS Bedrock User Guide: Prompt Engineering for Foundation Models (https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-engineering.html)
AWS AI Practitioner Learning Path: Module on Generative AI Optimization
Amazon Bedrock Developer Guide: Cost Optimization for Generative AI (https://aws.amazon.com/bedrock/)
Answer:
Explanation:
The mobile app for users with visual impairment needs to hear user speech and provide voice responses, requiring speech-to-text (speech recognition) and text-to-speech capabilities. Deep learning neural networks are widely used for speech recognition tasks, as they can effectively process and transcribe spoken language. AWS services like Amazon Transcribe, which uses deep learning for speech recognition, can fulfill this requirement by converting user speech to text, and Amazon Polly can generate voice responses.
Exact Extract from AWS AI Documents:
From the AWS Documentation on Amazon Transcribe:
"Amazon Transcribe uses deep learning neural networks to perform automatic speech recognition (ASR), converting spoken language into text with high accuracy. This is ideal for applications requiring voice input, such as accessibility features for visually impaired users."
(Source: Amazon Transcribe Developer Guide, Introduction to Amazon Transcribe)
Detailed Analysis:
Option A: Use a deep learning neural network to perform speech recognition. This is the correct answer. Deep learning neural networks are the foundation of modern speech recognition systems, as used in AWS services like Amazon Transcribe. They enable the app to hear and transcribe user speech, and a service like Amazon Polly can handle voice responses, meeting the requirements.
Option B: Build ML models to search for patterns in numeric data. This option is irrelevant, as the task involves processing speech (audio data) and generating voice responses, not analyzing numeric data patterns.
Option C: Use generative AI summarization to generate human-like text. Generative AI summarization focuses on summarizing text, not processing speech or generating voice responses. This option does not address the core requirement of speech recognition.
Option D: Build custom models for image classification and recognition. Image classification and recognition are unrelated to processing speech or generating voice responses, making this option incorrect for an app focused on audio interaction.
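The app's round trip (speech in, spoken reply out) can be sketched with two stubs standing in for the real services: the first represents Amazon Transcribe (ASR) and the second Amazon Polly (text-to-speech). The transcript and reply strings are placeholders; real calls would pass audio bytes to boto3 clients.

```python
# Stub round trip for the accessibility app.

def speech_to_text(audio: bytes) -> str:
    # In practice: an Amazon Transcribe transcription job or stream.
    return "what is the weather today"

def text_to_speech(text: str) -> bytes:
    # In practice: polly.synthesize_speech(Text=text, ...) returning audio.
    return text.encode("utf-8")

def handle_voice_request(audio: bytes) -> bytes:
    question = speech_to_text(audio)     # hear the user
    reply = f"You asked: {question}"     # app logic goes here
    return text_to_speech(reply)         # speak the response
```

The deep-learning work (option A) lives entirely inside the two stubbed calls; the app itself only orchestrates audio in and audio out.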
Reference: Amazon Transcribe Developer Guide: Introduction to Amazon Transcribe (https://docs.aws.amazon.com/transcribe/latest/dg/what-is.html)
Amazon Polly Developer Guide: Text-to-Speech Overview (https://docs.aws.amazon.com/polly/latest/dg/what-is.html)
AWS AI Practitioner Learning Path: Module on Speech Recognition and Synthesis

Answer:
Explanation:
Building a well-architected ML workload follows a structured lifecycle as outlined in AWS best practices. The process begins with defining the business goal and framing the ML problem to ensure the project aligns with organizational objectives. Next, the model is developed, which includes data preparation, training, and evaluation. Once the model is ready, it is deployed to make predictions in a production environment. Finally, the model is monitored to ensure it performs as expected and to address any issues like drift or degradation over time. This order ensures a systematic approach to ML development.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"The machine learning lifecycle typically follows these stages: 1) Define the business goal and frame the ML problem, 2) Develop the model (including data preparation, training, and evaluation), 3) Deploy the model to production, and 4) Monitor the model for performance and drift to ensure it continues to meet business needs."
(Source: AWS AI Practitioner Learning Path, Module on Machine Learning Lifecycle)
Detailed Analysis:
Step 1: Define business goal and frame ML problem This is the first step in any ML project. It involves understanding the business objective (e.g., reducing churn) and framing the ML problem (e.g., classification or regression). Without this step, the project lacks direction. The hotspot lists this option as "Define business goal and frame ML problem," which matches this stage.
Step 2: Develop model After defining the problem, the next step is to develop the model. This includes collecting and preparing data, selecting an algorithm, training the model, and evaluating its performance. The hotspot lists "Develop model" as an option, aligning with this stage.
Step 3: Deploy model Once the model is developed and meets performance requirements, it is deployed to a production environment to make predictions or automate decisions. The hotspot includes "Deploy model" as an option, which fits this stage.
Step 4: Monitor model After deployment, the model must be monitored to ensure it performs well over time, addressing issues like data drift or performance degradation. The hotspot lists "Monitor model" as an option, completing the lifecycle.
Hotspot Selection Analysis:
The hotspot provides four steps, each with the same dropdown options: "Select...," "Deploy model," "Develop model," "Monitor model," and "Define business goal and frame ML problem." The correct selections are:
Step 1: Define business goal and frame ML problem
Step 2: Develop model
Step 3: Deploy model
Step 4: Monitor model
Each option is used exactly once, as required, and follows the logical order of the ML lifecycle.
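The four stages above can be written down as ordered data, with a small helper that answers "what comes next" in the lifecycle. The stage names mirror the hotspot options; the one-line activity notes are paraphrases of the analysis above.

```python
# The ML lifecycle as an ordered list of (stage, activities) pairs.
LIFECYCLE = [
    ("Define business goal and frame ML problem",
     "align with objectives; choose problem type"),
    ("Develop model", "prepare data, train, evaluate"),
    ("Deploy model", "serve predictions in production"),
    ("Monitor model", "watch for drift and degradation"),
]

def stage_after(name):
    """Return the stage that follows `name`, or None for the last stage."""
    names = [n for n, _ in LIFECYCLE]
    i = names.index(name)
    return names[i + 1] if i + 1 < len(names) else None
```

Monitoring is last in the list but feeds back into the cycle: drift detected there often triggers a return to the develop stage for retraining.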
Reference: AWS AI Practitioner Learning Path: Module on Machine Learning Lifecycle
Amazon SageMaker Developer Guide: Machine Learning Workflow (https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-mlconcepts.html)
AWS Well-Architected Framework: Machine Learning Lens (https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens/)
Answer:
Explanation:
The evaluation stage of the generative AI model lifecycle involves testing the model to assess its performance, including accuracy, coherence, and other metrics. This stage ensures the model meets the desired quality standards before deployment.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"The evaluation phase in the machine learning lifecycle involves testing the model against validation or test datasets to measure its performance metrics, such as accuracy, precision, recall, or task-specific metrics for generative AI models."
(Source: AWS AI Practitioner Learning Path, Module on Machine Learning Lifecycle)
Detailed Analysis:
Option A: Deployment Deployment involves making the model available for use in production. While monitoring occurs post-deployment, accuracy testing is performed earlier in the evaluation stage.
Option B: Data selection Data selection involves choosing and preparing data for training, not testing the model’s accuracy.
Option C: Fine-tuning Fine-tuning adjusts a pre-trained model to improve performance for a specific task, but it is not the stage where accuracy is formally tested.
Option D: Evaluation This is the correct answer. The evaluation stage is where tests are conducted to examine the model’s accuracy and other performance metrics, ensuring it meets requirements.
Reference: AWS AI Practitioner Learning Path: Module on Machine Learning Lifecycle
Amazon SageMaker Developer Guide: Model Evaluation (https://docs.aws.amazon.com/sagemaker/latest/dg/model-evaluation.html)
AWS Documentation: Generative AI Lifecycle (https://aws.amazon.com/machine-learning/)