MLA-C01: Resources
Module 3: Data Storage
- Amazon EFS now supports up to 2.5 million IOPS per file system
- Multi-Attach support now available on Amazon EBS Provisioned IOPS volume type, io2
- AWS ML Engineer Associate 1.1 Collect, Ingest, and Store Data
Module 4: Data Preparation
- Registry of Open Data on AWS
- AWS Data Exchange
- CreateTrainingJob API Reference
- InvokeEndpoint API - SageMaker Runtime API Reference
- Hugging Face Safetensors
- Python PICKLE Format
- AWS ML Engineer Associate 1.1 Collect, Ingest, and Store Data
- AWS ML Engineer Associate 1.2 Transform Data
- AWS ML Engineer Associate 1.3 Validate Data and Prepare for Modeling
Module 5: Model Development
- Amazon SageMaker Autopilot
- Machine Learning on AWS
- Amazon SageMaker Pricing
- Model Interpretability with Amazon SageMaker Clarify
- AWS ML Engineer Associate 2.1 Choose a Modeling Approach
- AWS ML Engineer Associate 2.2 Train Models
- AWS ML Engineer Associate 2.3 Refine Models
- AWS ML Engineer Associate 2.4 Analyze Model Performance
- AWS ML Engineer Associate 4.2 Monitor and Optimize Infrastructure and Costs
Module 6: Model Training
- Train Machine Learning Models
- EC2 Instance Types
- AWS Neuron SDK
- Scaling Rufus with AWS Inferentia and Trainium for Prime Day
- What Is Overfitting?
- Efficiently Train, Tune, and Deploy Custom Ensembles with Amazon SageMaker
- Amazon SageMaker Pipelines
- AWS ML Engineer Associate 2.2 Train Models
Module 7: Model Tuning & Evaluation
- AWS Well-Architected Framework: Machine Learning Lens - MLPER-09 Performance Trade-off Analysis
- Multiclass Model Insights
- MLU-EXPLAIN: ROC and AUC
- XGBoost Algorithm with Amazon SageMaker AI
- Distributed Training in Amazon SageMaker AI
- Automatic Model Tuning with SageMaker AI
- Amazon SageMaker Automatic Model Tuning with Hyperband
- AWS ML Engineer Associate 2.3 Refine Models
- AWS ML Engineer Associate 2.4 Analyze Model Performance
Module 8: Model Deployment
- Model Hosting Patterns Part 6: Best Practices in Testing and Updating Models
- Managed Spot Training in Amazon SageMaker AI
- Lyft Case Study - Spot Instances
- AWS ML Engineer Associate 3.1 Select a Deployment Infrastructure
- AWS ML Engineer Associate 3.2 Create and Script Infrastructure
Module 9: Security
- Securing Amazon SageMaker Studio Connectivity Using a Private VPC
- CreateTrainingJob API Reference
- InvokeEndpoint API - SageMaker Runtime
- Hugging Face Safetensors
- Python PICKLE Format
- AWS ML Engineer Associate 4.3 Secure AWS ML Resources
Module 10: Monitoring & Operations
- Going Faster with Continuous Delivery
- The Amazon Builders' Library
- SageMaker AI Operators for Kubernetes
- CreateEndpointConfig API Reference
- AWS ML Engineer Associate 4.1 Monitor Model Performance and Data Quality
- AWS ML Engineer Associate 4.2 Monitor and Optimize Infrastructure and Costs
Case Studies
- Aviva: Scalable, Secure MLOps Platform using Amazon SageMaker
- Perplexity: 40% Faster Foundation Model Training with SageMaker HyperPod
- The Weather Company: Generative AI Case Study
- Lumi: Streamlining Loan Approvals with Amazon SageMaker AI
- BMW Group: Accelerating AI/ML Development with Amazon SageMaker Studio
- Amazon SageMaker AI Customers
Key Facts
Amazon Q Architecture
Amazon Q (including Amazon Q Developer and Amazon Q Business) is built on Amazon Bedrock, AWS's fully managed service for accessing foundation models.
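As a minimal sketch of what "built on Bedrock" means in practice, the snippet below shows how an application could send a single-turn prompt to a foundation model through the Bedrock Runtime Converse API using boto3. The model ID is a hypothetical example, and the live call assumes AWS credentials plus Bedrock model access are already configured; the request-building helper is separated out so it can be inspected without touching AWS.

```python
# Sketch: calling a foundation model through Amazon Bedrock, the layer
# Amazon Q is built on. The model ID below is an illustrative example.

def build_converse_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the keyword arguments for a bedrock-runtime Converse call."""
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def ask_bedrock(prompt: str) -> str:
    """Send a single-turn prompt via the Bedrock Runtime Converse API.

    Requires boto3, AWS credentials, and access to the chosen model.
    """
    import boto3
    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(prompt))
    # The assistant reply arrives as a list of content blocks; take the text.
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    # Inspect the request shape locally without making a network call.
    request = build_converse_request("In one sentence, what is Amazon Bedrock?")
    print(request["modelId"])
```

Amazon Q layers retrieval, agents, and AWS-specific tooling on top of this same model-invocation path, which is why Bedrock appears as its foundation in architecture discussions.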