Data Science at Scale using Spark and Hadoop
Who should attend
- Developers
- Data analysts
- Statisticians
Prerequisites
- Proficiency in a scripting language
  - Python is strongly preferred
  - Perl or Ruby is sufficient
- Basic knowledge of Apache Hadoop
- Experience working in Linux environments
Course Objectives
In this course, you will learn:
- How to identify potential business use cases where data science can provide impactful results
- How to obtain, clean and combine disparate data sources to create a coherent picture for analysis
- What statistical methods to leverage for data exploration that will provide critical insight into your data
- Where and when to leverage Hadoop streaming and Apache Spark for data science pipelines
- What machine learning technique to use for a particular data science project
- How to implement and manage recommenders using Spark’s MLlib, and how to set up and evaluate data experiments
- What pitfalls to avoid when deploying new analytics projects to production at scale
Product Description
In the Data Science at Scale using Spark and Hadoop class, you will learn how data scientists solve problems with data by examining the tools and techniques they use. Through hands-on simulations, you will apply data science methods to real-world challenges across a range of industries and prepare for data scientist roles in the field.
Outline
Data Science Overview
- What Is Data Science?
- The Growing Need for Data Science
- The Role of a Data Scientist
Use Cases
- Finance
- Retail
- Advertising
- Defense and Intelligence
- Telecommunications and Utilities
- Healthcare and Pharmaceuticals
Project Lifecycle
- Steps in the Project Lifecycle
- Lab Scenario Explanation
Data Acquisition
- Where to Source Data
- Acquisition Techniques
Evaluating Input Data
- Data Formats
- Data Quantity
- Data Quality
Data Transformation
- File Format Conversion
- Joining Data Sets
- Anonymization
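To give a flavor of the transformation steps listed above, here is a minimal PySpark sketch (not taken from the course labs) that joins two hypothetical input files on a shared key, anonymizes a sensitive column with a one-way hash, and converts the result to a different file format. The file names and column names are assumptions made purely for illustration.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import sha2, col

    spark = SparkSession.builder.appName("transform-example").getOrCreate()

    # Hypothetical inputs: customer records as CSV, orders as JSON.
    customers = spark.read.csv("customers.csv", header=True, inferSchema=True)
    orders = spark.read.json("orders.json")

    # Join the two data sets on a shared key.
    combined = customers.join(orders, on="customer_id", how="inner")

    # Anonymize the email column by replacing it with a SHA-256 hash.
    combined = combined.withColumn("email", sha2(col("email"), 256))

    # File format conversion: write the joined result as Parquet.
    combined.write.mode("overwrite").parquet("combined.parquet")

    spark.stop()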
Data Analysis and Statistical Methods
- Relationship Between Statistics and Probability
- Descriptive Statistics
- Inferential Statistics
- Vectors and Matrices
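The descriptive statistics covered in this module can be previewed with a short PySpark sketch; the ages-and-incomes data below is invented solely for the example.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import corr

    spark = SparkSession.builder.appName("stats-example").getOrCreate()

    # Toy numeric data, invented for the example.
    df = spark.createDataFrame(
        [(23, 50000.0), (35, 62000.0), (41, 71000.0), (29, 54000.0)],
        ["age", "income"])

    # Descriptive statistics: count, mean, standard deviation, min, max.
    df.describe("age", "income").show()

    # Sample correlation between the two columns.
    df.select(corr("age", "income").alias("corr_age_income")).show()

    spark.stop()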
Fundamentals of Machine Learning
- Overview
- The Three C’s of Machine Learning
- Importance of Data and Algorithms
- Spotlight: Naive Bayes Classifiers
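As a taste of the Naive Bayes spotlight, the following sketch trains spark.ml's multinomial NaiveBayes on a handful of invented, non-negative feature vectors. It is an illustration of the technique, not course material.

    from pyspark.sql import SparkSession
    from pyspark.ml.classification import NaiveBayes
    from pyspark.ml.linalg import Vectors

    spark = SparkSession.builder.appName("naive-bayes-example").getOrCreate()

    # Tiny labeled training set: (label, non-negative feature counts).
    train = spark.createDataFrame([
        (0.0, Vectors.dense([0.0, 2.0, 1.0])),
        (0.0, Vectors.dense([0.0, 3.0, 2.0])),
        (1.0, Vectors.dense([4.0, 0.0, 1.0])),
        (1.0, Vectors.dense([5.0, 0.0, 0.0])),
    ], ["label", "features"])

    # Multinomial Naive Bayes with Laplace smoothing.
    nb = NaiveBayes(smoothing=1.0, modelType="multinomial")
    model = nb.fit(train)

    # Predicted class for each training row.
    model.transform(train).select("label", "prediction").show()

    spark.stop()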
Recommender Overview
- What is a Recommender System?
- Types of Collaborative Filtering
- Limitations of Recommender Systems
- Fundamental Concepts
Introduction to Apache Spark and MLlib
- What is Apache Spark?
- Comparison to MapReduce
- Fundamentals of Apache Spark
- Spark’s MLlib Package
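To hint at how Spark compares to MapReduce, here is the classic word count expressed as a short chain of Spark transformations; the hard-coded input line is an assumption made only to keep the example self-contained.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("spark-intro-example").getOrCreate()
    sc = spark.sparkContext

    # Word count as a pipeline of transformations plus one action,
    # rather than the separate map and reduce phases of a MapReduce job.
    counts = (sc.parallelize(["to be or not to be"])
                .flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))

    print(counts.collect())

    spark.stop()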
Implementing Recommenders with MLlib
- Overview of ALS Method for Latent Factor Recommenders
- Hyperparameters for ALS Recommenders
- Building a Recommender in MLlib
- Tuning Hyperparameters
- Weighting
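The ALS material in this module can be sketched in a few lines using the DataFrame-based MLlib API. The toy ratings and the hyperparameter values below (rank, regParam, maxIter) are assumptions chosen only to make the example run, not recommendations from the course.

    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS

    spark = SparkSession.builder.appName("als-example").getOrCreate()

    # Toy explicit ratings: (userId, itemId, rating).
    ratings = spark.createDataFrame([
        (0, 0, 4.0), (0, 1, 2.0),
        (1, 0, 5.0), (1, 2, 3.0),
        (2, 1, 1.0), (2, 2, 5.0),
    ], ["userId", "itemId", "rating"])

    # rank (number of latent factors) and regParam are the hyperparameters
    # most commonly tuned for an ALS recommender.
    als = ALS(rank=5, maxIter=10, regParam=0.1,
              userCol="userId", itemCol="itemId", ratingCol="rating",
              coldStartStrategy="drop")
    model = als.fit(ratings)

    # Top two item recommendations per user from the learned latent factors.
    model.recommendForAllUsers(2).show(truncate=False)

    spark.stop()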
Experimentation and Evaluation
- Designing Effective Experiments
- Conducting an Effective Experiment
- User Interfaces for Recommenders
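One common way to evaluate a recommender offline, in the spirit of this module, is to hold out a portion of the ratings and measure RMSE on the held-out set. The sketch below uses randomly generated ratings, so the RMSE value itself is meaningless; only the train/test mechanics are the point.

    import random

    from pyspark.sql import SparkSession
    from pyspark.ml.evaluation import RegressionEvaluator
    from pyspark.ml.recommendation import ALS

    spark = SparkSession.builder.appName("recommender-eval-example").getOrCreate()

    # Randomly generated (userId, itemId, rating) triples for illustration.
    rows = [(u, i, float(random.randint(1, 5)))
            for u in range(30)
            for i in random.sample(range(20), 8)]
    ratings = spark.createDataFrame(rows, ["userId", "itemId", "rating"])

    # Hold out 20 percent of the ratings to estimate generalization.
    train, test = ratings.randomSplit([0.8, 0.2], seed=42)

    als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
              coldStartStrategy="drop")
    model = als.fit(train)

    # Root-mean-square error of predicted ratings on the held-out set.
    predictions = model.transform(test)
    evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
                                    predictionCol="prediction")
    print("Test RMSE:", evaluator.evaluate(predictions))

    spark.stop()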
Production Deployment and Beyond
- Deploying to Production
- Tips and Techniques for Working at Scale
- Summarizing and Visualizing Results
- Considerations for Improvement
- Next Steps for Recommenders