# **Training Session: Understanding Basic Terminology from ML to DNN**
This session introduces the **foundational terms** you need before working with Deep Neural Networks (DNNs).
## **1. Machine Learning (ML) Basics**
* **Machine Learning**: A method where computers learn patterns from data instead of being explicitly programmed.
* Types of ML:
1. **Supervised Learning** – learns from labeled data (e.g., predicting house prices; see the sketch after this list).
2. **Unsupervised Learning** – finds hidden patterns in unlabeled data (e.g., clustering customers).
3. **Reinforcement Learning** – learns by reward/punishment (e.g., game-playing AI).
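To make "learning from labeled data" concrete, here is a minimal supervised-learning sketch using scikit-learn. The house features and prices are made-up illustration values, not real data:

```python
# Minimal supervised-learning sketch: the model learns a pattern from
# labeled examples instead of being given an explicit pricing rule.
from sklearn.linear_model import LinearRegression

X = [[50, 2], [80, 3], [120, 4], [200, 5]]   # features: [sq. meters, rooms]
y = [150, 230, 340, 540]                     # labels: price in $1000s (made up)

model = LinearRegression()
model.fit(X, y)                              # learn from the labeled data
print(model.predict([[100, 3]]))             # predict the price of an unseen house
```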
## **2. Key Terms in Neural Networks**
1. **Parameters**
* These are the **weights and biases** inside the neural network.
* They are updated during training to minimize error.
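A quick way to see what parameters are is to build one dense layer by hand. The NumPy sketch below uses made-up sizes; the point is that a layer is literally a weight matrix plus a bias vector, and training repeatedly adjusts them:

```python
import numpy as np

# One dense layer: its parameters are a weight matrix W and a bias vector b.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))     # weights: 3 inputs feeding 2 neurons
b = np.zeros(2)                 # biases: one per neuron

x = np.array([0.5, -1.0, 2.0])  # a single input example
z = x @ W + b                   # the layer's output before activation
print(z)                        # training updates W and b to reduce error
```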
2. **Encoder**
* Part of a model that **compresses input data into a smaller representation** (latent space).
* Example: In translation, the encoder turns English into a numeric representation.
3. **Decoder**
* Reconstructs the output from the encoded representation.
* Example: In translation, the decoder takes the representation and outputs French.
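Here is a shape-only sketch of the encoder/decoder idea: the encoder squeezes 8 input features into a 2-number latent representation, and the decoder expands it back. The matrices are random (untrained), so the point is the shapes, not the values:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=8)             # input with 8 features

# Encoder: compress the input into a smaller (latent) representation.
W_enc = rng.normal(size=(8, 2))
latent = np.tanh(x @ W_enc)        # shape (2,): the compressed representation

# Decoder: reconstruct an output from the latent representation.
W_dec = rng.normal(size=(2, 8))
reconstruction = latent @ W_dec    # shape (8,)

print(latent.shape, reconstruction.shape)   # (2,) (8,)
```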
4. **Backpropagation**
* The learning process used to update parameters.
* It calculates the error and each weight's contribution to it (the gradient), then **moves weights in the direction that reduces the error**.
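The smallest possible version of this update, for a one-weight model `y_pred = w * x` with squared error (all values chosen purely for illustration):

```python
# One backpropagation-style update on a single weight.
x, y_true = 2.0, 10.0              # one training example (made up)
w = 1.0                            # initial weight; the ideal value here is 5.0
lr = 0.1                           # learning rate

y_pred = w * x                     # forward pass
loss = (y_pred - y_true) ** 2      # squared error: 64.0
grad = 2 * (y_pred - y_true) * x   # dLoss/dw via the chain rule: -32.0
w -= lr * grad                     # move the weight to reduce the error

print(w)                           # 4.2, moved from 1.0 toward 5.0
```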
5. **Activation Function**
* Decides whether (and how strongly) a neuron should fire, adding the non-linearity that lets the network learn complex patterns.
* Examples: ReLU, Sigmoid, Tanh.
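Each of these examples is essentially a one-line function. A small NumPy sketch:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)        # passes positive values, zeroes out negatives

def sigmoid(z):
    return 1 / (1 + np.exp(-z))    # squashes any value into (0, 1)

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))       # [0. 0. 3.]
print(sigmoid(z))    # [~0.119  0.5  ~0.953]
print(np.tanh(z))    # squashes into (-1, 1)
```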
6. **Layers**
* **Input Layer**: Takes the data (e.g., pixels of an image).
* **Hidden Layers**: Do the processing with neurons.
* **Output Layer**: Gives the final prediction (e.g., cat or dog).
7. **Loss Function**
* Measures how wrong the model is.
* Examples: Mean Squared Error, Cross-Entropy.
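Both example losses are short formulas. Here they are in NumPy, applied to made-up predictions:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average of the squared differences.
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, y_pred):
    # Cross-entropy for one example: y_true is one-hot, y_pred is probabilities.
    return -np.sum(y_true * np.log(y_pred))

print(mse(np.array([3.0, 5.0]), np.array([2.5, 5.5])))                 # 0.25
print(cross_entropy(np.array([0, 1, 0]), np.array([0.2, 0.7, 0.1])))   # ~0.357
```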
## **3. How DNN Works (Simple Flow)**
1. Data goes into the **input layer**.
2. Each layer processes it using **parameters (weights + biases)**.
3. The **activation function** decides output of neurons.
4. Compare prediction with the correct answer → **loss function**.
5. **Backpropagation** updates weights to reduce error.
6. Repeat until the model performs well.
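The whole six-step flow fits in one small NumPy script. The sketch below trains a tiny network on the classic XOR problem; the layer sizes, learning rate, and step count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # step 1: input data
y = np.array([[0], [1], [1], [0]], dtype=float)              # correct answers (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden-layer parameters
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output-layer parameters
lr = 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Steps 2-3: layers process the data; activations decide neuron outputs.
    h = sigmoid(X @ W1 + b1)
    y_pred = sigmoid(h @ W2 + b2)

    # Step 4: the loss function compares predictions with the correct answers.
    loss = np.mean((y_pred - y) ** 2)

    # Step 5: backpropagation computes gradients and updates the parameters.
    d_out = 2 * (y_pred - y) / len(X) * y_pred * (1 - y_pred)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

# Step 6: after enough repeats, loss typically shrinks and the
# predictions approach [0, 1, 1, 0].
print(round(loss, 4), y_pred.round(2).ravel())
```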
## **4. Example**
* Suppose we build a DNN to recognize **digits (0–9)**:
* Input: image pixels.
* Hidden layers: process patterns.
* Output: probability for each digit.
* Loss: Cross-entropy compares predicted digit with actual digit.
* Backpropagation adjusts weights to improve accuracy.
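A hedged sketch of this digit recognizer in PyTorch, showing the architecture and one training step only. Real training would loop over an actual digit dataset such as MNIST; the layer sizes and fake batch below are illustrative:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),            # input layer: 28x28 pixels -> 784 values
    nn.Linear(784, 128),     # hidden layer: processes patterns
    nn.ReLU(),
    nn.Linear(128, 10),      # output layer: one score per digit 0-9
)
loss_fn = nn.CrossEntropyLoss()   # compares predicted digit vs. actual digit
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One illustrative training step on a fake batch of 32 "images".
images = torch.randn(32, 1, 28, 28)    # stand-in for real digit images
labels = torch.randint(0, 10, (32,))   # stand-in for the true digits

logits = model(images)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()                        # backpropagation
optimizer.step()                       # adjust weights to improve accuracy
probs = logits.softmax(dim=1)          # probability for each digit
```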
## **5. Possible Questions & Answers**
**Q1:** What are parameters in a neural network?
**A1:** The weights and biases that the model learns during training.
**Q2:** What is the role of the encoder and decoder?
**A2:** The encoder compresses input into a representation; the decoder reconstructs the output from it.
**Q3:** Why do we need backpropagation?
**A3:** To adjust weights by calculating error and improving model accuracy.
**Q4:** What is an activation function?
**A4:** A mathematical function that decides if a neuron is activated (e.g., ReLU, Sigmoid).
**Q5:** How does the model know it is wrong?
**A5:** The **loss function** measures the difference between predicted and actual output.
# **Linking Yesterday’s Class to Today’s Topic (ML → OCI)**
## **Recap: Yesterday’s Class (ML & DNN Terms)**
We learned that ML/DNN uses:
* **Parameters (weights + biases)** → what the model learns
* **Encoder & Decoder** → compressing and reconstructing data
* **Backpropagation** → how the model improves
* **Activation Functions & Layers** → how the network processes information
* **Loss Function** → how we measure error

**Challenge:** Training these models requires a lot of computing power, memory, and data storage.
## **Today’s Topic: OCI (Oracle Cloud Infrastructure)**
This is where OCI comes in: it provides the tools and resources to train, deploy, and scale the ML/DNN models we studied yesterday.
## **1. Why OCI for ML/DNN?**
* Training DNNs on laptops is often too slow.
* OCI offers:
  * **Compute power** (GPUs, CPUs) for training.
  * **Storage** for datasets and trained models.
  * **AI/ML services** for building or using pretrained models.
## **2. Key OCI Services for ML/DNN**
* **OCI Compute** – Run VMs, Bare Metal, or GPU-powered instances to train models.
* **OCI Object Storage** – Store large datasets (images, text, videos).
* **OCI Data Science** – Provides JupyterLab environments to build and train models.
* **OCI AI Services** – Use pretrained models for vision, speech, NLP, and anomaly detection.
* **OCI Functions & API Gateway** – Deploy trained models as APIs for real use.
## **3. Example Workflow (Tying It to Yesterday’s Concepts)**
1. **Step 1:** Upload training data (e.g., images of digits) to OCI Object Storage (a code sketch follows this list).
2. **Step 2:** Use OCI Compute with GPUs to train your DNN.
   * Parameters (weights + biases) will be updated with backpropagation.
3. **Step 3:** Store the trained model in Object Storage.
4. **Step 4:** Deploy it as an API using OCI Functions, so users can send an image and get predictions.

This is the same ML pipeline we studied yesterday, but now powered by the cloud.
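To make Step 1 concrete, a rough sketch of the upload using the OCI Python SDK is below. The bucket name, object path, and local file are hypothetical, and the snippet assumes a valid `~/.oci/config` with credentials already set up:

```python
import oci

# Hypothetical sketch: upload a training dataset to OCI Object Storage.
config = oci.config.from_file()                    # reads ~/.oci/config
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data            # tenancy's storage namespace

with open("digits_train.zip", "rb") as f:          # hypothetical local dataset
    client.put_object(
        namespace_name=namespace,
        bucket_name="ml-training-data",            # assumed pre-created bucket
        object_name="datasets/digits_train.zip",
        put_object_body=f,
    )
print("Dataset uploaded; ready for a GPU Compute instance to consume.")
```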
## **4. Questions & Answers**
**Q1:** How does OCI help with training DNN models?
**A1:** It provides GPU-powered compute resources, storage for data, and managed ML tools to speed up training and deployment.
**Q2:** Can I use OCI if I don’t know how to build ML models?
**A2:** Yes, OCI offers AI Services with pretrained models (e.g., vision, language, speech) that you can use directly.
**Q3:** What is the role of Object Storage in ML?
**A3:** It is used to store datasets, model checkpoints, and trained models securely.
**Q4:** Which OCI service is like Jupyter Notebook?
**A4:** OCI Data Science provides JupyterLab environments to write code, train models, and analyze results.
**Q5:** How is yesterday’s concept of backpropagation related to OCI?
**A5:** Backpropagation is the learning process inside ML models. OCI provides the hardware (like GPUs) to perform backpropagation efficiently at scale.