Deep Learning Open Course
These video playlists cover everything from foundational theory to advanced applications, beginning with basic neural networks (NN) and convolutional neural networks (CNN).
-
Introduction to Deep Learning (Intro)
Deep learning is one of the most prominent technologies in artificial intelligence today, showing remarkable capability in image recognition, speech processing, natural language understanding, and more. This series of videos introduces the basic concepts of deep learning, its main theories, how neural networks work, and their practical applications. Viewers can start from the definition and background of deep learning and gradually move into network architectures, optimization techniques, and theoretical foundations. Both newcomers and learners with some background will find valuable content here.
-
Why deep learning is becoming so popular? | Deep Learning Tutorial 2 (Tensorflow2.0, Keras & Python) - codebasics
This video explains four reasons why deep learning has become so popular in the past few years. Topics covered: 00:00 Introduction 00:24 Data growth 01:25 Hardware advancements 02:40 Python and open-source ecosystem 04:00 Cloud and AI boom. I remember implementing the error backpropagation algorithm in C++ for a college project 17 years ago, so why is deep learning only taking off in recent years? Data growth driven by IT adoption, IoT devices, and social media generates so much data that deep learning algorithms can produce genuinely useful results; neural networks show their real power when the training set is huge. Advances in hardware such as GPUs and TPUs allow massive parallel computation, making it possible to run deep learning training in a reasonable amount of time. The Python open-source ecosystem lowered the barrier for people who don't know much programming: they can pick up PyTorch or TensorFlow and write deep learning programs easily. One doesn't need to buy expensive hardware either; renting a machine in the cloud is enough to write machine learning programs. Finally, there is a prevalent AI boom in business, where executives want to benefit from artificial intelligence, and this further accelerates the growth of deep learning.
-
Introduction | Deep Learning Tutorial 1 (Tensorflow Tutorial, Keras & Python) - codebasics
With this video, I am beginning a new deep learning tutorial series for total beginners. In this series I will: 1. Explain neural network concepts in the easiest possible way 2. Go over the math when needed, otherwise keep the tutorials simple 3. Provide exercises you can practice on 4. Mainly use Python, Keras, and TensorFlow (I might cover PyTorch as well) 5. Cover convolutional neural networks (CNN) for image and video processing 6. Cover recurrent neural networks (RNN) for sequential analysis and natural language processing (NLP). Do you want to learn technology from me? Check https://codebasics.io/?utm_source=des... for my affordable video courses.
-
【機器學習 2022】魚與熊掌可以兼得的深度學習 - Hung-yi Lee
-
什么是神经网络what is neural network in machine learning - 莫烦Python
The neural networks discussed here are artificial neural networks, i.e., neural systems that live inside a computer. The video covers the difference between artificial and biological neural networks, what a neural network is, and how it works. A viewer wrote up excellent notes based on my Tensorflow series, recommended reading: http://www.jianshu.com/p/e112012a4b2d
-
科普: 神经网络的黑盒不黑 (深度理解神经网络) - 莫烦Python
Today we talk about the right way to open the "black box" of a neural network in order to understand what it is doing. More content at 莫烦Python: https://mofanpy.com Support me in making better videos: https://mofanpy.com/support/
-
The StatQuest Introduction to PyTorch - StatQuest
PyTorch is one of the most popular tools for making neural networks. This StatQuest walks you through a simple example of how to use PyTorch one step at a time.
-
Introduction to Coding Neural Networks with PyTorch and Lightning - StatQuest
Although we've seen how to code a simple neural network with PyTorch, we can make our lives a lot easier if we add Lightning to the mix.
-
Interesting things about deep learning - 李宏毅
-
Deep Learning Theory 1-1: Can shallow network fit any function? - 李宏毅
-
Deep Learning Theory 1-2: Potential of Deep - 李宏毅
-
Deep Learning Theory 1-3: Is Deep better than Shallow? - 李宏毅
-
Deep Learning Theory 2-1: When Gradient is Zero - 李宏毅
-
Deep Learning Theory 2-2: Deep Linear Network - 李宏毅
-
Deep Learning Theory 2-3: Does Deep Network have Local Minima? - 李宏毅
-
Deep Learning Theory 2-4: Geometry of Loss Surfaces (Conjecture) - 李宏毅
-
Deep Learning Theory 2-5: Geometry of Loss Surfaces (Empirical) - 李宏毅
-
Deep Learning Theory 3-1: Generalization Capability of Deep Learning - 李宏毅
-
Deep Learning Theory 3-2: Indicator of Generalization - 李宏毅
-
How to Make a Prediction - Intro to Deep Learning #1 - Siraj Raval
Welcome to Intro to Deep Learning! This course is for anyone who wants to become a deep learning engineer. I'll take you from the very basics of deep learning to the bleeding edge over the course of 4 months. In this video, we'll predict an animal's body weight given its brain weight using linear regression in 10 lines of Python. I'll have a live session every Wednesday at 10 AM PST that covers my weekly video topics in depth. You can click on the little notification bell next to the subscribe button to get an email notification whenever I'm live, and each session is recorded and uploaded to this channel in case you miss it. This YouTube content is 100% created by me (from the writing to the editing), it'll all be released on my channel, and it's totally free. I am also very proud and excited to announce my new, exclusive partnership with Udacity. Together, we're offering the new Deep Learning Nanodegree Foundation program. If you want to take your game to the next level, this is for you, especially since Udacity will be providing guaranteed admission to their Artificial Intelligence and Self-Driving Car Nanodegree programs to all graduates. They're offering discounted limited-time pricing, so enroll now to enjoy the unique projects, program sets, and expert reviews. Plus, their community is amazing, so don't forget to join the Slack channel after you enroll (I'll be in there too!). And hey, I'm getting paid a small royalty from each enrollment, so let's do this together!
-
How to Do Mathematics Easily - Intro to Deep Learning #4 - Siraj Raval
Let's learn about some key math concepts behind deep learning, shall we? We'll build a 3-layer neural network and dive into key concepts that make deep learning give us such incredible results. Coding challenge for this video: https://github.com/llSourcell/how_to_... Jovian's winning code: https://github.com/jovianlin/siraj-in... Vishal's runner-up code: https://github.com/erilyth/DeepLearni... Linear algebra cheatsheet: http://www.souravsengupta.com/cds2016... Calculus cheatsheet: http://tutorial.math.lamar.edu/pdf/Ca... Statistics cheatsheet: http://web.mit.edu/~csvoss/Public/usa... If you have never had experience with any of these 3 and want to learn from absolute scratch, I'd recommend the respective Khan Academy courses: https://www.khanacademy.org/math More learning resources: https://people.ucsc.edu/~praman1/stat... http://www.vision.jhu.edu/tutorials/I... http://datascience.ibm.com/blog/the-m...
-
Matrix Factorization :: Basic Matrix Factorization @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Matrix Factorization :: Linear Network Hypothesis @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Matrix Factorization :: Summary of Extraction Models @ Machine Learning Techniques (機器學習技法) - 林軒田
-
How to Make Data Amazing - Intro to Deep Learning #5 - Siraj Raval
In this video, we'll go through data preprocessing steps for 3 different datasets. We'll also go in depth on a dimensionality reduction technique called Principal Component Analysis. Coding challenge for this video: https://github.com/llSourcell/How_to_... Charles-David's Winning Code: https://github.com/alkaya/earthquake-... Siby Jack Grove's Runner-up code: https://github.com/sibyjackgrove/Eart...
-
【機器學習2021】機器學習模型的可解釋性 (Explainable ML) (上) – 為什麼類神經網路可以正確分辨寶可夢和數碼寶貝呢? - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/xai_v4.pdf
-
【機器學習2021】機器學習模型的可解釋性 (Explainable ML) (下) –機器心中的貓長什麼樣子? - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/xai_v4.pdf
-
-
Deep Learning Basics: Overview
These videos provide the fundamentals of deep learning and neural networks. From comparing artificial with biological neural networks to the basic concepts and applications of neural nets, the tutorials cover several important facets of deep learning. They also introduce core concepts such as activation functions, batch normalization, and autoencoders, and explore how to handle imbalanced datasets and how to build and optimize a network's architecture.
-
科普: 人工神经网络 VS 生物神经网络 - 莫烦Python
Twenty or thirty years ago, "neural network" called to mind the biological nervous system: tens of thousands of interconnected cells linking senses to reflexes. Today, your first thought is more likely the artificial neural networks inside computers and computer programs. More content: http://mofanpy.com
-
“神经网络”是什么?如何直观理解它的能力极限?它是如何无限逼近真理的? - 王木头学科学
Why do simple neurons, combined together, give rise to intelligence? Where are the limits of a neural network? How can we understand it visually, and how does it approach the truth arbitrarily closely? A simple 3-layer network simulated in GeoGebra: https://www.geogebra.org/m/mhzsp7wy
-
怎样区分好用的特征 (深度学习)? Which are good features (deep learning)? - 莫烦Python
In this video we discuss how to choose a good feature and what a good feature means. What makes a feature good or bad, and how can you tell? Machine learning intro series playlist: https://www.youtube.com/playlist?list=PLXO45tsB95cIFm8Y8vMkNNPPXAtYXwKin
-
什么是“感知机”,它的缺陷为什么让“神经网络”陷入低潮 - 王木头学科学
The perceptron brought about the first boom in neural networks, and its shortcomings likewise sent the field into a slump. What is a perceptron, and why is it so pivotal to neural networks? Twenty-some minutes to understand the perceptron.
-
softmax是为了解决归一问题凑出来的吗?和最大熵是什么关系?最大熵对机器学习为什么非常重要? - 王木头学科学
What is softmax? How can softmax and sigmoid be derived from the principle of maximum entropy? Along the way: moments in probability theory and conditional entropy. Maximum entropy, maximum likelihood, and cross-entropy turn out to be three equivalent approaches.
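Softmax itself fits in a few lines of numpy; a minimal sketch with made-up example logits:

```python
import numpy as np

def softmax(z):
    # Subtracting the max is safe because softmax is shift-invariant,
    # and it prevents overflow in exp() for large logits.
    e = np.exp(z - np.max(z))
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))  # illustration logits
```

The output is a valid probability distribution that preserves the ordering of the logits.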
-
Machine Learning Foundations/Techniques: Deep Learning Activation - 林軒田
-
Handling imbalanced dataset in machine learning | Deep Learning Tutorial 21 (Tensorflow2.0 & Python) - codebasics
Credit card fraud detection, cancer prediction, and customer churn prediction are some examples where you might get an imbalanced dataset. Training a model on an imbalanced dataset requires certain adjustments, otherwise the model will not perform as per your expectations. In this video I discuss various techniques to handle an imbalanced dataset in machine learning, along with Python code that demonstrates these techniques. At the end there is an exercise for you to solve, with a solution link. Code: https://github.com/codebasics/deep-le... Path for csv file: https://github.com/codebasics/deep-le... Exercise: https://github.com/codebasics/deep-le... Focal loss article: https://medium.com/analytics-vidhya/h.... Topics 00:00 Overview 00:01 Handle imbalance using under sampling 02:05 Oversampling (blind copy) 02:35 Oversampling (SMOTE) 03:00 Ensemble 03:39 Focal loss 04:47 Python coding starts 07:56 Code - undersampling 14:31 Code - oversampling (blind copy) 19:47 Code - oversampling (SMOTE) 24:26 Code - Ensemble 35:48 Exercise
-
Dropout Regularization | Deep Learning Tutorial 20 (Tensorflow2.0, Keras & Python) - codebasics
Overfitting and underfitting are common phenomena in machine learning, and the techniques used to tackle overfitting are called regularization. In deep learning, dropout regularization randomly drops neurons from hidden layers, which helps with generalization. In this video we cover the theory behind dropout regularization, then implement an artificial neural network for a binary classification problem and see how adding a dropout layer improves the model's performance.
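The dropout idea described above can be sketched in numpy without any framework. This is inverted dropout as applied at training time; the layer size and rate are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(a, rate=0.5, training=True):
    # Inverted dropout: zero each activation with probability `rate` and scale
    # the survivors by 1 / (1 - rate) so the expected activation is unchanged.
    if not training:
        return a  # at inference time the layer is a no-op
    mask = rng.random(a.shape) >= rate
    return a * mask / (1.0 - rate)

activations = np.ones(10_000)
dropped = dropout(activations)  # roughly half zeros, survivors scaled to 2.0
```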
-
Machine Learning Foundations/Techniques: Deep Learning Initialization / Optimization - 林軒田
-
Chain Rule | Deep Learning Tutorial 15 (Tensorflow2.0, Keras & Python) - codebasics
This video gives a very simple explanation of the chain rule, which is used while training a neural network. The chain rule is covered when you study differential calculus, but don't worry if you don't know calculus: it is a rather simple mathematical concept. As a prerequisite, please watch my video on derivatives in this deep learning series; better yet, watch the entire series step by step so your foundations are clear.
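The chain rule can be checked numerically with a tiny example; here f(x) = (3x + 1)² is a hypothetical composite function chosen for illustration:

```python
def f(x):
    # composite function: outer(u) = u ** 2, inner u(x) = 3 * x + 1
    return (3 * x + 1) ** 2

def df(x):
    # chain rule: f'(x) = outer'(inner(x)) * inner'(x) = 2 * (3x + 1) * 3
    return 2 * (3 * x + 1) * 3

x, h = 2.0, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)  # central difference approximation
```

At x = 2 the analytic derivative is 2 · 7 · 3 = 42, and the finite difference agrees.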
-
Matrix Basics | Deep Learning Tutorial 10 (Tensorflow Tutorial, Keras & Python) - codebasics
Matrix fundamentals are essential to understanding how deep learning works. In this video we go over what a matrix is, matrix multiplication, the dot product, etc. We also do some coding in numpy to multiply matrices, compute dot products, and so on. As usual I have an interesting exercise for you, so make sure you work on it. Code shown in this video: https://github.com/codebasics/deep-le... Exercise: https://github.com/codebasics/deep-le... Useful reading on matrix fundamentals: https://www.mathsisfun.com/algebra/ma...
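The matrix operations mentioned above look like this in numpy (the small example matrices are made up):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

product = A @ B       # matrix multiplication: rows of A dotted with columns of B
elementwise = A * B   # element-wise product, NOT matrix multiplication
dot = np.dot([1, 2, 3], [4, 5, 6])  # 1*4 + 2*5 + 3*6
```

Keeping `@` and `*` straight is a common stumbling block when translating neural network math into code.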
-
Neural Network Simply Explained | Deep Learning Tutorial 4 (Tensorflow2.0, Keras & Python) - codebasics
What is a neural network? A very simple explanation using an analogy that even a high school student can easily understand. Using a simple example, I discuss concepts such as the neuron, the error backpropagation algorithm, the forward pass, the backward pass, and neural network training. 3b1b video on neural nets with some math: • But what is a neural network? | Chapt...
-
What is a neuron? | Deep Learning Tutorial 3 (Tensorflow Tutorial, Keras & Python) - codebasics
In this video, we see how you can think of logistic regression as a neuron. Using an insurance dataset as a sample, we build a logistic regression with a two-step process: step 1, linear regression finds the best-fit line for the given dataset; step 2, a sigmoid (logit) function converts that line's output into values between 0 and 1, which enables classification.
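The two-step neuron described above can be sketched in plain Python. The insurance-style features (age, affordability) follow the video, but the weights below are made-up illustration values, not fitted ones:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def neuron(age, affordability, w1=0.05, w2=1.0, b=-2.0):
    # step 1: linear combination (the "best fit line")
    # step 2: sigmoid squashes it into a probability between 0 and 1
    z = w1 * age + w2 * affordability + b
    return sigmoid(z)

p_buys_insurance = neuron(age=47, affordability=1)
```

With these hypothetical weights, older customers who can afford insurance get a higher predicted probability.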
-
Practical Deep Learning for Coders - Full Course from fast.ai and Jeremy Howard - freeCodeCamp.org
Practical Deep Learning for Coders is a course from fast.ai designed to give you a complete introduction to deep learning. This course was created to make deep learning accessible to as many people as possible. The only prerequisite for this course is that you know how to code (a year of experience is enough), preferably in Python, and that you have at least followed a high school math course. This course was developed by Jeremy Howard and Sylvain Gugger. Jeremy has been using and teaching machine learning for around 30 years. He is the former president of Kaggle, the world's largest machine learning community. Sylvain Gugger is a researcher who has written 10 math textbooks. 🔗 Course website with questionnaires, set-up guide, and more: https://course.fast.ai/ Lessons 7 and 8 are in a second video: • Practical Deep Learning for Coders - ... ⭐️ Course Contents ⭐️ (See next section for book & code.) ⌨️ (0:00:00) Lesson 1 - Your first modules ⌨️ (1:22:55) Lesson 2 - Evidence and p values ⌨️ (2:53:59) Lesson 3 - Production and Deployment ⌨️ (5:00:20) Lesson 4 - Stochastic Gradient Descent (SGD) from scratch ⌨️ (7:01:56) Lesson 5 - Data ethics ⌨️ (9:09:46) Lesson 6 - Collaborative filtering ⌨️ ( • Practical Deep Learning for Coders - ... ) Lesson 7 - Tabular data ⌨️ ( • Practical Deep Learning for Coders - ... ) Lesson 8 - Natural language processing
-
How Deep Neural Networks Work - Brandon Rohrer
Part of the End-to-End Machine Learning School Course 193, How Neural Networks Work at https://e2eml.school/193 Visit the blog: https://brohrer.github.io/how_neural_... Get the slides: https://docs.google.com/presentation/... Errata: 3:40 - I presented a hyperbolic tangent function and labeled it a sigmoid. While it is S-shaped (the literal meaning of "sigmoid"), the term is generally used as a synonym for the logistic function, so the label is misleading; it should read "hyperbolic tangent". 7:10 - The two connections leading to the bottom-most node in the most recently added layer are shown as black when they should be white. This is corrected at 10:10.
-
The Chain Rule - StatQuest
The Chain Rule is a method for finding complex derivatives and is used all the time in Statistics and Machine Learning. This video breaks it down into its two simple pieces and shows you how they easily come together. We then use the Chain Rule to solve a common Machine Learning problem - optimizing the Residual Squared Loss Function.
-
什么是激励函数 (深度学习)? Why need activation functions (deep learning)? - 莫烦Python
今天我们会来聊聊现代神经网络中 必不可少的一个组成部分, 激励函数, activation function. 激励函数也就是为了解决我们日常生活中不能用线性方程所概括的问题.机器学习-简介系列 播放列表: https://www.youtube.com/playlist?list=PLXO45tsB95cIFm...
-
How to Make an Image Classifier - Intro to Deep Learning #6 - Siraj Raval
We're going to make our own Image Classifier for cats & dogs in 40 lines of Python! First we'll go over the history of image classification, then we'll dive into the concepts behind convolutional networks and why they are so amazing. Coding challenge for this video: https://github.com/llSourcell/how_to_... Charles-David's winning code: https://github.com/alkaya/TFmyValenti... Dalai's runner-up code: https://github.com/mdalai/Deep-Learni... More Learning Resources: http://ufldl.stanford.edu/tutorial/su... https://adeshpande3.github.io/adeshpa... http://cs231n.github.io/convolutional... http://deeplearning.net/tutorial/lene... https://ujjwalkarn.me/2016/08/11/intu... http://neuralnetworksanddeeplearning.... http://xrds.acm.org/blog/2016/06/conv... http://andrew.gibiansky.com/blog/mach... https://medium.com/@ageitgey/machine-...
-
How to Learn from Little Data - Intro to Deep Learning #17 - Siraj Raval
One-shot learning! In this last weekly video of the course, I'll explain how memory-augmented neural networks can help achieve one-shot classification for a small labeled image dataset. We'll also go over the architecture of its inspiration, the Neural Turing Machine. Code for this video (with challenge): https://github.com/llSourcell/How-to-...
-
Which Activation Function Should I Use? - Siraj Raval
All neural networks use activation functions, but the reasons behind using them are rarely made clear. Let's discuss what activation functions are, when they should be used, and what the differences between them are. Sample code from this video: https://github.com/llSourcell/Which-A...
-
Build a Neural Net in 4 Minutes - Siraj Raval
How does a neural network work? It's the basis of deep learning and the reason image recognition, chatbots, self-driving cars, and language translation work! In this video, I'll use Python to code up a neural network in just 4 minutes using only the numpy library, which is capable of doing the matrix mathematics. Code for this video: https://github.com/llSourcell/Make_a_...
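A minimal numpy network in the spirit of the video above might look like this: a single sigmoid layer trained by error backpropagation on a toy dataset (the data and iteration count are illustrative choices, not the video's exact code):

```python
import numpy as np

# Toy task: the label is simply column 0 of the input (linearly separable)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0], [0], [1], [1]])

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(1)
w = 2 * rng.random((3, 1)) - 1            # random weights in [-1, 1)

for _ in range(10_000):
    out = sigmoid(X @ w)                  # forward pass
    error = y - out                       # how far off each prediction is
    w += X.T @ (error * out * (1 - out))  # gradient step via the sigmoid derivative

pred = sigmoid(X @ w)
```

After training, predictions for the first two rows approach 0 and for the last two approach 1.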
-
#4.4 AutoEncoder 自编码 (PyTorch tutorial 神经网络 教学) - 莫烦Python
Neural networks can also do unsupervised learning, needing only training data without labels. The autoencoder is one such form: it can cluster data automatically, and it can also sit underneath semi-supervised learning, learning from a few labeled samples plus many unlabeled ones. If you like this, please star my tutorial code on GitHub. Code: https://github.com/MorvanZhou/PyTorch-Tutorial
-
#5.1 为什么 Pytorch 是动态 Dynamic (PyTorch tutorial 神经网络 教学) - 莫烦Python
Everyone who has heard of Torch has heard that it is dynamic, but what does "dynamic" actually mean? We use an RNN example to show what dynamic computation looks like.
-
Review: Basic Structures for Deep Learning Models (Part I) - 李宏毅
-
Review: Basic Structures for Deep Learning Models (Part II) - 李宏毅
-
Computational Graph & Backpropagation - 李宏毅
Reference:
Backpropagation for feedforward net: http://speech.ee.ntu.edu.tw/~tlkagk/courses/MLDS_2015_2/Lecture/DNN%20backprop.ecm.mp4/index.html
Backpropagatio...
-
-
Training and Optimizing Deep Learning Models
These videos focus on training and optimization techniques for deep learning, offering an in-depth look at the key ingredients of the neural network training process. From the choice of optimizer to the different flavors of gradient descent, the tutorials cover strategies for speeding up training and improving model performance. They also address common issues such as overfitting, feature normalization, and hyperparameter tuning. A useful resource for learners who want a deeper understanding of how to optimize and tune deep learning models.
-
优化器 Optimizer 加速神经网络训练 (深度学习) Speed up neural network training process (deep learning) - 莫烦Python
Today we talk about how to speed up neural network training. Methods covered include Stochastic Gradient Descent (SGD), Momentum, AdaGrad, RMSProp, and Adam. English reference: http://sebastianruder.com/optimizing-gradient-descen...
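Of the methods listed above, momentum is the simplest to sketch. Here it is applied to a hypothetical one-parameter quadratic loss, chosen purely for illustration:

```python
def gradient(w):
    # Gradient of a made-up quadratic loss L(w) = (w - 3)^2, minimum at w = 3
    return 2 * (w - 3)

def sgd_momentum(w=0.0, lr=0.1, beta=0.9, steps=200):
    v = 0.0
    for _ in range(steps):
        v = beta * v - lr * gradient(w)  # accumulate an exponentially decaying velocity
        w = w + v                        # move along the velocity, not the raw gradient
    return w

w_final = sgd_momentum()
```

The velocity term lets the update keep moving in a consistent direction, which is the intuition behind the "heavy ball" picture used in these videos.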
-
Gradient Descent For Neural Network | Deep Learning Tutorial 12 (Tensorflow2.0, Keras & Python) - codebasics
Gradient descent is the heart of all supervised learning models, and it is important to understand this technique if you are pursuing a career as a data scientist or machine learning engineer. In this video we see a very simple explanation of gradient descent for a neural network or a logistic regression (remember, logistic regression is a very simple single-neuron neural network). We then implement gradient descent from scratch in Python. My machine learning tutorial series already has a video on gradient descent, but that one uses linear regression whereas this video uses logistic regression for a neural network.
-
Loss or Cost Function | Deep Learning Tutorial 11 (Tensorflow Tutorial, Keras & Python) - codebasics
A loss (or cost) function is an important concept to understand if you want to grasp how a neural network trains itself. We go over various loss functions in this video: mean absolute error (MAE), mean squared error (MSE), and log loss (binary cross-entropy). After the theory we implement these loss functions in Python; going through this implementation may be useful during interviews if you are targeting a data scientist or machine learning engineer role. Code: https://github.com/codebasics/deep-le... Exercise: go to the end of the above notebook.
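The three losses named above are short enough to write out in numpy; the sample labels and predictions below are made up:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0], dtype=float)
y_pred = np.array([0.9, 0.2, 0.8, 0.6, 0.3])

def mae(t, p):
    # mean absolute error: average size of the mistakes
    return np.mean(np.abs(t - p))

def mse(t, p):
    # mean squared error: large mistakes are punished more
    return np.mean((t - p) ** 2)

def log_loss(t, p, eps=1e-15):
    # binary cross-entropy; clipping avoids log(0)
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))
```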
-
Stochastic Gradient Descent vs Batch Gradient Descent vs Mini Batch Gradient Descent |DL Tutorial 14 - codebasics
Stochastic gradient descent, batch gradient descent, and mini-batch gradient descent are three flavors of the gradient descent algorithm. In this video I go over the differences among the three and then implement them in Python from scratch using a housing price dataset. At the end of the video there is an exercise for you to solve.
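The three flavors differ only in batch size. A sketch on synthetic linear data (not the video's housing dataset), where `batch_size=len(X)` gives batch GD and `batch_size=1` gives stochastic GD:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 1))                        # synthetic feature in [0, 1)
y = 4 * X[:, 0] + 2 + rng.normal(0, 0.01, 100)  # true w = 4, b = 2, small noise

def minibatch_gd(X, y, batch_size, lr=0.2, epochs=500):
    """batch_size = len(X) -> batch GD; batch_size = 1 -> stochastic GD."""
    w, b = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        order = rng.permutation(n)              # reshuffle every epoch
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            err = w * X[batch, 0] + b - y[batch]
            w -= lr * 2 * np.mean(err * X[batch, 0])  # dMSE/dw on this batch
            b -= lr * 2 * np.mean(err)                # dMSE/db on this batch
    return w, b

w_fit, b_fit = minibatch_gd(X, y, batch_size=10)
```

Mini-batches trade the exactness of the full-batch gradient for many cheap, noisy updates per epoch.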
-
Gradient Descent, Step-by-Step - StatQuest
Gradient Descent is the workhorse behind most of Machine Learning. When you fit a machine learning method to a training dataset, you're probably using Gradient Descent. It can optimize parameters in a wide variety of settings. Since it's so fundamental to Machine Learning, I decided to make a "step-by-step" video that shows you exactly how it works.
-
SELU - 李宏毅
Prerequisite: https://www.youtube.com/watch?v=xki61j7z-30
-
#5.2 GPU 加速 (PyTorch tutorial 神经网络 教学) - 莫烦Python
Training on a GPU can speed up computation dramatically, and Torch has a good GPU computing system. If you like this, please star my tutorial code on GitHub. Code: https://github.com/MorvanZhou/PyTorch-Tutorial
-
#5.3 过拟合 Dropout (PyTorch tutorial 神经网络 教学) - 莫烦Python
Overfitting is a headache: training error drops nicely, yet test error suddenly shoots up. That is very likely overfitting at work.
-
Data augmentation to address overfitting | Deep Learning Tutorial 26 (Tensorflow, Keras & Python) - codebasics
When we don't have enough training samples to cover the diverse cases in image classification, a CNN will often overfit. To address this we use a technique called data augmentation: generating new training samples from the current training set using transformations such as scaling, rotation, and contrast change. In this video we classify flower images, watch our CNN model overfit, then use data augmentation to generate new training samples and see how model performance improves.
-
[TA 補充課] Optimization for Deep Learning (2/2) (由助教簡仲明同學講授)
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML2020/Optimization.pdf
-
[TA 補充課] Optimization for Deep Learning (1/2) (由助教簡仲明同學講授)
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML2020/Optimization.pdf
-
【機器學習2021】類神經網路訓練不起來怎麼辦 (一): 局部最小值 (local minima) 與鞍點 (saddle point) - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/small-gradient-v7.pdf
-
【機器學習2021】類神經網路訓練不起來怎麼辦 (二): 批次 (batch) 與動量 (momentum) - Hung-yi Lee
Tips for training: Batch and Momentum. Slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/small-gradient-v7.pdf
-
【機器學習2021】類神經網路訓練不起來怎麼辦 (三):自動調整學習速率 (Learning Rate) - Hung-yi Lee
ML2021 week3 3/12. Error surface is rugged ... Tips for training: Adaptive Learning Rate. Slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/opti...
-
【機器學習2021】類神經網路訓練不起來怎麼辦 (四):損失函數 (Loss) 也可能有影響 - Hung-yi Lee
ML2021 week3 3/12 Classification (Short Version). Slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/classification_v2.pdf
-
【機器學習2021】類神經網路訓練不起來怎麼辦 (五): 批次標準化 (Batch Normalization) 簡介 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/normalization_v4.pdf
-
“L1和L2正则化”直观理解(之一),从拉格朗日乘数法角度进行理解 - 王木头学科学
L1 and L2 regularization can be understood from three angles: the Lagrange-multiplier view, the weight-decay view, and the Bayesian-probability view. The three interpretations mean different things yet arrive at the same place.
-
“L1和L2正则化”直观理解(之二),为什么又叫权重衰减?到底哪里衰减了? - 王木头学科学
L1 and L2 regularization can be understood from three angles: Lagrange multipliers, weight decay, and Bayesian probability. Each angle carries a different meaning, but all three lead to the same result.
-
“拉格朗日对偶问题”如何直观理解?“KKT条件” “Slater条件” “凸优化”打包理解 - 王木头学科学
Covered: Lagrange multipliers, the Lagrangian dual problem, convex sets, convex and concave functions, convex optimization, weak and strong duality, the KKT conditions, Slater's condition, and maximum entropy.
-
如何理解“梯度下降法”?什么是“反向传播”?通过一个视频,一步一步全部搞明白 - 王木头学科学
Gradient descent is the algorithm most commonly used for backpropagation-based training. What is a gradient, and how is it used inside a neural network? This video offers not just intuition but also a clear mathematical description.
-
“交叉熵”如何做损失函数?打包理解“信息量”、“比特”、“熵”、“KL散度”、“交叉熵” - 王木头学科学
Of the three common ways to design a loss function, cross-entropy involves the most concepts. This video thoroughly sorts out "information content", "bits", "entropy", "KL divergence", and "cross-entropy".
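The identity tying these concepts together, cross-entropy H(p, q) = H(p) + KL(p‖q), can be verified directly; the two distributions below are made up:

```python
import math

def entropy(p):
    # expected information content (in bits) of the true distribution
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    # expected bits paid when events from p are coded using model q
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def kl_divergence(p, q):
    # the extra bits paid for using q instead of the true p
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]   # "true" distribution
q = [0.25, 0.25, 0.5]   # model distribution
```

Since H(p) is fixed by the data, minimizing cross-entropy over the model is the same as minimizing the KL divergence.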
-
“损失函数”是如何设计出来的?直观理解“最小二乘法”和“极大似然估计法” - 王木头学科学
The gradient computed in gradient descent is the gradient of the loss function, and the choice of loss function directly affects training efficiency. How are loss functions designed? There are three main approaches: least squares, maximum likelihood, and cross-entropy. This video builds intuition for the first two, least squares and maximum likelihood.
-
“随机梯度下降、牛顿法、动量法、Nesterov、AdaGrad、RMSprop、Adam”,打包理解对梯度下降法的优化 - 王木头学科学
Covered: stochastic gradient descent, Newton's method, momentum, Nesterov momentum, AdaGrad, RMSprop, and Adam.
-
什么是神经网络进化? What is Neuro-Evolution? - 莫烦Python
After a long build-up in this evolutionary-algorithms series, we finally reach the most advanced technique. Machine learning and deep learning are inseparable from neural networks, and combining evolution with neural networks has seen breakthroughs in recent years. Evolution strategies intro: https://www.youtube.com/watch?v=Etj_gclFFFo Genetic algorithms intro: http...
-
什么是 L1 L2 正规化 正则化 Regularization (深度学习 deep learning) - 莫烦Python
Today we talk about L1 and L2 regularization, techniques for mitigating overfitting. More content on my teaching site: https://morvanzhou.github.io/tutorials/
-
什么是过拟合 (深度学习)? What is overfitting (deep learning)? - 莫烦Python
Today we talk about overfitting in machine learning and the methods for addressing it.
-
为什么要特征标准化 (深度学习)? Why need the feature normalization (deep learning)? - 莫烦Python
Today we talk about the data machine learning consumes: do we need to massage it first so the model can digest it more easily? That is feature standardization, also called normalization. Standardizing features not only speeds up learning but also keeps the model from learning something distorted. Sklearn feature normal...
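Feature standardization as described above is a two-line operation; the income values below are invented for illustration:

```python
import numpy as np

def standardize(x):
    # z-score scaling: zero mean, unit variance
    return (x - x.mean()) / x.std()

incomes = np.array([30_000.0, 45_000.0, 60_000.0, 250_000.0])  # made-up feature
scaled = standardize(incomes)
```

After scaling, features on wildly different ranges contribute comparably to gradient updates, which is why training speeds up.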
-
站在巨人的肩膀上, 迁移学习 Transfer Learning - 莫烦Python
One kind of laziness is "standing on the shoulders of giants": not only do you see farther, you see more. The saying also means learning from the experience of those before us. In machine learning, that idea is transfer learning.
-
什么是自编码 Autoencoder (深度学习)? What is an Autoencoder in Neural Networks (deep learning)? - 莫烦Python
An autoencoder is a form of neural network that compresses data and then decompresses it; it can also be used for feature dimensionality reduction, similar to PCA. Tensorflow Autoencoder: https://www.youtube.com/watch?v=F2h3tbC-sBk&list=PLXO45tsB95cKI5AIlf5TxxFPzb-0zeVZ8&i...
-
Tuning Hyperparameters - 李宏毅
-
Quantization in deep learning | Deep Learning Tutorial 49 (Tensorflow, Keras & Python) - codebasics
Are you planning to deploy a deep learning model on an edge device (microcontroller, cell phone, or wearable)? You need to optimize or downsize your huge model so that it runs efficiently in a low-resource environment. Quantization is the technique that lets you do that. In this video we cover the topics outlined below. ⭐️ Timestamps ⭐️ 00:00 Overview 01:03 What is Quantization? 03:49 Two ways to perform Quantization 03:56 Post training Quantization 04:47 Quantization aware training 05:47 Coding Code: https://github.com/codebasics/deep-le... Tensorflow articles on quantization: https://www.tensorflow.org/model_opti... https://www.tensorflow.org/model_opti... https://blog.tensorflow.org/2020/04/q... Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist: https://www.youtube.com/playlist?list...
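The core arithmetic of post-training quantization can be sketched without TensorFlow. This is a generic int8 affine (scale + zero-point) scheme, not TFLite's exact implementation:

```python
import numpy as np

def quantize_int8(w):
    # Map the float range [w.min(), w.max()] onto the 256 int8 levels
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-128 - w.min() / scale)
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate floats; error is at most about half a scale step
    return (q.astype(np.float32) - zero_point) * scale

w = np.array([-0.8, -0.1, 0.0, 0.4, 1.2], dtype=np.float32)  # made-up weights
q, scale, zp = quantize_int8(w)
w_restored = dequantize(q, scale, zp)
```

The model shrinks 4x (float32 to int8) at the cost of a small, bounded rounding error per weight.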
-
Lecture 15 | Efficient Methods and Hardware for Deep Learning - Stanford University School of Engineering
In Lecture 15, guest lecturer Song Han discusses algorithms and specialized hardware that can be used to accelerate training and inference of deep learning workloads. We discuss pruning, weight sharing, quantization, and other techniques for accelerating inference, as well as parallelization, mixed precision, and other techniques for accelerating training. We discuss specialized hardware for deep learning such as GPUs, FPGAs, and ASICs, including the Tensor Cores in NVIDIA’s latest Volta GPUs as well as Google’s Tensor Processing Units (TPUs). Keywords: Hardware, CPU, GPU, ASIC, FPGA, pruning, weight sharing, quantization, low-rank approximations, binary networks, ternary networks, Winograd transformations, EIE, data parallelism, model parallelism, mixed precision, FP16, FP32, model distillation, Dense-Sparse-Dense training, NVIDIA Volta, Tensor Core, Google TPU, Google Cloud TPU Slides: http://cs231n.stanford.edu/slides/201...
-
-
Practical Tips and Strategies for Neural Networks
These videos offer an in-depth look at practical tips and strategies for neural networks, focusing on key aspects of deep learning practice. From calculus fundamentals such as derivatives to specific techniques such as word embeddings and activation functions, the tutorials cover a range of topics essential to building and optimizing neural networks. They also explore handling imbalanced data, evaluating a network's performance, and the importance of batch normalization. Suitable for learners looking to level up their deep learning skills and knowledge.
-
怎样检验神经网络 (深度学习)? How to evaluate neural networks (deep learning)? - 莫烦Python
It matters a lot to check whether a neural network has actually learned anything. How should we evaluate our own network, and how do we improve it based on that evaluation? Evaluating a neural network is much the same as evaluating any other machine learning method; we first discuss why evaluation is needed at all.
-
处理不均衡数据 (深度学习)! Dealing with imbalanced data (deep learning) - 莫烦Python
Today we talk about a problem you often meet in machine learning: hands full of imbalanced data. In many datasets the positive and negative classes are unbalanced, for example predicting the one person with cancer among a thousand. Sometimes just predicting the majority class already yields a tiny error, creating the illusion that the model "has learned well". We look at how to avoid that situation.
-
Derivatives | Deep Learning Tutorial 9 (Tensorflow Tutorial, Keras & Python) - codebasics
Derivatives and partial derivatives are important concepts we need in order to understand how neural network training works. We will cover the error backpropagation algorithm, gradient descent, the chain rule, etc. in upcoming videos; this video on derivatives and partial derivatives is a prerequisite for those advanced videos. A derivative is very much like a slope, but it is a function and applies to non-linear equations, whereas a slope is a constant used for linear equations. At the end of this video there is an exercise: find the derivatives of six equations and see how many out of 6 you got right. Exercise: https://github.com/codebasics/deep-le... Derivatives: https://www.mathsisfun.com/calculus/d...
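The "derivative as slope" idea above can be made concrete with a central difference quotient:

```python
def derivative(f, x, h=1e-6):
    # central difference quotient: the slope of f in a tiny window around x
    return (f(x + h) - f(x - h)) / (2 * h)

slope_parabola = derivative(lambda x: x ** 2, 3.0)  # analytic answer: 2x = 6
slope_line = derivative(lambda x: 5 * x, 10.0)      # a line's slope is constant: 5
```

For the parabola the slope changes with x (it is a function); for the line it is the same everywhere (a constant), exactly as the description says.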
-
什么是 Batch Normalization 批标准化 (深度学习 deep learning) - 莫烦Python
Batch Normalization, like ordinary data normalization, is a way of unifying scattered data, and it is also a method for optimizing neural networks. As mentioned in the earlier introduction video on normalization, data with a unified scale makes it easier for machine learning to discover the underlying patterns... Doing Batch Normalization with Tensorflow ...
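The standardize-then-rescale step that batch normalization applies to each batch can be sketched in a few lines of plain Python (training-mode statistics only; a real layer also tracks running averages and learns gamma and beta):

```python
def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    # Standardize the batch to zero mean / unit variance, then rescale
    # with the learnable parameters gamma and beta.
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in batch]

normalized = batch_norm([1.0, 2.0, 3.0, 4.0])
```

With the defaults gamma=1, beta=0 the output has (approximately) zero mean and unit variance, which is exactly the "unified scale" the video refers to.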
-
Batch Normalization - 李宏毅
The lecture given at MLDS (Fall 2017).
-
#5.4 Batch Normalization 批标准化 (PyTorch tutorial 神经网络 教学) - 莫烦Python
Batch normalization, put simply, normalizes the output of every layer of the network. We know that normalizing the input data lets machine learning learn efficiently. If each subsequent layer can be seen as receiving input data in the same way, why not "batch normalize" every layer?
-
Activation Functions | Deep Learning Tutorial 8 (Tensorflow Tutorial, Keras & Python) - codebasics
Activation functions (step, sigmoid, tanh, relu, leaky relu) are very important in building a non-linear model for a given problem. In this video we cover the different activation functions used while building a neural network and discuss their pros and cons: 1) Step 2) Sigmoid 3) tanh 4) ReLU (rectified linear unit) 5) Leaky ReLU. We also write Python code to implement these functions and see how they behave for sample inputs. GitHub link for the code in this tutorial: https://github.com/codebasics/deep-le...
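For reference, the five functions listed above can be sketched in plain Python (tanh comes straight from the standard library):

```python
import math

def step(x):
    return 1.0 if x >= 0 else 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))   # squashes to (0, 1)

def relu(x):
    return max(0.0, x)                   # zero for negatives

def leaky_relu(x, alpha=0.01):
    return x if x > 0 else alpha * x     # small slope for negatives

# tanh is available directly as math.tanh; squashes to (-1, 1).
values = [-2.0, 0.0, 2.0]
table = {f.__name__: [f(v) for v in values]
         for f in (step, sigmoid, relu, leaky_relu)}
```

Comparing `table` row by row makes the pros and cons concrete: ReLU kills negative inputs entirely, while Leaky ReLU lets a small gradient through.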
-
Finale :: Feature Exploitation Techniques @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Matrix Factorization :: Stochastic Gradient Descent @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Converting words to numbers, Word Embeddings | Deep Learning Tutorial 39 (Tensorflow & Python) - codebasics
Machine learning models don't understand words; words must be converted to numbers before they are fed to an RNN or any other machine learning model. In this tutorial we look at various techniques for converting words to numbers: 1) Unique numbers 2) One-hot encoding 3) Word embeddings. Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist: https://www.youtube.com/playlist?list...
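The first two techniques are mechanical enough to sketch directly; the embedding table below is hand-written for illustration only, since real embeddings are learned (e.g. by an Embedding layer):

```python
# Three techniques for turning words into numbers, on a toy vocabulary.
vocab = ["king", "queen", "apple"]

# 1) Unique integer per word.
word_to_index = {w: i for i, w in enumerate(vocab)}

# 2) One-hot vector per word: all zeros except at the word's index.
def one_hot(word):
    vec = [0] * len(vocab)
    vec[word_to_index[word]] = 1
    return vec

# 3) A hand-written (hypothetical) dense embedding table; in practice
# these vectors are learned so similar words end up close together.
embedding = {"king":  [0.9, 0.8],
             "queen": [0.9, 0.7],
             "apple": [0.1, 0.2]}
```

Note how the one-hot vectors are all equally far apart, while the embedding places "king" near "queen" and far from "apple" — the property that makes embeddings useful.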
-
-
Neural Networks (NN)
Neural networks (NN) are the cornerstone of deep learning, built to mimic how the human brain processes information. This series of videos introduces beginners to the core concepts of neural networks, including their structure, how they work, and how to train them. The videos range from the single-layer perceptron to multi-layer feedforward networks, and from the basic backpropagation algorithm to more complex architectures. Suitable both for beginners curious about neural networks and for researchers who want to dig deeper.
-
Neural Network Simply Explained | Deep Learning Tutorial 4 (Tensorflow2.0, Keras & Python) - codebasics
What is a neural network? A very simple explanation of a neural network, using an analogy that even a high school student can easily understand. Using a simple example, I discuss concepts such as what a neuron is, the error back-propagation algorithm, the forward pass, the backward pass, neural network training, etc. 3b1b video on neural nets with some math: • But what is a neural network? | Chapt... Prerequisites for this series: 1) Python tutorials (first 16 videos) 2) Pandas tutorials (first 8 videos) 3) Machine learning playlist (first 16 videos).
-
Neural Networks 1: a 3-minute history - Victor Lavrenko
Artificial Neural Networks (ANNs) were pioneered in the 1940s, received a lot of hype in the 1950s, were re-discovered in the 1980s with the Backpropagation algorithm, and are now transforming the field of Machine Learning.
-
Neural Networks 2: machine learning = feature engineering - Victor Lavrenko
-
Neural Networks 3: axons, dendrites, synapses - Victor Lavrenko
-
Neural Networks 4: McCulloch & Pitts neuron - Victor Lavrenko
-
Neural Networks 5: feedforward, recurrent and RBM - Victor Lavrenko
-
Neural Networks 6: solving XOR with a hidden layer - Victor Lavrenko
-
Neural Networks 7: universal approximation - Victor Lavrenko
-
Neural Networks 8: hidden units = features - Victor Lavrenko
-
Neural Networks 9: derivatives we need for backprop - Victor Lavrenko
-
Neural Networks 10: Backpropagation: how it works - Victor Lavrenko
-
Neural Networks 11: Backpropagation in detail - Victor Lavrenko
-
Neural Networks 12: multiclass classification - Victor Lavrenko
-
But what is a neural network? | Chapter 1, Deep learning - 3Blue1Brown
Correction: at 14:45, the n in the bias vector should be k. Thanks to the sharp-eyed viewers who caught it.
-
Gradient descent, how neural networks learn | Chapter 2, Deep learning - 3Blue1Brown
-
What is backpropagation really doing? | Chapter 3, Deep learning - 3Blue1Brown
What's actually happening to a neural network as it learns?
-
Backpropagation calculus | Chapter 4, Deep learning - 3Blue1Brown
-
【機器學習2021】神經網路壓縮 (Network Compression) (一) - 類神經網路剪枝 (Pruning) 與大樂透假說 (Lottery Ticket Hypothesis) - Hung-yi Lee
https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/tiny_v7.pdf
-
【機器學習2021】神經網路壓縮 (Network Compression) (二) - 從各種不同的面向來壓縮神經網路 - Hung-yi Lee
https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/tiny_v7.pdf
-
[TA 補充課] Network Compression (2/2): Network Pruning (由助教劉俊緯同學講授)
slides: https://slides.com/arvinliu/model-compression
-
[TA 補充課] Network Compression (1/2): Knowledge Distillation (由助教劉俊緯同學講授)
slides: https://slides.com/arvinliu/model-compression
-
神经网络 : 梯度下降 (Gradient Descent in Neural Nets) - 莫烦Python
The neural network is by far the most popular deep learning framework, and its basic principle is quite simple: a gradient descent mechanism. Today we take a look at this remarkable optimization scheme. More code and related content: https://mofanpy.com Tensorflow visualization of gradient descent: https://youtu.be/ugGkewcKk9Q
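The gradient descent mechanism the video describes reduces, in one dimension, to repeatedly stepping against the derivative. A minimal sketch minimizing a made-up quadratic:

```python
# Minimize f(w) = (w - 4)**2 by stepping against the gradient f'(w) = 2*(w - 4).
w = 0.0
learning_rate = 0.1

for _ in range(100):
    grad = 2 * (w - 4)       # derivative of the loss at the current w
    w -= learning_rate * grad  # step downhill

# w converges toward the minimum at w = 4.
```

Neural network training does exactly this, just with millions of parameters and gradients supplied by backpropagation instead of a hand-derived formula.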
-
Neural Networks Pt. 1: Inside the Black Box - StatQuest
Neural Networks are one of the most popular Machine Learning algorithms, but they are also one of the most poorly understood. Everyone says Neural Networks a...
-
Neural Networks Pt. 2: Backpropagation Main Ideas - StatQuest
Backpropagation is the method we use to optimize parameters in a Neural Network. The ideas behind backpropagation are quite simple, but there are tons of det...
-
Backpropagation Details Pt. 1: Optimizing 3 parameters simultaneously. - StatQuest
The main ideas behind Backpropagation are super simple, but there are tons of details when it comes time to implementing it. This video shows how to optimize...
-
Backpropagation Details Pt. 2: Going bonkers with The Chain Rule - StatQuest
This StatQuest picks up right where Part 1 left off, and this time we're going to go totally bonkers with The Chain Rule and optimize every single parameter i...
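The chain-rule bookkeeping can be illustrated on a single neuron with squared error, checking the hand-derived gradient against a numerical one (all numbers here are made up):

```python
# Chain rule on a single-neuron squared error: L = (w*x - t)**2.
x, t = 2.0, 10.0
w = 3.0

pred = w * x                    # forward pass
loss = (pred - t) ** 2

# Backward pass via the chain rule:
# dL/dw = dL/dpred * dpred/dw = 2*(pred - t) * x.
dL_dw = 2 * (pred - t) * x

# Numerical check of the same derivative via central difference.
h = 1e-6
numeric = (((w + h) * x - t) ** 2 - ((w - h) * x - t) ** 2) / (2 * h)
```

The two gradients agree, which is the standard sanity check when implementing backpropagation by hand.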
-
Neural Networks Pt. 3: ReLU In Action!!! - StatQuest
The ReLU activation function is one of the most popular activation functions for Deep Learning and Convolutional Neural Networks. However, the function itsel...
-
Neural Networks Pt. 4: Multiple Inputs and Outputs - StatQuest
So far, this series has explained how very simple Neural Networks, with only 1 input and 1 output, function. This video shows how these exact same concepts g...
-
Neural Networks Part 5: ArgMax and SoftMax - StatQuest
When your Neural Network has more than one output, then it is very common to train with SoftMax and, once trained, swap SoftMax out for ArgMax. This video gi...
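A plain-Python sketch of the SoftMax-for-training, ArgMax-for-inference pairing described above (the scores are made up):

```python
import math

def softmax(scores):
    # Subtracting the max is the standard numerical-stability trick;
    # it does not change the resulting probabilities.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def argmax(values):
    # Index of the largest value -- what we swap in after training.
    return max(range(len(values)), key=lambda i: values[i])

probs = softmax([2.0, 1.0, 0.1])
predicted_class = argmax(probs)
```

SoftMax outputs sum to 1 and are differentiable (good for training); ArgMax just picks the winner (good for final predictions, but useless for gradients).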
-
The SoftMax Derivative, Step-by-Step!!! - StatQuest
Here's a step-by-step guide that shows you how to take the derivatives of the SoftMax function, as used as a final output layer in a Neural Network. NOTE: This...
-
Neural Networks Part 6: Cross Entropy - StatQuest
When a Neural Network is used for classification, we usually evaluate how well it fits the data with Cross Entropy. This StatQuest gives you an overview of ...
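A minimal sketch of the cross entropy computation, showing that a confident wrong prediction is penalized far more than a confident correct one (the probabilities are made up):

```python
import math

def cross_entropy(p_true, p_pred, eps=1e-12):
    # -sum(p * log(q)); eps guards against log(0).
    return -sum(p * math.log(q + eps) for p, q in zip(p_true, p_pred))

low = cross_entropy([1.0, 0.0], [0.9, 0.1])   # confident and correct
high = cross_entropy([1.0, 0.0], [0.1, 0.9])  # confident but wrong
```

The asymmetry (roughly 0.11 vs 2.3 here) is why cross entropy trains classifiers better than squared error: badly wrong predictions produce large gradients.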
-
Neural Networks Part 7: Cross Entropy Derivatives and Backpropagation - StatQuest
Here is a step-by-step guide that shows you how to take the derivative of the Cross Entropy function for Neural Networks and then shows you how to use that d...
-
Neural Networks Part 8: Image Classification with Convolutional Neural Networks (CNNs) - StatQuest
One of the coolest things that Neural Networks can do is classify images, and this is often done with a type of Neural Network called a Convolutional Neural Network (or CNN for short). In this StatQuest, we walk through how Convolutional Neural Networks work, one step at a time, and highlight the main ideas behind filters and pooling.
-
Sequence-to-Sequence (seq2seq) Encoder-Decoder Neural Networks, Clearly Explained!!! - StatQuest
In this video, we introduce the basics of how Neural Networks translate one language, like English, to another, like Spanish. The idea is to convert one seq...
-
Attention for Neural Networks, Clearly Explained!!! - StatQuest
Attention is one of the most important concepts behind Transformers and Large Language Models, like ChatGPT. However, it's not that complicated. In this Stat...
-
Tensors for Neural Networks, Clearly Explained!!! - StatQuest
Tensors are super important for neural networks, but can be confusing because different people use the word "Tensor" differently. In this StatQuest, we clear this up and tell you what the big deal is. BAM!
-
Neural Networks - The Math of Intelligence #4 - Siraj Raval
Have you ever wondered what the math behind neural networks looks like? What gives them such incredible power? We're going to cover 4 different neural networ...
-
Backpropagation Explained - Siraj Raval
The most popular optimization strategy in machine learning is called gradient descent. When gradient descent is applied to neural networks, it's called back-propagation. In this video, I'll use analogies, animations, equations, and code to give you an in-depth understanding of this technique. Once you feel comfortable with back-propagation, everything else becomes easier. It uses calculus to help us update our machine learning models. Enjoy!
-
Neural Network :: Motivation @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Neural Network :: Neural Network Hypothesis @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Neural Network :: Neural Network Learning @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Neural Network :: Optimization and Regularization @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Deep Learning :: Deep Neural Network @ Machine Learning Techniques (機器學習技法) - 林軒田
-
How the Perceptron Algorithm Works 1/2 - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
How the Perceptron Algorithm Works 2/2 - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
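The perceptron update rule covered in these two parts can be sketched in a few lines; this toy version learns the linearly separable AND function (integer weights and a learning rate of 1, chosen for clarity):

```python
# Perceptron learning rule on the (linearly separable) AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0, 0]   # weights
b = 0        # bias

def predict(x):
    # Step transfer function: fire iff the weighted sum clears the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

for _ in range(10):                  # a few epochs suffice for AND
    for x, target in data:
        error = target - predict(x)  # -1, 0, or +1
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the loop stops making mistakes; XOR, by contrast, would never converge with a single layer.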
-
Java Implementation of the Perceptron Algorithm - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Transfer Functions in the Perceptron and Artificial Neural Networks - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Understanding Multi-Layer Perceptron (MLP) .. How it Works - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
Feedforward and Feedback Artificial Neural Networks - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
How to Make a Neural Network - Intro to Deep Learning #2 - Siraj Raval
How do we learn? In this video, I'll discuss our brain's biological neural network, then we'll talk about how an artificial neural network works. We'll create our own single layer feedforward network in Python, demo it, and analyze the implications of our results. This is the 2nd weekly video in my intro to deep learning series (Udacity nanodegree) The coding challenge for this video: https://github.com/llSourcell/Make_a_... Ludo's winning code: https://github.com/ludobouan/linear-r... Amanullah's runner up code: https://github.com/amanullahtariq/MLA...
-
Implement Neural Network In Python | Deep Learning Tutorial 13 (Tensorflow2.0, Keras & Python) - codebasics
In this video we will implement a simple neural network with a single neuron from scratch in Python. This is also an implementation of logistic regression in Python from scratch: logistic regression can be thought of as a simple neural network. The prerequisite for this tutorial is the previous tutorial on gradient descent (link below); we will use the gradient descent Python function written in that video to implement our own custom neural network class. Watch previous video on gradient descent: • Gradient Descent For Neural Network |... Code of this tutorial: https://github.com/codebasics/deep-le...
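A condensed sketch of the same idea: one sigmoid neuron trained with batch gradient descent on a made-up "hours studied vs. pass/fail" dataset (an illustration, not the video's exact code):

```python
import math

# Hypothetical data: hours studied -> pass (1) / fail (0).
xs = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
ys = [0,   0,   0,   1,   1,   1]

w, b = 0.0, 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(5000):
    # Gradients of the log loss w.r.t. w and b, averaged over the batch.
    dw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db

predictions = [1 if sigmoid(w * x + b) >= 0.5 else 0 for x in xs]
```

After training, the neuron's decision boundary lands between 1.5 and 3.0 hours, separating the two classes — a single neuron really is logistic regression.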
-
Lecture 16: Dynamic Neural Networks for Question Answering - Stanford University School of Engineering
Lecture 16 addresses the question "Can all NLP tasks be seen as question answering problems?". Key phrases: Coreference Resolution, Dynamic Memory Networks for Question Answering over Text and Images
-
学习分享一年,对神经网络的理解全都在这40分钟里了 - 王木头学科学
-
How to Predict Stock Prices Easily - Intro to Deep Learning #7 - Siraj Raval
We're going to predict the closing price of the S&P 500 using a special type of recurrent neural network called an LSTM network. I'll explain why we use recurrent nets for time series data, and why LSTMs boost our network's memory power. Coding challenge for this video: https://github.com/llSourcell/How-to-... Vishal's winning code: https://github.com/erilyth/DeepLearni... Jie's runner up code: https://github.com/jiexunsee/Simple-I... More Learning Resources: http://colah.github.io/posts/2015-08-... http://deeplearning.net/tutorial/lstm... https://deeplearning4j.org/lstm.html https://www.tensorflow.org/tutorials/... http://machinelearningmastery.com/tim... https://blog.terminal.com/demistifyin...
-
Customer churn prediction using ANN | Deep Learning Tutorial 18 (Tensorflow2.0, Keras & Python) - codebasics
In this video we will build a customer churn prediction model using an artificial neural network, or ANN. Customer churn measures how and why customers are leaving the business. We will use the telecom customer churn dataset from Kaggle (link below) and build a deep learning model for churn prediction. We will also examine the precision, recall, and accuracy of this model using a confusion matrix and classification report.
-
-
Convolutional Neural Networks (CNN)
Convolutional neural networks (CNN) are a powerful deep learning tool specialized for processing image data. This series of videos gives a comprehensive introduction to the basic concepts of CNNs, how they work, and their applications in image recognition and classification. From the principles of convolutional layers and the role of pooling layers, to designing and training an efficient CNN model, these videos lead viewers step by step into this key technology. The content covers the convolution operation, activation functions, loss functions and optimization strategies, as well as real-world CNN case studies. Suitable for students and professionals interested in computer vision and deep learning.
-
Simple explanation of convolutional neural network | Deep Learning Tutorial 23 (Tensorflow & Python) - codebasics
A very simple explanation of a convolutional neural network (CNN or ConvNet), such that even a high school student can understand it easily. This video involves very little math and is perfect for a total beginner with no idea what a CNN is or how it works. We will cover topics such as: 1. Why humans have traditionally been better at image recognition than computers 2. Disadvantages of using a traditional artificial neural network (ANN) for image classification 3. How the human brain recognizes images 4. How computers can use filters for feature detection 5. What the convolution operation is and how it works 6. The importance of ReLU activation in CNN 7. The importance of the pooling operation in CNN 8. How to handle rotation and scale in CNN. A good article on whether CNN is scale/rotation invariant: https://stats.stackexchange.com/quest...
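Point 5 above, the convolution operation, can be sketched directly: slide a filter over the image and record the sum of elementwise products at each position (toy image and filter made up for illustration):

```python
def conv2d(image, kernel):
    # "Valid" convolution (no padding, stride 1): slide the filter over
    # the image and take the sum of elementwise products at each position.
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)]
            for i in range(oh)]

# A dark-to-bright vertical edge, and a filter that fires on it.
image = [[0, 0, 1, 1]] * 4
kernel = [[-1, 1]]               # responds to left-to-right increases
response = conv2d(image, kernel)
```

The response is nonzero exactly where the brightness jumps, which is how a filter acts as a feature detector.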
-
Convolution padding and stride | Deep Learning Tutorial 25 (Tensorflow2.0, Keras & Python) - codebasics
In this video we will cover what padding and stride are in the convolution operation. Padding allows corner pixels in the image to participate fully in feature detection. Stride is the amount by which you move the filter window during the convolution operation.
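The effect of padding and stride on the output size follows the standard formula floor((n + 2p - f) / s) + 1, sketched here:

```python
def conv_output_size(n, f, padding=0, stride=1):
    # Standard formula: floor((n + 2*padding - f) / stride) + 1,
    # for an n x n input and an f x f filter.
    return (n + 2 * padding - f) // stride + 1

same_size  = conv_output_size(6, 3, padding=1, stride=1)  # "same" padding
valid_size = conv_output_size(6, 3, padding=0, stride=1)  # "valid", shrinks
strided    = conv_output_size(6, 3, padding=0, stride=2)  # stride halves it
```

For a 6x6 input and 3x3 filter: padding of 1 preserves the 6x6 size, no padding shrinks it to 4x4, and stride 2 shrinks it further to 2x2.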
-
【機器學習2021】卷積神經網路 (Convolutional Neural Networks, CNN) - Hung-yi Lee
ML2021 week 3 (3/12): Convolutional Neural Network (CNN). Slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/cnn_v4.pdf
-
卷积神经网络的底层是傅里叶变换,傅里叶变换的底层是希尔伯特空间坐标变换 - 王木头学科学
Why can a convolutional neural network (CNN) recognize features? The principle explained from the perspective of the Fourier transform, plus the Fourier transform explained as a change of coordinates in Hilbert space.
-
从“卷积”、到“图像卷积操作”、再到“卷积神经网络”,“卷积”意义的3次改变 - 王木头学科学
While teaching myself convolutional neural networks, I wanted to thoroughly understand the meaning of convolution, starting from its definition and working through it step by step. Unexpectedly, that approach dug me into a hole: from its original definition to convolutional neural networks, the meaning of "convolution" has changed three times, and the more you know about the original meaning, the easier it is to end up in a dead end of understanding. In about 20 minutes, this video helps you avoid taking the same detour.
-
什么是卷积神经网络 CNN (深度学习)? What is Convolutional Neural Networks (deep learning)? - 莫烦Python
A brief introduction to convolutional neural networks. The convolutional neural network is an artificial neural network architecture that has risen to prominence in recent years. Because it gives better predictions in image and speech recognition, the technique has spread and been applied widely. Its most common application is computer image recognition, but thanks to continuous innovation it is also used in video analysis, natural language processing, drug discovery, and more. Recently...
-
Convolutional Neural Networks - The Math of Intelligence (Week 4) - Siraj Raval
Convolutional Networks allow us to classify images, generate them, and can even be applied to other types of data. We're going to build one in numpy that can...
-
Lecture 1 | Introduction to Convolutional Neural Networks for Visual Recognition - Stanford University School of Engineering
Lecture Collection | Convolutional Neural Networks for Visual Recognition (Spring 2017) Lecture 1 gives an introduction to the field of computer vision, discussing its history and key challenges. We emphasize that computer vision encompasses a wide variety of different tasks, and that despite the recent successes of deep learning we are still a long way from realizing the goal of human-level visual intelligence. Keywords: Computer vision, Cambrian Explosion, Camera Obscura, Hubel and Wiesel, Block World, Normalized Cut, Face Detection, SIFT, Spatial Pyramid Matching, Histogram of Oriented Gradients, PASCAL Visual Object Challenge, ImageNet Challenge Slides: http://cs231n.stanford.edu/slides/201...
-
Lecture 2 | Image Classification - Stanford University School of Engineering
Lecture Collection | Convolutional Neural Networks for Visual Recognition (Spring 2017) Lecture 2 formalizes the problem of image classification. We discuss the inherent difficulties of image classification, and introduce data-driven approaches. We discuss two simple data-driven image classification algorithms: K-Nearest Neighbors and Linear Classifiers, and introduce the concepts of hyperparameters and cross-validation. Keywords: Image classification, K-Nearest Neighbor, distance metrics, hyperparameters, cross-validation, linear classifiers Slides: http://cs231n.stanford.edu/slides/201... -------------------------------------------------------------------------------------- Convolutional Neural Networks for Visual Recognition Instructors: Fei-Fei Li: http://vision.stanford.edu/feifeili/ Justin Johnson: http://cs.stanford.edu/people/jcjohns/ Serena Yeung: http://ai.stanford.edu/~syyeung/ Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization and detection. Recent developments in neural network (aka “deep learning”) approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This lecture collection is a deep dive into details of the deep learning architectures with a focus on learning end-to-end models for these tasks, particularly image classification. From this lecture collection, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. Website: http://cs231n.stanford.edu/
-
Lecture 3 | Loss Functions and Optimization - Stanford University School of Engineering
Lecture 3 continues our discussion of linear classifiers. We introduce the idea of a loss function to quantify our unhappiness with a model’s predictions, and discuss two commonly used loss functions for image classification: the multiclass SVM loss and the multinomial logistic regression loss. We introduce the idea of regularization as a mechanism to fight overfitting, with weight decay as a concrete example. We introduce the idea of optimization and the stochastic gradient descent algorithm. We also briefly discuss the use of feature representations in computer vision. Keywords: Image classification, linear classifiers, SVM loss, regularization, multinomial logistic regression, optimization, stochastic gradient descent Slides: http://cs231n.stanford.edu/slides/201...
-
Lecture 4 | Introduction to Neural Networks - Stanford University School of Engineering
In Lecture 4 we progress from linear classifiers to fully-connected neural networks. We introduce the backpropagation algorithm for computing gradients and briefly discuss connections between artificial neural networks and biological neural networks. Keywords: Neural networks, computational graphs, backpropagation, activation functions, biological neurons Slides: http://cs231n.stanford.edu/slides/201...
-
Lecture 5 | Convolutional Neural Networks - Stanford University School of Engineering
In Lecture 5 we move from fully-connected neural networks to convolutional neural networks. We discuss some of the key historical milestones in the development of convolutional networks, including the perceptron, the neocognitron, LeNet, and AlexNet. We introduce convolution, pooling, and fully-connected layers which form the basis for modern convolutional networks. Keywords: Convolutional neural networks, perceptron, neocognitron, LeNet, AlexNet, convolution, pooling, fully-connected layers Slides: http://cs231n.stanford.edu/slides/201...
-
Lecture 6 | Training Neural Networks I - Stanford University School of Engineering
In Lecture 6 we discuss many practical issues for training modern neural networks. We discuss different activation functions, the importance of data preprocessing and weight initialization, and batch normalization; we also cover some strategies for monitoring the learning process and choosing hyperparameters. Keywords: Activation functions, data preprocessing, weight initialization, batch normalization, hyperparameter search Slides: http://cs231n.stanford.edu/slides/201...
-
Lecture 7 | Training Neural Networks II - Stanford University School of Engineering
Lecture 7 continues our discussion of practical issues for training neural networks. We discuss different update rules commonly used to optimize neural networks during training, as well as different strategies for regularizing large neural networks including dropout. We also discuss transfer learning and finetuning. Keywords: Optimization, momentum, Nesterov momentum, AdaGrad, RMSProp, Adam, second-order optimization, L-BFGS, ensembles, regularization, dropout, data augmentation, transfer learning, finetuning Slides: http://cs231n.stanford.edu/slides/201...
-
Lecture 8 | Deep Learning Software - Stanford University School of Engineering
In Lecture 8 we discuss the use of different software packages for deep learning, focusing on TensorFlow and PyTorch. We also discuss some differences between CPUs and GPUs. Keywords: CPU vs GPU, TensorFlow, Keras, Theano, Torch, PyTorch, Caffe, Caffe2, dynamic vs static computational graphs Slides: http://cs231n.stanford.edu/slides/201...
-
Lecture 9 | CNN Architectures - Stanford University School of Engineering
In Lecture 9 we discuss some common architectures for convolutional neural networks. We discuss architectures which performed well in the ImageNet challenges, including AlexNet, VGGNet, GoogLeNet, and ResNet, as well as other interesting models. Keywords: AlexNet, VGGNet, GoogLeNet, ResNet, Network in Network, Wide ResNet, ResNeXT, Stochastic Depth, DenseNet, FractalNet, SqueezeNet Slides: http://cs231n.stanford.edu/slides/201...
-
Lecture 12 | Visualizing and Understanding - Stanford University School of Engineering
In Lecture 12 we discuss methods for visualizing and understanding the internal mechanisms of convolutional networks. We also discuss the use of convolutional networks for generating new images, including DeepDream and artistic style transfer. Keywords: Visualization, t-SNE, saliency maps, class visualizations, fooling images, feature inversion, DeepDream, style transfer Slides: http://cs231n.stanford.edu/slides/201...
-
Lecture 13: Convolutional Neural Networks - Stanford University School of Engineering
Lecture 13 provides a mini tutorial on Azure and GPUs followed by research highlight "Character-Aware Neural Language Models." Also covered are CNN Variant 1 and 2 as well as comparison between sentence models: BoV, RNNs, CNNs.
-
#4.1 CNN 卷积神经网络 (PyTorch tutorial 神经网络 教学) - 莫烦Python
Convolutional neural networks are now widely used in image recognition, with endless applications emerging. Let's build a CNN that classifies handwritten digits, step by step. If you like this, please star my Tutorial code on Github. Code: https://github.com/MorvanZhou/PyTorch-...
-
-
CNN in Practice
This series of videos focuses on the practical application and implementation of convolutional neural networks (CNN), offering rich examples from basic to advanced. Viewers can learn how to use CNNs for image classification, including dataset preprocessing, model building, training, and optimization. Beyond theory, the videos include hands-on guides such as programming with Tensorflow and Python, building complete deep learning projects, and deploying models on different platforms. The cases span a variety of applications, from general image classification to domain-specific tasks such as heart disease prediction and crop disease classification, making them ideal for learners who want to apply deep learning to real problems.
-
Image Classification with Convolutional Neural Networks (CNNs) - StatQuest
One of the coolest things that Neural Networks can do is classify images, and this is often done with a type of Neural Network called a Convolutional Neural ...
-
Image classification using CNN (CIFAR10 dataset) | Deep Learning Tutorial 24 (Tensorflow & Python) - codebasics
In this video we will do small image classification using the CIFAR10 dataset in tensorflow, with a convolutional neural network. First we train a model using a simple artificial neural network and check its performance, then we train a CNN and see how the model accuracy improves. This tutorial will help you understand why CNN is preferred over ANN for image classification. Code: https://github.com/codebasics/deep-le... Exercise: scroll to the very end of the above notebook; you will find the exercise description and a solution link.
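One reason CNN is preferred over ANN for images is parameter count; the layer sizes below are illustrative choices, not the video's exact architecture:

```python
# Why CNN over a dense ANN for 32x32x3 CIFAR-10 images: parameter count.
input_pixels = 32 * 32 * 3            # 3072 inputs per image

# Dense layer with 512 hidden units: every input connects to every unit,
# plus one bias per unit.
dense_params = input_pixels * 512 + 512

# Conv layer with 32 filters of size 3x3x3 (plus one bias per filter):
# the same small set of weights is shared across all spatial positions.
conv_params = 32 * (3 * 3 * 3 + 1)
```

Weight sharing cuts the layer from roughly 1.6 million parameters to under a thousand, which is why CNNs both train faster and generalize better on images.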
-
Image Classification using CNN | Deep Learning Tutorial | Machine Learning Project 9 | Edureka
This Edureka video on 'Image Classification using CNN' will give you an overview of image classification using machine learning and help you understand various important concepts involved. The following pointers are covered: 1) Introduction 2) Tools and Frameworks 3) Project
-
Computer Vision and Perception for Self-Driving Cars (Deep Learning Course) - freeCodeCamp.org
Learn about Computer Vision and Perception for Self Driving Cars. This series focuses on the different tasks that a Self Driving Car Perception unit would be required to do. ✏️ Course by Robotics with Sakshay. / @roboticswithsakshay ⭐️ Course Contents and Links ⭐️ ⌨️ (0:00:00) Introduction ⌨️ (0:02:16) Fully Convolutional Network | Road Segmentation 🔗 Kaggle Dataset: https://www.kaggle.com/sakshaymahna/k... 🔗 Kaggle Notebook: https://www.kaggle.com/sakshaymahna/f... 🔗 KITTI Dataset: http://www.cvlibs.net/datasets/kitti/ 🔗 Fully Convolutional Network Paper: https://arxiv.org/abs/1411.4038 🔗 Hand Crafted Road Segmentation: • Udacity Self Driving Cars Advanced La... 🔗 Deep Learning and CNNs: • But what is a neural network? | Chapt... ⌨️ (0:20:45) YOLO | 2D Object Detection 🔗 Kaggle Competition/Dataset: https://www.kaggle.com/c/3d-object-de... 🔗 Visualization Notebook: https://www.kaggle.com/sakshaymahna/l... 🔗 YOLO Notebook: https://www.kaggle.com/sakshaymahna/y... 🔗 Playlist on Fundamentals of Object Detection: • CNN-Object Detection 🔗 Blog on YOLO: https://www.section.io/engineering-ed... 🔗 YOLO Paper: https://arxiv.org/abs/1506.02640 ⌨️ (0:35:51) Deep SORT | Object Tracking 🔗 Dataset: https://www.kaggle.com/sakshaymahna/k... 🔗 Notebook/Code: https://www.kaggle.com/sakshaymahna/d... 🔗 Blog on Deep SORT: / object-tracking-using-deepsort-in-tensorfl... 🔗 Deep SORT Paper: https://arxiv.org/abs/1703.07402 🔗 Kalman Filter: • Understanding Kalman Filters 🔗 Hungarian Algorithm: https://www.geeksforgeeks.org/hungari... 🔗 Cosine Distance Metric: https://www.machinelearningplus.com/n... 🔗 Mahalanobis Distance: https://www.machinelearningplus.com/s... 🔗 YOLO Algorithm: • YOLO | 2D Object Detection | Percepti... ⌨️ (0:52:37) KITTI 3D Data Visualization | Homogenous Transformations 🔗 Dataset: https://www.kaggle.com/garymk/kitti-3... 🔗 Notebook/Code: https://www.kaggle.com/sakshaymahna/l... 
🔗 LIDAR: https://geoslam.com/what-is-lidar/ 🔗 Tesla doesn't use LIDAR: https://towardsdatascience.com/why-te... ⌨️ (1:06:45) Multi Task Attention Network (MTAN) | Multi Task Learning 🔗 Dataset: https://www.kaggle.com/sakshaymahna/c... 🔗 Notebook/Code: https://www.kaggle.com/sakshaymahna/m... 🔗 Data Visualization: https://www.kaggle.com/sakshaymahna/e... 🔗 MTAN Paper: https://arxiv.org/abs/1803.10704 🔗 Blog on Multi Task Learning: https://ruder.io/multi-task/ 🔗 Image Segmentation and FCN: • Fully Convolutional Network | Road Se... ⌨️ (1:20:58) SFA 3D | 3D Object Detection 🔗 Dataset: https://www.kaggle.com/garymk/kitti-3... 🔗 Notebook/Code: https://www.kaggle.com/sakshaymahna/s... 🔗 Data Visualization: https://www.kaggle.com/sakshaymahna/l... 🔗 Data Visualization Video: • KITTI 3D Data Visualization | Homogen... 🔗 SFA3D GitHub Repository: https://github.com/maudzung/SFA3D 🔗 Feature Pyramid Networks: / understanding-feature-pyramid-networks-for... 🔗 Keypoint Feature Pyramid Network: https://arxiv.org/pdf/2001.03343.pdf 🔗 Heat Maps: https://en.wikipedia.org/wiki/Heat_map 🔗 Focal Loss: / understanding-focal-loss-a-quick-read 🔗 L1 Loss: https://afteracademy.com/blog/what-ar... 🔗 Balanced L1 Loss: https://paperswithcode.com/method/bal... 🔗 Learning Rate Decay: / learning-rate-decay-and-methods-in-deep-le... 🔗 Cosine Annealing: https://paperswithcode.com/method/cos... ⌨️ (1:40:24) UNetXST | Camera to Bird's Eye View 🔗 Dataset: https://www.kaggle.com/sakshaymahna/s... 🔗 Dataset Visualization: https://www.kaggle.com/sakshaymahna/d... 🔗 Notebook/Code: https://www.kaggle.com/sakshaymahna/u... 🔗 UNetXST Paper: https://arxiv.org/pdf/2005.04078.pdf 🔗 UNetXST Github Repository: https://github.com/ika-rwth-aachen/Ca... 🔗 UNet: https://towardsdatascience.com/unders... 🔗 Image Transformations: https://kevinzakka.github.io/2017/01/... 🔗 Spatial Transformer Networks: https://kevinzakka.github.io/2017/01/...
-
Convolutional Networks for Heart Disease Prediction (AlphaCare: Episode 1) - Siraj Raval
AlphaCare is an open-source project that Keshav Boudaria and I have been working on for the past few weeks, built entirely on top of freely available open-source data, algorithms, and compute. In this first video of the AlphaCare series, I'll explain how we can use it to classify ECG data from patient heartbeats and accurately predict the likelihood of different types of heart disease, mainly arrhythmia. The goal of AlphaCare is to progressively improve its capabilities as a community until it can be used as a tool to treat and prevent the top 10 major diseases globally. Ultimately, we'd like to use it to treat the root cause of all diseases: aging. AlphaCare is a work in progress, and we have a lot of work to do together. I can't wait to learn and grow with all of you; let's make a massive positive impact together!
-
Deep learning project end to end | Potato Disease Classification Using CNN - 1 : Problem Statement - codebasics
This is the first video in an end-to-end deep learning project series in the agriculture domain. Every year, farmers face economic loss and crop waste due to various diseases in potato plants. We will build an image classifier using a CNN and a mobile app with which a farmer can take a picture; the app will tell them whether the plant has a disease or not. The technology stack for this project will be, Model Building: tensorflow, CNN, data augmentation, tf dataset Backend Server and ML Ops: tf serving, FastAPI Model Optimization: Quantization, Tensorflow Lite Frontend: React JS, React Native Deployment: GCP (Google Cloud Platform), GCF (Google Cloud Functions) Code: https://github.com/codebasics/potato-...
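Later parts of this series split the collected images into training, validation, and test sets. As a minimal sketch of that idea (the file names and the 80/10/10 ratio below are purely illustrative, not the project's actual code):

```python
import random

def split_dataset(paths, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle file paths and split them into train/val/test subsets."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)           # shuffle before splitting
    n_train = int(len(paths) * train_frac)
    n_val = int(len(paths) * val_frac)
    return (paths[:n_train],                     # training set
            paths[n_train:n_train + n_val],      # validation set
            paths[n_train + n_val:])             # held-out test set

images = [f"img_{i}.jpg" for i in range(100)]    # hypothetical file names
train_set, val_set, test_set = split_dataset(images)
```

Fixing the seed keeps the split reproducible across runs, which matters when comparing models.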
-
Deep learning project end to end | Potato Disease Classification - 2 :Data collection, preprocessing - codebasics
📺 End to end deep learning project for potato disease classification. In this session, we will cover, ⭐️ Timestamps ⭐️ 00:00 Three ways of collecting data 01:35 Coding begins 05:42 Load data into tf.Dataset 12:24 Data visualization 15:12 Train test split 27:10 Data augmentation Code: https://github.com/codebasics/potato-... Dataset is taken from Kaggle: https://www.kaggle.com/arjuntejaswi/p...
-
Deep learning project end to end | Potato Disease Classification - 3 : Model Building - codebasics
We will train a convolutional neural network in tensorflow using potato plant images. The goal of this model is to classify these images as healthy, early blight, or late blight. We will cover the following topics, ⭐️ Timestamps ⭐️ 00:00 Introduction 00:28 Build and train a CNN model 11:31 Plot training history on graph 15:26 Make predictions/inference on sample images 24:06 Export model to a file on disk 27:42 Exercise for you (very important!) :)
-
Deep learning project end to end | Potato Disease Classification - 4 : FastAPI/tf serving Backend - codebasics
tf serving is a convenient way to serve machine learning models. In this video we will first build a FastAPI web server and test it using the Postman application. We will then look at an alternate way of doing the same thing, this time using tf serving + FastAPI, and discuss some benefits of tf serving as well. Code: https://github.com/codebasics/potato-... What is CORS? • CORS in 100 Seconds tf serving tutorial: • tf serving tutorial | tensorflow serv... FastAPI tutorial: • FastAPI Tutorial | FastAPI vs Flask ⭐️ Timestamps ⭐️ 00:00 Introduction 01:36 Installation 02:53 Approach 1: FastAPI server 22:29 Approach 2: FastAPI + tf serving server
-
Deep learning project end to end | Potato Disease Classification - 5 : Website (In React JS) - codebasics
We will build a testpad website in React JS that supports drag and drop of a potato plant leaf image. When an image is dropped on the website, it calls the FastAPI backend to perform the inference. In this video we will go over the website code, talk a bit about React JS, connect the website to the backend, and perform the inference. Frontend Code: https://github.com/codebasics/potato-... All Code: https://github.com/codebasics/potato-... What is CORS? • CORS in 100 Seconds ⭐️ Timestamps ⭐️ 00:00 Introduction 01:56 Installation 06:48 Website Code
-
Deep learning project end to end | Potato Disease Classification - 6 : ImageDataGenerator API - codebasics
In parts 2 and 3, we used an explicit data augmentation layer to train our model. In this video I want to show you an alternate way of training the model using a slightly more convenient API called ImageDataGenerator. Both approaches are valid, but the ImageDataGenerator API lets you load images from disk and augment them in just two lines of code. Code in this video: https://github.com/codebasics/potato-... Code for entire project: https://github.com/codebasics/potato-... Project Playlist : • Potato Disease Classification: Deep l... Project Hindi Playlist : • HINDI Deep learning project end to en... Deep learning tutorials: • Deep Learning With Tensorflow 2.0, Ke... Python tutorials: • Python 3 Programming Tutorials for Be... All machine learning projects: • Data Science & Machine Learning Projects
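ImageDataGenerator itself is a Keras API, but the kind of on-the-fly augmentation it automates, such as a random horizontal flip, can be sketched in plain NumPy. This is an illustration of the idea only, not the Keras API:

```python
import numpy as np

def random_flip(image, rng):
    """Randomly flip an image horizontally -- one of the augmentations
    that on-the-fly generators apply to each batch."""
    if rng.random() < 0.5:
        return image[:, ::-1, :]   # reverse the width axis
    return image                   # otherwise leave the image unchanged

rng = np.random.default_rng(0)
img = np.arange(6).reshape(2, 3, 1)   # a tiny fake 2x3 single-channel image
out = random_flip(img, rng)
```

Because the flip is decided per call, every epoch sees a slightly different version of each image, which is the point of augmentation.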
-
Deep learning project end to end | Potato Disease Classification - 7 : Model Deployment To GCP - codebasics
We will deploy our trained model on Google Cloud Platform (GCP) in this video.
-
Deep learning project end to end | Potato Disease Classification - 8 : Mobile App in React Native - codebasics
We will build a mobile application in React Native that farmers can use to take a picture of a potato plant and detect whether the plant has a disease.
-
-
Recurrent Neural Networks (RNN)
Recurrent Neural Networks (RNNs) are a neural network architecture specialized for sequential data such as time series and natural-language text. This series gives a comprehensive introduction to RNNs, covering their basic principles and structure as well as variants such as Long Short-Term Memory networks (LSTM) and Gated Recurrent Units (GRU). The videos explain how RNNs work and how they are used in language modeling, machine translation, and other natural language processing tasks. Viewers will learn the core concepts, architectural design, and how to implement RNNs with popular deep learning frameworks such as TensorFlow and PyTorch. Suitable for students and professionals interested in sequence-data analysis and natural language processing.
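The key mechanism behind an RNN, a hidden state updated by the same weights at every time step, can be sketched in a few lines of NumPy. The dimensions and random weights below are purely illustrative:

```python
import numpy as np

def rnn_forward(xs, Wx, Wh, b):
    """Simple (Elman) RNN: the same weights are applied at every step,
    and the hidden state carries information across the sequence."""
    h = np.zeros(Wh.shape[0])
    for x in xs:                          # same Wx, Wh reused each step
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h                              # final state summarizes the sequence

rng = np.random.default_rng(1)
seq = [rng.standard_normal(4) for _ in range(6)]      # a 6-step sequence
Wx, Wh, b = rng.standard_normal((3, 4)), rng.standard_normal((3, 3)), np.zeros(3)
h_final = rnn_forward(seq, Wx, Wh, b)
```

This weight sharing across time is what lets one network handle sequences of any length.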
-
Bidirectional RNN | Deep Learning Tutorial 38 (Tensorflow, Keras & Python) - codebasics
Bidirectional RNNs are used in NLP problems where what comes after a given word in the sentence influences the final outcome. In this short video we will look at the bidirectional RNN architecture using a very simple named entity recognition example. Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist : https://www.youtube.com/playlist?list... 🔖Hashtags🔖 #bidirectional #bidirectionalrnn #bidirectionallstm #bidirectionalrecurrentneuralnetwork #bidirectionalnlp #bidirectionalrnnarchitecture #bidirectionalrnndeeplearning #bidirectionalrnn #bidirectionalRNNkeras #bidirectionalrnnexplained #tensorflowbidirectionalrnn
-
Recursive Network - 李宏毅
-
Highway Network & Grid LSTM - 李宏毅
-
Spatial Transformer Layer - 李宏毅
-
Conditional Generation by RNN & Attention - 李宏毅
-
Pointer Network - 李宏毅
Pointer network is covered in MLDS (Fall, 2017) but not in MLDS (Spring, 2017), so I uploaded this video to make the playlist complete.
-
Simple Explanation of GRU (Gated Recurrent Units) | Deep Learning Tutorial 37 (Tensorflow & Python) - codebasics
Simple explanation of GRU (Gated Recurrent Units): similar to LSTM, the gated recurrent unit addresses the short-term memory problem of traditional RNNs. It was invented in 2014 and is becoming more popular than LSTM. In this video we will understand the theory behind GRU using a very simple explanation and examples. Do you want to learn technology from me? Check https://codebasics.io/?utm_source=des... for my affordable video courses. LSTM Video: • Simple Explanation of LSTM | Deep Lea... Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist: https://www.youtube.com/playlist?list... #gatedrecurrentunits #grudeeplearning #gruarchitecture #grulstm #grurnn
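As a rough sketch of the gating the video describes, a single GRU step combines an update gate z and a reset gate r. The weight shapes below are illustrative, not trained values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: gates decide how much past state to keep."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate: how much to refresh
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate: how much past to forget
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate hidden state
    return (1 - z) * h + z * h_tilde          # blend old state and candidate

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
Wz, Wr, Wh = (rng.standard_normal((d_h, d_in)) for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((d_h, d_h)) for _ in range(3))
h = np.zeros(d_h)
for _ in range(5):   # run the cell over a 5-step random sequence
    h = gru_cell(rng.standard_normal(d_in), h, Wz, Uz, Wr, Ur, Wh, Uh)
```

When z is near 0 the old state passes through almost untouched, which is how the GRU keeps information over long spans.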
-
Simple Explanation of LSTM | Deep Learning Tutorial 36 (Tensorflow, Keras & Python) - codebasics
LSTM, or long short-term memory, is a special type of RNN that solves the traditional RNN's short-term memory problem. In this video I will give a very simple explanation of LSTM using some real-life examples so that you can understand this difficult topic easily. Also refer to the following blog to explore the math and a few more details. http://colah.github.io/posts/2015-08-... Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist : https://www.youtube.com/playlist?list... #lstm #lstmmodel #lstmalgorithm #lstmneuralnetwork #lstmarchitecture #lstmdeeplearning #lstmkeras
-
Types of RNN | Recurrent Neural Network Types | Deep Learning Tutorial 34 (Tensorflow & Python) - codebasics
In this video we will discuss different types of RNN types such as, 1) One to many 2) Many to many 3) Many to one #typesofrnn #rnnindeeplearning #recurrentneuralnetworktypes #deeplearningtutorial #rnntypes #deeplarningrnn
-
What is Recurrent Neural Network (RNN)? Deep Learning Tutorial 33 (Tensorflow, Keras & Python) - codebasics
RNNs, or Recurrent Neural Networks, are also known as sequence models and are used mainly in the field of natural language processing, as well as some other areas such as speech-to-text translation, video activity monitoring, etc. In this video we will understand the intuition behind RNNs and see how they work. Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist: https://www.youtube.com/playlist?list... #recurrentneuralnetwork #rnn #rnndeeplearning #whatisrnn #deeplearningtutorial #rnnneuralnetwork #rnntutorial
-
Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) - Brandon Rohrer
Part of the End-to-End Machine Learning School Course 193, How Neural Networks Work at https://e2eml.school/193
-
Lecture 8: Recurrent Neural Networks and Language Models - Stanford University School of Engineering
Lecture 8 covers traditional language models, RNNs, and RNN language models. Also reviewed are important training problems and tricks, RNNs for other sequence tasks, and bidirectional and deep RNNs.
-
Lecture 9: Machine Translation and Advanced Recurrent LSTMs and GRUs - Stanford University School of Engineering
Lecture 9 recaps the most important concepts and equations covered so far followed by machine translation and fancy RNN models tackling MT. Key phrases: Language Models. RNN. Bi-directional RNN. Deep RNN. GRU. LSTM.
-
Lecture 10 | Recurrent Neural Networks - Stanford University School of Engineering
In Lecture 10 we discuss the use of recurrent neural networks for modeling sequence data. We show how recurrent neural networks can be used for language modeling and image captioning, and how soft spatial attention can be incorporated into image captioning models. We discuss different architectures for recurrent neural networks, including Long Short Term Memory (LSTM) and Gated Recurrent Units (GRU). Keywords: Recurrent neural networks, RNN, language modeling, image captioning, soft attention, LSTM, GRU Slides: http://cs231n.stanford.edu/slides/201...
-
Lecture 11: Gated Recurrent Units and Further Topics in NMT - Stanford University School of Engineering
Lecture 11 provides a final look at gated recurrent units like GRUs/LSTMs followed by machine translation evaluation, dealing with large vocabulary output, and sub-word and character-based models. Also includes research highlight ""Lip reading sentences in the wild."" Key phrases: Seq2Seq and Attention Mechanisms, Neural Machine Translation, Speech Processing
-
什么是循环神经网络 RNN (深度学习)? What is Recurrent Neural Networks (deep learning)? - 莫烦Python
A brief introduction to recurrent neural networks. Today we talk about RNNs, which move effortlessly through language analysis and sequential data. Tensorflow RNN1: https://www.youtube.com/watch?v=i-cd3wzsHtw&index=23&list=PLXO45tsB95cKI5AIlf5TxxFPzb-0zeVZ8Te...
-
什么是 LSTM RNN 循环神经网络 (深度学习)? What is LSTM in RNN (deep learning)? - 莫烦Python
Today we talk about the shortcomings of plain RNNs and the LSTM technique proposed to solve them. Machine learning intro series playlist: https://www.youtube.com/playlist?list=PLXO45tsB95cIFm8Y8vMkNNPPXAtYXwKin Tensorflow 20.2: https://www.yo...
-
Recurrent Neural Networks (RNNs), Clearly Explained!!! - StatQuest
When you don't always have the same amount of data, like when translating different sentences from one language to another, or making stock market predictions from different companies, Recurrent Neural Networks come to the rescue. In this StatQuest, we'll show you how Recurrent Neural Networks work, one step at a time, and then we'll show you their critical flaw that will lead us to understanding Long Short-Term Memory Networks.
-
Long Short-Term Memory (LSTM), Clearly Explained - StatQuest
Basic recurrent neural networks are great, because they can handle different amounts of sequential data, but even relatively small sequences of data can make...
-
Long Short-Term Memory with PyTorch + Lightning - StatQuest
In this StatQuest we'll learn how to code an LSTM unit from scratch and then train it. Then we'll do the same thing with the PyTorch function nn.LSTM(). Alon...
-
Recurrent Neural Network - The Math of Intelligence (Week 5) - Siraj Raval
Recurrent neural networks let us learn from sequential data (time series, music, audio, video frames, etc ). We're going to build one from scratch in numpy (...
-
LSTM Networks - The Math of Intelligence (Week 8) - Siraj Raval
Recurrent networks can be improved to remember long-range dependencies by using what's called a Long Short-Term Memory (LSTM) cell. Let's build one using just...
-
Gated RNN and Sequence Generation (Recorded at Fall, 2017) - 李宏毅
-
-
RNN Applications
Recurrent Neural Networks (RNNs) are widely used in natural language processing (NLP) and other sequence-processing domains. This series focuses on RNNs in practice, including text processing, automatic text generation, machine translation, and advanced techniques using attention mechanisms. The videos range from data-processing basics to more complex models such as LSTMs, stacked RNNs, and bidirectional RNNs, and also explore applying RNNs in real projects, such as a machine learning model for Bitcoin trading. Suitable for students and professionals interested in NLP and other fields that require sequence-data processing.
-
#4.2 RNN 循环神经网络 分类 (PyTorch tutorial 神经网络 教学) - 莫烦Python
Classification with an RNN on MNIST in PyTorch. If you like this, please star my Tutorial code on Github. Code: https://github.com/MorvanZhou/PyTorch-Tutorial Detailed written tutorial: https://morv...
-
#4.3 RNN 循环神经网络 回归 (PyTorch tutorial 神经网络 教学) - 莫烦Python
Recurrent neural networks give neural networks memory, so they perform better on sequential data. This time we get real and use an RNN to predict a time series. If you like this, please star my Tutorial code on Github. Code: https://github.com/MorvanZ...
-
RNN模型与NLP应用(1/9):数据处理基础 - Shusen Wang
This lecture covers data processing, in particular one-hot encoding of categorical features. Slides: https://github.com/wangshusen/DeepLea...
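The one-hot encoding this lecture describes can be sketched in plain Python. The category values below are made up for illustration:

```python
def one_hot_encode(values):
    """Map each categorical value to a one-hot vector over the sorted
    set of categories seen in the data."""
    categories = sorted(set(values))                  # fixed category order
    index = {c: i for i, c in enumerate(categories)}
    vectors = [[1 if index[v] == i else 0 for i in range(len(categories))]
               for v in values]
    return vectors, categories

vectors, categories = one_hot_encode(["US", "CN", "IN", "US"])
```

Each vector has exactly one 1, so no artificial ordering is imposed on the categories, which is why one-hot is preferred over integer codes for nominal features.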
-
RNN模型与NLP应用(2/9):文本处理与词嵌入 - Shusen Wang
This lecture covers text processing and word embedding. A shallow neural network is used to classify movie reviews as positive or negative sentiment. Slides: https://github.com/wangshusen/DeepLea...
-
RNN模型与NLP应用(3/9):Simple RNN模型 - Shusen Wang
This lecture covers the basics of RNNs (recurrent neural networks) and their implementation in Keras. Slides: https://github.com/wangshusen/DeepLea...
-
RNN模型与NLP应用(4/9):LSTM模型 - Shusen Wang
This lecture covers the principles of Long Short-Term Memory (LSTM) and its implementation in Keras. Slides: https://github.com/wangshusen/DeepLea...
-
RNN模型与NLP应用(5/9):多层RNN、双向RNN、预训练 - Shusen Wang
This lecture has three topics: 1. stacked RNNs, 2. bidirectional RNNs, 3. pretraining. Outline: 0:16 Stacked RNN 4:12 Bidirectional RNN 8:15 Pretrain Slides: https://github.com/wangshusen/DeepLea...
-
RNN模型与NLP应用(6/9):Text Generation (自动文本生成) - Shusen Wang
The main topic of this lecture is text generation: we train a text generator that produces text automatically. Slides: https://github.com/wangshusen/DeepLea... Text generator code: Section 8.1 of François Chollet's book Deep Learning with Python.
-
RNN模型与NLP应用(7/9):机器翻译与Seq2Seq模型 - Shusen Wang
This lecture introduces the Sequence-to-Sequence model and uses it for machine translation. Slides: https://github.com/wangshusen/DeepLea...
-
RNN模型与NLP应用(8/9):Attention (注意力机制) - Shusen Wang
This lecture introduces the attention mechanism. The first attention paper was published in 2015 to improve the Sequence-to-Sequence (Seq2Seq) model, and it greatly improved machine-translation accuracy. Attention avoids the RNN's forgetting problem and lets the RNN focus on the most relevant information. The lecture explains in detail how attention is combined with the Seq2Seq model; attention is useful well beyond Seq2Seq, and later lectures cover Self-Attention and the Transformer model. Slides: https://github.com/wangshusen/DeepLea...
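The attention step this lecture describes, scoring each encoder state against the current decoder state and taking a softmax-weighted sum, can be sketched in NumPy. Shapes are illustrative; keys and values are taken to be the same encoder states, as in basic Seq2Seq attention:

```python
import numpy as np

def attention(query, keys, values):
    """Dot-product attention: score each encoder state against the
    decoder query, softmax the scores, return the weighted sum."""
    scores = keys @ query                       # one score per encoder state
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights = weights / weights.sum()
    return weights @ values, weights            # weighted sum = context vector

rng = np.random.default_rng(0)
keys = rng.standard_normal((5, 4))   # 5 encoder hidden states of size 4
values = keys                        # basic Seq2Seq attention: keys == values
query = rng.standard_normal(4)       # current decoder state
context, weights = attention(query, keys, values)
```

The weights sum to 1, so the context vector is a convex combination of encoder states, letting the decoder "look back" at the whole input instead of relying on one fixed summary vector.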
-
RNN模型与NLP应用(9/9):Self-Attention (自注意力机制) - Shusen Wang
This lecture introduces Self-Attention (also called intra-attention), which is very similar to attention but not limited to Seq2Seq models and can be used with any RNN. Experiments show Self-Attention helps many machine learning and natural language processing tasks; the main principle of the Transformer model is attention plus self-attention. Slides: https://github.com/wangshusen/DeepLea...
-
Bitcoin Trading Bot use LSTM (Tutorial) - Siraj Raval
Cryptocurrency can be a high-risk, high-reward game for those willing to deal with the volatility. Can we use AI to help us make predictions about Bitcoin's ...
-
-
Radial Basis Function Networks (RBF)
A Radial Basis Function network (RBF network) is a special type of artificial neural network widely used in pattern recognition and function approximation. This series explores the theoretical foundations, learning methods, and practical applications of RBF networks, covering their basic hypothesis, learning strategies, and how the k-means algorithm is used for network initialization and training, along with demonstrations of RBF networks in real cases. Suitable for students and researchers interested in RBF networks and their applications in machine learning.
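A minimal sketch of the RBF forward pass these lectures cover, assuming Gaussian basis functions and centers of the kind k-means would normally supply (the centers, width beta, and weights below are illustrative):

```python
import numpy as np

def rbf_forward(x, centers, beta, w):
    """RBF network: hidden units respond to distance from the centers;
    the output is a linear combination of those responses."""
    dists = np.linalg.norm(centers - x, axis=1)   # distance to each center
    phi = np.exp(-beta * dists ** 2)              # Gaussian basis activations
    return w @ phi                                # linear output layer

centers = np.array([[0.0, 0.0], [1.0, 1.0]])      # e.g. found by k-means
w = np.array([1.0, -1.0])
y = rbf_forward(np.array([0.0, 0.0]), centers, beta=2.0, w=w)
```

An input sitting on a center activates that unit fully (phi = 1) while far-away units contribute little, which is the "local response" property that distinguishes RBF networks from sigmoid-based ones.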
-
Radial Basis Function Network :: RBF Network Hypothesis @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Radial Basis Function Network :: RBF Network Learning @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Radial Basis Function Network :: k-Means Algorithm @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Radial Basis Function Network :: k-Means and RBFNet in Action @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Radial Basis Function Artificial Neural Networks - Noureddin Sadawi
My web page:www.imperial.ac.uk/people/n.sadawi
-
-
Computer Vision (CV)
Computer vision (CV) and image classification are key application areas of deep learning, focused on enabling computers to recognize and process visual information from images or video. This series gives a comprehensive introduction to the fundamentals, including important topics such as image classification, object detection, and image segmentation, ranging from simple classification techniques to more complex detection methods such as sliding windows and other deep learning approaches. Well suited to students and professionals who want to understand how deep learning is used to process and analyze image data.
-
Lecture 11 | Detection and Segmentation - YouTube
In Lecture 11 we move beyond image classification, and show how convolutional networks can be applied to other core computer vision tasks. We show how fully convolutional networks equipped with downsampling and upsampling layers can be used for semantic segmentation, and how multitask losses can be used for localization and pose estimation. We discuss a number of methods for object detection, including the region-based R-CNN family of methods and single-shot methods like SSD and YOLO. Finally we show how ideas from semantic segmentation and object detection can be combined to perform instance segmentation. Keywords: Semantic segmentation, fully convolutional networks, unpooling, transpose convolution, localization, multitask losses, pose estimation, object detection, sliding window, region proposals, R-CNN, Fast R-CNN, Faster R-CNN, YOLO, SSD, DenseCap, instance segmentation, Mask R-CNN Slides: http://cs231n.stanford.edu/slides/201...
-
Applications of computer vision | Deep Learning Tutorial 22 (Tensorflow2.0, Keras & Python) - codebasics
Advancements in deep learning (especially the invention of the convolutional neural network, or CNN/ConvNet) have made possible many amazing things in the field of computer vision. In this video we will look at applications of deep learning and computer vision in the following industries, 00:00 Overview of computer vision 00:26 Personal photo management 02:00 Banking ( • How to Deposit Checks with the Mobile... ) 03:00 Agriculture ( • AI and the future of agriculture ) 04:47 Autonomous cars ( • Full Self-Driving ) 06:25 Retail (Amazon Go) ( • Introducing Amazon Go and the world’s... )
-
Image classification vs Object detection vs Image Segmentation | Deep Learning Tutorial 28 - codebasics
Using a simple example I will explain the difference between image classification, object detection and image segmentation in this video. Do you want to learn technology from me? Check https://codebasics.io/ for my affordable video courses. Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist: https://www.youtube.com/playlist?list... #Imageclassification #Objectdetection #Imagesegmentation #DeepLearningTutorial #DeepLearning
-
Sliding Window Object Detection | Deep Learning Tutorial 30 (Tensorflow, Keras & Python) - codebasics
Sliding window object detection is a technique that allows you to detect objects in a picture. This technique is not very efficient as it is very compute intensive. More recently, new techniques have been discovered that try to improve performance, such as R-CNN, Fast R-CNN, Faster R-CNN, etc. YOLO (You Only Look Once) is a state-of-the-art modern technique that outperforms all previous techniques such as sliding window object detection, R-CNN, Fast and Faster R-CNN. We will cover YOLO in future videos. Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist : https://www.youtube.com/playlist?list... #objectdetection #deeplearningobjectdetection #slidingwindowobjectdetection #deeplearningtutorial
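Why sliding-window detection is compute intensive is easy to see from a sketch that just enumerates the window positions a classifier would have to score. The image size, window size, and stride below are illustrative:

```python
def sliding_windows(width, height, win, stride):
    """Enumerate every top-left (x, y) window position; each one is a
    separate crop the classifier must evaluate."""
    return [(x, y)
            for y in range(0, height - win + 1, stride)
            for x in range(0, width - win + 1, stride)]

positions = sliding_windows(width=8, height=8, win=4, stride=2)
```

Even this toy 8x8 image yields 9 windows at a single scale; a real image at multiple scales produces thousands of classifier calls, which is the inefficiency that R-CNN-style methods and YOLO were designed to avoid.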
-
Popular datasets for computer vision: ImageNet, Coco and Google Open images | Deep Learning 29 - codebasics
ImageNet, COCO and Google Open Images are the 3 most popular image datasets for computer vision. These datasets provide millions of hand-annotated images with classification labels, bounding boxes for object detection, and image segmentation masks. Data collection is the most difficult part of any supervised machine learning problem, and the availability of such datasets makes training CNNs very easy. #computervisiondatasets #deeplearningdatasets #imagenetdataset #cocodataset #googleopenimagesdatasets #deeplearningdatasets
-
[深度學習] 如何建構深度學習模型分辨 誰是屈中恆、宋少卿、鈕承澤 (1)? - 大數軟體有限公司
Recently, because of the 鈕承澤 case, a captcha asking users to tell apart 屈中恆, 宋少卿, and 鈕承澤 went viral. So we decided to use a convolutional neural network to let the computer automatically identify the person in a picture! Building a model first requires material, so we start by writing a Python web crawler that scrapes pictures of these three celebrities from Google image search and saves them; we can then use this material to build our face recognition model! Code: https://github.com/ywchiu/largitdata/... https://www.largitdata.com/course/110/
-
[深度學習] 如何建構深度學習模型分辨誰是屈中恆、宋少卿、鈕承澤 (2)? - 大數軟體有限公司
After scraping photos of 屈中恆, 宋少卿, and 鈕承澤, we still need to extract the face regions from the pictures before we can build a recognition model. We first install OpenCV 3 on the operating system, then use OpenCV's Haar classifier to detect facial features so the program can crop out the face images and save the crops to a target folder for later modeling! Code: https://github.com/ywchiu/largitdata/... https://www.largitdata.com/course/111 #大數軟體 #鈕承澤 #卷積神經網路 #Python網路爬蟲 #深度學習
-
[深度學習] 如何建構深度學習模型分辨誰是屈中恆、宋少卿、鈕承澤 (3)? - 大數軟體有限公司
Once we have scraped the pictures of 屈中恆, 宋少卿, and 鈕承澤 and extracted their facial features with OpenCV, we can use a Convolutional Neural Network, via convolution, max pooling, flattening, and fully connected layers, to train a model that recognizes the three people, mark every face with OpenCV, and add annotation text on top! If you want to learn more about deep learning, consider my online courses: 1. 手把手教你用Python 实践深度学习 https://edu.hellobi.com/course/278 2. 人人都爱数据科学家!Python数据科学精华实战课程 https://edu.hellobi.com/course/159 Code: https://github.com/ywchiu/largitdata/... https://www.largitdata.com/course/112 #大數軟體 #鈕承澤 #卷積神經網路 #OpenCV #Python網路爬蟲 #深度學習
-
[深度學習] 如何使用 DeepFakes 技術移花接木影片人物的臉? - 大數軟體有限公司
DeepFakes technology has made video forgery all too easy! Here we implement DeepFakes with the DeepFaceLab codebase, grafting Iron Man's face onto mine to show everyone that even without Photoshop you can forge highly realistic videos. P.S. Google Colab provides a free Tesla P100 GPU, so to speed up training of the deep model we use Google Colab for this project. Video: https://largitdata.com/course/125/ Code: https://github.com/ywchiu/largitdata/... #DeepFakes #DeepFaceLab #DeepLearning #深度偽造 #鋼鐵人
-
[深度學習] 如何使用 DeepFakes 技術移花接木影片人物的臉(一)? - 大數軟體有限公司
In 2017, an engineer used DeepFakes technology to graft Wonder Woman star Gal Gadot's face onto an adult-film actress, causing a sensation and signaling the coming era of deep-learning video forgery. This video introduces the principles of DeepFakes and the deep learning model behind it, the AutoEncoder. Video: https://largitdata.com/course/123/ Reference: https://www.alanzucconi.com/2018/03/1... #深度偽造 #DeepFakes #AutoEncoder
-
[深度學習] 如何使用 DeepFakes 技術移花接木影片人物的臉(二)? - 大數軟體有限公司
Many tools implement DeepFakes, but the best known is DeepFaceLab. In this chapter we first walk through the DeepFaceLab workflow, which we can then follow to implement DeepFakes face swapping. Video: https://largitdata.com/course/124/ Reference: https://github.com/iperov/DeepFaceLab #深度偽造 #DeepFakes #DeepFaceLab
-
[深度學習] 如何使用 DeepFakes 技術移花接木影片人物的臉(三)? - 大數軟體有限公司
DeepFakes technology has made video forgery all too easy! Here we implement DeepFakes with the DeepFaceLab codebase, grafting Iron Man's face onto mine to show everyone that even without Photoshop you can forge highly realistic videos. P.S. Google Colab provides a free Tesla P100 GPU, so to speed up training of the deep model we use Google Colab for this project. Video: https://largitdata.com/course/125/ Code: https://github.com/ywchiu/largitdata/... #DeepFakes #DeepFaceLab #DeepLearning #深度偽造 #鋼鐵人
-
-
YOLO and Hands-on Object Detection
This series introduces the YOLO (You Only Look Once) algorithm and its applications in object detection, from basic concepts to practical implementation: training YOLO models, configuring the environment, preprocessing data, and training and deploying models. It also covers using YOLO in specific applications, such as building a face-mask detection system. Suitable for students and professionals interested in deep learning object detection, especially those who want a deeper understanding of the YOLO algorithm.
-
Object detection using YOLO v4 and pre trained model | Deep Learning Tutorial 32 (Tensorflow) - codebasics
In this video we will use YOLO v4 with pretrained weights to detect object boundaries in an image. The model was trained on the COCO dataset using YOLO v4. Watch this to understand how the YOLO algorithm works: • What is YOLO algorithm? | Deep Learni... Windows setup instructions: https://github.com/AlexeyAB/darknet#h... Above, I was getting errors when I used the .\build.ps1 command, but the following command worked instead: powershell -ExecutionPolicy Bypass -File .\build.ps1 Make sure you are installing a compatible version of CUDA. For me it was CUDA 10.1; when I installed the 11.x version I was getting all kinds of errors, so I had to downgrade it to 10.1. Based on your system you might have to use a different version. Download yolov4.weights from https://github.com/AlexeyAB/darknet#h... COCO labels: https://tech.amikelive.com/node-718/w... YOLO research papers YOLO v1: https://arxiv.org/abs/1506.02640 YOLO v2: https://arxiv.org/abs/1612.08242 YOLO v3: https://arxiv.org/abs/1804.02767 Do you want to learn technology from me? Check https://codebasics.io/?utm_source=des... for my affordable video courses. #objectdetectionusingyolo #yoloobjectdetection #yolov4objectdetection #yoloalgorithm #yolov4 #yolodeeplearning
-
What is YOLO algorithm? | Deep Learning Tutorial 31 (Tensorflow, Keras & Python) - codebasics
YOLO (You Only Look Once) is a state-of-the-art object detection algorithm that has become the main method of detecting objects in the field of computer vision. Previously, people used techniques such as sliding window object detection, R-CNN, Fast R-CNN and Faster R-CNN. But since its invention in 2015, YOLO has become an industry standard for object detection due to its speed and accuracy. In this video we will understand the theory behind how exactly the YOLO algorithm works. In the next video we will write code to detect objects using the YOLO framework. 🔖 Hashtags 🔖 #yoloalgorithm #yolodeeplearning #yoloobjectdetection #yolopython #yoloobjectdetection #yoloopencv
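One standard building block behind YOLO-style detectors is Intersection over Union (IoU), used to match predicted boxes to ground truth and to suppress duplicate detections. A minimal sketch, with boxes in corner format and illustrative coordinates:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # zero if no overlap
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 4, 4), (2, 2, 6, 6))
```

A score of 1.0 means identical boxes and 0.0 means no overlap; non-maximum suppression typically discards a box when its IoU with a higher-confidence box exceeds a threshold such as 0.5.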
-
課程介紹 | 目標檢測 YOLO v5項目調試與實戰講解 PyTorch 教程 - 我是土堆
Let's talk about how to use the YOLO v5 project.
-
項目介紹及環境配置 | 目標檢測 YOLO v5項目調試與實戰講解 PyTorch 教程 - 我是土堆
The first step after getting a project is configuring the environment. This is the most critical step, and also the one where problems most easily appear.
-
如何利用YOLOv5進行預測(一) | 目標檢測 YOLO v5項目調試與實戰講解 PyTorch 教程 - 我是土堆
Let's see how to use YOLO v5 to make actual predictions on images and videos, and experience YOLOv5's impressive results.
-
如何利用YOLOv5進行預測(二) | 目標檢測 YOLO v5項目調試與實戰講解 PyTorch 教程 - 我是土堆
Let's see how to use YOLO v5 to make actual predictions on images and videos, and experience YOLOv5's impressive results.
-
一點點小補充 | 目標檢測 YOLO v5項目調試與實戰講解 PyTorch 教程 - 我是土堆
A few small additions~
-
解決Windows平臺下pycocotools錯誤 | 目標檢測 YOLO v5項目調試與實戰講解 PyTorch 教程 - 我是土堆
This episode mainly explains how to fix the pycocotools error on Windows.
-
訓練YOLOv5模型(本地)(一) | 目標檢測 YOLO v5項目調試與實戰講解 PyTorch 教程 - 我是土堆
In this episode we explain how to train a YOLO v5 model in a local environment.
-
训练YOLOv5模型(本地)(二) | 目標檢測 YOLO v5項目調試與實戰講解 PyTorch 教程 - 我是土堆
In this episode we talk about how to train a YOLO v5 model locally.
-
訓練YOLOv5模型(雲端GPU) | 目標檢測 YOLO v5項目調試與實戰講解 PyTorch 教程 - 我是土堆
This episode mainly explains how to train a YOLO v5 model using the free GPU on Google Colab; along the way you'll learn how to use Google Colab.
-
自製數據集及訓練 | 目標檢測 YOLO v5項目調試與實戰講解 PyTorch 教程 - 我是土堆
I hope you'll subscribe if you enjoy finding interesting or more efficient things and tools; I like making tutorials and want to make ones that suit you better. This episode mainly shows how to apply the YOLO v5 model to your own dataset. The number of training iterations in my video is relatively small, just as a demonstration; in real applications you need to train for many more iterations to get good results.
-
[深度學習] 如何使用 YOLO 製作即時口罩檢測系統(一) - YOLO簡介? - 大數軟體有限公司
As COVID-19 continues to spread, organizations and schools have been using large amounts of manpower to check that everyone is wearing a mask. To reduce that manpower, we will use YOLO (You Only Look Once) to build a mask detection system, letting AI quickly check whether everyone is properly wearing a mask and keeping everyone healthy! In this first step of the tutorial, we introduce what YOLO is and how to install it on Colab. Video: https://largitdata.com/course/126/ Code: https://github.com/ywchiu/largitdata/... #DeepLearning #YOLO #COVID19 #新冠肺炎 #口罩檢測
-
[深度學習] 如何使用 YOLO 製作即時口罩檢測系統(二) – 建立口罩檢測模型? - 大數軟體有限公司
After completing the YOLOv3 installation, we start building our mask detection model. AI always needs human-labeled data first, so we download the mask dataset from Kaggle (https://www.kaggle.com/vtech6/medical... Drive, then convert the annotated label XML into the input format YOLOv3 accepts. After setting up the model's config files (obj.data, obj.name, train.txt, test.txt, yolov3-tiny.cfg) and downloading the pretrained model darknet53.conv.74, we can start training our mask detection model! Video: https://largitdata.com/course/127/ Code: https://github.com/ywchiu/largitdata/... #DeepLearning #YOLO #COVID19 #新冠肺炎 #口罩檢測
-
[深度學習] 如何使用 YOLO 製作即時口罩檢測系統(三) – 建立即時口罩檢測系統 - 大數軟體有限公司
With the YOLOv3 mask detection model trained, we can combine the model with a camera feed to build a real-time mask detection system. The original model was invoked through darknet, so instead we load the model with OpenCV and combine it with OpenCV's camera-capture functionality to build a real-time mask detection system. Let's see right away whether everyone on camera is wearing a mask properly! Video: https://largitdata.com/course/128/ Code: https://github.com/ywchiu/largitdata/... #DeepLearning #YOLO #COVID19 #新冠肺炎 #口罩檢測
-
[深度學習] 如何在Google Colab上安裝與使用 YOLOv4 ? - 大數軟體有限公司
YOLO is back! YOLO's creator Joseph Redmon announced in February this year that, unable to ignore the negative impact of his work, he was leaving the field of computer vision. We assumed YOLOv4 would never appear, yet here it is. Even more delightfully, YOLOv4 matches EfficientDet's accuracy at twice the speed. Sound attractive? Let's learn right away how to install and use YOLOv4 on Google Colab! Video: https://largitdata.com/course/130/ Code: https://github.com/ywchiu/largitdata/... #DeepLearning #GoogleColab #YOLOv4 #大數學堂
-
[深度學習] 如何不花一毛錢就可以透過 DeepFakes 技術出演魷魚遊戲 (一)? - 大數軟體有限公司
DeepFakes technology has recently been abused by bad actors to graft the faces of politicians and celebrities onto explicit videos, causing public unrest, but that does not mean the technology itself should be condemned; it can still power many fun applications. If you'd like to see yourself act in Squid Game, the show that recently took the world by storm, deepfake face swapping lets you put yourself right into the scene! This tutorial demonstrates what DeepFakes technology is and explains how it works. Video: https://largitdata.com/course/149/ #Deepfakes #Autoencoder #DeepLearning #深偽技術 #深度學習 #魷魚遊戲
-
-
Self-supervised Learning
Self-supervised learning is an emerging machine learning paradigm whose core idea is to let a model learn features from the data itself without external labels. This series introduces the concepts and methods of self-supervised learning and its applications in areas such as speech, vision, and natural language processing, covering the principles and practical use of advanced models such as auto-encoders, BERT, and GPT. Well suited to students and professionals interested in the latest machine learning techniques who want to understand the importance and prospects of self-supervised learning in today's AI landscape.
-
[TA 補充課] Self-supervised Learning (由助教劉記良同學講授)
slides: https://docs.google.com/presentation/d/1qq8t8a3decJfJyA6t9wCRJfNDkvAnbg1WxTFvx0hG4g/edit?usp=sharing
-
【機器學習2021】自編碼器 (Auto-encoder) (下) – 領結變聲器與更多應用 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/auto_v8.pdf
-
【機器學習2021】自編碼器 (Auto-encoder) (上) – 基本概念 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/auto_v8.pdf
-
【機器學習2021】自督導式學習 (Self-supervised Learning) (四) – GPT的野望 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/bert_v8.pdf
-
【機器學習2021】自督導式學習 (Self-supervised Learning) (三) – BERT的奇聞軼事 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/bert_v8.pdf
-
【機器學習2021】自督導式學習 (Self-supervised Learning) (二) – BERT簡介 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/bert_v8.pdf
-
【機器學習2021】自督導式學習 (Self-supervised Learning) (一) – 芝麻街與進擊的巨人 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/bert_v8.pdf The cover image is from https://leemeng.tw/attack_on_bert_transfer_learning_in_n...
-
【機器學習 2022】惡搞自督導式學習模型 BERT 的三個故事 - Hung-yi Lee
-
語音與影像上的神奇自督導式學習 (Self-supervised Learning) 模型 - Hung-yi Lee
自督導式學習 (Self-supervised Learning) - Hung-yi Lee
-
-
Adversarial Machine Learning
Adversarial attacks and adversarial training are important topics in machine learning, especially in research on the security and robustness of deep learning. This series focuses on explaining the basic concepts of adversarial attacks, how adversarial examples are generated, and how adversarial training strengthens model robustness, all of which are essential for understanding the security threats deep learning models face and the strategies to counter them. The videos cover methods for generating adversarial examples, an introduction to defense techniques, and a discussion of current research trends and challenges in the field. Suitable for students, researchers, and professionals interested in the security of machine learning models.
-
【機器學習2021】來自人類的惡意攻擊 (Adversarial Attack) (上) – 基本概念 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/attack_v2.pdf
-
【機器學習2021】來自人類的惡意攻擊 (Adversarial Attack) (下) – 類神經網路能否躲過人類深不見底的惡意? - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/attack_v3.pdf
-
Lecture 16 | Adversarial Examples and Adversarial Training - Stanford University School of Engineering
In Lecture 16, guest lecturer Ian Goodfellow discusses adversarial examples in deep learning. We discuss why deep networks and other machine learning models are susceptible to adversarial examples, and how adversarial examples can be used to attack machine learning systems. We discuss potential defenses against adversarial examples, and uses for adversarial examples for improving machine learning systems even without an explicit adversary. Keywords: Adversarial examples, Fooling images, fast gradient sign method, Clever Hans, adversarial defenses, adversarial examples in the physical world, adversarial training, virtual adversarial training, model-based optimization Slides: http://cs231n.stanford.edu/slides/201...
-
[TA 補充課] More about Adversarial Attack (1/2) (由助教黃冠博同學講授)
slides: https://docs.google.com/presentation/d/1uK9WBUsZtmux-GH5GjqJx8vUKwAY-NrjLHMR0FkSV7Y/edit?usp=sharing
-
[TA 補充課] More about Adversarial Attack (2/2) (由助教黃冠博同學講授)
slides: https://docs.google.com/presentation/d/1uK9WBUsZtmux-GH5GjqJx8vUKwAY-NrjLHMR0FkSV7Y/edit?usp=sharing
-
-
Generative Adversarial Networks (GAN)
Generative Adversarial Networks (GANs) are an innovative class of generative models in deep learning, mainly used to generate realistic images, audio, or text. This series provides a comprehensive introduction to GANs, from basic concepts and architecture to different variants such as WGAN, EBGAN, and InfoGAN. The videos also explore GAN applications in style transfer, image editing, and sequence generation, and cover evaluation methods and theoretical foundations for generative models. Through this content, viewers can gain a deep understanding of how GANs work and where they apply; for researchers and developers who want to use GANs in their own projects, these videos are a valuable learning resource.
-
什么是 GAN 生成对抗网络 (深度学习)? What is Generative Adversarial Nets GAN (deep learning)? - 莫烦Python
Today we talk about the most popular kind of generative network right now, called GAN, short for Generative Adversarial Nets. 莫烦 Python: http://morvanzhou.github.io/tutorials/GAN Code: https://morvanzhou.gith...
-
Lecture 13 | Generative Models - Stanford University School of Engineering
In Lecture 13 we move beyond supervised learning, and discuss generative modeling as a form of unsupervised learning. We cover the autoregressive PixelRNN and PixelCNN models, traditional and variational autoencoders (VAEs), and generative adversarial networks (GANs). Keywords: Generative models, PixelRNN, PixelCNN, autoencoder, variational autoencoder, VAE, generative adversarial network, GAN Slides: http://cs231n.stanford.edu/slides/201...
-
#4.6 GAN 生成对抗网络 (PyTorch tutorial 神经网络 教学) - 莫烦Python
GAN is a form of generative network that has become popular in recent years. Compared with traditional generative models, it imposes fewer constraints on the model and the generator, and has stronger generative ability. If you like this, please star my Tutorial code on Github. Code: https://github.com/MorvanZhou/...
-
【機器學習2021】生成式對抗網路 (Generative Adversarial Network, GAN) (一) – 基本概念介紹 - Hung-yi Lee
https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/gan_v10.pdf
-
【機器學習2021】生成式對抗網路 (Generative Adversarial Network, GAN) (二) – 理論介紹與WGAN - Hung-yi Lee
https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/gan_v10.pdf
-
【機器學習2021】生成式對抗網路 (Generative Adversarial Network, GAN) (三) – 生成器效能評估與條件式生成 - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/gan_v10.pdf
-
【機器學習2021】生成式對抗網路 (Generative Adversarial Network, GAN) (四) – Cycle GAN - Hung-yi Lee
slides: https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/gan_v10.pdf
-
[TA 補充課] SAGAN, BigGAN, SinGAN, GauGAN, GANILLA, NICE-GAN (由助教吳宗翰同學講授)
slides: https://docs.google.com/presentation/d/1ij3aOHl4Jf5zKwL6NewXCZvC5kVsiU-pMHFBNXsqyYA/edit?usp=sharing
-
Generative Adversarial Network - 李宏毅
-
Improved Generative Adversarial Network - 李宏毅
-
RL and GAN for Sentence Generation and Chat-bot - 李宏毅
-
Ensemble of GAN - 李宏毅
-
Evaluation of Generative Models - 李宏毅
-
Energy-based GAN - 李宏毅
-
Video Generation by GAN - 李宏毅
-
GAN Lecture 1 (2018): Introduction - Hung-yi Lee
-
GAN Lecture 2 (2018): Conditional Generation - Hung-yi Lee
-
GAN Lecture 3 (2018): Unsupervised Conditional Generation - Hung-yi Lee
-
GAN Lecture 4 (2018): Basic Theory - Hung-yi Lee
-
GAN Lecture 5 (2018): General Framework - Hung-yi Lee
-
GAN Lecture 6 (2018): WGAN, EBGAN - Hung-yi Lee
-
GAN Lecture 7 (2018): Info GAN, VAE-GAN, BiGAN - Hung-yi Lee
-
GAN Lecture 8 (2018): Photo Editing - Hung-yi Lee
-
GAN Lecture 9 (2018): Sequence Generation - Hung-yi Lee
-
GAN Lecture 10 (2018): Evaluation & Concluding Remarks - Hung-yi Lee
-
Generative Adversarial Networks (LIVE) - Siraj Raval
We're going to build a GAN to generate some images using Tensorflow. This will help you grasp the architecture and intuition behind adversarial approaches to machine learning. We're building a Deep Convolutional GAN to generate MNIST digits. Code for this video: https://github.com/llSourcell/Generat...
-
Generative Adversarial Networks for Style Transfer (LIVE) - Siraj Raval
Generative Adversarial Nets are such a rich topic for exploration that we're going to build one released just 2 months ago, called the "DiscoGAN", which lets us transfer the style between 2 datasets. And I'll be building this using Tensorflow. Code for this video: https://github.com/llSourcell/GANS-fo...
-
-
深度強化學習(DRL)
Deep Reinforcement Learning (DRL) combines deep learning and reinforcement learning to solve complex decision-making and control problems. This series of videos covers the fundamentals of DRL, its main algorithms, and their applications. From policy gradients and Q-learning to actor-critic methods, the videos give an in-depth introduction to the core concepts and techniques of DRL. They also include discussions of AlphaGo and model-based reinforcement learning, and of how to apply DRL in real scenarios. Suitable for students, researchers, and practitioners with a serious interest in reinforcement learning and artificial intelligence.
-
DRL Lecture 1: Policy Gradient (Review) - 李宏毅
-
DRL Lecture 2: Proximal Policy Optimization (PPO) - 李宏毅
-
DRL Lecture 3: Q-learning (Basic Idea) - 李宏毅
-
DRL Lecture 4: Q-learning (Advanced Tips) - 李宏毅
-
DRL Lecture 5: Q-learning (Continuous Action) - 李宏毅
-
DRL Lecture 6: Actor-Critic - 李宏毅
-
DRL Lecture 7: Sparse Reward - 李宏毅
-
DRL Lecture 8: Imitation Learning - 李宏毅
-
#4.5 DQN 强化学习 (PyTorch tutorial 神经网络 教学) - 莫烦Python
You can implement this in PyTorch as well. This time we use DQN as the example; comparing with my Tensorflow DQN code, the PyTorch version is much simpler to write. If you like this, please star my Tutorial code on Github. Code: https://gi...
-
#1 Reinforcement Learning Tutorials (Eng) - 莫烦Python
I received many requests to make my RL tutorials (Chinese) available in English. Here we go, let's get started. If you like this, please like my code on ...
-
#2 Before learning Reinforcement learning (Eng python tutorial) - 莫烦Python
Some recommendations before getting started with RL. If you like this, please like my code on Github as well. Code: https://github.com/MorvanZhou/Reinforcement-learn...
-
#3 Simplest Reinforcement Learning example (Eng python tutorial) - 莫烦Python
Demonstrating the simplest reinforcement learning example, to let you quickly understand what RL is and how it does its job. If you like this, please like my c...
-
#4 Q Learning Reinforcement Learning (Eng python tutorial) - 莫烦Python
A maze example using Q learning. Introducing the updating rule in Q learning. If you like this, please like my code on Github as well.Code: https://github.co...
-
#5 Sarsa & Sarsa(lambda) Reinforcement Learning (Eng python tutorial) - 莫烦Python
Discusses the on-policy algorithms Sarsa and Sarsa(lambda) with eligibility traces, and talks about why Sarsa(lambda) is more efficient. If you like this, please li...
-
#6 DQN using Tensorflow Reinforcement Learning (Eng tutorial) - 莫烦Python
An introduction to Deep Q Networks, implemented in code. If you like this, please like my code on Github as well. Code: https://github.com/MorvanZhou/Reinforceme...
-
#7 OpenAI Gym using Tensorflow Reinforcement Learning (Eng tutorial) - 莫烦Python
Using gym for your RL environment.If you like this, please like my code on Github as well.Code: https://github.com/MorvanZhou/Reinforcement-learning-with-ten...
-
Lecture 14 | Deep Reinforcement Learning - Stanford University School of Engineering
In Lecture 14 we move from supervised learning to reinforcement learning (RL), in which an agent must learn to interact with an environment in order to maximize its reward. We formalize reinforcement learning using the language of Markov Decision Processes (MDPs), policies, value functions, and Q-Value functions. We discuss different algorithms for reinforcement learning including Q-Learning, policy gradients, and Actor-Critic. We show how deep reinforcement learning has been used to play Atari games and to achieve super-human Go performance in AlphaGo. Keywords: Reinforcement learning, RL, Markov decision process, MDP, Q-Learning, policy gradients, REINFORCE, actor-critic, Atari games, AlphaGo Slides: http://cs231n.stanford.edu/slides/201...
-
经验回放 Experience Replay (价值学习高级技巧 1/3) - Shusen Wang
This lecture covers Experience Replay and Prioritized Experience Replay. Experience replay has two benefits: 1. it reuses collected rewards; 2. it breaks the correlation between consecutive transitions. 0:30 Review of DQN and the TD algorithm 4:05 Shortcomings of the vanilla TD algorithm 5:26 Experience replay 8:10 Prioritized experience replay
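The replay buffer described in this lecture can be sketched as a small Python class (capacity, batch size, and transition format below are illustrative):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay: store transitions, sample uniformly at random."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # oldest transitions are evicted automatically

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the correlation between consecutive transitions
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(150):                       # overfill: only the last 100 survive
    buf.push(t, 0, 1.0, t + 1, False)
batch = buf.sample(32)
print(len(buf), len(batch))
```

Prioritized experience replay replaces the uniform `random.sample` with sampling proportional to each transition's TD error, plus importance-sampling weights to correct the resulting bias.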
-
高估问题、Target Network、Double DQN (价值学习高级技巧 2/3) - Shusen Wang
This lecture introduces the overestimation problem in DQN and two remedies: the target network and Double DQN. Main contents: 0:12 Bootstrapping 2:23 DQN's overestimation problem and its causes (maximization and bootstrapping) 11:36 Using a target network to mitigate overestimation 14:23 Using Double DQN to mitigate overestimation
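The Double DQN remedy can be shown in a short numpy sketch (the Q-values below are made up): the online network selects the action, the target network evaluates it, which decouples selection from evaluation and reduces the upward bias of a plain max:

```python
import numpy as np

def double_dqn_target(q_online_next, q_target_next, reward, gamma, done):
    """Double DQN: online net selects the argmax action, target net evaluates it."""
    a_star = np.argmax(q_online_next)      # action selection by the online network
    bootstrap = q_target_next[a_star]      # value estimation by the target network
    return reward + (0.0 if done else gamma * bootstrap)

# Hypothetical Q-values at the next state s'
q_online_next = np.array([1.0, 3.0, 2.0])  # online net happens to overrate action 1
q_target_next = np.array([1.1, 2.5, 2.6])
y = double_dqn_target(q_online_next, q_target_next, reward=1.0, gamma=0.9, done=False)
print(y)   # 1 + 0.9 * 2.5 = 3.25
```

The naive target `reward + gamma * q_target_next.max()` would use 2.6 here; Double DQN's 2.5 is never larger than that max, which is exactly the mitigation.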
-
Dueling Network (价值学习高级技巧 3/3) - Shusen Wang
This lecture introduces the Dueling Network, an improvement to the DQN architecture. It decomposes the action value Q into a state value V and an advantage function A.
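That decomposition is one line of numpy; the mean-subtraction below is the common identifiability trick (without it, V and A are not uniquely determined). Values here are hypothetical:

```python
import numpy as np

def dueling_aggregate(v, advantages):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    return v + advantages - advantages.mean()

v = 2.0                            # scalar state value from the V head
a = np.array([0.5, -0.5, 0.0])     # per-action outputs from the A head
q = dueling_aggregate(v, a)
print(q)   # [2.5 1.5 2. ]
```

After mean-subtraction the Q-values average exactly to V, so the two heads learn complementary quantities instead of an arbitrary split.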
-
深度强化学习(1/5):基本概念 Deep Reinforcement Learning (1/5) - Shusen Wang
I will explain deep reinforcement learning over five lectures. This lecture covers the basic concepts of reinforcement learning: Agent, Environment, State, Action, Reward, Policy, State Transition, Return, and Value Functions. Main contents: 0:30 Basics of probability theory 6:56 Basic RL terminology 12:54 Interaction between the agent and the environment 13:39 Randomness in reinforcement learning 16:18 Reward and return 20:31 Value functions 27:51 Playing games with RL, and using OpenAI Gym 34:53 Summary
-
深度强化学习(2/5):价值学习 Value-Based Reinforcement Learning - Shusen Wang
This lecture covers value-based reinforcement learning, mainly the Deep Q Network (DQN) and the Temporal Difference (TD) algorithm. Main contents: 0:12 Review of value functions 3:05 Deep Q Network (DQN) 8:22 A simple example explaining the TD algorithm 15:49 Training a DQN with TD 23:40 Summary
-
深度强化学习(3/5):策略学习 Policy-Based Reinforcement Learning - Shusen Wang
This lecture covers policy-based reinforcement learning, mainly the policy network and the policy gradient algorithm. Main contents: 0:22 Policy network 3:52 State-value function 6:12 Policy-based learning 8:51 Policy gradient 17:20 Learning a policy network with policy gradients 21:05 Summary
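The policy gradient step can be sketched for a softmax policy over one hypothetical state, using the standard identity that the gradient of log pi(a|s) with respect to the logits is onehot(a) minus pi (a sketch of one REINFORCE-style update, not the full algorithm):

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def grad_log_softmax(logits, action):
    """For a softmax policy, grad of log pi(a|s) w.r.t. the logits is onehot(a) - pi."""
    pi = softmax(logits)
    onehot = np.zeros_like(pi)
    onehot[action] = 1.0
    return onehot - pi

# One REINFORCE-style ascent step on the logits of a single (hypothetical) state
logits = np.array([0.0, 0.0, 0.0])
action, reward, lr = 2, 1.0, 0.5
logits = logits + lr * reward * grad_log_softmax(logits, action)
print(softmax(logits))   # probability of the rewarded action goes up
```

With a positive reward the probability of the taken action rises and the others fall; a negative reward would push it the other way.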
-
深度强化学习(4/5):Actor-Critic Methods - Shusen Wang
This lecture covers actor-critic methods. Main contents: 0:33 Architecture of the policy network and the value network 5:30 Training the two neural networks 12:21 Understanding the actor-critic method 15:04 Algorithm implementation 19:43 Summary
-
深度强化学习(5/5):AlphaGo & Model-Based RL - Shusen Wang
This lecture analyzes the technical details of AlphaGo and introduces methods such as Imitation Learning and Monte Carlo Tree Search. Main contents: 0:27 The game of Go 2:52 Main principles behind AlphaGo 7:45 Training step 1: behavior cloning 16:13 Training step 2: policy learning 23:21 Training step 3: value learning 27:59 In play: Monte Carlo Tree Search 45:07 Summary 47:50 Main differences between the newer AlphaGo Zero and the original AlphaGo
-
Sarsa算法 (TD Learning 1/3) - Shusen Wang
This lecture introduces the State-Action-Reward-State-Action (SARSA) algorithm, a form of TD learning that can be used to learn action values. Main contents: 0:23 Deriving the TD target 5:09 Tabular Sarsa 7:35 Sarsa with a neural network
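The tabular Sarsa update from the lecture can be sketched as follows (state names and numbers are hypothetical); note the TD target uses the action actually taken in the next state, which is what makes Sarsa on-policy:

```python
def sarsa_update(Q, s, a, r, s2, a2, alpha, gamma):
    """Tabular Sarsa: the TD target bootstraps from Q(s2, a2), the action actually taken."""
    td_target = r + gamma * Q[(s2, a2)]
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return Q

Q = {("s0", "left"): 0.0, ("s1", "right"): 1.0}
Q = sarsa_update(Q, "s0", "left", r=0.5, s2="s1", a2="right", alpha=0.1, gamma=0.9)
print(Q[("s0", "left")])   # 0.1 * (0.5 + 0.9 * 1.0) = 0.14
```

Q-learning (the next lecture) differs in exactly one place: it replaces `Q[(s2, a2)]` with the max over all actions in s2, making it off-policy.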
-
Q-Learning算法 (TD Learning 2/3) - Shusen Wang
This lecture introduces the Q-learning algorithm, a form of TD learning that can be used to learn the optimal action value. It is the standard algorithm for training a DQN. Main contents: 1:30 Deriving the TD target 4:42 Tabular Q-learning 5:58 Q-learning with a neural network
-
Multi-Step TD Target (TD Learning 3/3) - Shusen Wang
This lecture introduces the multi-step TD target, a generalization of the standard TD target. It is a common trick for training DQNs and value networks, and it can make both Sarsa and Q-learning work better. Temporal Difference (TD) Learning: 1. Sarsa: • Sarsa算法 (TD Learning 1/3) 2. Q-learning: • Q-Learning算法 (TD Learning 2/3) 3. Multi-step TD target: • Multi-Step TD Target (TD Learning 3/3)
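A minimal sketch of the multi-step TD target (rewards and bootstrap value are hypothetical): sum n discounted observed rewards, then bootstrap the tail with gamma^n times a value estimate:

```python
def n_step_td_target(rewards, gamma, bootstrap_value):
    """Multi-step TD target: sum of n discounted rewards plus gamma^n * bootstrap."""
    g = 0.0
    for i, r in enumerate(rewards):
        g += (gamma ** i) * r
    return g + (gamma ** len(rewards)) * bootstrap_value

# Three observed rewards, then bootstrap from Q(s_{t+3}, a_{t+3})
y = n_step_td_target([1.0, 0.0, 2.0], gamma=0.9, bootstrap_value=5.0)
print(y)   # 1 + 0 + 0.81*2 + 0.729*5 = 6.265
```

With n = 1 this reduces to the standard TD target; larger n uses more real reward and less bootstrapping, trading bias for variance.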
-
A3C - 李宏毅
-
How to Beat Pong Using Policy Gradients (LIVE) - Siraj Raval
We're going to use the policy gradient technique from reinforcement learning to beat the game of Pong. We'll use OpenAI's Universe as an environment for our agent and I'll go over the process of setting it up as well as the math behind the PG method in detail. Microphone popping issues end at 11:15 . That cannot happen again. Udacity is aware of this and will be more prepared next time. Code for this video: https://github.com/llSourcell/Policy_... Join us in the Wizards Slack channel: http://wizards.herokuapp.com/ More Learning resources: http://www.scholarpedia.org/article/P... http://proceedings.mlr.press/v32/silv... http://karpathy.github.io/2016/05/31/rl/ http://home.deib.polimi.it/restelli/M... http://www0.cs.ucl.ac.uk/staff/D.Silv... https://github.com/dennybritz/reinfor...
-
AlphaZero from Scratch – Machine Learning Tutorial - freeCodeCamp.org
In this machine learning course, you will learn how to build AlphaZero from scratch. AlphaZero is a game-playing algorithm that uses artificial intelligence and machine learning techniques to learn how to play board games at a superhuman level. 🔗 Trained Models + Code for each Chapter: https://github.com/foersterrobert/Alp... 🔗 AlphaZero-Paper: https://arxiv.org/pdf/1712.01815.pdf ✏️ Robert Förster created this course. Website: https://robertfoerster.com/ ⭐️ Contents ⭐️ ⌨️ (0:00:00) Introduction ⌨️ (0:01:35) Overview – Part 1 ⌨️ (0:05:43) MCTS-Explained ⌨️ (0:27:03) AlphaMCTS-Explained ⌨️ (0:39:05) Overview – Part 2 ⌨️ (0:45:14) Chapter 1: TicTacToe ⌨️ (1:00:32) Chapter 2: MCTS ⌨️ (1:34:54) Chapter 3: Model ⌨️ (2:03:09) Chapter 4: AlphaMCTS ⌨️ (2:16:39) Chapter 5: AlphaSelfPlay ⌨️ (2:35:13) Chapter 6: AlphaTrain ⌨️ (2:47:15) Chapter 7: AlphaTweaks ⌨️ (3:08:18) Chapter 8: ConnectFour ⌨️ (3:21:48) Chapter 9: AlphaParallel ⌨️ (3:55:59) Chapter 10: Eval 🎉 Thanks to our Champion and Sponsor supporters: 👾 Nattira Maneerat 👾 Heather Wcislo 👾 Serhiy Kalinets 👾 Erdeniz Unvan 👾 Justin Hual 👾 Agustín Kussrow 👾 Otis Morgan
-
-
TensorFlow深度學習框架
TensorFlow is an open-source deep learning framework developed by Google, widely used in machine learning and neural network research and development. This series of videos covers the basics of TensorFlow, installation, its programming model, and how to build and train various neural networks. From simple regression and classification problems to more complex convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the videos offer plenty of hands-on examples. They also introduce advanced TensorFlow features such as TensorBoard visualization, saving and loading models, and running computations on a GPU. Suitable for students and practitioners who have a basic understanding of deep learning and want to learn how to use TensorFlow for their own projects.
-
Tensorflow 1 Why? (neural network tutorials) - 莫烦Python
Tensorflow is a neural network module built in Python, developed by Google. This module can make you a master of neural networks. Just follow the tutorial and you will learn to play with tensorflow. Tensorflow website: https://www.tensorflow.org/ What is neural network: • 什么是神经网络 (机器学习) what is neural network...
-
Tensorflow 2 Install (neural network tutorials) - 莫烦Python
Please note when installing: 1. check your Python version; 2. if your GPU is made by NVIDIA, you can download the GPU-supported tensorflow version; otherwise, select the CPU-supported tensorflow. Tensorflow download: https://www.tensorflow.org/versions/r...
-
Tensorflow 3 example1 (neural network tutorials) - 莫烦Python
Machine learning lets the computer fit a prediction line to the data. The model measures the error between the prediction and the real data, then minimises that error by adjusting the weights and biases to improve predictive accuracy.
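The idea in this example — adjust a weight and a bias to shrink the error between the prediction line and the data — can be sketched framework-free in numpy (the TF1 API shown in the video is now legacy; the data, target line, and learning rate here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=100)
y = 0.1 * x + 0.3                    # the line we want the model to discover

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    pred = w * x + b
    err = pred - y
    # Gradients of the mean squared error with respect to w and b
    w -= lr * 2.0 * np.mean(err * x)
    b -= lr * 2.0 * np.mean(err)

print(round(w, 2), round(b, 2))      # converges close to 0.1 and 0.3
```

This is exactly what `GradientDescentOptimizer.minimize()` automates in the tensorflow version: compute the loss gradient and step the variables against it.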
-
Tensorflow 4 tf coding structure (neural network tutorials) - 莫烦Python
We must build the network structure first; we can then feed our data into this built structure and let the data, or tensors, flow through the network. Play list: • Tensorflow tutorials
-
Tensorflow 5 example2 (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... This tutorial covers the basic usage of the tensorflow module and how to code the structure of a tensorflow neural net.
-
Tensorflow 6 Session (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... Defining a Session is one of the most important steps in coding tensorflow. Simply call session.run() to run the part of the neural net that you want.
-
Tensorflow 7 Variable (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... Please note that once you define init = tf.initialize_all_variables(), you have to run this line: sess.run(init). Otherwise, none of the variables will be set in the network. Play list: • Tensorflow tutorials
-
Tensorflow 8 placeholder (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... Tensorflow allows you to send different data to your network through tf.placeholder(), then use sess.run(***, feed_dict={input: **}) to pass your placeholder data into it.
-
Tensorflow 9 activation function (neural network tutorials) - 莫烦Python
When dealing with complex problems using a neural network, we may use an activation function to simulate the activated neuron. This video talks about what an activation function is and where we should place it in the network.
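Two of the most common activation functions can be sketched directly in numpy (input values are illustrative); these are the generic definitions, not tied to the specific ops used in the video:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: pass positives through, zero out negatives."""
    return np.maximum(0.0, x)

def sigmoid(x):
    """Squash any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

z = np.array([-2.0, 0.0, 3.0])    # hypothetical pre-activation values
print(relu(z))                    # [0. 0. 3.]
print(np.round(sigmoid(z), 3))
```

Without such a nonlinearity between layers, a stack of linear layers collapses into a single linear map, which is why activation functions sit after each hidden layer.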
-
Tensorflow 10 example3 def add_layer() function (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... Defining an add_layer() function in your tensorflow code reduces your workload whenever you want to add a new layer, leaving more time for you to design other things.
-
Tensorflow 11 example3 build a network (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... This time we will talk about how to build a whole network, calculate the error, train it, and determine whether it is really learning something.
-
Tensorflow 12 example3 visualize result (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... We have built a complete network together. If we can visualize the result, we can understand how the network works. We will use the matplotlib module to plot the result and see how the model fits the data.
-
Tensorflow 13 Optimizers (neural network tutorials) - 莫烦Python
There are many options for the optimizer in Tensorflow. Optimizers are the tools that minimise the loss between prediction and real value. We have used GradientDescentOptimizer in the last few tutorials, but there are more, such as AdamOptimizer. You can try all the available optimizers here: https://www.tensorflow.org/versions/r.... To learn more about when to use which optimizer, please read this: http://cs231n.github.io/neural-networ...
-
Tensorflow 14 Visualization Tensorboard 1 (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... The visualisation tool "tensorboard" provided by tf shows you the whole network structure. This is a better way to understand what you have built and how you can improve your network structure.
-
Tensorflow 15 Visualization Tensorboard 2 (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... Tensorboard can not only show you the network graph but also show how well your training is going, plotting the changes in your loss, weights, etc. We can use this to determine which parts should be improved to get a better result. This takes you to a higher level.
-
Tensorflow 16 Classification (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... In machine learning we have supervised learning, and supervised learning can be divided into regression and classification problems. A regression problem predicts a continuous value, such as a house price or the height of a flight. Classification problems distinguish one class from another, such as telling the difference between dogs and cats. All the exercises we did before were regression problems, so I will show you how to do classification this time.
-
Tensorflow 17 Regularization dropout (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... Real-life problems are complicated, and we often face another issue, so-called overfitting. We will talk about what overfitting is and how to use dropout to solve it in this tutorial.
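The dropout mechanism discussed here can be sketched in numpy as "inverted dropout" (keep probability illustrative; this is one common formulation, not necessarily the exact variant in the video): randomly zero units at training time and scale the survivors so the expected activation is unchanged:

```python
import numpy as np

def dropout(activations, keep_prob, rng):
    """Inverted dropout: zero units with probability 1 - keep_prob, scale the rest
    by 1/keep_prob so the expected activation stays the same; do nothing at test time."""
    mask = rng.uniform(size=activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(42)
a = np.ones(10000)                       # hypothetical hidden-layer activations
out = dropout(a, keep_prob=0.5, rng=rng)
print(round(out.mean(), 1))              # close to 1.0: expectation preserved
```

Forcing the network to work with a different random subset of units each step prevents co-adaptation, which is why it combats overfitting.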
-
Tensorflow 18 Saver (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... Once you have built and trained a network using tensorflow, you can save all the parameters you have trained for later use. Let's see how to save and restore them in this tutorial.
-
Tensorflow 19 CNN example using MNIST (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... This tutorial uses tensorflow to build a CNN.
-
Tensorflow 20.1 RNN example using MNIST (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... This tutorial uses tensorflow to build an RNN classifier on the MNIST dataset.
-
Tensorflow 20.2 RNN example with visualization (neural network tutorials) - 莫烦Python
This tutorial code: https://github.com/MorvanZhou/tutoria... This tutorial uses tensorflow to build an RNN regressor that predicts a sequence of data, using Tensorboard to visualize the RNN structure and plt to plot the learning process. Google RNN introduction: https://classroom.udacity.com/courses... The tensorflow RNN bptt style: http://r2rt.com/styles-of-truncated-b...
-
TensorFlow 2.0 Crash Course - freeCodeCamp.org
Learn how to use TensorFlow 2.0 in this crash course for beginners. This course will demonstrate how to create neural networks with Python and TensorFlow 2.0. If you want a more comprehensive TensorFlow 2.0 course, check out this 7 hour course: • TensorFlow 2.0 Complete Course - Pyth... 🎥 Course created by Tech with Tim. Check out his YouTube channel: / @techwithtim ⭐️ Course Contents ⭐️ ⌨️ (0:00:00) What is a Neural Network? ⌨️ (0:26:34) How to load & look at data ⌨️ (0:39:38) How to create a model ⌨️ (0:56:48) How to use the model to make predictions ⌨️ (1:07:11) Text Classification (part 1) ⌨️ (1:28:37) What is an Embedding Layer? Text Classification (part 2) ⌨️ (1:42:30) How to train the model - Text Classification (part 3) ⌨️ (1:52:35) How to save & load models - Text Classification (part 4) ⌨️ (2:07:09) How to install TensorFlow GPU on Linux
-
TensorFlow 2.0 Complete Course - Python Neural Networks for Beginners Tutorial - freeCodeCamp.org
Learn how to use TensorFlow 2.0 in this full tutorial course for beginners. This course is designed for Python programmers looking to enhance their knowledge and skills in machine learning and artificial intelligence. Throughout the 8 modules in this course you will learn about fundamental concepts and methods in ML & AI like core learning algorithms, deep learning with neural networks, computer vision with convolutional neural networks, natural language processing with recurrent neural networks, and reinforcement learning. Each of these modules include in-depth explanations and a variety of different coding examples. After completing this course you will have a thorough knowledge of the core techniques in machine learning and AI and have the skills necessary to apply these techniques to your own data-sets and unique problems. ⭐️ Google Colaboratory Notebooks ⭐️ 📕 Module 2: Introduction to TensorFlow - https://colab.research.google.com/dri... 📗 Module 3: Core Learning Algorithms - https://colab.research.google.com/dri... 📘 Module 4: Neural Networks with TensorFlow - https://colab.research.google.com/dri... 📙 Module 5: Deep Computer Vision - https://colab.research.google.com/dri... 📔 Module 6: Natural Language Processing with RNNs - https://colab.research.google.com/dri... 📒 Module 7: Reinforcement Learning - https://colab.research.google.com/dri... 
⭐️ Course Contents ⭐️ ⌨️ (00:03:25) Module 1: Machine Learning Fundamentals ⌨️ (00:30:08) Module 2: Introduction to TensorFlow ⌨️ (01:00:00) Module 3: Core Learning Algorithms ⌨️ (02:45:39) Module 4: Neural Networks with TensorFlow ⌨️ (03:43:10) Module 5: Deep Computer Vision - Convolutional Neural Networks ⌨️ (04:40:44) Module 6: Natural Language Processing with RNNs ⌨️ (06:08:00) Module 7: Reinforcement Learning with Q-Learning ⌨️ (06:48:24) Module 8: Conclusion and Next Steps ⭐️ About the Author ⭐️ The author of this course is Tim Ruscica, otherwise known as “Tech With Tim” from his educational programming YouTube channel.
-
Install tensorflow 2.0 | Deep Learning Tutorial 5 (Tensorflow Tutorial, Keras & Python) - codebasics
I will show you how to install tensorflow 2.0 on a Windows computer, on top of anaconda. Video to install anaconda on windows: • What is Anaconda? Install Anaconda On... 🔖 Hashtags 🔖 #installtensorflow #installtensorflowwindows #tensorflowinjupyternotebook #tensorflowonwindows
-
How to Make a Tensorflow Neural Network (LIVE) - Siraj Raval
In this live stream, we're going to use Tensorflow to build a convolutional neural network capable of classifying images. You'll need 'tensorflow' and the 'future' python libraries installed. The connection was laggy for the live stream and that won't happen again. 4:09-5:50 (The connection drops out) The code for this video is here: https://github.com/llSourcell/tensorf...
-
How to Make a Simple Tensorflow Speech Recognizer - Siraj Raval
In this video, we'll make a super simple speech recognizer in 20 lines of Python using the Tensorflow machine learning library. I go over the history of speech recognition research, then explain (and rap about) how we can build our own speech recognition system using the power of deep learning. The code for this video is here: https://github.com/llSourcell/tensorf... Mick's winning code: https://github.com/mickvanhulst/tf_ch...
-
Deep Dream in TensorFlow - Learn Python for Data Science #5 - Siraj Raval
In this video, we replicate Google's Deep Dream code in 80 lines of Python using the Tensorflow machine learning library. Then we visualize it at the end. The challenge for this video is here: https://github.com/llSourcell/deep_dr... Avhirup's winning stock prediction code: https://github.com/Avhirup/Stock-Mark... Victor's runner-up code: https://github.com/ciurana2016/predic...
-
TensorFlow in 5 Minutes (tutorial) - Siraj Raval
This video is all about building a handwritten digit image classifier in Python in under 40 lines of code (not including spaces and comments). We'll use the popular library TensorFlow to do this. Please subscribe! That would make me the happiest, and encourage me to output similar content. The source code for this video is here: https://github.com/llSourcell/tensorf... Here are some great links on TensorFlow: Tensorflow setup: https://www.tensorflow.org/versions/r... A similar written tutorial by Google: https://www.tensorflow.org/versions/r... Tensorflow Course: https://www.udacity.com/course/deep-l... Awesome intro to Tensorflow: https://www.oreilly.com/learning/hell...
-
Generate Music in TensorFlow - Siraj Raval
In this video, I go over some of the state of the art advances in music generation coming out of DeepMind. Then we build our own music generation script in Python using Tensorflow and a type of neural network called a Restricted Boltzmann Machine. Congrats to Rohan Verma (Winner) and Chih-Cheng Liang (runner-up) for their classifiers for scientists. The challenge for this video is to generate a happy/upbeat song using the RBM Script. The code for this video is here: https://github.com/llSourcell/Music_G... I created a Slack channel for us, sign up here: https://wizards.herokuapp.com/ The WaveNet blogpost with audio samples: https://deepmind.com/blog/wavenet-gen... More on RBMs: http://deeplearning4j.org/restrictedb... Another write up on music generation with Neural Networks: http://www.hexahedria.com/2015/08/03/... Interesting Machine Music Generation Project by Google: https://magenta.tensorflow.org/welcom... TensorFlow course on Udacity: https://www.udacity.com/course/deep-l... Rohan's Classifier (Winner): https://github.com/rhnvrm/galaxy-imag... Chih-Cheng's Classifier (Runner-up): https://github.com/ChihChengLiang/ten...
-
The Best Way to Prepare a Dataset Easily - Siraj Raval
In this video, I go over the 3 steps you need to prepare a dataset to be fed into a machine learning model. (selecting the data, processing it, and transforming it). The example I use is preparing a dataset of brain scans to classify whether or not someone is meditating. The challenge for this video is here: https://github.com/llSourcell/prepare... Carl's winning code: https://github.com/av80r/coaster_race... Rohan's runner-up code: https://github.com/rhnvrm/universe-co... Come join other Wizards in our Slack channel: http://wizards.herokuapp.com/ Dataset sources I talked about: https://github.com/caesar0301/awesome... https://www.kaggle.com/datasets
-
Build a TensorFlow Image Classifier in 5 Min - Siraj Raval
In this episode, we'll train our own image classifier to detect Darth Vader images. The code for this repo is here: https://github.com/llSourcell/tensorf... I created a Slack channel for us, sign up here: https://wizards.herokuapp.com/ Challenge: the challenge for this episode is to create your own image classifier that would be a useful tool for scientists. Just post a clone of the repo containing your retrained Inception model (label it output_graph.pb). If it's too big for GitHub, upload it to Dropbox and post the link in your GitHub README. I'll judge all entries, and the winner gets a shout-out from me in a future video, plus a signed copy of my book "Decentralized Applications". This CodeLab from Google is very helpful for learning this material:
-
Build a Neural Network (LIVE) - Siraj Raval
In this video, I'll be building and training an LSTM Neural Network on a dataset of city names. Then it'll be able to generate new city names from scratch. Code for this video: https://github.com/llSourcell/build_a... I created a Slack channel for us, sign up here: https://wizards.herokuapp.com/ Read up more on TFLearn: https://github.com/tflearn/tflearn Incredible article on LSTMs: http://colah.github.io/posts/2015-08-...
-
How to Make a Tensorflow Image Classifier (LIVE) - Siraj Raval
We're going to build an image classifier using just Tensorflow (no Keras). This will be in depth, the goal for this video is for you to fully understand how a Convolutional Neural Network works. We'll visualize the filters we create along the way as well. Code for this video: https://github.com/llSourcell/How_to_... More CNN learning resources: http://ufldl.stanford.edu/tutorial/su... https://adeshpande3.github.io/adeshpa... http://cs231n.github.io/convolutional... http://deeplearning.net/tutorial/lene... http://neuralnetworksanddeeplearning.... http://machinelearningmastery.com/cra... https://ujjwalkarn.me/2016/08/11/intu...
-
Make Money with Tensorflow 2.0 - Siraj Raval
I've built an app called NeuralFund that uses Tensorflow 2.0 to make automated investment decisions. I used Tensorflow 2.0 to train a transformer network on time series data that i downloaded using the Yahoo Finance API. Then, I used Tensorflow Serving + Flask to create a simple web app around it. I'll explain what the important parts you should know in Tensorflow 2.0 are, then I'll guide you through my code & thought process of building an AI startup using it. Enjoy! Code for this video: https://github.com/llSourcell/Make_Mo...
-
How to Use Tensorflow for Classification (LIVE) - Siraj Raval
In this live session I'll introduce & give an overview of Google's Deep Learning library, Tensorflow. Then we'll use it to build a neural network capable of predicting housing prices, with me explaining every step along the way. Code for this video: https://github.com/llSourcell/How_to_...
-
Neural Network For Handwritten Digits Classification | Deep Learning Tutorial 7 (Tensorflow2.0) - codebasics
In this video we will build our first neural network in tensorflow and python for handwritten digits classification. We will first build a very simple neural network with only input and output layer. After that we will add a hidden layer and check how the performance of our model changes. 🔖 Hashtags 🔖 #handwrittendigitrecognition #tensorflowtutorial #handwritingrecognition #mnisttensorflowtutorial
-
GPU bench-marking with image classification | Deep Learning Tutorial 17 (Tensorflow2.0, Python) - codebasics
This video shows a performance comparison of using a CPU vs an NVIDIA TITAN RTX GPU for deep learning. We are using 60000 small images for classification. These images can be classified into one of the 10 categories below, classes = ["airplane","automobile","bird","cat","deer","dog","frog","horse","ship","truck"] Here is the dataset link: https://www.cs.toronto.edu/~kriz/cifa... We will use a simple artificial neural network (we are not using a CNN; usually a CNN is preferred for image classification, but since we have not covered that in our deep learning playlist so far, we will be happy with a simple ANN that still gives pretty high accuracy). #gpuperformance #gpuperformancetest #GPUbenchmarking #imageclassification #DeepLearningTutorial #deeplearning Code link: https://github.com/codebasics/deep-le... Exercise: https://github.com/codebasics/deep-le...
-
Tensorflow Input Pipeline | tf Dataset | Deep Learning Tutorial 44 (Tensorflow, Keras & Python) - codebasics
Tensorflow's tf.data api allows you to build a data input pipeline. Using it you can handle a large dataset for your deep learning training by streaming training samples from hard disk or S3 storage. tf.data.Dataset is the main class in the tf.data api. In this video we see how a tf pipeline not only streams the data for training but also lets you perform various transformations by writing a single line of code. Code: https://github.com/codebasics/deep-le... Exercise: https://github.com/codebasics/deep-le... Stackoverflow article: https://stackoverflow.com/questions/5... ⭐️ Timestamps ⭐️ 00:00 Introduction 00:21 Theory 07:58 Coding 31:34 Exercise
-
Optimize Tensorflow Pipeline Performance: prefetch & cache | Deep Learning Tutorial 45 (Tensorflow) - codebasics
It is important to make optimal use of your hardware resources (CPU and GPU) while training a deep learning model. You can use tf.data.Dataset.prefetch(AUTOTUNE) and tf.data.Dataset.cache() methods for this purpose. They help you optimize tensorflow input pipeline performance. In this video we will go over how these two methods work and will write some code as well. Code: https://github.com/codebasics/deep-le... Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist: https://www.youtube.com/playlist?list... 🔖Hashtags🔖 #tensorflowpipeline #tensorflowprefetchdataset #tensorflowprefetchautotune #prefetchautotune #tensorflowinputpipeline #tensorflowprefetch #tensorflowdatapipeline
-
tf serving tutorial | tensorflow serving tutorial | Deep Learning Tutorial 48 (Tensorflow, Python) - codebasics
Are you using Flask or FastAPI to serve your machine learning models? tf serving is a tool that lets you bring up a model server with a single command. It also supports model version management and dynamic loading of models, along with features such as version labels and a configurable version policy. In this video, I explain everything in very easy language. Code: https://github.com/codebasics/deep-le... ⭐️ Timestamps ⭐️ 00:00 Introduction 00:24 What problem tf serving solves? 04:44 tf serving installation 09:23 tf serving using model_base_path 14:05 serve different versions using model config file 15:35 version labels Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist: https://www.youtube.com/playlist?list... 🔖Hashtags🔖 #tfservingexample #tfservingdockerfile #tfservingvsflask #tfservingmodel #tfserving #tfservingdeeplearning #tfservingmodeldeeplearning #deeplearningtfserving #deeplearningtfservingmodel #tensorflowservingtutorial
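As an illustration of the model config file discussed in the video (the model name and path below are placeholders, not the video's), the text-protobuf format tf serving reads looks roughly like:

```
model_config_list {
  config {
    name: "my_model"                  # placeholder model name
    base_path: "/models/my_model"     # placeholder path inside the container
    model_platform: "tensorflow"
    model_version_policy { specific { versions: 1 versions: 2 } }
    version_labels { key: "stable" value: 1 }
    version_labels { key: "canary" value: 2 }
  }
}
```

A file like this is passed to the model server via `--model_config_file=` instead of `--model_base_path`, which is what enables serving several versions side by side.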
-
Tensorboard Introduction | Deep Learning Tutorial 16 (Tensorflow2.0, Keras & Python) - codebasics
Often it becomes necessary to see what's going on inside your neural network. Tensorboard is a tool that comes with tensorflow; it allows you to visualize the neural network as well as how it trains, and it is very helpful for debugging issues too. We will use the notebook we created in previous videos for recognizing handwritten digits and visualize accuracy and loss at every epoch using tensorboard. We will also look at the visual graphs of the neural network along with some internal computations. Do you want to learn technology from me? Check https://codebasics.io/ for my affordable video courses. 🔖 Hashtags 🔖 #tensorboard #tensorboardtutorial #tensorboardgraph #deeplearningtutorial #tensorboardpytorch #pytorchtensorboard
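In tf.keras, wiring Tensorboard in comes down to one callback (the tiny model, random data, and log directory below are illustrative stand-ins for the video's handwritten-digits notebook):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model and data, just to show the callback mechanics.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# The callback writes per-epoch scalars (loss, metrics) and the model
# graph under logs/; view them with:  tensorboard --logdir logs/
tb = tf.keras.callbacks.TensorBoard(log_dir="logs/demo", histogram_freq=1)
history = model.fit(x, y, epochs=2, verbose=0, callbacks=[tb])
```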
-
Tensorboard Explained in 5 Min - Siraj Raval
In this video, we first go through the code for a simple handwritten character classifier in Python, then visualize it in Tensorboard. The point of this video was to showcase Tensorboard as a data visualization tool. We also use a more complex handwritten character classifier to further showcase all of Tensorboard's features. This was the hardest video I've ever had to make in terms of timing. It was really difficult to fit this many TB features into this time frame. The code for this video is here: https://github.com/llSourcell/Tensorb...
-
How to Use Tensorboard (LIVE) - Siraj Raval
We're going to learn how the visualizer that comes with Tensorflow works in this live stream. We'll go through a bunch of different features and test out its functionality both programmatically and visually. 4:41 code begins 37:07 tensorboard visualization begins Code for this video: https://github.com/llSourcell/how_to_...
-
-
PyTorch Deep Learning Framework
PyTorch is a popular deep learning framework known for its flexibility and dynamic computation graphs. This series of videos offers PyTorch tutorials from the basics to advanced topics, covering the implementation of a wide range of deep learning concepts and techniques: PyTorch fundamentals, data handling, building and training neural networks, solving regression and classification problems, GPU-accelerated computation, and more advanced subjects such as convolutional neural networks (CNNs) and generative adversarial networks (GANs). These videos are an excellent resource for learning deep learning with PyTorch, suitable both for beginners and for learners with some background who want a deeper understanding of PyTorch.
-
Pytorch vs Tensorflow vs Keras | Deep Learning Tutorial 6 (Tensorflow Tutorial, Keras & Python) - codebasics
We will go over the differences between pytorch, tensorflow and keras in this video. Pytorch and Tensorflow are the two most popular deep learning frameworks; Pytorch is by Facebook and Tensorflow is by Google. Keras is not a full-fledged deep learning framework, it is just a wrapper around Tensorflow that provides some convenient APIs. 🔖 Hashtags 🔖 #pytorch #tensorflow #keras #tensorflowtutorial #keratutorial #pytorchtutorial
-
Deep Learning Frameworks Compared - Siraj Raval
In this video, I compare 5 of the most popular deep learning frameworks (SciKit Learn, TensorFlow, Theano, Keras, and Caffe). We go through the pros and cons of each, as well as some code samples, eventually coming to a definitive conclusion. The code for the TensorFlow vs Theano part of the video is here: https://github.com/llSourcell/tensorf... An article that explains the differences in more detail: https://medium.com/@sentimentron/face... I created a Slack channel for us, sign up here: https://wizards.herokuapp.com/ Learn more about TF Learn here: https://github.com/tflearn/tflearn and here: https://www.tensorflow.org/versions/r... Learn more about TensorFlow here: https://www.oreilly.com/learning/hell... More on Keras here: http://machinelearningmastery.com/tut... More on SciKit Learn here: http://scikit-learn.org/stable/tutorial/ More on Caffe here: http://christopher5106.github.io/deep... More on Theano here: https://github.com/Newmu/Theano-Tutor...
-
PyTorch for Deep Learning & Machine Learning – Full Course - freeCodeCamp.org
Learn PyTorch for deep learning in this comprehensive course for beginners. PyTorch is a machine learning framework written in Python. ✏️ Daniel Bourke developed this course. Check out his channel: / @mrdbourke 🔗 Code: https://github.com/mrdbourke/pytorch-... 🔗 Ask a question: https://github.com/mrdbourke/pytorch-... 🔗 Course materials online: https://learnpytorch.io 🔗 Full course on Zero to Mastery (20+ hours more video): https://dbourke.link/ZTMPyTorch Some sections below have been left out because of the YouTube limit for timestamps. 0:00:00 Introduction 🛠 Chapter 0 – PyTorch Fundamentals 0:01:45 0. Welcome and "what is deep learning?" 0:07:41 1. Why use machine/deep learning? 0:11:15 2. The number one rule of ML 0:16:55 3. Machine learning vs deep learning 0:23:02 4. Anatomy of neural networks 0:32:24 5. Different learning paradigms 0:36:56 6. What can deep learning be used for? 0:43:18 7. What is/why PyTorch? 0:53:33 8. What are tensors? 0:57:52 9. Outline 1:03:56 10. How to (and how not to) approach this course 1:09:05 11. Important resources 1:14:28 12. Getting setup 1:22:08 13. Introduction to tensors 1:35:35 14. Creating tensors 1:54:01 17. Tensor datatypes 2:03:26 18. Tensor attributes (information about tensors) 2:11:50 19. Manipulating tensors 2:17:50 20. Matrix multiplication 2:48:18 23. Finding the min, max, mean & sum 2:57:48 25. Reshaping, viewing and stacking 3:11:31 26. Squeezing, unsqueezing and permuting 3:23:28 27. Selecting data (indexing) 3:33:01 28. PyTorch and NumPy 3:42:10 29. Reproducibility 3:52:58 30. Accessing a GPU 4:04:49 31. Setting up device agnostic code 🗺 Chapter 1 – PyTorch Workflow 4:17:27 33. Introduction to PyTorch Workflow 4:20:14 34. Getting setup 4:27:30 35. Creating a dataset with linear regression 4:37:12 36. Creating training and test sets (the most important concept in ML) 4:53:18 38. Creating our first PyTorch model 5:13:41 40. Discussing important model building classes 5:20:09 41. 
Checking out the internals of our model 5:30:01 42. Making predictions with our model 5:41:15 43. Training a model with PyTorch (intuition building) 5:49:31 44. Setting up a loss function and optimizer 6:02:24 45. PyTorch training loop intuition 6:40:05 48. Running our training loop epoch by epoch 6:49:31 49. Writing testing loop code 7:15:53 51. Saving/loading a model 7:44:28 54. Putting everything together 🤨 Chapter 2 – Neural Network Classification 8:32:00 60. Introduction to machine learning classification 8:41:42 61. Classification input and outputs 8:50:50 62. Architecture of a classification neural network 9:09:41 64. Turning our data into tensors 9:25:58 66. Coding a neural network for classification data 9:43:55 68. Using torch.nn.Sequential 9:57:13 69. Loss, optimizer and evaluation functions for classification 10:12:05 70. From model logits to prediction probabilities to prediction labels 10:28:13 71. Train and test loops 10:57:55 73. Discussing options to improve a model 11:27:52 76. Creating a straight line dataset 11:46:02 78. Evaluating our model's predictions 11:51:26 79. The missing piece – non-linearity 12:42:32 84. Putting it all together with a multiclass problem 13:24:09 88. Troubleshooting a multi-class model 😎 Chapter 3 – Computer Vision 14:00:48 92. Introduction to computer vision 14:12:36 93. Computer vision input and outputs 14:22:46 94. What is a convolutional neural network? 14:27:49 95. TorchVision 14:37:10 96. Getting a computer vision dataset 15:01:34 98. Mini-batches 15:08:52 99. Creating DataLoaders 15:52:01 103. Training and testing loops for batched data 16:26:27 105. Running experiments on the GPU 16:30:14 106. Creating a model with non-linear functions 16:42:23 108. Creating a train/test loop 17:13:32 112. Convolutional neural networks (overview) 17:21:57 113. Coding a CNN 17:41:46 114. Breaking down nn.Conv2d/nn.MaxPool2d 18:29:02 118. Training our first CNN 18:44:22 120. Making predictions on random test samples 18:56:01 121. 
Plotting our best model predictions 19:19:34 123. Evaluating model predictions with a confusion matrix 🗃 Chapter 4 – Custom Datasets 19:44:05 126. Introduction to custom datasets 19:59:54 128. Downloading a custom dataset of pizza, steak and sushi images 20:13:59 129. Becoming one with the data 20:39:11 132. Turning images into tensors 21:16:16 136. Creating image DataLoaders 21:25:20 137. Creating a custom dataset class (overview) 21:42:29 139. Writing a custom dataset class from scratch 22:21:50 142. Turning custom datasets into DataLoaders 22:28:50 143. Data augmentation 22:43:14 144. Building a baseline model 23:11:07 147. Getting a summary of our model with torchinfo 23:17:46 148. Creating training and testing loop functions 23:50:59 151. Plotting model 0 loss curves 24:00:02 152. Overfitting and underfitting 24:32:31 155. Plotting model 1 loss curves 24:35:53 156. Plotting all the loss curves 24:46:50 157. Predicting on custom data
-
Deep Learning with PyTorch Live Course - Tensors, Gradient Descent & Linear Regression (Part 1 of 6) - freeCodeCamp.org
This is a beginner-friendly coding-first online course on PyTorch - one of the most widely used and fastest growing frameworks for machine learning. This video covers the basic concepts in PyTorch viz. tensors & gradients, and walks through the process of implementing linear regression and gradient descent - the foundational algorithms in machine learning. Resources: 🔗 PyTorch Basics: https://jovian.ml/aakashns/01-pytorch... 🔗 Linear Regression: https://jovian.ml/aakashns/02-linear-... 🔗 Machine Learning Intro: https://jovian.ml/aakashns/machine-le... 🔗 Discussion forum: https://jovian.ml/forum/t/lecture-1-p... 🔗 Programming Assignment: https://jovian.ml/forum/t/assignment-... Topics covered: ⌨️ Introduction to Machine Learning & Deep Learning ⌨️ PyTorch Basics: Tensors, Gradients & Autograd ⌨️ Linear Regression and gradient descent from scratch using Tensor operations ⌨️ Linear Regression using PyTorch built-ins (nn.Linear, nn.functional etc.)
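The foundational algorithm the lecture implements, linear regression trained by gradient descent using autograd, can be sketched as follows (the synthetic data, learning rate, and step count are illustrative):

```python
import torch

# Synthetic data around y = 2x + 1 with a little noise
torch.manual_seed(0)
x = torch.rand(100, 1)
y = 2 * x + 1 + 0.01 * torch.randn(100, 1)

w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

for _ in range(500):
    pred = x * w + b
    loss = ((pred - y) ** 2).mean()
    loss.backward()                 # autograd fills w.grad and b.grad
    with torch.no_grad():           # plain gradient-descent update
        w -= 0.5 * w.grad
        b -= 0.5 * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(w.item(), b.item())           # approaches w ≈ 2, b ≈ 1
```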
-
Deep Learning with PyTorch Live Course - Working with Images & Logistic Regression (Part 2 of 6) - freeCodeCamp.org
This is a beginner-friendly coding-first online course on PyTorch - one of the most widely used and fastest growing frameworks for machine learning. This video covers techniques for working with images in PyTorch, the importance of creating training, validation & test sets, the process of creating & training an image classification model using Logistic regression and more. Resources: 🔗 Logistic regression (detailed): https://jovian.ml/aakashns/03-logisti... 🔗 Linear Regression (minimal starter): https://jovian.ml/aakashns/housing-li... 🔗 Logistic Regression (minimal starter): https://jovian.ml/aakashns/mnist-logi... 🔗 Discussion forum: https://jovian.ml/forum/t/lecture-2-w... 🔗 Programming Assignment: https://jovian.ml/forum/t/assignment-... Topics covered: ⌨️ Working with images from the MNIST dataset ⌨️ Creating training, validation and test sets ⌨️ Softmax and categorical cross entropy loss function ⌨️ Model training, evaluation and sample predictions
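The softmax and cross-entropy step described above, going from raw model outputs (logits) to probabilities to predicted labels, looks like this in PyTorch (random numbers stand in for a real model's output):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(5, 10)          # 5 samples, 10 classes (e.g. MNIST digits)
probs = F.softmax(logits, dim=1)     # each row now sums to 1
preds = probs.argmax(dim=1)          # predicted class labels

# Categorical cross entropy; F.cross_entropy expects the raw logits.
targets = torch.tensor([3, 1, 4, 1, 5])
loss = F.cross_entropy(logits, targets)
print(preds, loss.item())
```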
-
Deep Learning with PyTorch Live Course - Training Deep Neural Networks on GPUs (Part 3 of 6) - freeCodeCamp.org
Deep Learning with PyTorch: Zero to GANs is a free certification course from Jovian.ml. It will be live-streamed here every Saturday for six weeks at 8:30 AM PST. You can sign up here: https://bit.ly/pytorchcourse (not required to watch) Missed the other parts? Watch them here: • Deep Learning with PyTorch Live Course Each lecture will be around 2 hours long. Visit the course forum for more details: https://jovian.ml/forum/c/pytorch-zer... ⭐️ Resources ⭐️ 🔗 Feedforward neural networks: https://jovian.ml/aakashns/04-feedfor... 🔗 Neural networks (minimal): https://jovian.ml/aakashns/fashion-fe... 🔗 Data visualization cheatsheet: https://jovian.ml/aakashns/dataviz-ch... 🔗 Assignment details: https://jovian.ml/forum/t/assignment-... 🔗 Download the course curriculum: https://bit.ly/pytorchzerotogans 🔗 PyTorch Basics: https://jovian.ml/aakashns/01-pytorch... 🔗 Linear Regression: https://jovian.ml/aakashns/02-linear-... 🔗 Machine Learning Intro: https://jovian.ml/aakashns/machine-le...
-
Deep Learning with PyTorch Live Course - Image Classification with CNNs (Part 4 of 6) - freeCodeCamp.org
This is a beginner-friendly coding-first online course on PyTorch - one of the most widely used and fastest growing frameworks for machine learning. This video covers the basics of convolutions and the end-to-end process of training a convolutional neural network on a GPU to classify images of everyday objects. Resources: 🔗 Image Classification with CNNs: https://jovian.ml/aakashns/05-cifar10... 🔗 Discussion forum: https://jovian.ml/forum/t/lecture-4-i... 🔗 Data science competition: https://www.kaggle.com/c/jovian-pytor... 🔗 Competition starter notebook: https://jovian.ml/aakashns/zerogans-p... 🔗 Course project: https://jovian.ml/aakashns/03-cifar10... Topics covered: ⌨️ Working with the 3-channel RGB images from the CIFAR10 dataset ⌨️ Introduction to Convolutions, kernels & feature maps ⌨️ Underfitting, overfitting and techniques to improve model performance ⌨️ Building & training a convolutional neural network on a GPU
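How a convolution layer produces feature maps and a max-pool downsamples a CIFAR-sized input can be checked directly (the channel count of 8 is arbitrary, chosen for illustration):

```python
import torch
import torch.nn as nn

x = torch.rand(1, 3, 32, 32)              # one 3-channel 32x32 image

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
pool = nn.MaxPool2d(kernel_size=2)

feature_maps = conv(x)                    # 8 feature maps, still 32x32
downsampled = pool(feature_maps)          # spatial size halved to 16x16
print(feature_maps.shape, downsampled.shape)
```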
-
Deep Learning with PyTorch Live Course - ResNet, Regularization and Data Augmentation (Part 5 of 6) - freeCodeCamp.org
This is a beginner-friendly coding-first online course on PyTorch - one of the most widely used and fastest growing frameworks for machine learning. This video covers the process of applying advanced techniques like residual networks, data augmentation, batch normalization and transfer learning to achieve state of the art results for image classification in a very short time. Resources: 🔗 Classifying CIFAR10 images using a ResNet : https://jovian.ml/aakashns/05b-cifar1... 🔗 Transfer learning starter: https://jovian.ml/aakashns/transfer-l... 🔗 Discussion forum: https://jovian.ml/forum/t/lecture-5-d... 🔗 Data science competition: https://www.kaggle.com/c/jovian-pytor... 🔗 Course project: https://jovian.ml/forum/t/assignment-... Topics covered: ⌨️ Improving the dataset using data normalization and data augmentation ⌨️ Improving the model using residual connections and batch normalization ⌨️ Improving the training loop using learning rate annealing, weight decay and gradient clip ⌨️ Training a state of the art image classifier from scratch in 5 minutes
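The residual-connection idea at the heart of the ResNet technique above is tiny in code; a minimal block (channel count illustrative, batch norm omitted for brevity even though the lecture uses it) is:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Adds the block's input back to its output: out = relu(f(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + x)      # the skip connection

block = ResidualBlock(16)
x = torch.rand(2, 16, 8, 8)
print(block(x).shape)               # shape is preserved: (2, 16, 8, 8)
```

Because the skip path is an identity, gradients flow through it directly, which is what makes very deep networks trainable.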
-
Deep Learning with PyTorch Live Course - GANs for Image Generation (Part 6 of 6) - freeCodeCamp.org
This is a beginner-friendly coding-first online course on PyTorch - one of the most widely used and fastest growing frameworks for machine learning. This video covers the concepts and techniques involved in building & training Generative Adversarial Networks or GANs to generate images of anime faces. Resources: 🔗 Deep Convolutional GANs: https://jovian.ml/aakashns/06b-anime-... 🔗 MNIST Generative Adversarial Network: https://jovian.ml/aakashns/06-mnist-gan 🔗 Discussion forum: https://jovian.ml/forum/t/lecture-6-i... 🔗 Course Graduation Party: • Deep Learning with PyTorch: Zero to G... 🔗 Data Analysis with Python: https://jovian.ml/learn/data-analysis... Topics covered: ⌨️ Introduction to generative modeling and application of GANs ⌨️ Creating generator and discriminator neural networks ⌨️ Generating and evaluating fake images of anime faces ⌨️ Training the generator and discriminator in tandem and visualizing results
-
Data Augmentation, Regularization, and ResNets | Deep Learning with PyTorch: Zero to GANs | 5 of 6 - freeCodeCamp.org
“Deep Learning with PyTorch: Zero to GANs” is a beginner-friendly online course offering a practical and coding-focused introduction to deep learning using the PyTorch framework. Learn more and register for a certificate of accomplishment here: http://zerotogans.com Watch the entire series here: • Deep Learning with PyTorch Course - D... Code and Resources: 🔗 Classifying CIFAR10 images using ResNet and Regularization techniques in PyTorch: https://jovian.ai/aakashns/05b-cifar1... 🔗 Image Classification using Convolutional Neural Networks in PyTorch: https://jovian.ai/aakashns/05-cifar10... 🔗 Discussion forum: https://jovian.ai/forum/t/lecture-5-d... Topics covered in this video: * Improving the dataset using data normalization and data augmentation * Improving the model using residual connections and batch normalization * Improving the training loop using learning rate annealing, weight decay, and gradient clip * Training a state of the art image classifier from scratch in 10 minutes
-
Image Generation using GANs | Deep Learning with PyTorch: Zero to GANs | Part 6 of 6 - freeCodeCamp.org
Code and Resources: 🔗 Generative Adversarial Networks in PyTorch: https://jovian.ai/aakashns/06b-anime-... 🔗 Generative Adversarial Networks using MNIST: https://jovian.ai/aakashns/06-mnist-gan 🔗 Tensorflow 2.1 port of Pytorch - Zero to GANs: https://jovian.ai/kartik.godawat/coll... 🔗 Discussion forum: https://jovian.ai/forum/t/lecture-6-i...
-
Image Classification with Convolutional Neural Networks | Deep Learning with PyTorch: Zero to GANs | - freeCodeCamp.org
“Deep Learning with PyTorch: Zero to GANs” is a beginner-friendly online course offering a practical and coding-focused introduction to deep learning using the PyTorch framework. Learn more and register for a certificate of accomplishment here: http://zerotogans.com Watch the entire series here: • Deep Learning with PyTorch Course - D... Code and Resources: 🔗 Image Classification using Convolutional Neural Networks: https://jovian.ai/aakashns/05-cifar10... 🔗 Classifying images of everyday objects using a neural network: https://jovian.ai/aakashns/03-cifar10... 🔗 Discussion forum: https://jovian.ai/forum/t/lecture-4-i... Topics covered in this video: * Working with the 3-channel RGB images from the CIFAR10 dataset * Introduction to Convolutions, kernels & feature maps * Underfitting, overfitting, and techniques to improve model performance
-
Training Deep Neural Networks on a GPU | Deep Learning with PyTorch: Zero to GANs | Part 3 of 6 - freeCodeCamp.org
“Deep Learning with PyTorch: Zero to GANs” is a beginner-friendly online course offering a practical and coding-focused introduction to deep learning using the PyTorch framework. Learn more and register for a certificate of accomplishment here: http://zerotogans.com Watch the entire series here: • Deep Learning with PyTorch Course - D... Code and Resources: Feedforward neural networks: https://jovian.ai/aakashns/04-feedfor... Neural networks (minimal): https://jovian.ai/aakashns/fashion-fe... Data Visualization Cheatsheet: https://jovian.ai/aakashns/dataviz-ch... Discussion forum: https://jovian.ai/forum/t/lecture-3-t... Topics covered in this video: * Working with cloud GPU platforms like Kaggle & Colab * Creating a multilayer neural network using nn.Module * Activation function, non-linearity, and universal approximation theorem * Moving datasets and models to the GPU for faster training
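A multilayer network built with nn.Module, as covered above, can be sketched like this (the 784 → 32 → 10 sizes are illustrative, matching 28x28 grayscale images such as Fashion-MNIST):

```python
import torch
from torch import nn
import torch.nn.functional as F

class MLP(nn.Module):
    """A feedforward network: flatten, one hidden layer, 10-class output."""
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(784, 32)
        self.linear2 = nn.Linear(32, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)        # flatten images to vectors
        x = F.relu(self.linear1(x))      # non-linearity between the layers
        return self.linear2(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = MLP().to(device)                 # move the model to the GPU if present
out = model(torch.rand(16, 1, 28, 28).to(device))
print(out.shape)
```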
-
PyTorch Basics and Gradient Descent | Deep Learning with PyTorch: Zero to GANs | Part 1 of 6 - freeCodeCamp.org
“Deep Learning with PyTorch: Zero to GANs” is a beginner-friendly online course offering a practical and coding-focused introduction to deep learning using the PyTorch framework. Learn more and register for a certificate of accomplishment here: http://zerotogans.com/
-
PyTorch Images and Logistic Regression | Deep Learning with PyTorch: Zero to GANs | Part 2 of 6 - freeCodeCamp.org
“Deep Learning with PyTorch: Zero to GANs” is a beginner-friendly online course offering a practical and coding-focused introduction to deep learning using the PyTorch framework. Learn more and register for a certificate of accomplishment here: http://zerotogans.com Watch the entire series here: • Deep Learning with PyTorch Course - D... Code and Resources: Logistic regression: https://jovian.ai/aakashns/03-logisti... Image Classification with Logistic Regression: https://jovian.ai/aakashns/mnist-logi... House Price Prediction: https://jovian.ai/aakashns/housing-li... Discussion forum: https://jovian.ai/forum/c/pytorch-zer... Topics covered in this video: * Working with images from the MNIST dataset * Training and validation dataset creation * Softmax function and categorical cross entropy loss * Model training, evaluation, and sample predictions
-
1. [Must watch] How to use this tutorial & tutorial outline | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
Please consider subscribing. This lesson presents the tutorial outline, so you can watch selectively according to your own needs.
-
2. About Python | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
Please consider subscribing. In this lesson we talk about what Python is. Python is simply a programming language, a language for people to communicate with computers.
-
3. The concept of libraries in Python | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
Please consider subscribing. This lesson is about the concept of a library. Libraries are an important idea: it is because of libraries that we can develop and learn more efficiently. PyTorch itself is essentially a library, a library for deep learning.
-
4. About PyTorch and Tensorflow | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
Please consider subscribing. This lesson covers what PyTorch and Tensorflow are. Once you understand these two, you will see that they are really just libraries, nothing special.
-
5. About Anaconda | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
I like finding interesting and more efficient tools, and making tutorials that suit you better; please consider subscribing. Anaconda is an excellent piece of software: its virtual environment feature lets us manage our packages in an organized way, and once Anaconda is installed, Python is installed along with it.
-
6. A demo of conda virtual environments | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
I like finding interesting and more efficient tools, and making tutorials that suit you better; please consider subscribing. This lesson gives you a feel for what Anaconda's virtual environments do, with a live demonstration~
-
7. About PyCharm (Part 1) | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
I like finding interesting and more efficient tools, and making tutorials that suit you better; please consider subscribing. This lesson explains how Python code actually gets run.
-
8. About PyCharm (Part 2) | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
This episode demonstrates how PyCharm helps us develop code quickly. I just want to make tutorials that are easy to understand; thanks for your attention and support~
-
9. About GPUs and CUDA | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
The GPU and CUDA complement each other. This episode should help you understand the relationship between the two, why we have to choose a CUDA version in the later installation steps, and what role CUDA plays in deep learning. I just want to make tutorials that are easy to understand; thanks for your attention and support~
-
10. How the various pieces of software fit together in deep learning | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
Installing PyTorch often means installing quite a lot of software. This episode explains what each piece does and how a program actually gets run. I just want to make tutorials that are easy to understand; thanks for your attention and support~
-
11. [Must watch] Checking whether you have an NVIDIA GPU | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
An NVIDIA GPU is a very important companion in deep learning. This episode shows you how to check whether your computer has a GPU. If it does, watch the later videos on installing the GPU build of PyTorch; if not, don't worry, it will not affect your study of PyTorch. I just want to make tutorials that are easy to understand; thanks for your attention and support~
-
12. CPU build: Installing Anaconda | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
Installing Anaconda is a common step in installing PyTorch. Once Anaconda is installed, we can create our own virtual environments, and Python is installed along with it. I just want to make tutorials that are easy to understand; thanks for your attention and support~
-
13. CPU build: Creating a virtual environment | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
This episode covers creating a virtual environment with conda/Anaconda. A virtual environment is like a house: we usually like to put PyTorch in a new house of its own, which makes later use and study easier. I just want to make tutorials that are easy to understand; thanks for your attention and support~
-
14. CPU build: conda channels and mirror URLs | Installing and configuring the PyTorch deep learning environment on Windows - 我是土堆
When we use conda to download Python packages, have you ever wondered where they are actually downloaded from, or what `conda install -c` means? This episode explains~ I just want to make tutorials that are easy to understand; thanks for your attention and support~
-
#1.1 Why? (PyTorch neural network tutorial) - 莫烦Python
PyTorch is the Python descendant of Torch. Torch is a neural network library written in the Lua language; Torch is excellent to use, but Lua never became particularly popular, so the development team ported Torch from Lua to the far more popular Python. And indeed, PyTorch drew an enthusiastic response the moment it was released. Why? If you like this, please star my Tutorial code in Github: https://github.com/MorvanZhou/PyTorch... Full text tutorial: https://mofanpy.com/tutorials/machine...
-
#1.2 Installation (PyTorch neural network tutorial) - 莫烦Python
At the time of this video, PyTorch only supports MacOS and Linux, not Windows yet! (Poor Windows users, left behind again.) But who knows: as happened with Tensorflow, support may suddenly arrive one day in response to strong demand from Windows users. If you like this, please star my T...
-
#2.1 Numpy vs Torch (PyTorch neural network tutorial) - 莫烦Python
Torch calls itself the Numpy of the neural network world, because it can put its tensors on the GPU for accelerated computation (provided you have a suitable GPU), just as Numpy puts arrays on the CPU for fast computation. So for neural networks, Torch's tensor format is naturally the data form of choice, much like tensors in Tensorflow. If you like this, please star my Tutorial code in Github: https://github.com/MorvanZhou/PyTorch... Full text tutorial: https://morvanzhou.github.io/tutorial...
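The back-and-forth described above is just a couple of calls (the array contents are arbitrary):

```python
import numpy as np
import torch

np_data = np.arange(6).reshape(2, 3)
torch_data = torch.from_numpy(np_data)      # numpy array -> torch tensor
back_to_np = torch_data.numpy()             # torch tensor -> numpy array

# With a suitable GPU, the tensor (unlike the numpy array) can move there:
# torch_data = torch_data.to("cuda")
print(torch_data)
print(back_to_np)
```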
-
#2.2 Variable (PyTorch neural network tutorial) - 莫烦Python
A Variable in Torch is a place that holds a value that keeps changing. Think of a basket of eggs, where the number of eggs keeps varying; the eggs inside are Torch Tensors. If you do a computation with a Variable, what comes back is a Variable of the same type. If you like this, please star my Tutorial code in Github: https://github.com/MorvanZhou/PyTorch... Full text tutorial: https://morvanzhou.github.io/tutorial...
-
#2.3 Activation Function (PyTorch neural network tutorial) - 莫烦Python
Activation in one sentence: it is the step that lets a neural network describe nonlinear problems, making the network more powerful. If you like this, please star my Tutorial code in Github: https://github.com/MorvanZhou/PyTorch-Tutorial ...
-
#3.1 Regression (PyTorch neural network tutorial) - 莫烦Python
This time we will witness how a neural network, in a simple form, represents a set of data with a single line; in other words, how it finds the relationship within the data and builds a neural network model of the line that represents that relationship. If you like this, please star my Tutorial code in Github: https://github...
-
#3.2 Classification (PyTorch neural network tutorial) - 莫烦Python
This time we again take the simplest route to see how a neural network classifies things. If you like this, please star my Tutorial code on Github. Code: https://github.com/MorvanZhou/PyTorch-Tutorial Full text tutorial: https:/...
-
#3.3 Quick-build method (PyTorch neural network tutorial) - 莫烦Python
Torch provides many convenient shortcuts. For the same neural network, the faster the better, so let's see how to build the same network in a simpler way. If you like this, please star my Tutorial code on Github. Code: https://github.com/MorvanZhou/PyTorch...
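The quick-build approach is a single nn.Sequential call (the layer sizes below are illustrative):

```python
import torch

# The same two-layer network one would otherwise write as an nn.Module
# subclass, built in one statement.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 10),
    torch.nn.ReLU(),
    torch.nn.Linear(10, 1),
)

x = torch.rand(5, 1)
print(net(x).shape)
```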
-
#3.4 Save & restore (PyTorch neural network tutorial) - 莫烦Python
Once a model is trained, we naturally want to save it, so that next time we can load it and use it directly. That is this lesson's topic: we use the regression neural network as the example for saving and restoring. If you like this, please star my Tutorial code on Github. Code: https://github.com/MorvanZhou/PyTorch... Full text tutorial: https://morvanzhou.github.io/tutorial...
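Saving parameters and restoring them into a freshly built network, sketched with a throwaway net and file name:

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 10), torch.nn.ReLU(),
                          torch.nn.Linear(10, 1))

torch.save(net.state_dict(), "net_params.pt")   # save the parameters only

# Restore: rebuild the same architecture, then load the parameters into it.
net2 = torch.nn.Sequential(torch.nn.Linear(1, 10), torch.nn.ReLU(),
                           torch.nn.Linear(10, 1))
net2.load_state_dict(torch.load("net_params.pt"))

x = torch.rand(3, 1)
# The restored network computes exactly the same outputs.
print(torch.allclose(net(x), net2(x)))
```

Saving the state_dict (rather than the whole pickled object) is the generally recommended form, since it stays loadable when the surrounding code changes.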
-
#3.5 Batch training (PyTorch neural network tutorial) - 莫烦Python
Torch provides a handy tool for organizing your data structures, called DataLoader. We can use it to wrap our own data and run mini-batch training. If you like this, please star my Tutorial code on Github. Code: https://github.com/MorvanZhou/...
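Wrapping your own tensors for batch training looks like this (the data and batch size are arbitrary):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

x = torch.linspace(1, 10, 10)
y = torch.linspace(10, 1, 10)

dataset = TensorDataset(x, y)                       # pair up inputs and targets
loader = DataLoader(dataset, batch_size=5, shuffle=True)

for epoch in range(2):
    for step, (batch_x, batch_y) in enumerate(loader):
        # a real training step would run the model on batch_x here
        print(f"epoch {epoch} step {step} batch {batch_x.tolist()}")
```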
-
#3.6 Optimizer (PyTorch neural network tutorial) - 莫烦Python
A comparison of several commonly used neural network optimizers. If you like this, please star my Tutorial code on Github. Code: https://github.com/MorvanZhou/PyTorch-Tutorial Full text tutorial: https://morvanzhou.gith...
-
-
Keras Deep Learning Framework
Keras is a high-level neural network API known for its ease of use and flexibility; it can run on top of TensorFlow, CNTK, or Theano. This series of videos offers a complete Keras tutorial for beginners, starting with why to choose Keras, then covering installation, compatibility with different backends, and building and training various kinds of neural networks. Topics include regression analysis, classification problems, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and implementing autoencoders, as well as how to save and load models. These videos suit students and professionals who have a basic understanding of deep learning and want to learn how to develop projects with Keras.
-
Keras #1 Why? (tutorial) - 莫烦Python
Keras is a high-level neural network package compatible with both Theano and Tensorflow. Assembling a neural network with it is very fast: a few statements and you are done. Its broad compatibility also lets Keras move freely between Windows, MacOS, and Linux. Keras playlist: https://www.youtube.com/p...
-
Keras #2 Installation (tutorial) - 莫烦Python
Installation notes: https://github.com/MorvanZhou/tutorials/blob/master/kerasTUT/2-installation.py Make sure Numpy and Scipy are already installed first, otherwise the installation will fail. Keras playlist: https://www.youtube.com/p...
-
Keras #3 Backend compatibility (tutorial) - 莫烦Python
Notes: https://github.com/MorvanZhou/tutorials/blob/master/kerasTUT/3-backend.py How to switch the keras backend. Keras playlist: https://www.youtube.com/playlist?list=PLXO45t...
-
Keras #4 Regressor (tutorial) - 莫烦Python
Code for this lesson: https://github.com/MorvanZhou/tutorials/blob/master/kerasTUT/4-regressor_example.py A Keras neural network can be built with just a few simple statements. Keras playlist: https://www.youtube.com/play...
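A few statements really do suffice; a sketch in present-day tf.keras (the video uses the standalone keras package, whose API is nearly identical, and the noisy-line data below is made up):

```python
import numpy as np
import tensorflow as tf

# Noisy points around y = 0.5x + 2
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200).reshape(-1, 1).astype("float32")
y = (0.5 * x + 2 + 0.05 * rng.standard_normal((200, 1))).astype("float32")

# One Dense layer with one unit is exactly a line y = wx + b
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1),
])
model.compile(loss="mse", optimizer="sgd")

before = model.evaluate(x, y, verbose=0)
model.fit(x, y, epochs=50, verbose=0)
after = model.evaluate(x, y, verbose=0)
print(before, after)    # the loss drops as the line is fitted
```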
-
Keras #5 Classifier (tutorial) - 莫烦Python
Code for this lesson: https://github.com/MorvanZhou/tutorials/blob/master/kerasTUT/5-classifier_example.py In the classification code we use several different approaches to accomplish the same thing. Keras playlist: https://www.youtube.com/playli...
-
Keras #6 CNN convolutional neural network (tutorial) - 莫烦Python
Code for this lesson: https://github.com/MorvanZhou/tutorials/blob/master/kerasTUT/6-CNN_example.py CNNs are generally used to process images; they have many advantages for image recognition. [CNN intro here]: https://www.youtube.com/watch?v=hM...
-
Keras #7 RNN Classifier recurrent neural network (tutorial) - 莫烦Python
Code for this lesson: https://github.com/MorvanZhou/tutorials/blob/master/kerasTUT/7-RNN_Classifier_example.py Using an RNN to classify the mnist dataset. RNN intro: https://www.youtube.com/watch?v=...
-
Keras #8 RNN Regressor recurrent neural network (tutorial) - 莫烦Python
Code for this lesson: https://github.com/MorvanZhou/tutorials/blob/master/kerasTUT/8-RNN_LSTM_Regressor_example.py Using an LSTM RNN to predict sin and cos curves. LSTM intro: https://www.youtube...
-
Keras #9 Autoencoder (tutorial) - 莫烦Python
Code for this lesson: https://github.com/MorvanZhou/tutorials/blob/master/kerasTUT/9-Autoencoder_example.py A Keras autoencoder is also easy to write: just stack a few layers. Autoencoder intro:...
-
Keras #10 Save & reload (tutorial) - 莫烦Python
Code for this lesson: https://github.com/MorvanZhou/tutorials/blob/master/kerasTUT/10-save.py Saving a Keras model requires the h5py module to be installed, otherwise it will fail. Keras playlist: https://www.youtube.com/playl...
-
-
Theano Deep Learning Framework
Theano is an early deep learning framework known for its expressive mathematical notation and efficient computation. This series of videos offers a complete Theano tutorial for beginners, from basic concepts and installation to building and training neural networks. Topics include basic Theano usage, defining functions, shared variables, activation functions, and defining neural network layers, as well as solving regression and classification problems, regularizing models, and saving them. These videos suit students and researchers who want to learn and use Theano for deep learning.
-
Theano 1 why (neural network tutorial) - 莫烦Python
Theano is another Python module for neural networks; compared with Tensorflow it is more traditional and more academic. If you are interested, take a look at its official website: http://deeplearning.net/software/theano/ One advantage Theano has over Tensorflow is that it can run on Windows.
-
Theano 2 Installation (neural network tutorial) - 莫烦Python
Installing theano on MacOS and Linux is straightforward: install the required modules with pip, then run `pip install theano`. On Windows the installation can be a bit fiddly; you can follow this site: http://deeplearning.net/software/thea...
-
Theano 3 What a neural network does (neural network tutorial) - 莫烦Python
Exercise code: https://github.com/MorvanZhou/tutorials/blob/master/theanoTUT/theano3_what_does_ML_do.py The kinds of machine learning Theano can do generally fall into two categories: regression learning and classification learning. Feel free to download the code and take a look yourself...
-
Theano 4 Basic usage (neural network tutorial) - 莫烦Python
Exercise code: https://github.com/MorvanZhou/tutorials/blob/master/theanoTUT/theano4_basic_usage.py In theano, learning to define a matrix and a function is fairly important; here we briefly touch on how, in the...
-
Theano 5 function usage (neural network tutorial) - 莫烦Python
Exercise code: https://github.com/MorvanZhou/tutoria... A function in theano is similar to a function in python, but because it is meant to be used in multi-process parallel computation, theano's function has its own set of usage conventions.
-
Theano 6 shared variables (neural network tutorial) - 莫烦Python
Exercise code: https://github.com/MorvanZhou/tutoria... In theano, shared is essentially the tool for defining a neural network's weights and biases. It also offers get_value() and set_value(); with these we can inspect, import, and export our model's parameters.
-
Theano 7 activation function (neural network tutorial) - 莫烦Python
Exercise code: https://github.com/MorvanZhou/tutoria... Activation functions are an indispensable part of neural network learning, and different kinds of problems call for different activation functions. Try various ones and see which works best.
-
Theano 8 Defining a Layer class (neural network tutorial) - 莫烦Python
Exercise code: https://github.com/MorvanZhou/tutoria... Organizing a network layer's information with a class makes later use more convenient, so this time, unlike with Tensorflow, we use a class to define a layer.
-
Theano 9 regression example (neural network tutorial) - 莫烦Python
Exercise code: https://github.com/MorvanZhou/tutoria... Regression problems in theano can be handled the way the video shows. We tackle a nonlinear regression problem, so we add two layers and use different activation functions. The network successfully reduces the prediction error.
-
Theano 10 Visualizing results, regression example (neural network tutorial) - 莫烦Python
Exercise code: https://github.com/MorvanZhou/tutoria... Being able to visualize training results is a great help in understanding neural networks. This lesson uses a nonlinear regression example to show how its results can be visualized.
-
Theano 11 classification learning (neural network tutorial) - 莫烦Python
Exercise code: https://github.com/MorvanZhou/tutoria... This lesson builds a very simple neural network, barely even a proper one, but once the principle is clear you can apply it freely. The network has only two layers, an input layer and an output layer, with no hidden layer; using the Layer class from the previous lesson, you can practice adding a hidden layer yourself.
-
Theano 12 regularization (neural network tutorial) - 莫烦Python
Exercise code: https://github.com/MorvanZhou/tutoria... In machine learning we often run into the problem of overfitting; regularization can help us solve it and reach better predictions. The video introduces what overfitting is and how to use l1 and l2 regularization terms.
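Independent of any framework, an l1 or l2 regularization term is just an extra penalty added to the data loss; a numpy sketch with made-up weights and a made-up data loss:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(10)    # pretend these are a layer's weights
data_loss = 0.3                      # pretend mean-squared error on a batch

lam = 0.01                           # regularization strength (hyperparameter)
l1_term = lam * np.abs(weights).sum()        # l1: sum of |w|
l2_term = lam * (weights ** 2).sum()         # l2: sum of w^2

# Penalizing large weights discourages overfitting.
total_loss = data_loss + l2_term
print(l1_term, l2_term, total_loss)
```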
-
Theano 13 save model (neural network tutorial) - 莫烦Python
Exercise code: https://github.com/MorvanZhou/tutoria... We certainly need to save a trained model; after all, the time and resources spent training should not be wasted. So this lesson covers how to save and retrieve the parameters of a trained model.
-
Theano 14 Summary and more (neural network tutorial) - 莫烦Python
Summary for this lesson: https://github.com/MorvanZhou/tutoria... Thank you for your constant support; this concludes the theano neural network tutorial. We worked up from the basics until we could build simple neural networks, learned how to overcome overfitting, applied normalization, and even saved trained networks. But this is only the prelude on the learning road; if you want to go deeper, there is much, much more worth studying.
-
-
MXNet/Gluon Deep Learning
Apache MXNet/Gluon is a flexible and efficient deep learning framework that supports rapid model design and high-performance computation. This series of videos offers MXNet/Gluon deep learning tutorials from the basics to advanced topics, covering many deep learning models and techniques: from deep convolutional networks to recurrent neural networks, and from basic network construction to complex object detection and semantic segmentation. Advanced topics include gated recurrent units (GRU), long short-term memory networks (LSTM), word embeddings, seq2seq models, and attention mechanisms. These videos suit students and professionals with a basic understanding of deep learning who want to learn how to develop projects with MXNet/Gluon.
-
动手学深度学习第一课:从上手到多类分类 - Apache MXNet/Gluon 中文频道
Resource site: https://zh.d2l.ai (GitHub: https://github.com/d2l-ai/d2l-zh ); see the site for details on the print edition (reprinted twice within 4 weeks of release, 30,000 copies in total). English version: https://www.d2l.ai (GitHub: https://github.com/d2l-ai/d2l-en ). Slides closest to those used in the course videos: https://github.com/d2l-ai/d2l-zh/rele... ===== Lesson 1 contents: - [30min] Introduction, slides here: https://raw.githubusercontent.com/mli... - [15min] Demo of installing the dependencies on a clean system: http://zh.gluon.ai/install.html (those who can...) - [15min] Handling data with NDArray: http://zh.gluon.ai/ndarray.html. Based on the survey... - linear algebra http://gluon.mxnet.io/chapter01_crash... - probability http://gluon.mxnet.io/chapter01_crash... - [15min] Automatic differentiation with autograd: http://zh.gluon.ai/autograd.html - [20min] We now have enough background to write the first model: linear regression http://zh.gluon.ai/linear-regression-... - [10min] The same model implemented with Gluon: http://zh.gluon.ai/linear-regression-... - [15min] One more model: multiclass logistic regression http://zh.gluon.ai/softmax-regression... http://zh.gluon.ai/softmax-regression... (left as after-class reading)
-
动手学深度学习第二课:过拟合、多层感知机、GPU和卷积神经网络 - Apache MXNet/Gluon 中文频道
- [20min] Continuing from the previous lesson, we implement a multilayer perceptron from scratch https://zh.gluon.ai/mlp-scratch.html,... - [15min] Then we discuss an important problem in machine learning: overfitting and underfitting https://zh.gluon.ai/underfit-overfit.... - [20min] A common remedy for overfitting is regularization, from scratch https://zh.gluon.ai/reg-scratch.html)... https://zh.gluon.ai/reg-gluon.html - [20min] To build more complex networks, we arm the strategic nuclear weapons: installing GPU drivers and the matching MXNet build on a clean system https://zh.gluon.ai/install.html#gpu - [10min] Computing on the GPU https://zh.gluon.ai//use-gpu.html - [35min] Implementing a convolutional neural network from scratch https://zh.gluon.ai//cnn-scratch.html... https://zh.gluon.ai//cnn-gluon.html Homework: a hands-on Kaggle competition, predicting house prices with Gluon and K-fold cross-validation https://zh.gluon.ai/kaggle-gluon-kfol...
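The K-fold cross-validation used in the homework splits the training set into K folds, validating on each fold once while training on the rest. A minimal index-splitting sketch (independent of Gluon):

```python
import numpy as np

def k_fold_indices(n, k):
    """Yield (train, validation) index arrays for k-fold
    cross-validation over n samples."""
    idx = np.arange(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        valid = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, valid

# 10 samples, 5 folds: each fold trains on 8 and validates on 2.
for train, valid in k_fold_indices(10, 5):
    print(len(train), len(valid))  # 8 2
```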
-
动手学深度学习第三课:深度卷积网络,如何使用Gluon,以及核武器购买指南 - Apache MXNet/Gluon 中文频道
- [15min] The GPU product line is broad, with huge differences in price and performance between models, so we first cover how to choose a GPU https://zh.gluon.ai/buy-gpu.html - [45min] The first two lessons focused on models and used Gluon without much explanation. Here we cover Gluon's basic usage in detail, including building neural networks https://zh.gluon.ai/block.html, initializing parameters https://zh.gluon.ai/parameters.html, ... https://zh.gluon.ai/custom-layer.html - [25min] Before deep convolutional networks, we introduce a new regularization method, dropout (from scratch https://zh.gluon.ai/dropout-scratch.h... https://zh.gluon.ai/dropout-gluon.html). - [35min] Finally, the fuse that ignited deep learning: AlexNet https://zh.gluon.ai/alexnet-gluon.htm... https://zh.gluon.ai/vgg-gluon.html.
-
动手学深度学习第四课:BatchNorm,更深的卷积神经网络,图片增强和新的Kaggle练习 - Apache MXNet/Gluon 中文频道
- [5min] Rewarding the participants in the first Kaggle homework with a total of $2000 in AWS credit https://discuss.gluon.ai/t/topic/1039/ - [20min] Batch normalization makes deeper convolutional networks easy to train: from scratch https://zh.gluon.ai/chapter_convoluti... https://zh.gluon.ai/chapter_convoluti... - [60min] Deeper convolutional networks: NiN https://zh.gluon.ai/chapter_convoluti..., GoogLeNet https://zh.gluon.ai/chapter_convoluti..., ResNet https://zh.gluon.ai/chapter_convoluti..., DenseNet https://zh.gluon.ai/chapter_convoluti... - [20min] Using image augmentation https://zh.gluon.ai/chapter_computer-... to reduce overfitting and improve generalization - [10min] New Kaggle exercise: CIFAR-10 classification https://zh.gluon.ai/chapter_convoluti... . We will likewise provide GPU support to active participants.
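The "from scratch" batch normalization the lesson builds boils down to normalizing each feature over the batch and then applying a learned scale and shift. In numpy (training-mode forward pass only, without the running statistics used at test time):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization forward pass: normalize each feature over
    the batch, then scale by gamma and shift by beta."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [3.0, 6.0]])
y = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
print(y.mean(axis=0))  # each column now has (near-)zero mean
```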
-
动手学深度学习第五课:Gluon高级和优化算法基础 - Apache MXNet/Gluon 中文频道
- Part 1: Advanced Gluon - [15min] Hybridize http://zh.gluon.ai/chapter_gluon-adva... - [15min] Lazy evaluation http://zh.gluon.ai/chapter_gluon-adva... - [15min] Automatic parallelism http://zh.gluon.ai/chapter_gluon-adva... - [15min] Multi-GPU training, from scratch http://zh.gluon.ai/chapter_gluon-adva... - [15min] Multi-GPU training, with Gluon http://zh.gluon.ai/chapter_gluon-adva... - Part 2: Optimization basics - [15min] Overview of optimization algorithms http://zh.gluon.ai/chapter_optimizati... - [15min] Gradient descent and stochastic gradient descent, from scratch http://zh.gluon.ai/chapter_optimizati... - [10min] Gradient descent and stochastic gradient descent, with Gluon http://zh.gluon.ai/chapter_optimizati...
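The gradient-descent update at the heart of the optimization part is a single line, p <- p - lr * grad. A toy numpy sketch minimizing a quadratic:

```python
import numpy as np

def sgd_step(params, grads, lr):
    """One (stochastic) gradient descent update, in place."""
    for p, g in zip(params, grads):
        p -= lr * g

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = np.array([0.0])
for _ in range(100):
    sgd_step([w], [2 * (w - 3)], lr=0.1)
print(w)  # converges to ~3
```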
-
动手学深度学习第六课:优化算法高级和计算机视觉 - Apache MXNet/Gluon 中文频道
- [5min] Summary of the Kaggle CIFAR-10 results - [20min] Computer vision: fine-tuning http://zh.gluon.ai/chapter_computer-v... - [1h 35min] Advanced optimization algorithms: Momentum, RMSprop, Adagrad, AdaDelta, Adam http://zh.gluon.ai/chapter_optimizati...
-
动手学深度学习第七课:物体检测 - Apache MXNet/Gluon 中文频道
- [5min] Review of the CIFAR-10 competition https://discuss.gluon.ai/t/topic/1545 - [30min] Object detection: R-CNN, Fast R-CNN, Faster R-CNN http://zh.gluon.ai/chapter_computer-v... - [25min] SSD http://zh.gluon.ai/chapter_computer-v...
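Detectors such as SSD and the R-CNN family match predicted boxes to ground truth using intersection-over-union (IoU). The standard computation, in plain Python:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2):
    overlap area divided by the area of the union."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7, about 0.143
```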
-
动手学深度学习第八课:物体检测·续 - Apache MXNet/Gluon 中文频道
In the previous lesson we introduced R-CNN, Fast R-CNN, Faster R-CNN, and SSD for object detection. This lesson continues the topic. - [40min] Finishing the concrete implementation of SSD from last lesson http://zh.gluon.ai/chapter_computer-v... - [15min] YOLO: You Only Look Once http://zh.gluon.ai/chapter_computer-v... - [5min] Hands-on exercise: 120-class dog classification http://zh.gluon.ai/chapter_computer-v...
-
动手学深度学习第九课:物体检测·再续 - Apache MXNet/Gluon 中文频道
The connection mysteriously dropped during the last 15 minutes of the previous lesson. This lesson finishes the final training part of SSD and introduces YOLO and Mask R-CNN.
-
动手学深度学习第十课:语义分割 - Apache MXNet/Gluon 中文频道
Introduces semantic segmentation with fully convolutional networks (FCN).
-
动手学深度学习第十一课:样式迁移 - Apache MXNet/Gluon 中文频道
As the last lesson of the computer vision part, we show how to blend the gouache painting and the oak-tree photo from the header image into the background art shown there.
-
动手学深度学习第十二课:循环神经网络 - Apache MXNet/Gluon 中文频道
[5min]: Rewarding the participants in the ImageNet Dogs Kaggle competition [15min]: 李沐's NIPS'17 recap [40min]: Recurrent neural networks. We will train a recurrent neural network on the lyrics of 周杰伦's first ten albums to compose lyrics.
-
动手学深度学习第十三课:正向传播、反向传播和通过时间反向传播 - Apache MXNet/Gluon 中文频道
In the recurrent neural network example code from the previous lesson, can the model still train properly without gradient clipping? Why not? And why is gradient clipping unnecessary in feedforward networks? We covered optimization algorithms in lessons five and six: iterating model parameters with gradients is how neural networks are trained. But gradient computation in neural networks is often not intuitive, which makes it harder to diagnose training problems. To understand training more deeply, especially for recurrent networks, this lesson explores the key concepts behind gradient computation in deep learning: forward propagation, backpropagation, and backpropagation through time. Through this final lesson of 2017 we will better understand the essence of training deep learning models and find inspiration for improving recurrent networks. Agenda: [5min]: Gradient clipping for recurrent networks http://zh.gluon.ai/chapter_recurrent-... [25min]: Forward and backward propagation http://zh.gluon.ai/chapter_supervised... [30min]: Backpropagation through time for recurrent networks http://zh.gluon.ai/chapter_recurrent-...
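The gradient clipping discussed at the start of the lesson rescales the gradients whenever their global norm exceeds a threshold theta, which bounds the size of any single update. A numpy sketch:

```python
import numpy as np

def clip_gradients(grads, theta):
    """Clip gradients by global norm: if ||g|| > theta, scale every
    gradient by theta / ||g||, the standard remedy for exploding
    gradients in RNN training."""
    norm = np.sqrt(sum((g ** 2).sum() for g in grads))
    if norm > theta:
        for g in grads:
            g *= theta / norm
    return grads

g = [np.array([3.0, 4.0])]         # global norm is 5
clip_gradients(g, theta=1.0)
print(np.sqrt((g[0] ** 2).sum()))  # norm is now 1.0
```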
-
动手学深度学习第十四课:实现、训练和应用循环神经网络 - Apache MXNet/Gluon 中文频道
Welcome to the first lesson of 2018. In the previous two lessons we explored the design of recurrent neural networks and the gradient computations needed to train them. In the new year's first lesson we return to "hands-on": we will run code to understand how to train and apply recurrent neural networks on sequential data. After the previous two lessons and this one, we will master a core skill: designing, implementing, training, and applying a recurrent neural network on sequential data from scratch, even without a deep learning framework. Agenda: [30 mins]: Mini-batch sampling of sequential data. [20 mins]: Designing, implementing, training, and applying recurrent neural networks. [10 mins]: Experiments and evaluation on language models.
-
动手学深度学习第十五课:门控循环单元(GRU)、长短期记忆(LSTM)、多层循环神经网络以及Gluon实现 - Apache MXNet/Gluon 中文频道
To capture long-range dependencies in sequential data, gated recurrent units (GRU) and long short-term memory (LSTM) are widely used. This lesson introduces the design and implementation of these two gated recurrent networks, along with multi-layer recurrent networks and implementing recurrent networks with Gluon. This is the last lesson of the recurrent neural network chapter. Agenda: [20 mins]: Gated recurrent units (GRU). [20 mins]: Long short-term memory (LSTM). [10 mins]: Multi-layer recurrent neural networks. [10 mins]: Implementing various recurrent networks with Gluon. Some links will be updated later.
-
动手学深度学习第十六课:词向量(word2vec) - Apache MXNet/Gluon 中文频道
Word embeddings have gradually become foundational knowledge in natural language processing. Using word2vec as the example, this lesson focuses on two models, the skip-gram model and the continuous bag-of-words (CBOW) model, and two approximate training methods: negative sampling and hierarchical softmax. Rough agenda: [10 mins]: Overview of word embeddings and word2vec. [15 mins]: The skip-gram model. [15 mins]: The CBOW model. [10 mins]: Negative sampling. [10 mins]: Hierarchical softmax.
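The skip-gram model's training data is just (center word, context word) pairs drawn from a sliding window over the text. A plain-Python sketch of the pair extraction:

```python
def skip_gram_pairs(tokens, window=2):
    """Generate (center, context) training pairs for skip-gram: each
    word predicts the words within `window` positions of it."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skip_gram_pairs(["the", "man", "loves", "his", "son"], window=1)
print(pairs[:4])
```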
-
动手学深度学习第十七课:GloVe、fastText和使用预训练的词向量 - Apache MXNet/Gluon 中文频道
Last lesson used word2vec to introduce the basics of word embeddings. This lesson covers more recent embeddings such as GloVe and fastText, plus applications of word embeddings, for example answering analogy questions with pretrained embeddings. Rough agenda: [30 mins]: Word embeddings: GloVe. [10 mins]: Word embeddings: fastText. [20 mins]: Using pretrained word embeddings.
-
动手学深度学习第十八课:seq2seq(编码器和解码器)和注意力机制 - Apache MXNet/Gluon 中文频道
With recurrent neural networks we learned how to map a sequence to a fixed-length output (such as a label). This lesson explores mapping a sequence to a variable-length output sequence (such as a variable-length label sequence). We introduce the design of seq2seq (encoder-decoder) and the attention mechanism, which are the foundations of neural machine translation. Rough agenda: [30 mins]: seq2seq (encoder-decoder). [30 mins]: The attention mechanism.
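The attention mechanism's core computation fits in a few lines: score the decoder query against each encoder state, softmax the scores into weights, and take the weighted sum of values. The unscaled dot-product scoring used in this numpy sketch is one common choice among several:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention(query, keys, values):
    """Dot-product attention: one score per encoder time step,
    normalized into weights, returning the weighted sum of values."""
    scores = keys @ query
    weights = softmax(scores)
    return weights @ values, weights

keys = values = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
context, w = attention(np.array([10.0, 0.0]), keys, values)
print(w.round(3))  # weight concentrates on the best-matching keys
```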
-
动手学深度学习第十九课:应用seq2seq和注意力机制:机器翻译 - Apache MXNet/Gluon 中文频道
Thanks to everyone who has been with us since the first lesson in September 2017. At last we have reached the end of season one. Last lesson introduced the design of seq2seq (encoder-decoder) and the attention mechanism; this lesson shows how to implement them with Gluon and apply them to machine translation. After the lesson we will run a survey; we sincerely ask everyone to share feedback on this course series, which will help us offer better hands-on deep learning courses in the future. Tutorial link for this lesson (continually updated): Applying seq2seq and attention: machine translation. Course feedback survey: https://discuss.gluon.ai/t/topic/4701
-
动手学深度学习番外篇:注意力机制概述 - Apache MXNet/Gluon 中文频道
In deep learning, attention is the key mechanism behind nonparametric models. Arguably it is the cornerstone of deep learning's recent major advances in natural language processing, computer vision, speech recognition, image synthesis, solving NP-hard problems, and reinforcement learning. Unlike the main lessons, which zoom in on details, this bonus episode takes a bird's-eye view: a brief look at the taxonomy, implementation, and pretraining of attention mechanisms, plus the latest progress on parameter reduction, structuring, and sparsification. In other words, because many topics are covered at once, we cannot go deep into the details of each method. For details, revisit lesson 18 on attention or the corresponding chapters of the book. Outline and timing: 10min: Nonparametric regression 10min: Attention mechanisms 10min: Hierarchical attention 10min: Recurrent attention 30min: Recurrent attention with outputs 40min: Multi-head attention, BERT, and GPT 10min: Parameter reduction, structuring, sparsification
-
-
Meta Learning
Few-shot learning is an important branch of deep learning that focuses on making models perform well with only a tiny amount of training data. This video series dives into the basic concepts, methods, and practical applications of few-shot learning, from foundational theory to concrete techniques such as Siamese networks and pretraining-plus-fine-tuning strategies, offering insight into how to train models effectively when data is scarce.
-
Few-Shot Learning (1/3): 基本概念 - Shusen Wang
This lesson covers the basic concepts of few-shot learning and meta-learning. The next lesson covers solving few-shot learning with a Siamese network.
-
Few-Shot Learning (2/3): Siamese Network (孪生网络) - Shusen Wang
This lesson covers solving few-shot learning with a Siamese network. Siamese networks are not the best meta-learning method, but studying them is very helpful for understanding other meta-learning algorithms.
-
Few-Shot Learning (3/3):Pretraining + Fine Tuning - Shusen Wang
Continuing with few-shot learning, this lesson covers solving it with pretraining plus fine-tuning. Although this class of methods is simple, its accuracy is comparable to the best methods.
-
-
Language Models (LM)
Language models are one of the foundations of natural language processing (NLP), used mainly to predict sequences of words in text. This video series gives a thorough introduction to language models, from basic concepts to more advanced techniques such as smoothing and ranking formulas. Topics include the unigram model, the zero-frequency problem, Laplace correction, absolute discounting, the Good-Turing estimate, interpolation, Jelinek-Mercer smoothing, and Dirichlet smoothing. This material is important for understanding how to build and use language models to process and generate natural language.
-
LLM Explained | What is LLM - codebasics
Simple and easy explanation of LLM or Large Language Model in less than 5 minutes. In this short video, you will build an intuition of how a large language model works using animation and simple storytelling. This is an explanation that even a high school student can understand easily.
-
LM.1 Overview - Victor Lavrenko
-
LM.2 What is a language model? - Victor Lavrenko
-
LM.3 Query likelihood ranking - Victor Lavrenko
-
LM.4 The unigram model (urn model) - Victor Lavrenko
-
LM.5 Zero-frequency problem - Victor Lavrenko
-
LM.6 Laplace correction and absolute discounting - Victor Lavrenko
-
LM.7 Good-Turing estimate - Victor Lavrenko
-
LM.8 Interpolation with background model - Victor Lavrenko
-
LM.9 Jelinek-Mercer smoothing - Victor Lavrenko
-
LM.10 Dirichlet smoothing - Victor Lavrenko
-
LM.11 Leave-one-out smoothing - Victor Lavrenko
-
LM.12 Smoothing and inverse document frequency - Victor Lavrenko
-
LM.13 Language model ranking formula - Victor Lavrenko
-
LM.14 Issues to consider - Victor Lavrenko
-
-
Natural Language Processing (NLP)
Natural language processing (NLP) is an important branch of artificial intelligence focused on enabling computers to understand and process human language. This lecture series dives into many facets of NLP, from basic word-vector representation techniques (such as word2vec and GloVe) to more advanced concepts including neural machine translation, attention mechanisms, and deep learning applications in NLP. The lectures also cover specialized NLP tasks such as dependency parsing and speech processing, as well as deep learning techniques like backpropagation. The content is valuable for students and researchers who want a deep understanding of NLP, especially its applications and challenges under deep learning frameworks.
-
Lecture 1 | Natural Language Processing with Deep Learning - Stanford University School of Engineering
Lecture 1 introduces the concept of Natural Language Processing (NLP) and the problems NLP faces today. The concept of representing words as numeric vectors is then introduced, and popular approaches to designing word vectors are discussed. Key phrases: Natural Language Processing. Word Vectors. Singular Value Decomposition. Skip-gram. Continuous Bag of Words (CBOW). Negative Sampling. Hierarchical Softmax. Word2Vec. ------------------------------------------------------------------------------- Natural Language Processing with Deep Learning Instructors: - Chris Manning - Richard Socher Natural language processing (NLP) deals with the key artificial intelligence technology of understanding complex human language communication. This lecture series provides a thorough introduction to the cutting-edge research in deep learning applied to NLP, an approach that has recently obtained very high performance across many different NLP tasks including question answering and machine translation. It emphasizes how to implement, train, debug, visualize, and design neural network models, covering the main technologies of word vectors, feed-forward models, recurrent neural networks, recursive neural networks, convolutional neural networks, and recent models involving a memory component.
-
Lecture 2 | Word Vector Representations: word2vec - Stanford University School of Engineering
Lecture 2 continues the discussion on the concept of representing words as numeric vectors and popular approaches to designing word vectors. Key phrases: Natural Language Processing. Word Vectors. Singular Value Decomposition. Skip-gram. Continuous Bag of Words (CBOW). Negative Sampling. Hierarchical Softmax. Word2Vec.
-
Lecture 3 | GloVe: Global Vectors for Word Representation - Stanford University School of Engineering
Lecture 3 introduces the GloVe model for training word vectors. Then it extends our discussion of word vectors (interchangeably called word embeddings) by seeing how they can be evaluated intrinsically and extrinsically. As we proceed, we discuss the example of word analogies as an intrinsic evaluation technique and how it can be used to tune word embedding techniques. We then discuss training model weights/parameters and word vectors for extrinsic tasks. Lastly we motivate artificial neural networks as a class of models for natural language processing tasks. Key phrases: Global Vectors for Word Representation (GloVe). Intrinsic and extrinsic evaluations. Effect of hyperparameters on analogy evaluation tasks. Correlation of human judgment with word vector distances. Dealing with ambiguity in word using contexts. Window classification.
-
Lecture 4: Word Window Classification and Neural Networks - Stanford University School of Engineering
Lecture 4 introduces single and multilayer neural networks, and how they can be used for classification purposes. Key phrases: Neural networks. Forward computation. Backward propagation. Neuron Units. Max-margin Loss. Gradient checks. Xavier parameter initialization. Learning rates. Adagrad.
-
Lecture 5: Backpropagation and Project Advice - Stanford University School of Engineering
Lecture 5 discusses how neural networks can be trained using a distributed gradient descent technique known as back propagation. Key phrases: Neural networks. Forward computation. Backward propagation. Neuron Units. Max-margin Loss. Gradient checks. Xavier parameter initialization. Learning rates. Adagrad.
-
Lecture 6: Dependency Parsing - Stanford University School of Engineering
Lecture 6 covers dependency parsing which is the task of analyzing the syntactic dependency structure of a given input sentence S. The output of a dependency parser is a dependency tree where the words of the input sentence are connected by typed dependency relations. Key phrases: Dependency Parsing.
-
Lecture 7: Introduction to TensorFlow - Stanford University School of Engineering
Lecture 7 covers Tensorflow. TensorFlow is an open source software library for numerical computation using data flow graphs. It was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research. Key phrases: TensorFlow.
-
Lecture 10: Neural Machine Translation and Models with Attention - Stanford University School of Engineering
Lecture 10 introduces translation, machine translation, and neural machine translation. Google's new NMT is highlighted followed by sequence models with atte...
-
Lecture 12: End-to-End Models for Speech Processing - Stanford University School of Engineering
Lecture 12 looks at traditional speech recognition systems and motivation for end-to-end models. Also covered are Connectionist Temporal Classification (CTC) and Listen Attend and Spell (LAS), a sequence-to-sequence based model for speech recognition.
-
Lecture 15: Coreference Resolution - Stanford University School of Engineering
Lecture 15 covers what coreference is, via a working example. Also includes the research highlight "Summarizing Source Code", an introduction to coreference resolution, and neural coreference resolution.
-
Lecture 17: Issues in NLP and Possible Architectures for NLP - Stanford University School of Engineering
Lecture 17 looks at solving language, efficient tree-recursive models SPINN and SNLI, as well as research highlight "Learning to compose for QA." Also covered are interlude pointer/copying models and sub-word and character-based models.
-
Lecture 18: Tackling the Limits of Deep Learning for NLP - Stanford University School of Engineering
Lecture 18 looks at tackling the limits of deep learning for NLP followed by a few presentations.
-
How to Do Sentiment Analysis - Intro to Deep Learning #3 - Siraj Raval
In this video, we'll use machine learning to help classify emotions! The example we'll use is classifying a movie review as either positive or negative via TF Learn in 20 lines of Python. Coding Challenge for this video: https://github.com/llSourcell/How_to_... Ludo's winning code: https://github.com/ludobouan/pure-num... See Jie Xun's runner up code: https://github.com/jiexunsee/Neural-N...
-
What is Word2Vec? A Simple Explanation | Deep Learning Tutorial 41 (Tensorflow, Keras & Python) - codebasics
A very simple explanation of word2vec. This video gives an intuitive understanding of how word2vec algorithm works and how it can generate accurate word embeddings for words such that you can do math with words (a famous example is king - man + woman = queen) Part 2 (Coding): • Word2Vec Part 2 | Implement word2vec ... Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist : https://www.youtube.com/playlist?list... Do you want to learn technology from me? Check https://codebasics.io/?utm_source=des... for my affordable video courses. 🔖Hashtags🔖 #word2vecexplained #word2vec #nlpword2vec #nlpword2vectutorial #word2vecdeeplearning #word2vecpython #wordembeddings #wordembedding #pythonword2vec #deeplearning #word2vec #deeplearningtensorflow #deeplearningWord2Vec
-
Word2Vec Part 2 | Implement word2vec in gensim | | Deep Learning Tutorial 42 with Python - codebasics
We will train a word2vec model with the Python gensim library using Amazon product reviews. There is an exercise as well at the end of this video. Code: https://github.com/codebasics/deep-le... Part 1 (Theory): • What is Word2Vec? A Simple Explanatio... ⭐️ Timestamps ⭐️ 00:00 Introduction 00:46 Coding 16:53 Exercise Do you want to learn technology from me? Check https://codebasics.io/ for my affordable video courses. 🔖Hashtags🔖 #word2vecexplained #word2vec #nlpword2vec #nlpword2vectutorial #word2vecdeeplearning #word2vecpython #wordembeddings #wordembedding #pythonword2vec #wordembeddingpython #word2vecmodel #tensorflowword2vec #word2vectensorflow #word2vecembeddings #word2veckeras #kerasword2vec #wordembeddingeffect #word2vecnlp #deeplearningword2vec
-
Word2Vec (tutorial) - Siraj Raval
In this video, we'll use a Game of Thrones dataset to create word vectors. Then we'll map these word vectors out on a graph and use them to tell us related words that we input. We'll learn how to process a dataset from scratch, go over the word vectorization process, and visualization techniques all in one session. Code for this video: https://github.com/llSourcell/word_ve... Join us in our Slack channel: http://wizards.herokuapp.com/ More learning resources: https://www.tensorflow.org/tutorials/... https://radimrehurek.com/gensim/model... https://www.kaggle.com/c/word2vec-nlp... http://sebastianruder.com/word-embedd... http://natureofcode.com/book/chapter-...
-
Word Embedding and Word2Vec, Clearly Explained!!! - StatQuest
Words are great, but if we want to use them as input to a neural network, we have to convert them to numbers. One of the most popular methods for assigning n...
-
Word embedding using keras embedding layer | Deep Learning Tutorial 40 (Tensorflow, Keras & Python) - codebasics
In this video we will discuss how exactly word embeddings are computed. There are two techniques for this: (1) supervised learning, and (2) self-supervised learning techniques such as word2vec and GloVe. In this tutorial we will look at the first technique, supervised learning. We will also write code for food review classification and see how word embeddings are calculated while solving that problem. Code: https://github.com/codebasics/deep-le... Do you want to learn technology from me? Check https://codebasics.io/ for my affordable video courses. Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist : https://www.youtube.com/playlist?list... 🔖 Hashtags 🔖 #WordEmbeddingUsingKeras #WordEmbedding #EmbeddingLayerKeras #WordEmbeddingdeeplearning #WordembeddingswithKeras #wordembeddinginpython #wordembeddingpython #wordembeddingtensorflow
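Mechanically, an embedding layer is just a trainable lookup table: row i of a matrix is the vector for word id i. A numpy sketch of the lookup that a layer like `keras.layers.Embedding` performs (sizes here are illustrative):

```python
import numpy as np

# Trainable lookup table: one row per vocabulary word.
vocab_size, embed_dim = 10, 4
rng = np.random.default_rng(0)
embedding = rng.normal(size=(vocab_size, embed_dim))

# A "sentence" is a sequence of word ids; the lookup is row indexing.
sentence = np.array([3, 1, 7])
vectors = embedding[sentence]
print(vectors.shape)  # (3, 4): one vector per word
```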
-
-
Human Language Processing
Human language processing is a broad field spanning natural language processing (NLP) and speech processing, focused on enabling computers to understand, interpret, and generate human language. This lecture series dives into many aspects of the field, including speech recognition, speech synthesis, speaker verification, and natural language understanding and generation. From the basic BERT model to the more complex GPT-3, the lectures provide an in-depth analysis of current human language processing technology, plus close examinations of specific NLP tasks such as coreference resolution, constituency parsing, dependency parsing, and question answering.
-
Deep Learning for Language Modeling - 李宏毅
-
[DLHLP 2020] Deep Learning for Human Language Processing (Course Overview) - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/Introduction%20(v9).pdf The audio demos in the video have an echo; it is not part of the original demo audio but was caused by the classroom's PA system, so please mentally filter out the echo.
-
[DLHLP 2020] Speech Recognition (1/7) - Overview - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/ASR%20(v12).pdf
-
[DLHLP 2020] Speech Recognition (2/7) - Listen, Attend, Spell - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/ASR%20(v12).pdf
-
[DLHLP 2020] Speech Recognition (3/7) - CTC, RNN-T and more - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/ASR%20(v12).pdf
-
[DLHLP 2020] Speech Recognition (4/7) - HMM (optional) - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/ASR2%20(v6).pdf We forgot to turn off the livestream while recording, so there is noticeable background speech noise; apologies.
-
[DLHLP 2020] Speech Recognition (5/7) - Alignment of HMM, CTC and RNN-T (optional) - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/ASR2%20(v6).pdf We forgot to turn off the livestream while recording, so there is noticeable background speech noise; apologies.
-
[DLHLP 2020] Speech Recognition (6/7) - RNN-T Training (optional) - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/ASR2%20(v6).pdf RNN-T training is fairly involved; even if this part is hard to follow, it will not affect the rest of the course.
-
[DLHLP 2020] Speech Recognition (7/7) - Language Modeling - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/ASR3.pdf
-
[DLHLP 2020] Voice Conversion (1/2) - Feature Disentangle - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/Voice%20Conversion%20(v3).pdf
-
[DLHLP 2020] Voice Conversion (2/2) - CycleGAN and StarGAN - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/Voice%20Conversion%20(v3).pdf
-
[DLHLP 2020] Speech Separation (1/2) - Deep Clustering, PIT - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/SP%20(v3).pdf
-
[DLHLP 2020] Speech Separation (2/2) - TasNet - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/SP%20(v3).pdf Re-uploaded because the previously uploaded video was mis-edited.
-
[DLHLP 2020] Speech Synthesis (1/2) - Tacotron - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/TTS%20(v4).pdf
-
[DLHLP 2020] Speech Synthesis (2/2) - More than Tacotron - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/TTS%20(v4).pdf
-
[DLHLP 2020] Speaker Verification - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/Speaker%20(v3).pdf
-
[DLHLP 2020] Vocoder (由助教許博竣同學講授)
slides: https://docs.google.com/presentation/d/1HlX_RXu3mMnJSgcs9s7eV2roJLYRlw_nU9S0ayYlNpI/edit?usp=sharing
-
[DLHLP 2020] Overview of NLP Tasks - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/TaskShort%20(v9).pdf
-
[DLHLP 2020] BERT and its family - Introduction and Fine-tune - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/BERT%20train%20(v8).pdf
-
[DLHLP 2020] BERT and its family - ELMo, BERT, GPT, XLNet, MASS, BART, UniLM, ELECTRA, and more - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/BERT%20train%20(v8).pdf
-
[DLHLP 2020] 來自獵人暗黑大陸的模型 GPT-3 - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/GPT3%20(v6).pdf For GPT-2, see the earlier video: https://youtu.be/UYPa347-DdE?t=2963
-
[DLHLP 2020] Multilingual BERT - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/Multi%20(v2).pdf
-
[DLHLP 2020] Audio BERT (1/2) (由助教劉廷緯同學講授) - 李宏毅
slides: https://docs.google.com/presentation/d/1nToP2wYYTWG7b70CRZdaXfygADm-btSeMKWZkS4y7c0/edit?usp=sharing
-
[DLHLP 2020] Audio BERT (2/2) (由助教紀伯翰同學和助教楊書文同學講授)
slides: https://docs.google.com/presentation/d/1peS8IOo7kabESZwiWJLJFCsEXhjXFi8G0ep098DCd5c/edit?usp=sharing https://docs.google.com/presentation/d/13OnBS148PH...
-
[DLHLP 2020] Non-Autoregressive Sequence Generation (由助教莊永松同學講授)
slides: https://docs.google.com/presentation/d/1lnXSxPL3hZQ_OHyV_Lksz3UcOmPyiDqC1eg29Q5IAr0/edit
-
[DLHLP 2020] Text Style Transfer and Unsupervised Summarization/Translation/Speech Recognition - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/UnsupervisedNLP%20(v2).pdf
-
[DLHLP 2020] Deep Learning for Coreference Resolution - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/Coref%20(v2).pdf
-
[DLHLP 2020] Deep Learning for Constituency Parsing - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/ParsingC%20(v2).pdf
-
[DLHLP 2020] Deep Learning for Dependency Parsing - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/ParsingD%20(v2).pdf
-
[DLHLP 2020] Deep Learning for Question Answering (1/2) (重新上傳) - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/QA%20(v12).pdf Revised parts: -- the opening of the video -- 12:10: the method for generating the answer span
-
[DLHLP 2020] Deep Learning for Question Answering (2/2) - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/QA%20(v12).pdf
-
[DLHLP 2020] Controllable Chatbot - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/Chatbot%20(v6).pdf
-
[DLHLP 2020] Dialogue State Tracking (as Question Answering) - 李宏毅
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/DSTQA%20(v6).pdf
-
-
Transformer
The Transformer is a major innovation of recent years in deep learning, bringing revolutionary progress in processing sequential data, especially in natural language processing (NLP). Introduced by Google in 2017, its core is the self-attention mechanism, which lets models handle long-range dependencies in sequences more effectively. Transformer models have become the foundation of many language tasks, such as machine translation, text generation, and speech recognition. Derived architectures such as BERT and GPT have further advanced NLP, and even in computer vision, models like the Vision Transformer (ViT) demonstrate the architecture's versatility and power.
-
Transformer模型(1/2): 剥离RNN,保留Attention - Shusen Wang
The Transformer is currently the best solution for machine translation and other NLP problems, a big improvement over RNNs. This lesson and the next explain the Transformer model. This lesson strips away the RNN while keeping attention, designing the attention layer and the self-attention layer; the next lesson uses these two layer types plus fully connected layers to build a deep neural network: the Transformer. Rather than dissecting each Transformer component as other videos and blog posts do, my approach is to design a Transformer from scratch. I hope you will follow my line of thought and solve this problem with me: how do we build a deep network based purely on attention that can solve everything RNNs are good at? Slides: https://github.com/wangshusen/DeepLea...
-
Transformer模型(2/2): 从Attention层到Transformer网络 - Shusen Wang
The Transformer is currently the best solution for machine translation and other NLP problems, a big improvement over RNNs. This lesson and the previous one explain the Transformer model. This lesson uses attention layers and self-attention layers to build a deep neural network: the Transformer. Rather than dissecting each Transformer component as other videos and blog posts do, my approach is to design a Transformer from scratch. I hope you will follow my line of thought and solve this problem with me: how do we build a deep network based purely on attention that can solve everything RNNs are good at? Slides: https://github.com/wangshusen/DeepLea...
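The self-attention layer these two lessons build lets every position of a sequence attend to every other position of the same sequence, with Q, K, and V all derived from the input. A single-head numpy sketch (dimensions and the random weights are illustrative):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: Q, K, V are projections of the same
    input X; each row of the output is a weighted mix of all positions."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (seq, seq) scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                        # 4 positions, dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): same sequence length in, same out
```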
-
What is BERT? | Deep Learning Tutorial 46 (Tensorflow, Keras & Python) - codebasics
What is BERT (Bidirectional Encoder Representations From Transformers) and how is it used to solve NLP tasks? This video provides a very simple explanation of it. I am not going to go into details of how the transformer-based architecture works; instead I will give an overview so you understand the usage of BERT in NLP tasks. In the coding section we will generate sentence and word embeddings using BERT for some sample text. We will cover various topics such as, * Word2vec vs BERT * How BERT is trained on the masked language model and next sentence completion tasks ⭐️ Timestamps ⭐️ 00:00 Introduction 00:39 Theory 11:00 Coding in tensorflow Code: https://github.com/codebasics/deep-le... BERT article: http://jalammar.github.io/illustrated... Word2Vec video: • What is Word2Vec? A Simple Explanatio...
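The masked-language-model objective mentioned above corrupts the input by hiding a fraction of the tokens and training the model to recover them. A simplified plain-Python sketch (real BERT masks about 15% of tokens and replaces the chosen token with [MASK] only 80% of the time; here we always mask):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Replace ~mask_rate of tokens with [MASK], remembering the
    originals as prediction targets."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok          # the model must predict this
            masked.append("[MASK]")
        else:
            masked.append(tok)
    return masked, targets

masked, targets = mask_tokens("the cat sat on the mat".split(), 0.5)
print(masked)
```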
-
BERT (预训练Transformer模型) - Shusen Wang
The Transformer is currently the best solution for machine translation and other NLP problems, a big improvement over RNNs. Bidirectional Encoder Representations from Transformers (BERT) is the most common method for pretraining a Transformer and can substantially improve its performance.
-
Vision Transformer (ViT) 用于图片分类 - Shusen Wang
Vision Transformer (ViT) is a very new model, posted to arXiv in October 2020 and formally published in 2021. On all public datasets, ViT outperforms the best ResNet, provided it is pretrained on a large enough dataset; the larger the pretraining dataset, the clearer ViT's advantage. Slides: https://github.com/wangshusen/DeepLea...
-
Text Classification Using BERT & Tensorflow | Deep Learning Tutorial 47 (Tensorflow, Keras & Python) - codebasics
Using BERT and Tensorflow 2.0, we will write simple code to classify emails as spam or not spam. BERT will be used to generate sentence encoding for all emails and after that we will use a simple neural network with one drop out layer and one output layer. What is BERT? • What is BERT? | Deep Learning Tutoria... Code: https://github.com/codebasics/deep-le... Do you want to learn technology from me? Check https://codebasics.io/?utm_source=des... for my affordable video courses. Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist: • Machine Learning Tutorial Python | Ma... 🔖Hashtags🔖 #BERTModel #bertmodelnlppython #BERTtextclassification #BERTtutorial #tensorflowbert #tensorflowberttutorial
-
Self-Attention Variants - Hung-yi Lee
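For context on what these variants modify, vanilla scaled dot-product self-attention can be sketched in a few lines of NumPy (a minimal illustration; the weight matrices, shapes, and names are invented for the example):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Vanilla scaled dot-product self-attention over a sequence X of
    shape (seq_len, d_model). The variants covered in the lecture all
    aim to cheapen or restructure the O(n^2) score computation."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```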
-
-
Deep Learning: Advanced Topics
This playlist focuses on advanced areas of deep learning, including Neural Architecture Search (NAS), tree recursive neural networks and constituency parsing, distributed training methods, and the vanishing and exploding gradient problems. The NAS series explores how to optimize network architectures using random search, recurrent neural networks combined with reinforcement learning, and differentiable methods. The videos also cover autoencoders, different training strategies, and how to run distributed training in high-performance computing environments. Also included are tutorials on graph neural networks, an advanced deep learning method for graph-structured data. This series suits viewers interested in advanced deep learning topics, especially researchers and developers looking to understand the latest research and applications.
-
Neural Architecture Search (1/3): Basics & Random Search - Shusen Wang
This lecture introduces the basics of Neural Architecture Search, including the concepts of hyper-parameters, search space, and random search. Slides: https://github.com/wangshusen/DeepLea...
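The random-search baseline from this lecture amounts to sampling architectures from the search space and keeping the one with the lowest validation error. A toy sketch (the search space and the validation function are invented stand-ins; a real NAS run would train each candidate):

```python
import random

# Toy search space: each architecture is a choice per hyper-parameter.
SPACE = {
    "num_layers": [2, 4, 8],
    "width": [32, 64, 128, 256],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    return {name: rng.choice(options) for name, options in SPACE.items()}

def toy_validation_error(arch):
    # Stand-in for "train the candidate and measure validation error".
    return abs(arch["num_layers"] - 4) + abs(arch["width"] - 128) / 64

def random_search(n_trials=20, seed=0):
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(n_trials)]
    return min(candidates, key=toy_validation_error)

best = random_search()
```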
-
Neural Architecture Search (2/3): RNN + RL - Shusen Wang
This lecture continues the discussion of Neural Architecture Search, covering a method based on recurrent neural networks (RNN) and reinforcement learning. Slides: https://github.com/wangshusen/DeepLea... References: - Zoph & Le. Neural architecture search with reinforcement learning. In ICLR, 2017.
-
Neural Architecture Search (3/3): Differentiable Methods - Shusen Wang
This lecture introduces Differentiable Neural Architecture Search. Slides: https://github.com/wangshusen/DeepLea... References: - Liu, Simonyan, & Yang. DARTS: Differentiable Architecture Search. In ICLR, 2019. - Wu et al. FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search. In CVPR, 2019.
-
Lecture 14: Tree Recursive Neural Networks and Constituency Parsing - Stanford University School of Engineering
Lecture 14 looks at compositionality and recursion, followed by structure prediction with a simple Tree RNN for parsing. The research highlight "Deep Reinforcement Learning for Dialogue Generation" is covered, as is backpropagation through structure. Key phrases: RNN, Recursive Neural Networks, MV-RNN, RNTN
-
Distributed Training On NVIDIA DGX Station A100 | Deep Learning Tutorial 43 (Tensorflow & Python) - codebasics
Using TensorFlow's mirrored strategy, we will perform distributed training on an NVIDIA DGX Station A100 system. Distributed training splits the training workload across the GPUs of a multi-GPU system. We will see how performance can be optimized and training times reduced using this approach. Code: https://github.com/codebasics/deep-le... Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist: https://www.youtube.com/playlist?list... DGX station A100: https://www.nvidia.com/en-us/data-cen... 🔖Hashtags🔖 #deeplearningmultigpu #deeplearninggpusetup #tensorflowdistributedtraining #tensorflowmirroredstratergy #distributedtraining #dgxa100 #nvidiadgxa100
-
Vanishing and exploding gradients | Deep Learning Tutorial 35 (Tensorflow, Keras & Python) - codebasics
Vanishing gradients are a common problem encountered while training a deep neural network with many layers. The problem is especially prominent in RNNs, since unrolling a network layer in time makes it behave like a deep neural network with many layers. In this video we will discuss what vanishing and exploding gradients are in artificial neural networks (ANN) and recurrent neural networks (RNN). Do you want to learn technology from me? Check https://codebasics.io/ for my affordable video courses. Deep learning playlist: • Deep Learning With Tensorflow 2.0, Ke... Machine learning playlist: https://www.youtube.com/playlist?list... #vanishinggradient #gradient #gradientdeeplearning #deepneuralnetwork #deeplearningtutorial #vanishing #vanishingdeeplearning
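The mechanism behind both failure modes is repeated multiplication: backpropagation multiplies one derivative factor per layer (or per unrolled time step in an RNN), so factors below 1 shrink the gradient exponentially while factors above 1 blow it up. A minimal sketch (the factor values and depth are invented for illustration):

```python
# Each layer contributes one multiplicative factor to the gradient.
# With 50 layers (or 50 unrolled RNN time steps):
def gradient_magnitude(factor_per_layer, depth):
    """Rough gradient magnitude after backprop through `depth` layers."""
    return factor_per_layer ** depth

vanishing = gradient_magnitude(0.5, 50)   # small derivatives -> ~0
exploding = gradient_magnitude(1.5, 50)   # large derivatives -> huge
```

This is why remedies such as careful initialization, ReLU activations, gradient clipping, and gated cells (LSTM/GRU) target the size of these per-layer factors.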
-
Transfer Learning | Deep Learning Tutorial 27 (Tensorflow, Keras & Python) - codebasics
📺 Transfer learning is a very important concept in the fields of computer vision and natural language processing. Using transfer learning you can take a pre-trained model and customize it for your needs, saving computation time and money. It has been a revolutionary breakthrough in deep learning, and nowadays you see it used widely in industry. In this video we will go over some theory behind transfer learning and then use Google's MobileNet V2 pre-trained model to train on our flowers dataset #transferlearning #transferlearningdeeplearning #transferlearningkeras #transferlearningtensorflow #transferlearningmodels #deeplearningtutorial #transferlearningexample
-
Deep Learning :: Denoising Autoencoder @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Deep Learning :: Autoencoder @ Machine Learning Techniques (機器學習技法) - 林軒田
-
Differentiable Neural Computer (LIVE) - Siraj Raval
The Differentiable Neural Computer is an awesome model that DeepMind recently released. It's a memory-augmented network that can perform meta-learning (learning to learn). We'll go over its architecture details and implement it ourselves in Tensorflow. Code for this video: https://github.com/llSourcell/differe... Please Subscribe! And like. And comment. That's what keeps me going. More learning resources: https://deepmind.com/blog/differentia... https://www.quora.com/How-groundbreak... https://github.com/dsindex/blog/wiki/... https://blog.acolyer.org/2016/03/09/n... https://thenewstack.io/googles-deepmi...
-
[TA Supplementary Lecture] Graph Neural Network (1/2) (taught by TA 姜成翰)
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML2020/GNN.pdf
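The message-passing idea at the heart of graph neural networks can be sketched without any framework. Below, one round of mean aggregation on a toy graph (the graph, features, and function name are invented for illustration; GCN-style layers additionally apply a learned weight matrix and nonlinearity):

```python
# Adjacency list of a tiny undirected graph and per-node feature vectors.
graph = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [0.0, 0.0]}

def message_pass(graph, features):
    """One round of message passing: each node's new feature is the
    mean of its neighbors' features together with its own."""
    updated = {}
    for node, neighbors in graph.items():
        msgs = [features[n] for n in neighbors] + [features[node]]
        updated[node] = [sum(vals) / len(msgs) for vals in zip(*msgs)]
    return updated

updated = message_pass(graph, features)
```

Stacking several such rounds lets information flow between nodes that are several hops apart.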
-
[TA Supplementary Lecture] Graph Neural Network (2/2) (taught by TA 姜成翰)
slides: http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML2020/GNN.pdf
-
-
Hands-On Projects
This playlist focuses on practical applications and hands-on deep learning projects. The videos cover building sequence-to-sequence (seq2seq) models with TensorFlow, video generation, style transfer, art generation, music generation, text summarization, and more. They also include tutorials on time-series analysis with TensorFlow and on building chatbots. Siraj Raval's series explores building startup projects in areas such as finance, marketing, AI, trading bots, and education. Edureka's videos provide concrete machine learning project examples, such as building a document scanner with OpenCV, colorizing old photographs with autoencoders, handwritten digit recognition on the MNIST dataset, and generating images with DC-GANs. These videos suit viewers who want to apply deep learning theory to real problems and are interested in building applications and projects around deep learning.
-
How to Use Tensorflow for Seq2seq Models (LIVE) - Siraj Raval
Let's build a Sequence to Sequence model in Tensorflow to learn exactly how they work. You can use this model to make chatbots, language translators, text generators, and much more . We'll go over memory, attention, and some variants (like bidirectional layers) both programmatically and mathematically. Code for this video: https://github.com/llSourcell/seq2seq...
-
How to Generate Video - Intro to Deep Learning #15 - Siraj Raval
Generative Adversarial Networks. It's time. We're going to use a Deep Convolutional GAN to generate images of the alien language from the movie arrival that we can then stitch together to animate into video. I'll go over the architecture of a GAN and then we'll implement one ourselves! Code for this video (coding challenge included): https://github.com/llSourcell/how_to_... Nemanja's winning code: https://github.com/Nemzy/video_generator Niyas' Runner up code: https://github.com/niazangels/vae-pok... and his blog post: https://hackernoon.com/how-to-autoenc...
-
How to Do Style Transfer with Tensorflow (LIVE) - Siraj Raval
We're going to learn about all the details of style transfer (especially the math) using just Tensorflow. The goal of this session is for you to understand the details behind how style+content loss is calculated and minimized. We'll also talk about future discoveries. Code for this video: https://github.com/llSourcell/How_to_...
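The style term of the loss discussed here is built from Gram matrices of feature maps. A minimal NumPy sketch of that piece (shapes and names are my own; a real implementation takes the features from a pretrained CNN such as VGG):

```python
import numpy as np

def gram_matrix(features):
    """Style is captured by feature correlations: for a feature map of
    shape (channels, height, width), G[i, j] is the inner product
    between the flattened channels i and j."""
    C = features.reshape(features.shape[0], -1)
    return C @ C.T

def style_loss(gen_features, style_features):
    """Squared difference between Gram matrices, normalized by map size."""
    Gg, Gs = gram_matrix(gen_features), gram_matrix(style_features)
    n = gen_features.size
    return float(((Gg - Gs) ** 2).sum()) / (4 * n ** 2)

rng = np.random.default_rng(0)
f = rng.normal(size=(3, 4, 4))   # toy feature map: 3 channels, 4x4
```

The content term, by contrast, compares raw feature maps directly; style transfer minimizes a weighted sum of the two.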
-
How to Generate Art - Intro to Deep Learning #8 - Siraj Raval
We're going to learn how to use deep learning to convert an image into the style of an artist that we choose. We'll go over the history of computer generated art, then dive into the details of how this process works and why deep learning does it so well. Coding challenge for this video: https://github.com/llSourcell/How-to-... Itai's winning code: https://github.com/etai83/lstm_stock_... Andreas' runner up code: https://github.com/AndysDeepAbstracti...
-
How to Generate Music with Tensorflow (LIVE) - Siraj Raval
This live session will focus on the details of music generation using the Tensorflow library. The goal is for you to understand the details of how to encode music, feed it to a well tuned model, and use it to generate really cool sounds. And I'm going to NOT use Google Hangouts, instead I'll do this with a green screen and a DSLR camera :) Code for this video: https://github.com/llSourcell/music_d...
-
How to Generate Music - Intro to Deep Learning #9 - Siraj Raval
We're going to build a music generating neural network trained on jazz songs in Keras. I'll go over the history of algorithmic generation, then we'll walk step by step through the process of how LSTM networks help us generate music. Coding Challenge for this video: https://github.com/llSourcell/How-to-... Vishal's Winning Code: https://github.com/erilyth/DeepLearni... Michael's Runner up code: https://github.com/michalpelka/How-to... More Learning Resources: https://medium.com/@shiyan/understand... http://mourafiq.com/2016/05/15/predic... https://magenta.tensorflow.org/2016/0... http://deeplearning.net/tutorial/rnnr... https://maraoz.com/2016/02/02/abc-rnn/ http://www.cs.cmu.edu/~music//cmsip/s... http://www.hexahedria.com/2015/08/03/...
-
How to Make a Text Summarizer - Intro to Deep Learning #10 - Siraj Raval
I'll show you how you can turn an article into a one-sentence summary in Python with the Keras machine learning library. We'll go over word embeddings, encoder-decoder architecture, and the role of attention in learning theory. Code for this video (Challenge included): https://github.com/llSourcell/How_to_... Jie's Winning Code: https://github.com/jiexunsee/rudiment... More Learning resources: https://www.quora.com/Has-Deep-Learni... https://research.googleblog.com/2016/... https://en.wikipedia.org/wiki/Automat... http://deeplearning.net/tutorial/rnns... http://machinelearningmastery.com/tex...
-
How to Generate Your Own Wikipedia Articles (LIVE) - Siraj Raval
We're going to build an LSTM network in Tensorflow (no Keras) to generate text after training on Wikipedia articles. You'll learn how an LSTM cell works programmatically, since we'll build one using TF's math functions, and how you can parse a similar dataset. Code: https://github.com/llSourcell/wiki_ge... Dataset: https://metamind.io/research/the-wiki...
-
How to Make a Language Translator - Intro to Deep Learning #11 - Siraj Raval
Let's build our own language translator using Tensorflow! We'll go over several translation methods and talk about how Google Translate is able to achieve state of the art performance. Code for this video: https://github.com/llSourcell/How_to_... Ryan's Winning Code: https://github.com/rtlee9/recipe-summ... Sarah's Runner-up Code: https://github.com/scollins83/teal_deer
-
How to Win Slot Machines - Intro to Deep Learning #13 - Siraj Raval
We'll learn how to solve the multi-armed bandit problem (maximizing success for a given slot machine) using a reinforcement learning technique called policy gradients. Code for this video: https://github.com/llSourcell/how_to_... Mike's winning code: https://github.com/xkortex/Siraj_Chat... Vishal's runner up code: https://github.com/erilyth/DeepLearni... this coding challenge was really close, so i'm also going to put code for 3rd place just this time (Eibriel): https://github.com/Eibriel/ice-cream-...
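The policy-gradient approach described here can be sketched as the classic gradient-bandit algorithm: keep a preference per arm, sample an arm from the softmax policy, and push preferences toward arms that pay out. A self-contained sketch (the payout probabilities and hyperparameters are invented for illustration):

```python
import math
import random

def softmax(prefs):
    exps = [math.exp(p - max(prefs)) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def train_bandit(payout_probs, steps=3000, lr=0.1, seed=0):
    """REINFORCE-style policy gradient on a multi-armed bandit,
    with a running-average reward baseline to reduce variance."""
    rng = random.Random(seed)
    prefs = [0.0] * len(payout_probs)
    baseline = 0.0
    for t in range(steps):
        policy = softmax(prefs)
        arm = rng.choices(range(len(prefs)), weights=policy)[0]
        reward = 1.0 if rng.random() < payout_probs[arm] else 0.0
        baseline += (reward - baseline) / (t + 1)
        for a in range(len(prefs)):
            # grad of log pi(arm) w.r.t. preference a
            grad = (1.0 if a == arm else 0.0) - policy[a]
            prefs[a] += lr * (reward - baseline) * grad
    return softmax(prefs)

policy = train_bandit([0.1, 0.8, 0.3])   # arm 1 pays out most often
```

After training, the policy should concentrate its probability mass on the highest-paying arm.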
-
How to Generate Images - Intro to Deep Learning #14 - Siraj Raval
We're going to build a variational autoencoder capable of generating novel images after being trained on a collection of images. We'll be using handwritten digit images as training data. Then we'll both generate new digits and plot out the learned embeddings. And I introduce Bayesian theory for the first time in this series :) Code for this video: https://github.com/llSourcell/how_to_... Mike's Winning Code: https://github.com/xkortex/how_to_win... SG's Runner up Code: https://github.com/esha-sg/Intro-Deep...
-
How to Use Tensorflow for Time Series (Live) - Siraj Raval
We're going to use Tensorflow to predict the next event in a time series dataset. This can be applied to any kind of sequential data. Code for this video: https://github.com/llSourcell/rnn_tut...
-
How to Make a Chatbot - Intro to Deep Learning #12 - Siraj Raval
Let's make a question-answering chatbot using the bleeding edge in deep learning (Dynamic Memory Network). We'll go over different chatbot methodologies, then dive into how memory networks work, with accompanying code in Keras. Code + Challenge for this video: https://github.com/llSourcell/How_to_... Nemanja's Winning Code: https://github.com/Nemzy/language-tra... Vishal's Runner up code: https://github.com/erilyth/DeepLearni... Web app to run the code yourself: https://ethancaballero.pythonanywhere...
-
How to Convert Text to Images - Intro to Deep Learning #16 - Siraj Raval
Generative Adversarial Networks are back! We'll use the cutting edge StackGAN architecture to let us generate images from text descriptions alone. This is pretty wild stuff and there is so much room for improvement. The possibilities are endless. I'll go through the architecture, code, and the implications of this technology for humanity. Special shoutout to new Patrons Joshua Tobkin, Cameron Tofer, and Zarathustra Technologies. I'll add you guys to the credits next video.
-
Building a Health DAO with GitHub CoPilot (AlphaCare: Episode 5) - Siraj Raval
Decentralized Autonomous Organizations (DAOs) are the future of the Web. In this episode, we'll build a simple DAO that lets users submit their health data to a marketplace and lets health organizations buy that data, with the proceeds going directly to users. This is an idea I've been toying with for a few weeks, as I'm really interested in ways of incentivizing people to be healthy. We'll learn about how DAOs work, then we'll build a toy DAO example. I'll then show you how to build a DAO without coding, then we'll build the Health DAO using Solidity, Truffle, Metamask, Ganache, IPFS, and Javascript. Interspersed in all of this are demos of GitHub's CoPilot tool that I now have access to, which auto-completes your code in an incredible way. Get hype!
-
Mint a Genome NFT (AlphaCare: Episode 4) - Siraj Raval
Health data is among the most valuable types of data, and using blockchain technology, patients can earn a passive income from sharing it with researchers and clinics. In this episode, I'll show you how I minted an NFT (non-fungible token) of my genome that I sequenced from AncestryDNA. I used the Polygon and Ethereum blockchains to do this, and it's currently listed for sale on the OpenSea marketplace. NFTs are just getting started; they enable property rights for the metaverse by artificially inducing scarcity on an otherwise infinitely replicable resource: data. We are just at the beginning of the NFT revolution, and we have a lot to discuss in this video, including how Ethereum, Polygon, and genomic visualization work. Get ready for Python, Javascript, and Solidity programming. Enjoy!
-
Multiomics Data for Cancer Diagnosis (AlphaCare: Episode 3) - Siraj Raval
The number of molecular biology datasets available is growing exponentially every month. Multiomics consists of all the layers of the molecular biome: the genome, epigenome, transcriptome, proteome, and metabolome. In this episode, we're going to learn how each layer of the molecular biology stack works, and then look at 3 different real-world use cases for cancer patients (diagnostic, prognostic, and predictive) using open-source Python code on GitHub. Then we'll look at how a Generative Adversarial Network can be used to generate synthetic genomic data to battle imbalanced classes. Enjoy!
-
Perceiver for Cardiac Video Data Classification (AlphaCare: Episode 2) - Siraj Raval
DeepMind recently released a new type of Transformer called the Perceiver IO, which was able to achieve state of the art accuracy across multiple data types (text, images, point clouds, and more). In this episode of the AlphaCare series, I'll explain how Perceiver works, and how we used it to improve accuracy scores for Cardiac video data. The EchoNet dataset was recently made public by Stanford University, and it contains 10K privatized heart videos from patients. We'll also discuss why Transformer networks work so well, and how by using 2 key features (Cross attention & positional embeddings), the Perceiver improved on all variants of Transformers. Get hype!
-
Financial Forecasting using Tensorflow.js (LIVE) - Siraj Raval
Can we use convolutional neural networks for time series analysis? It seems like a strange use case for convolutional networks, since they are generally used for image-related tasks. But in recent months, more and more papers have started using convolutional networks for sequence classification. And since stock prices are a sequence, we can use them to make predictions. In this video, I'll use the popular tensorflow.js library to test out a prediction model for Apple stock. I'll also talk about how recurrent networks work as background. This is my first proper live stream in a year. Get hype!
-
Cancer Detection Using Deep Learning | Deep Learning Projects | CNN | Edureka
This Edureka video on Cancer Detection Using Deep Learning will help you understand how to develop models using Convolutional Neural Networks. We will also have a discussion on improving model accuracy using pretrained models. Below are the topics covered in the Cancer Detection Using Deep Learning video: 00:00:00 Introduction 00:00:52 Introduction to Deep Learning 00:02:57 Deep Learning General Intuition 00:05:43 Image Processing Using DL 00:16:54 Brain Tumor Detection Using Custom Model 01:07:43 Transfer Learning 01:11:21 CNN Architectures
-
Building Document Scanner Using OpenCV | Machine Learning Project | Edureka
This Edureka video on 'Building Document Scanner Using OpenCV' will give you an overview of building a document scanner using OpenCV with machine learning and will help you understand the various important concepts involved. The following pointers are covered in this Building Document Scanner Using OpenCV tutorial: 1) Agenda 00:00 2) Problem Statement 00:50 3) Tools and Frameworks 01:52 4) Hands-on: Project 02:34
-
Color Old Photographs Using Autoencoders | Machine Learning Projects | Edureka
This Edureka video on 'Color Old Photographs Using Autoencoders' will give you an overview of how to color your old photographs with the help of machine learning. The following pointers are covered in this Color Old Photographs Using Autoencoders video: 00:00:00 Agenda 00:00:47 Problem Statement 00:02:40 Tools and Frameworks 00:03:16 Project
-
Handwritten Digit Recognition on MNIST dataset | Machine Learning Projects | keras | Edureka
This Edureka video on 'Handwritten Digit Recognition on the MNIST Dataset' will give you an overview of handwritten digit recognition on MNIST using machine learning. The following pointers are covered in this video: 00:00:00 Agenda 00:00:46 Problem Statement 00:02:32 Tools and Frameworks 00:03:12 Project
-
Cartoon Effect on Image using OpenCV | Machine Learning Project | Edureka - YouTube
This Edureka video on 'Cartoon Effect on Image using OpenCV' will give you an overview of applying a cartoon effect to an image using machine learning and will help you understand the various important concepts involved. The following pointers are covered in this Cartoon Effect on Image using OpenCV video: 1) Introduction 2) Tools and Frameworks 3) Project
-
Plant Leaf Disease Detection GUI | Machine Learning Projects | Edureka - YouTube
This Edureka video on 'Plant Leaf Disease Detection with GUI' will give you an overview of how to detect various plant leaf diseases using image processing, with a GUI. The following pointers are covered in this Plant Leaf Disease Detection with GUI video: 00:00:00 Agenda 00:00:53 Problem Statement 00:02:26 Tools and Frameworks 00:02:58 Project
-
Emoji Prediction using LSTM | Machine Learning Projects | Edureka - YouTube
This Edureka video on 'Emoji Prediction using LSTM' will help you predict emojis using LSTM. The following pointers are covered in this Emoji Prediction using LSTM video: 00:00:00 Agenda 00:00:43 Introduction 00:02:30 Tools and Frameworks 00:02:54 Project
-
Generate Images Using DC-GAN | Generative Adversarial Networks | Machine Learning Projects 6|Edureka
This Edureka video on 'Generate Images Using DC-GANs' will give you an overview of generating images using DC-GANs with machine learning and will help you understand the various important concepts involved. The following pointers are covered in this Generate Images Using DC-GANs video: Agenda 00:00 Problem Statement 00:45 Workflow of the Project 01:25 Tools and Frameworks 01:55 Hands-on: Project 02:30
-
Watch Me Build a Finance Startup - Siraj Raval
I've built an app called Artificial Advisor that helps you manage your personal finances. After connecting to your bank account, it automatically categorizes your transactions and helps you allocate a monthly budget. You can ask the app questions about your budget, and it will also make automated investment decisions for you in several stocks in the industry of your choosing. In this lecture, I'll explain the code and thought process I used to build it so that you can build your own finance startup. I used Tensorflow + Firebase + Plaid + Dialogflow + Alpaca to build this. Enjoy!
-
Watch Me Build a Marketing Startup - Siraj Raval
I've built an app called VectorFunnel that automatically scores leads for marketing & sales teams! I used React for the frontend, Node.js for the backend, PostgreSQL for the database, and Tensorflow.js for scoring each lead in an excel spreadsheet. There are a host of other tools that I used like ClearBit's data API and various Javascript frameworks. If you have no idea what any of that is, that's ok I'll show you! In this video, I'll explain how I built the app so that you can understand how all these parts fit together. The learning goal here is to give you enough of an idea of how these tools work to be able to formulate a plan for your own marketing startup MVP (minimum viable product). Enjoy!
-
Watch Me Build an AI Startup - Siraj Raval
I'm going to build a medical imaging classification app called SmartMedScan! The potential customers for this app are medical professionals that need to scale and improve the accuracy of their diagnoses using AI. From ideation, to logo design, to integrating features like payments and AI into a single app, I'll show you my 10 step process. I hope that by seeing my thought process and getting familiar with the sequence of steps I'll demonstrate, you too will be as inspired as I am to use this technology to do something great for the world. Enjoy!
-
Watch Me Build a Trading Bot - Siraj Raval
I've built a cryptocurrency trading bot called GradientTrader, and in this video I'll show you the tools I used to build it! It uses a graphical interface that lets you back-test on historical data, simulate paper trading, and implement a custom trading strategy for the real markets. The technique I used was a cutting edge Deep Reinforcement Learning strategy called Multi Agent Actor Critic. I'll explain it all here, enjoy!
-
Watch Me Build an Education Startup - Siraj Raval
I've built a tool for teachers that automatically grades and validates essays using modified versions of popular language models, specifically BERT and GPT-2. It's called EssayBrain, and I built it using the Python programming language, as well as Flask, Tensorflow.js, Tensorflow, D3.js, CopyLeaks, Stripe, and Firebase. In this video tutorial, I'll guide you through my process as I build this project. The code is open source and I'll link to it below. Use it as inspiration to start your own profitable business in this space. We've got to upgrade education, and with the power of technology anyone anywhere can create a viable engineering solution that creates a positive impact. Enjoy!
-
How to Build a Healthcare Startup - Siraj Raval
I'm going to show you the entire process I used to conceive, design, and build the prototype for a healthcare startup business! This app is called "Macy", your personal Yoga instructor. It uses a machine learning model called PoseNet to detect human poses and overlay a skeleton stick figure on top. I retrained PoseNet on labeled Yoga pose images so it could detect when a person correctly performs a certain pose, then I used speech generation to have the Flutter app guide your actions from pose to pose, just as a real Yoga instructor would do. The goal is to help a user reduce stress, anxiety, and depression through a series of guided meditative poses. I've integrated a subscription service and some interesting design schemes, but the app isn't finished! There is still more to do. The point is to give you a starting template for your own profitable business. Enjoy!
-
How to Build a Bitcoin Startup - Siraj Raval
Healthpass is a secure cloud store for all your health records! It serves patients by giving them one place to store and view all of their health data across all providers globally. It serves providers by ensuring that health records are stored securely via a blockchain and allows them to see a patient's previous health history easily. To build this app I used Node.js, Firebase, Bitcoin's blockchain, Paypal, Stripe, Tesseract.js, BioBERT, and a health data visualization library called hFigures. The reason I built this app is to give you an idea of how a "bitcoin" startup would look practically, programmatically, and visually. I hope this inspires you to build something cool, and I even managed to fit a rap about SHA-256 into it. Enjoy!
-
How to Build a Retail Startup - Siraj Raval
I've built a demo app called SmartSneaks that lets a user convert a song or image into a generated shoe design! This is an example of how AI can be used to transform retail by giving users a more personalized experience. The tools I used to build this are the Flutter framework for mobile development and the Flask framework for web development. There are 3 learning objectives in this video including how to build a deep learning API for your mobile app, how to generate images with a generative adversarial network, and how to calculate image similarity with OpenCV. I hope you find this demo tutorial useful, enjoy! Code for this video: https://github.com/llSourcell/Build-a...
-
How to Build a Biomedical Startup - Siraj Raval
I've built an open source app called Dr Source, your personal medical question answering service! It uses a model called BioBERT trained on over 700K Q&As from PubMed, HealthTap, and other health related websites. I used Flutter to build an app around it and present it to you as a more thought out idea. I was excited by what I saw possible with BioBERT's output in Python notebooks, and thought a cleaner interface could absolutely make it a viable business. There are millions of people in this world without access to healthcare, and while this app isn't perfect, an automated diagnosis is better than no diagnosis. I urge you to use this code and video as a starting point in your journey to generate value for the world, and build wealth while doing so. Enjoy! Code for this video (and presentation): https://github.com/llSourcell/How-to-...
-
How to Predict Music You Love (LIVE) - Siraj Raval
In this video, we're going to look at several different types of recommender systems in an IPython notebook: popularity-based, item-item collaborative, then user-item collaborative. Then we'll touch on the bleeding edge in deep learning at the end. Also I freestyle. Twice lol. Code for this video: https://github.com/llSourcell/recomme...
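The item-item collaborative approach mentioned above boils down to comparing rating columns: two items are similar when the same users rated them similarly. A minimal sketch with an invented rating matrix (cosine similarity between item columns; names are my own):

```python
import math

# Rows: users, columns: items (0 = not rated).
ratings = [
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def item_similarity(ratings):
    """Item-item collaborative filtering: cosine similarity between
    every pair of item rating columns."""
    cols = list(zip(*ratings))
    n = len(cols)
    return [[cosine(cols[i], cols[j]) for j in range(n)] for i in range(n)]

sim = item_similarity(ratings)
```

To recommend, you would score each unrated item for a user by the similarity-weighted ratings of the items they have already rated.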
-
Self-Driving Car with JavaScript Course – Neural Networks and Machine Learning - freeCodeCamp.org
Learn how to create a neural network using JavaScript with no libraries. In this course you will learn to make a self-driving car simulation by implementing every component step-by-step. You will learn how to implement the car driving mechanics, how to define the environment, how to simulate some sensors, how to detect collisions, and how to make the car control itself using a neural network. The course covers how artificial neural networks work, by comparing them with the real neural networks in our brain. You will learn how to implement a neural network and how to visualize it so we can see it in action. ✏️ Dr. Radu Mariescu-Istodor created this course. Check out his channel: / @radu 💻 Code: https://github.com/gniziemazity/Self-... ⭐️ Course Contents ⭐️ ⌨️ (0:00:00) Intro ⌨️ (0:03:44) Car driving mechanics ⌨️ (0:32:26) Defining the road ⌨️ (0:50:50) Artificial sensors ⌨️ (1:10:07) Collision detection ⌨️ (1:23:20) Simulating traffic ⌨️ (1:34:57) Neural network ⌨️ (2:03:10) Parallelization ⌨️ (2:18:31) Genetic algorithm ⌨️ (2:29:40) Ending ⭐️ Links ⭐️ 🔗 Radu's website (with enhanced version of code): https://radufromfinland.com 🔗 Radu's workplace (consider applying): https://karelia.fi/en/front-page 🔗 Segment intersection (Math and JavaScript code): • Segment intersection formula explained 🔗 Visualizing a neural network in JavaScript: • Self-driving car - No libraries - Jav... 🔗 Visualizer code: https://radufromfinland.com/projects/... 🔗 Drawing random color cars in JavaScript: • Self-driving car - No libraries - Jav... 🎉 Thanks to our Champion and Sponsor supporters: 👾 Raymond Odero 👾 Agustín Kussrow 👾 aldo ferretti 👾 Otis Morgan 👾 DeezMaster
-