====== Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning ======

===== Week 1: =====

==== Traditional Programming vs Machine Learning ====

{{:computer_science:machine_learning:coursera:tradtional_programming_vs_machine_learning.png?nolink&847x416}}
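The slide's contrast can be sketched directly in code: traditional programming feeds rules and data in to get answers out, while machine learning feeds data and answers in to get the rules out. A minimal sketch of that idea, where the speeds, labels, and threshold logic are made-up illustrations rather than anything taken from the course:

<code python>
# Traditional programming: hand-written rules applied to data produce answers.
def classify_activity(speed_mph):
    # The rules are explicit, chosen by the programmer.
    if speed_mph < 4:
        return "walking"
    else:
        return "running"

# Machine learning flips this around: given data and answers,
# training infers the rules (here, a single learned threshold).
def learn_threshold(speeds, labels):
    # Boundary halfway between the fastest "walking" sample
    # and the slowest "running" sample.
    walk = max(s for s, l in zip(speeds, labels) if l == "walking")
    run = min(s for s, l in zip(speeds, labels) if l == "running")
    return (walk + run) / 2

print(classify_activity(3))                       # rules + data -> answer
print(learn_threshold([2, 3, 6, 8],
                      ["walking", "walking",
                       "running", "running"]))    # data + answers -> rule
</code>

The learned threshold (4.5 here) plays the role of the model parameters that TensorFlow fits for you.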

==== Hello World of Neural Networks ====

{{:computer_science:machine_learning:coursera:helloworld_of_neuralnetworks.jpg?nolink&627x164}}
<code python>
from tensorflow import keras

# One-neuron neural network.
# Dense defines a layer of fully connected neurons;
# one layer with one unit gives a single neuron.
model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
</code>

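To make this one-neuron model learn, the lesson pairs it with a handful of (X, Y) samples and a call to ''model.fit''. A sketch, assuming the classic Y = 2X − 1 example data from the course (the data itself does not appear on this page); ''keras.Input'' is used here instead of the ''input_shape'' argument, which newer Keras versions deprecate:

<code python>
import numpy as np
from tensorflow import keras

# Same single-neuron model as above, with the input declared explicitly.
model = keras.Sequential([keras.Input(shape=(1,)), keras.layers.Dense(units=1)])
model.compile(optimizer='sgd', loss='mean_squared_error')

# Six samples of the (assumed) relationship y = 2x - 1,
# reshaped to one feature column per sample.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]).reshape(-1, 1)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]).reshape(-1, 1)

# Each epoch: make a guess, measure the loss, let the optimizer improve the guess.
model.fit(xs, ys, epochs=500, verbose=0)

# Close to 19, but not exactly 19: the model was trained on only six points.
print(model.predict(np.array([[10.0]]), verbose=0))
</code>

The prediction for X = 10 lands near, not at, 19, because the network has only seen six samples and can never be completely certain the relationship holds everywhere.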
You've probably seen that for machine learning, you need to know and use a lot of math: calculus, probability, and the like.\\
It's really good to understand that if you want to optimize your models, but the nice thing for now about TensorFlow and Keras is that a lot of that math is implemented for you in functions.\\
There are two function roles that you should be aware of, though, and these are loss functions and optimizers.\\
This code defines them. I like to think about it this way.\\
The neural network has no idea of the relationship between X and Y, so it makes a guess.\\
Say it guesses Y equals 10X minus 10. It will then use the data that it knows about, that's the set of Xs and Ys that we've already seen, to measure how good or how bad its guess was.\\
The loss function measures this and then gives the data to the optimizer, which figures out the next guess. So the optimizer considers how well or how badly the guess performed, using the data from the loss function.\\
The logic is that each guess should be better than the one before. As the guesses get better and better and accuracy approaches 100 percent, the term convergence is used.\\
In this case, the loss is mean squared error and the optimizer is SGD, which stands for stochastic gradient descent.

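The guess → loss → optimizer loop described above can be demystified in a few lines of plain Python. A sketch using gradient descent on assumed Y = 2X − 1 example data (not shown on this page), with the "guess" being the pair of parameters w and b:

<code python>
# Assumed example data following y = 2x - 1.
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]

w, b = 0.0, 0.0          # the network's current guess: y = w*x + b
learning_rate = 0.01

for _ in range(2000):
    # Loss function: mean squared error of the current guess.
    errors = [(w * x + b) - y for x, y in zip(xs, ys)]
    loss = sum(e * e for e in errors) / len(xs)

    # Optimizer: nudge w and b downhill along the gradient of the loss.
    dw = 2 * sum(e * x for e, x in zip(errors, xs)) / len(xs)
    db = 2 * sum(errors) / len(xs)
    w -= learning_rate * dw
    b -= learning_rate * db

# The guesses converge toward the true relationship y = 2x - 1.
print(round(w, 3), round(b, 3))  # prints: 2.0 -1.0
</code>

Keras's ''sgd'' optimizer follows the same recipe, computing the gradient one batch at a time; here the whole six-sample dataset is treated as a single batch, so strictly speaking this is batch rather than stochastic gradient descent.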
===== External References: =====

[[https://www.coursera.org/learn/introduction-tensorflow/home/welcome|The Introduction to TensorFlow in Coursera Course]]
  