- 147 videos
- 1,109,732 views
Preserve Knowledge
Canada
Joined 22 Jun 2011
Preserve Knowledge is a Canadian higher-education media organization that focuses on advances in mathematics, computer science, and artificial intelligence. Its goal is to bring together the world's leading researchers and students in computer science and related fields.
How AI Powers Self-Driving Tesla with Elon Musk and Andrej Karpathy
A segment on the technology powering self-driving Teslas from Tesla's Autonomy Day 2019
58,500 views
Videos
Tesla AI Andrej Karpathy on Scalability in Autonomous Driving
8K views · 3 years ago
Tesla's Senior Director of AI, Andrej Karpathy, delivers a keynote at the CVPR 2020 Scalability in Autonomous Driving Workshop.
Yann LeCun: Turing Award Lecture "The Deep Learning Revolution: The Sequel"
2.3K views · 3 years ago
Yann LeCun's 2018 ACM A.M. Turing Award Lecture: "The Deep Learning Revolution: The Sequel"
Geoffrey Hinton: Turing Award Lecture "The Deep Learning Revolution"
7K views · 3 years ago
Geoffrey Hinton's 2018 ACM A.M. Turing Award Lecture: "The Deep Learning Revolution"
Microsoft CEO Satya Nadella CVPR 2020
854 views · 3 years ago
Harry Shum chats with Satya Nadella at CVPR 2020
David Duvenaud | Reflecting on Neural ODEs | NeurIPS 2019
26K views · 4 years ago
Original paper: arxiv.org/abs/1806.07366 David's homepage: www.cs.toronto.edu/~duvenaud/ Summary: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-dept...
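The continuous-depth idea in the summary above can be sketched in a few lines. This is a hypothetical toy model, not the paper's implementation: the derivative of the hidden state is a small tanh network, and a fixed-step Euler loop stands in for the black-box ODE solver (the paper uses adaptive solvers and the adjoint method for gradients).

```python
import numpy as np

# Toy "Neural ODE" forward pass: instead of a discrete stack of layers,
# the hidden state's time derivative is parameterized by a small network
# f(h) = tanh(W h + b), and the output is the state integrated to t = 1.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))
b = np.zeros(4)

def f(h):
    """Parameterized derivative dh/dt."""
    return np.tanh(h @ W.T + b)

def odeint_euler(h0, t0=0.0, t1=1.0, steps=100):
    """Integrate h'(t) = f(h) from t0 to t1 with forward Euler
    (a stand-in for the black-box adaptive solver in the paper)."""
    h, dt = h0, (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h)
    return h

h0 = np.array([1.0, -1.0, 0.5, 0.0])   # input, used as the initial state
h1 = odeint_euler(h0)                   # plays the role of the network output
print(h1.shape)                         # (4,)
```

Shrinking the solver's step size plays the role of adding depth, which is where the "continuous-depth" framing comes from.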
Yoshua Bengio | From System 1 Deep Learning to System 2 Deep Learning | NeurIPS 2019
39K views · 4 years ago
Slides: www.iro.umontreal.ca/~bengioy/NeurIPS-11dec2019.pdf Summary: Past progress in deep learning has concentrated mostly on learning from a static dataset, mostly for perception tasks and other System 1 tasks which are done intuitively and unconsciously by humans. However, in recent years, a shift in research direction and new tools such as soft-attention and progress in deep reinforcement l...
NeurIPS 2019 Test of Time Award - Lin Xiao
3.2K views · 4 years ago
Dual Averaging Method for Regularized Stochastic Learning and Online Optimization Slides: imgur.com/a/b2AiEUI Paper: papers.nips.cc/paper/3882-dual-averaging-method-for-regularized-stochastic-learning-and-online-optimization.pdf Abstract: We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss fun...
NIPS 2017 Test of Time Award: "Machine learning has become alchemy." | Ali Rahimi, Google
23K views · 6 years ago
Yann LeCun, Christopher Manning on Innate Priors in Deep Learning Systems at Stanford AI
1.5K views · 6 years ago
Yann LeCun is the Chief AI Scientist at Facebook AI Research, a Silver Professor at New York University, and one of the leading voices in AI. He pioneered the early use of convolutional neural networks, which have been central to the modern success of Deep Learning. LeCun has been a leading proponent for the ability of simple but powerful neural architectures to perform sophisticated tasks with...
Meet Geoffrey Hinton, U of T's Godfather of Deep Learning
13K views · 6 years ago
Meet Geoffrey Hinton: U of T Professor Emeritus of computer science, an Engineering Fellow at Google, and Chief Scientific Adviser at the Vector Institute for Artificial Intelligence. In this interview with U of T News, Prof. Hinton discusses his career, the field of artificial intelligence and the importance of funding curiosity-driven scientific research.
Learning Representations: A Challenge for Learning Theory, COLT 2013 | Yann LeCun, NYU
942 views · 6 years ago
Slides: videolectures.net/site/normal_dl/tag=800934/colt2013_lecun_theory_01.pdf Perceptual tasks such as vision and audition require the construction of good features, or good internal representations of the input. Deep Learning designates a set of supervised and unsupervised methods to construct feature hierarchies automatically by training systems composed of multiple stages of trainable mod...
Edward: Library for probabilistic modeling, inference, and criticism | Dustin Tran, Columbia Uni
3.4K views · 6 years ago
Edward is a Python library for probabilistic modeling, inference, and criticism. It is a testbed for fast experimentation and research with probabilistic models, ranging from classical hierarchical models on small data sets to complex deep probabilistic models on large data sets. Edward fuses three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming.
Unrolled Generative Adversarial Networks, NIPS 2016 | Luke Metz, Google Brain
2K views · 6 years ago
Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein arxiv.org/abs/1611.02163 NIPS 2016 Workshop on Adversarial Training Spotlight We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generat...
Semantic Segmentation using Adversarial Networks, NIPS 2016 | Pauline Luc, Facebook AI Research
4.7K views · 6 years ago
Pauline Luc, Camille Couprie, Soumith Chintala, Jakob Verbeek arxiv.org/abs/1611.08408 NIPS 2016 Workshop on Adversarial Training Spotlight Adversarial training has been shown to produce state of the art results for generative image modeling. In this paper we propose an adversarial training approach to train semantic segmentation models. We train a convolutional semantic segmentation network al...
Conditional Image Synthesis with Auxiliary Classifier GANs, NIPS 2016 | Augustus Odena, Google Brain
1.8K views · 6 years ago
Connecting Generative Adversarial Networks and Actor Critic Methods, NIPS 2016 | David Pfau
1.5K views · 6 years ago
Learning in Implicit Generative Models, NIPS 2016 | Shakir Mohamed, Google DeepMind
1.7K views · 6 years ago
Convex Optimization with Abstract Linear Operators, ICCV 2015 | Stephen P. Boyd, Stanford
5K views · 6 years ago
A Connection Between GANs, Inverse Reinforcement Learning, and Energy Based Models, NIPS 2016
6K views · 6 years ago
Adversarial Training Methods for Semi-Supervised Text Classification, NIPS 2016 | Andrew M. Dai
2.7K views · 6 years ago
Borrowing Ideas from Human Vision, ICCV 2015 PAMI Distinguished Researcher Award | David Lowe, UBC
710 views · 6 years ago
How to train a GAN, NIPS 2016 | Soumith Chintala, Facebook AI Research
9K views · 6 years ago
Energy-Based Adversarial Training and Video Prediction, NIPS 2016 | Yann LeCun, Facebook AI Research
2.9K views · 6 years ago
It's Learning All the Way Down, ICCV 2015 PAMI Distinguished Researcher Award | Yann LeCun, NYU
297 views · 6 years ago
Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI
151K views · 6 years ago
Deep Learning for Predicting Human Strategic Behavior, NIPS 2016 | Jason Hartford, UBC
3K views · 6 years ago
Predictive Learning, NIPS 2016 | Yann LeCun, Facebook Research
7K views · 6 years ago
Using Fast Weights to Attend to the Recent Past, NIPS 2016 | Jimmy Ba, University of Toronto
2.3K views · 6 years ago
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, NIPS 2016
3.4K views · 6 years ago
And still the building collapses
Love the little laugh at 12:58
00:05 Ian Goodfellow's journey into deep learning
02:02 Ian Goodfellow innovated GANs for generative modeling.
03:51 Goodfellow's determination to make his idea work paid off.
05:37 Games are at an important crossroads in deep learning
07:28 Importance of mastering basic math for deep learning
09:18 Evolution of AI and Deep Learning
11:19 Entering the AI field without needing a PhD.
13:06 Importance of building security into machine learning algorithms from the start
Crafted by Merlin AI.
I didn't know it was dog network.
This man is great ❤
He is a real hero, I am watching his lessons : Love + AI === Andrej
Two amazing teachers !
I don't even think about Russia
Set playback speed to 2x to fully understand what is happening in Andrej's brain
bad presentation... fast, sloppy speaking
Great talk. Super underrated
Looks like it greatly outperforms LSTMs, so I wonder what's keeping it from being the next gold standard. Also a bit of a shame it only blew up after Transformers replaced RNNs for mainstream purposes. With the recent surge of graph nets and massively multi-agent learning, I hope it'll get another chance to be used.
I watched the video as a part of Deep Learning Specialization. Sadly, it's way way over my head to comprehend much of what was said in the video.
the good old days when Musk wasn't a total nutjob
Best research advice ever!! "Read the literature, but not too much of it."
can I say that Ian Goodfellow is the GOAT in modern computer science? T.T
You give what is a gem to the people around the world, Sir. Salute from Cambodia
Thank you so much, Dr.
Hey And, do you still accept donations by any chance? I am hoping for 720p videos from now on.
very informative!
Missiles making
Great legends are talking on great things.
I so agree with Hinton: have an idea and go for it. I took this approach with something other than AI, and it also worked. What do I mean? I mean, even though my idea wasn't revolutionary or totally worthwhile, I LEARNED A LOT by just going for it and programming the heck out of it. The practical experience I gained served me well, very well, in my first jobs. Remember: your purpose is to learn, and you can do that following your intuition, which is fun, or following someone else's, which is less fun.
The two folks from which I've learned the most about AI. Thanks so much!
I think the difference between wake and sleep is that during sleep it is in the testing phase, and during waking it is the operative phase of learning.
This is an amazing concept. We keep borrowing solutions from biological systems, but no wonder: they had millions of years to solve all those problems before us.
Seriously doubt Geoffrey Hinton considers himself a hero - more like Dr. Frankenstein now. He's doing his part to spread the word on the dangers of reliance on AI.
Great
Another way I learned to do textual substitution would be the same as saying x[x := (x v x+1)][x := (x v x+1)], in which case you would get the first substitution (x v x+1)[x := (x v x+1)], then the second substitution (x v (x)+1) v (x+1 v (x+1)+1). Then combining and removing unnecessary parentheses and the repeated x+1 gives the same value as the video: x v x+1 v x+2.
It took me a month to fully understand everything he discussed in this presentation (at a high level). I think this is the future. Would love to hang out and discuss if anyone is in Toronto.
I've never heard someone speak so passionately about starting at 0
So basically you guys are utilising the fleet to get varied data and to check whether the model works fine, and if it fails you quickly retrain the model on those groups of datasets to make it more robust. Interesting. However, how far are we from the moment where we act as well as humans do with just very few datasets? Because what we are doing is statistical inference on the basis of large datasets, so it's basically good datasets and good compute, as mentioned earlier.
Maybe the cake... wasn't a lie
Genius play respect to genius.
What a charlatan, Elon Musk! This guy is basically full of BS, knows absolutely NOTHING about AI other than basic, layman info, yet he finds himself everywhere, pretending to know things. The real founders of Tesla talked about his ego and attention seeking mindset.
10:58 global community
10:57
Excellent interview. Down to earth, straight, and a lot of information in this interview. Great work, Andrew Ng, with your contributions.
Parachuting in from the future to confirm that we have now unleashed this alchemy onto the public in pursuit of seed capital. What a time to be alive...
2:48 both players are neural networks
5:16
5:26 the goal of the generator is to fool the discriminator
5:30 eventually the generator is forced to produce data as true as possible
Uhrik and Putin cried bitter tears
3:10 one of the players is trained :) to do as well as possible on the worst possible input
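Written out, the two-player game these timestamped notes describe is the standard minimax objective from Goodfellow et al.'s original GAN paper: the discriminator D maximizes the value function while the generator G minimizes it, so G is pushed to do as well as possible against the strongest D.

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]
```

At the equilibrium of this game, the generator's distribution matches the data distribution and the discriminator can do no better than chance.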
Walt and Gale vibes
38:10 a thought is just a great big vector of neural activity
38:19 People who thought that thoughts were symbolic expressions made a huge mistake: what comes in is a string of words, and what comes out is a string of words, so strings of words seemed the obvious way to represent things, and they assumed that what must be in between was a string of words or something like it. Hinton thinks there's nothing like a string of words in between; he thinks treating thinking as a kind of language is as silly as the idea that understanding the layout of a spatial scene must be in pixels :))
35:08 our relationship to computers has changed.. instead of programming them, we show them, and they figure it out
36:04 :D
18:00 In 2007 Hinton was ignored, and Bengio picked it up later on
0:19 Godfather 😎
25:33
Geoff Hinton is legendary
2 legends in one frame
Wonderful, always a pleasure to hear you speak!
So much intuition.
10:55
What an amazing presentation! Thank you