Preserve Knowledge
  • 147 videos
  • 1,109,732 views

Videos

Tesla AI Andrej Karpathy on Scalability in Autonomous Driving
8K views · 3 years ago
Tesla's Senior Director of AI, Andrej Karpathy, delivers a keynote at the CVPR 2020 Scalability in Autonomous Driving Workshop.
Yann LeCun: Turing Award Lecture "The Deep Learning Revolution: The Sequel"
2.3K views · 3 years ago
Yann LeCun's 2018 ACM A.M. Turing Award Lecture: "The Deep Learning Revolution: The Sequel"
Geoffrey Hinton: Turing Award Lecture "The Deep Learning Revolution"
7K views · 3 years ago
Geoffrey Hinton's 2018 ACM A.M. Turing Award Lecture: "The Deep Learning Revolution"
Microsoft CEO Satya Nadella CVPR 2020
854 views · 3 years ago
Harry Shum chats with Satya Nadella at CVPR 2020
David Duvenaud | Reflecting on Neural ODEs | NeurIPS 2019
26K views · 4 years ago
Original paper: arxiv.org/abs/1806.07366
David's homepage: www.cs.toronto.edu/~duvenaud/
Summary: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-dept...
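
To make the summary concrete, here is a minimal sketch of the idea in PyTorch. It substitutes a fixed-step Euler integrator for the black-box adaptive solver the paper uses (the authors' reference implementation is the torchdiffeq package); ODEFunc and odeint_euler are illustrative names, not the paper's API.

import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Parameterizes the derivative of the hidden state: dh/dt = f(t, h)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

def odeint_euler(func, h0, t0=0.0, t1=1.0, steps=20):
    """Fixed-step Euler stand-in for the paper's black-box ODE solver."""
    h, t = h0, t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * func(t, h)   # one Euler step of the continuous "layer"
        t += dt
    return h

func = ODEFunc(dim=2)
h0 = torch.randn(8, 2)          # batch of initial hidden states ("input layer")
h1 = odeint_euler(func, h0)     # network output = hidden state at t = 1
print(h1.shape)                 # torch.Size([8, 2]); differentiable end to end
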
Yoshua Bengio | From System 1 Deep Learning to System 2 Deep Learning | NeurIPS 2019
39K views · 4 years ago
Slides: www.iro.umontreal.ca/~bengioy/NeurIPS-11dec2019.pdf
Summary: Past progress in deep learning has concentrated mostly on learning from a static dataset, mostly for perception tasks and other System 1 tasks which are done intuitively and unconsciously by humans. However, in recent years, a shift in research direction and new tools such as soft-attention and progress in deep reinforcement l...
NeurIPS 2019 Test of Time Award - Lin Xiao
3.2K views · 4 years ago
Dual Averaging Method for Regularized Stochastic Learning and Online Optimization
Slides: imgur.com/a/b2AiEUI
Paper: papers.nips.cc/paper/3882-dual-averaging-method-for-regularized-stochastic-learning-and-online-optimization.pdf
Abstract: We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss fun...
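
As a concrete instance of the setup in the abstract, here is a sketch of the l1-regularized RDA update, assuming the common instantiation Psi(w) = lam * ||w||_1 with auxiliary strongly convex term (gamma/sqrt(t)) * 0.5 * ||w||^2; under those choices the per-step minimization has a closed-form soft-threshold solution. The helper name and toy data are mine, not from the paper.

import numpy as np

def rda_l1_step(g_bar, t, lam=0.1, gamma=1.0):
    """Minimizer of <g_bar, w> + lam*||w||_1 + (gamma/sqrt(t)) * 0.5*||w||^2,
    where g_bar is the running average of the subgradients g_1..g_t."""
    return -(np.sqrt(t) / gamma) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)

# Toy run: coordinates whose averaged gradient magnitude stays below lam are
# kept exactly at zero, the sparsity property the method is known for.
rng = np.random.default_rng(0)
g_bar = np.zeros(3)
for t in range(1, 201):
    g = np.array([0.5, 0.02, -0.8]) + 0.05 * rng.standard_normal(3)
    g_bar += (g - g_bar) / t            # running mean of subgradients
    w = rda_l1_step(g_bar, t)
print(w)                                # middle coordinate is exactly 0.0
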
NIPS 2017 Test of Time Award "Machine learning has become alchemy." | Ali Rahimi, Google
23K views · 6 years ago
Yann LeCun, Christopher Manning on Innate Priors in Deep Learning Systems at Stanford AI
1.5K views · 6 years ago
Yann LeCun is the Chief AI Scientist at Facebook AI Research, a Silver Professor at New York University, and one of the leading voices in AI. He pioneered the early use of convolutional neural networks, which have been central to the modern success of Deep Learning. LeCun has been a leading proponent for the ability of simple but powerful neural architectures to perform sophisticated tasks with...
Meet Geoffrey Hinton, U of T's Godfather of Deep Learning
13K views · 6 years ago
Meet Geoffrey Hinton: U of T Professor Emeritus of computer science, an Engineering Fellow at Google, and Chief Scientific Adviser at the Vector Institute for Artificial Intelligence. In this interview with U of T News, Prof. Hinton discusses his career, the field of artificial intelligence and the importance of funding curiosity-driven scientific research.
Learning Representations: A Challenge for Learning Theory, COLT 2013 | Yann LeCun, NYU
942 views · 6 years ago
Slides: videolectures.net/site/normal_dl/tag=800934/colt2013_lecun_theory_01.pdf
Perceptual tasks such as vision and audition require the construction of good features, or good internal representations of the input. Deep Learning designates a set of supervised and unsupervised methods to construct feature hierarchies automatically by training systems composed of multiple stages of trainable mod...
Edward: Library for probabilistic modeling, inference, and criticism | Dustin Tran, Columbia Uni
3.4K views · 6 years ago
Edward is a Python library for probabilistic modeling, inference, and criticism. It is a testbed for fast experimentation and research with probabilistic models, ranging from classical hierarchical models on small data sets to complex deep probabilistic models on large data sets. Edward fuses three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming.
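
For flavor, a minimal Bayesian linear regression in Edward's declare-model/declare-approximation/run-inference style, assuming the Edward 1.x API on TensorFlow 1.x; the pattern follows Edward's documentation, and the synthetic data here is illustrative.

import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

N, D = 40, 5                                   # synthetic dataset
X_train = np.random.randn(N, D).astype(np.float32)
y_train = (X_train.dot(np.ones(D)) + 0.1 * np.random.randn(N)).astype(np.float32)

# Model: Bayesian linear regression with unit-variance priors and noise.
X = tf.placeholder(tf.float32, [N, D])
w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))

# Variational approximation q(w)q(b), fit by maximizing the ELBO.
qw = Normal(loc=tf.Variable(tf.zeros(D)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))
qb = Normal(loc=tf.Variable(tf.zeros(1)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))

# Inference: KL(q || p) variational inference on the observed data.
inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
inference.run(n_iter=500)
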
Unrolled Generative Adversarial Networks, NIPS 2016 | Luke Metz, Google Brain
2K views · 6 years ago
Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
arxiv.org/abs/1611.02163
NIPS 2016 Workshop on Adversarial Training (Spotlight)
We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generat...
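
A toy sketch of the unrolling trick described above: the generator loss is evaluated against a discriminator that has taken K extra differentiable gradient steps, and gradients flow back through those steps. The linear "networks" are stand-ins to keep it short, not the paper's architecture.

import torch

torch.manual_seed(0)
gw = torch.randn(2, 2, requires_grad=True)    # toy "generator" weights
dw = torch.randn(2, 1, requires_grad=True)    # toy "discriminator" weights
bce = torch.nn.functional.binary_cross_entropy_with_logits

def d_loss(w, real, fake):
    return (bce(real @ w, torch.ones(real.size(0), 1)) +
            bce(fake @ w, torch.zeros(fake.size(0), 1)))

real = torch.randn(16, 2) + 2.0               # "data" samples
fake = torch.randn(16, 2) @ gw                # "generator" samples

# Unroll K = 3 differentiable discriminator updates; create_graph=True keeps
# them in the autograd graph, so the generator sees how D would react.
w = dw
for _ in range(3):
    g, = torch.autograd.grad(d_loss(w, real, fake), w, create_graph=True)
    w = w - 0.1 * g

# Generator objective: fool the *unrolled* discriminator, backpropagating
# through the unrolled steps as well as through the samples themselves.
g_loss = bce(fake @ w, torch.ones(16, 1))
g_loss.backward()
print(gw.grad.norm())
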
Semantic Segmentation using Adversarial Networks, NIPS 2016 | Pauline Luc, Facebook AI Research
4.7K views · 6 years ago
Pauline Luc, Camille Couprie, Soumith Chintala, Jakob Verbeek
arxiv.org/abs/1611.08408
NIPS 2016 Workshop on Adversarial Training (Spotlight)
Adversarial training has been shown to produce state-of-the-art results for generative image modeling. In this paper we propose an adversarial training approach to train semantic segmentation models. We train a convolutional semantic segmentation network al...
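
A minimal sketch of the hybrid objective the abstract describes: per-pixel cross-entropy for the segmentation network plus a lambda-weighted adversarial term from a discriminator that tries to tell ground-truth label maps from predicted ones. The tiny conv nets are placeholders, not the paper's architectures.

import torch
import torch.nn as nn
import torch.nn.functional as F

n_classes = 5
segnet = nn.Conv2d(3, n_classes, 3, padding=1)          # stand-in segmenter
disc = nn.Sequential(nn.Conv2d(n_classes, 8, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

img = torch.randn(4, 3, 32, 32)
gt = torch.randint(0, n_classes, (4, 32, 32))
gt_maps = F.one_hot(gt, n_classes).permute(0, 3, 1, 2).float()

logits = segnet(img)
pred_maps = logits.softmax(dim=1)                       # predicted class probabilities

# Discriminator: ground-truth label maps -> "real", predictions -> "fake".
d_loss = (F.binary_cross_entropy_with_logits(disc(gt_maps), torch.ones(4, 1)) +
          F.binary_cross_entropy_with_logits(disc(pred_maps.detach()), torch.zeros(4, 1)))

# Segmenter: per-pixel cross-entropy plus an adversarial term that rewards
# predictions the discriminator mistakes for ground truth.
lam = 0.1
s_loss = (F.cross_entropy(logits, gt) +
          lam * F.binary_cross_entropy_with_logits(disc(pred_maps), torch.ones(4, 1)))
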
Conditional Image Synthesis with Auxiliary Classifier GANs, NIPS 2016 | Augustus Odena, Google Brain
1.8K views · 6 years ago
Connecting Generative Adversarial Networks and Actor Critic Methods, NIPS 2016 | David Pfau
1.5K views · 6 years ago
Learning in Implicit Generative Models, NIPS 2016 | Shakir Mohamed, Google DeepMind
1.7K views · 6 years ago
Convex Optimization with Abstract Linear Operators, ICCV 2015 | Stephen P. Boyd, Stanford
5K views · 6 years ago
A Connection Between GANs, Inverse Reinforcement Learning, and Energy Based Models, NIPS 2016
6K views · 6 years ago
Adversarial Training Methods for Semi-Supervised Text Classification, NIPS 2016 | Andrew M. Dai
2.7K views · 6 years ago
Borrowing Ideas from Human Vision, ICCV 2015 PAMI Distinguished Researcher Award | David Lowe, UBC
710 views · 6 years ago
How to train a GAN, NIPS 2016 | Soumith Chintala, Facebook AI Research
9K views · 6 years ago
Energy-Based Adversarial Training and Video Prediction, NIPS 2016 | Yann LeCun, Facebook AI Research
2.9K views · 6 years ago
It's Learning All the Way Down, ICCV 2015 PAMI Distinguished Researcher Award | Yann LeCun, NYU
297 views · 6 years ago
Introduction to GANs, NIPS 2016 | Ian Goodfellow, OpenAI
151K views · 6 years ago
Deep Learning for Predicting Human Strategic Behavior, NIPS 2016 | Jason Hartford, UBC
3K views · 6 years ago
Predictive Learning, NIPS 2016 | Yann LeCun, Facebook Research
7K views · 6 years ago
Using Fast Weights to Attend to the Recent Past, NIPS 2016 | Jimmy Ba, University of Toronto
2.3K views · 6 years ago
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, NIPS 2016
3.4K views · 6 years ago

Comments

  • @sorbajitgoswami4739 · 7 days ago

    And still the buildings collapse

  • @jackxiao8140 · 28 days ago

    Love the little laugh at 12:58

  • @LifeRepository · 28 days ago

    00:05 Ian Goodfellow's journey into deep learning
    02:02 Ian Goodfellow innovated GANs for generative modeling
    03:51 Goodfellow's determination to make his idea work paid off
    05:37 Games are at an important crossroads in deep learning
    07:28 Importance of mastering basic math for deep learning
    09:18 Evolution of AI and Deep Learning
    11:19 Entering the AI field without needing a PhD
    13:06 Importance of building security into machine learning algorithms from the start
    Crafted by Merlin AI.

  • @abdAlmajedSaleh · 1 month ago

    I didn't know it was a dog network.

  • @muhammadrayanmansoor4301 · 2 months ago

    This man is great ❤

  • @vq8gef32 · 2 months ago

    He is a real hero; I am watching his lessons: Love + AI === Andrej

  • @alaad1009 · 5 months ago

    Two amazing teachers !

  • @christiansmith-of7dt · 6 months ago

    I don't even think about Russia

  • @zardi9083 · 7 months ago

    Set playback speed to 2x to fully understand what is happening in Andrej's brain

  • @nastaran1010 · 7 months ago

    Bad presentation... fast, sloppy speaking

  • @daffertube · 7 months ago

    Great talk. Super underrated

  • @revimfadli4666 · 8 months ago

    Looks like it greatly outperforms LSTMs, so I wonder what's keeping it from being the next gold standard. Also a bit of a shame it only blew up after Transformers replaced RNNs for mainstream purposes. With the recent surge of graph nets and massively multi-agent learning, I hope it'll get another chance to be used

  • @onamixt · 8 months ago

    I watched the video as a part of Deep Learning Specialization. Sadly, it's way way over my head to comprehend much of what was said in the video.

  • @pocok5000 · 8 months ago

    the good old days when Musk wasn't a total nutjob

  • @jsfnnyc · 9 months ago

    Best research advice ever!! "Read the literature, but not too much of it."

  • @mermich · 9 months ago

    can I say that Ian Goodfellow is the GOAT in modern computer science? T.T

  • @user-sd6lc2qn5q · 9 months ago

    You give what is a gem to the people around the world, Sir. Salute from Cambodia

  • @user-sd6lc2qn5q · 9 months ago

    Thank you so much, Dr.

  • @surkewrasoul4711 · 10 months ago

    Hey And, do you still accept donations by any chance? I am hoping for 720p videos from now on.

  • @ChandlerRandolph-yc5re · 10 months ago

    very informative!

  • @suissdagout5153 · 10 months ago

    Missiles making

  • @postnetworkacademy · 11 months ago

    Great legends talking about great things.

  • @briancase9527 · 11 months ago

    I so agree with Hinton: have an idea and go for it. I took this approach with something other than AI, but it also worked. What do I mean? I mean, even though my idea wasn't revolutionary and totally worthwhile, I LEARNED A LOT by just going for it and programming the heck out of it. The practical experience I gained served me well--very well--in my first jobs. Remember: your purpose is to learn, and you can do that following your intuition--which is fun--or following someone else's--which is less fun.

  • @fabianmarin8514 · 11 months ago

    The two folks from whom I've learned the most about AI. Thanks so much!

  • @PaulHigginbothamSr · 11 months ago

    I think the difference between wake and sleep is that during sleep it is in the testing phase, and during wake it is the operative phase of learning.

  • @Desu_Desu · 1 year ago

    This is an amazing concept. We keep borrowing solutions from biological systems, but no wonder: they had millions of years to solve all those problems before us already

  • @wk4240 · 1 year ago

    Seriously doubt Geoffrey Hinton considers himself a hero - more like Dr. Frankenstein now. He's doing his part to spread the word on the dangers of reliance on AI.

  • @YashVinayvanshi-nq2ug · 1 year ago

    Great

  • @yunoletmehaveaname · 1 year ago

    Another way I learned to do textual substitution would be the same as saying x[x := (x v x+1)][x := (x v x+1)], in which case you would get the first substitution (x v x+1)[x := (x v x+1)], then the second substitution (x v (x)+1) v (x+1 v (x+1)+1). Combining and removing unnecessary parentheses and the repeated x+1 gives the same value as in the video: x v x+1 v x+2

  • @gangfang8835 · 1 year ago

    It took me a month to fully understand everything he discussed in this presentation (at a high level). I think this is the future. Would love to hang out and discuss if anyone is in Toronto.

  • @yunoletmehaveaname · 1 year ago

    I've never heard someone speak so passionately about starting at 0

  • @Abhishekkumar-qj6hb · 1 year ago

    So basically you guys are utilising the fleet to get varied data and to check that the model works fine, and if it fails, you quickly retrain the model on those groups of datasets to make it more robust. Interesting! However, how far are we from the moment where we act as well as humans do with just very few datasets? Because what we are doing is statistical inference on the basis of large datasets. So it's basically good datasets and good compute, as mentioned earlier

  • @futureprogress · 1 year ago

    Maybe the cake... wasn't a lie

  • @smithwill9952 · 1 year ago

    Genius pays respect to genius.

  • @kavorka8855 · 1 year ago

    What a charlatan, Elon Musk! This guy is basically full of BS, knows absolutely NOTHING about AI other than basic, layman info, yet he finds himself everywhere, pretending to know things. The real founders of Tesla talked about his ego and attention seeking mindset.

  • @Gabcikovo · 1 year ago

    10:58 global community

  • @shantanuraj7086 · 1 year ago

    Excellent interview. Down to earth, straight, and a lot of information in this interview. Great work, Andrew Ng, with your contributions.

  • @calmhorizons · 1 year ago

    Parachuting in from the future to confirm that we have now unleashed this alchemy onto the public in pursuit of seed capital. What a time to be alive...

  • @Gabcikovo · 1 year ago

    2:48 both players are neural networks

    • @Gabcikovo · 1 year ago

      5:16

    • @Gabcikovo · 1 year ago

      5:26 the goal of the generator is to fool the discriminator

    • @Gabcikovo · 1 year ago

      5:30 eventually the generator is forced to produce data as true as possible

    • @Gabcikovo · 1 year ago

      Uhrik and Putin cried bitter tears

    • @Gabcikovo · 1 year ago

      3:10 one of the players is trained :) to do as well as possible on the worst possible input

  • @theLowestPointInMyLife · 1 year ago

    Walt and Gale vibes

  • @Gabcikovo · 1 year ago

    38:10 a thought is just a great big vector of neural activity

    • @Gabcikovo · 1 year ago

      38:19 People who thought that thoughts were symbolic expressions made a huge mistake. What comes in is a string of words, and what comes out is a string of words, and because of that, strings of words are the obvious way to represent things, so they thought what must be in between was a string of words or something alike. Hinton thinks there's nothing like a string of words in between; he thinks treating it as some kind of language is as silly as the idea that understanding the layout of a spatial scene must be in pixels :))

  • @Gabcikovo · 1 year ago

    35:08 our relationship to computers has changed: instead of programming them, we show them, and they figure it out

  • @Gabcikovo · 1 year ago

    18:00 2007 ignored Hinton, and Bengio picked it up later on

  • @Gabcikovo · 1 year ago

    0:19 Godfather 😎

  • @hmthanhgm · 1 year ago

    Geoff Hinton is legendary

  • @shubharthaksangharsha6248 · 1 year ago

    2 legends in one frame

  • @riteshajoodha4401 · 1 year ago

    Wonderful, always a pleasure to hear you speak!

  • @rb8049 · 1 year ago

    So much intuition.

  • @-mwolf · 1 year ago

    10:55

  • @justchary · 1 year ago

    What an amazing presentation! Thank you