Deep Learning State of the Art (2020) | MIT Deep Learning Series





Lecture on most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of SOTA benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general. This lecture is part of the MIT Deep Learning Lecture Series.

Website: https://deeplearning.mit.edu
Slides: http://bit.ly/2QEfbAm
References: http://bit.ly/deeplearn-sota-2020
Playlist: http://bit.ly/deep-learning-playlist

OUTLINE:
0:00 – Introduction
0:33 – AI in the context of human history
5:47 – Deep learning celebrations, growth, and limitations
6:35 – Deep learning early key figures
9:29 – Limitations of deep learning
11:01 – Hopes for 2020: deep learning community and research
12:50 – Deep learning frameworks: TensorFlow and PyTorch
15:11 – Deep RL frameworks
16:13 – Hopes for 2020: deep learning and deep RL frameworks
17:53 – Natural language processing
19:42 – Megatron, XLNet, ALBERT
21:21 – Write with transformer examples
24:28 – GPT-2 release strategies report
26:25 – Multi-domain dialogue
27:13 – Commonsense reasoning
28:26 – Alexa prize and open-domain conversation
33:44 – Hopes for 2020: natural language processing
35:11 – Deep RL and self-play
35:30 – OpenAI Five and Dota 2
37:04 – DeepMind Quake III Arena
39:07 – DeepMind AlphaStar
41:09 – Pluribus: six-player no-limit Texas hold’em poker
43:13 – OpenAI Rubik’s Cube
44:49 – Hopes for 2020: Deep RL and self-play
45:52 – Science of deep learning
46:01 – Lottery ticket hypothesis
47:29 – Disentangled representations
48:34 – Deep double descent
49:30 – Hopes for 2020: science of deep learning
50:56 – Autonomous vehicles and AI-assisted driving
51:50 – Waymo
52:42 – Tesla Autopilot
57:03 – Open question for Level 2 and Level 4 approaches
59:55 – Hopes for 2020: autonomous vehicles and AI-assisted driving
1:01:43 – Government, politics, policy
1:03:03 – Recommendation systems and policy
1:05:36 – Hopes for 2020: Politics, policy and recommendation systems
1:06:50 – Courses, Tutorials, Books
1:10:05 – General hopes for 2020
1:11:19 – Recipe for progress in AI
1:13:11 – Q&A: Limitations / road-blocks of deep learning
1:14:15 – Q&A: What made you interested in AI?
1:15:21 – Q&A: Will machines ever be able to think and feel?
1:18:20 – Q&A: Is RL a good candidate for achieving AGI?
1:21:31 – Q&A: Are autonomous vehicles responsive to sound?
1:22:43 – Q&A: What does the future with AGI look like?
1:25:50 – Q&A: Will AGI systems become our masters?

CONNECT:
– If you enjoyed this video, please subscribe to this channel.
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman


Comment List

  • Lex Fridman
    December 6, 2020

    This is the opening lecture on recent developments in deep learning and AI, and hopes for 2020. It's humbling beyond words to have the opportunity to lecture at MIT and to be part of the AI community. The full outline is in the description above.

  • Lex Fridman
    December 6, 2020

    What if the robots can start replicating and cloning a bunch of themselves and then start killing humans till extinction?

  • Lex Fridman
    December 6, 2020

    The question on ethics is really interesting. My first answer was “that’s crazy, robots don’t have feelings”, but the answer was great: what happens when a bot says “please don’t hurt me”? That surely messes with the human conscience, and in my opinion it is wrong to violate your conscience, so… not so clear as I thought!

  • Lex Fridman
    December 6, 2020

    Great lecture! Though I'd like to disagree about AI having feelings and "conscience". A sentient being is something that experiences being itself. It has an inner space where feeling, suffering, and love are meaningful in their existence. Machines can't have this, because we don't know how humans have this capacity. Moreover, we don't know what this capacity is. For example, what is it to feel the color red (qualia), to feel music, to feel pleasure? We can program an AI to "show", to simulate, feelings and an inner space, but it is not the same as the real thing.

  • Lex Fridman
    December 6, 2020

    u look like psyched substance lmaoo

  • Lex Fridman
    December 6, 2020

    the credit assignment problem in research! haha

  • Lex Fridman
    December 6, 2020

    Your parents failed us.

  • Lex Fridman
    December 6, 2020

    Impressed by this guy, what a genius..!

  • Lex Fridman
    December 6, 2020

    Also looking forward to more abstractions that scale DL applications beyond those who can invest in learning the syntax, complex setup, and usage of hyperparameters. Those who can spend more time understanding causal relationships and other context related to the problem could add far more value than we've seen from the most technical data scientists.

  • Lex Fridman
    December 6, 2020

    I Wasn’t Controlling the intelligent. The unknown Storm intelligent Was. They didn’t inquire about our deep intelligent—they assumed it was a ho-hum affair like theirs—so we left it right there. The birth of novel super intelligent by a storm intelligent occurred unstoppable "ALIVE" and thinking clusters of robots. Scary without brains in any bio form or base evolved and extinction of human intelligent i.e primitive hatred , wars, clonizations and taking beers and drugs for the bio.

  • Lex Fridman
    December 6, 2020

    20 minutes? ROFLMAO, I can't listen to you for 20 minutes… and for today's millennials, 20 seconds would be an amazing accomplishment. Snooooore! Does MIT stand for Mindless Interspatial Tripe?

  • Lex Fridman
    December 6, 2020

    The first time a Roomba says "please don't hurt me", that's when we start to have serious conversations about ethics.

  • Lex Fridman
    December 6, 2020

    Slide 31 at 47:00 – yeah, yeah, everything is fascinating. It’s like you have two unknowns but build five equations, only to realize afterwards that you only need two equations to solve the problem.

  • Lex Fridman
    December 6, 2020

    this man said “shoutout to the soviet union”

  • Lex Fridman
    December 6, 2020

    AI killing people:

    2 scenarios:

    1. I read that the military is working on autonomous attack drones, able to detect threats (including gun-wielding humans) and make the decision to execute a kill order. Yes, it may require human approval, but it may not. Will these be used in civilian/police environments?

    2. Recommendation systems, used by social media on an ignorant and intolerant public, can sway public opinion to the point of social unrest – i.e., in the Trump era (and it may not go away soon). It’s not killing, but in an aggressive and well-armed society like America, the killing is a small step away!

  • Lex Fridman
    December 6, 2020

    If I read these slides I'll sound a little smarter

  • Lex Fridman
    December 6, 2020

    Very high-level, not so useful for any specific group. I thought you would discuss more papers in each field.

  • Lex Fridman
    December 6, 2020

    Providing some official benchmarks for AI or NLP that all of these language processors can measure themselves against would be nice.

  • Lex Fridman
    December 6, 2020

    So suffering does not exist independently from the ability to communicate it? Lex! The falling tree example is not pertinent!

  • Lex Fridman
    December 6, 2020

    Bro, what's the prerequisite for understanding the video?
    Please reply

  • Lex Fridman
    December 6, 2020

    Love all of the LEXures! 🙂

  • Lex Fridman
    December 6, 2020

    01:25:46 – 20 years, guys!!

    Make it count….

  • Lex Fridman
    December 6, 2020

    I hope to be half as smart as Lex one day. Thanks for sharing.
