#97 – Sertac Karaman: Robots That Fly and Robots That Drive

Lex Fridman Podcast

Sertac Karaman is a professor at MIT, a co-founder of the autonomous vehicle company Optimus Ride, and one of the top roboticists in the world, working on both robots that drive and robots that fly.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Sertac’s Website: http://sertac.scripts.mit.edu/web/
Sertac’s Twitter: https://twitter.com/sertackaraman
Optimus Ride: https://www.optimusride.com/

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
01:44 – Autonomous flying vs autonomous driving
06:37 – Flying cars
10:27 – Role of simulation in robotics
17:35 – Game theory and robotics
24:30 – Autonomous vehicle company strategies
29:46 – Optimus Ride
47:08 – Waymo, Tesla, Optimus Ride timelines
53:22 – Achieving the impossible
53:50 – Iterative learning
58:39 – Is Lidar a crutch?
1:03:21 – Fast autonomous flight
1:18:06 – Most beautiful idea in robotics

83 minutes
1
5 years ago

#95 – Dawn Song: Adversarial Machine Learning and Computer Security

Lex Fridman Podcast

Dawn Song is a professor of computer science at UC Berkeley with research interests in security, most recently focusing on the intersection of computer security and machine learning.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Dawn’s Twitter: https://twitter.com/dawnsongtweets
Dawn’s Website: https://people.eecs.berkeley.edu/~dawnsong/
Oasis Labs: https://www.oasislabs.com

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
01:53 – Will software always have security vulnerabilities?
09:06 – Humans are the weakest link in security
16:50 – Adversarial machine learning
51:27 – Adversarial attacks on Tesla Autopilot and self-driving cars
57:33 – Privacy attacks
1:05:47 – Ownership of data
1:22:13 – Blockchain and cryptocurrency
1:32:13 – Program synthesis
1:44:57 – A journey from physics to computer science
1:56:03 – US and China
1:58:19 – Transformative moment
2:00:02 – Meaning of life

133 minutes
17
5 years ago

#94 – Ilya Sutskever: Deep Learning

Lex Fridman Podcast

Ilya Sutskever is the co-founder of OpenAI, one of the most cited computer scientists in history with over 165,000 citations, and, to me, one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world who I would rather talk to and brainstorm with about deep learning, intelligence, and life than Ilya, on and off the mic.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Ilya’s Twitter: https://twitter.com/ilyasut
Ilya’s Website: https://www.cs.toronto.edu/~ilya/

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:23 – AlexNet paper and the ImageNet moment
08:33 – Cost functions
13:39 – Recurrent neural networks
16:19 – Key ideas that led to the success of deep learning
19:57 – What’s harder to solve: language or vision?
29:35 – We’re massively underestimating deep learning
36:04 – Deep double descent
41:20 – Backpropagation
42:42 – Can neural networks be made to reason?
50:35 – Long-term memory
56:37 – Language models
1:00:35 – GPT-2
1:07:14 – Active learning
1:08:52 – Staged release of AI systems
1:13:41 – How to build AGI?
1:25:00 – Question to AGI
1:32:07 – Meaning of life

97 minutes
99+
5 years ago

#93 – Daphne Koller: Biomedicine and Machine Learning

Lex Fridman Podcast

Daphne Koller is a professor of computer science at Stanford University, a co-founder of Coursera with Andrew Ng, and the founder and CEO of insitro, a company at the intersection of machine learning and biomedicine.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Daphne’s Twitter: https://twitter.com/daphnekoller
Daphne’s Website: https://ai.stanford.edu/users/koller/index.html
Insitro: http://insitro.com

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:22 – Will we one day cure all disease?
06:31 – Longevity
10:16 – Role of machine learning in treating diseases
13:05 – A personal journey to medicine
16:25 – Insitro and disease-in-a-dish models
33:25 – What diseases can be helped with disease-in-a-dish approaches?
36:43 – Coursera and education
49:04 – Advice to people interested in AI
50:52 – Beautiful idea in deep learning
55:10 – Uncertainty in AI
58:29 – AGI and AI safety
1:06:52 – Are most people good?
1:09:04 – Meaning of life

72 minutes
10
5 years ago

#90 – Dmitry Korkin: Computational Biology of Coronavirus

Lex Fridman Podcast

Dmitry Korkin is a professor of bioinformatics and computational biology at Worcester Polytechnic Institute, where he specializes in the bioinformatics of complex disease, computational genomics, systems biology, and biomedical data analytics. I came across Dmitry’s work in February, when his group used the COVID-19 viral genome to reconstruct the 3D structure of its major viral proteins and their interactions with human proteins, in effect creating a structural genomics map of the coronavirus and making this data open and available to researchers everywhere. We talked about the biology of COVID-19, SARS, and viruses in general, and how computational methods can help us understand their structure and function in order to develop antiviral drugs and vaccines.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Dmitry’s Website: http://korkinlab.org/
Dmitry’s Twitter: https://twitter.com/dmkorkin
Dmitry’s Paper that we discuss: https://bit.ly/3eKghEM

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:33 – Viruses are terrifying and fascinating
06:02 – How hard is it to engineer a virus?
10:48 – What makes a virus contagious?
29:52 – Figuring out the function of a protein
53:27 – Functional regions of viral proteins
1:19:09 – Biology of a coronavirus treatment
1:34:46 – Is a virus alive?
1:37:05 – Epidemiological modeling
1:55:27 – Russia
2:02:31 – Science bobbleheads
2:06:31 – Meaning of life

129 minutes
18
5 years ago

#88 – Eric Weinstein: Geometric Unity and the Call for New Ideas, Leaders & Institutions

Lex Fridman Podcast

Eric Weinstein is a mathematician with a bold and piercing intelligence, unafraid to explore the biggest questions in the universe and shine a light on the darkest corners of our society. He is the host of The Portal podcast, as part of which he recently released his 2013 Oxford lecture on his theory of Geometric Unity, which is at the center of his lifelong effort to arrive at a theory of everything that unifies the fundamental laws of physics.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Eric’s Twitter: https://twitter.com/EricRWeinstein
Eric’s YouTube: https://www.youtube.com/ericweinsteinphd
The Portal podcast: https://podcasts.apple.com/us/podcast/the-portal/id1469999563
Graph, Wall, Tome wiki: https://theportal.wiki/wiki/Graph,_Wall,_Tome

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:08 – World War II and the Coronavirus Pandemic
14:03 – New leaders
31:18 – Hope for our time
34:23 – WHO
44:19 – Geometric unity
1:38:55 – We need to get off this planet
1:40:47 – Elon Musk
1:46:58 – Take Back MIT
2:15:31 – The time at Harvard
2:37:01 – The Portal
2:42:58 – Legacy

167 minutes
4
5 years ago

#87 – Richard Dawkins: Evolution, Intelligence, Simulation, and Memes

Lex Fridman Podcast

Richard Dawkins is an evolutionary biologist and the author of The Selfish Gene, The Blind Watchmaker, The God Delusion, The Magic of Reality, The Greatest Show on Earth, and his latest, Outgrowing God. He is the originator and popularizer of many fascinating ideas in evolutionary biology and science in general, including, funnily enough, the introduction of the word “meme” in his 1976 book The Selfish Gene, which, in the context of a gene-centered view of evolution, is an exceptionally powerful idea. He is outspoken, bold, and often fearless in his defense of science and reason, and in this way is one of the most influential thinkers of our time.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Richard’s Website: https://www.richarddawkins.net/
Richard’s Twitter: https://twitter.com/RichardDawkins
Richard’s Books:
– Selfish Gene: https://amzn.to/34tpHQy
– The Magic of Reality: https://amzn.to/3c0aqZQ
– The Blind Watchmaker: https://amzn.to/2RqV5tH
– The God Delusion: https://amzn.to/2JPrxlc
– Outgrowing God: https://amzn.to/3ebFess
– The Greatest Show on Earth: https://amzn.to/2Rp2j1h

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:31 – Intelligent life in the universe
05:03 – Engineering intelligence (are there shortcuts?)
07:06 – Is the evolutionary process efficient?
10:39 – Human brain and AGI
15:31 – Memes
26:37 – Does society need religion?
33:10 – Conspiracy theories
39:10 – Where do morals come from in humans?
46:10 – AI began with the ancient wish to forge the gods
49:18 – Simulation
56:58 – Books that influenced you
1:02:53 – Meaning of life

67 minutes
5
5 years ago

#86 – David Silver: AlphaGo, AlphaZero, and Deep Reinforcement Learning

Lex Fridman Podcast

David Silver leads the reinforcement learning research group at DeepMind. He was the lead researcher on AlphaGo and AlphaZero, co-lead on AlphaStar and MuZero, and has done a lot of other important work in reinforcement learning.

Support this podcast by signing up with these sponsors:
– MasterClass: https://masterclass.com/lex
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Reinforcement learning (book): https://amzn.to/2Jwp5zG

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
04:09 – First program
11:11 – AlphaGo
21:42 – Rules of the game of Go
25:37 – Reinforcement learning: personal journey
30:15 – What is reinforcement learning?
43:51 – AlphaGo (continued)
53:40 – Supervised learning and self-play in AlphaGo
1:06:12 – Lee Sedol’s retirement from Go
1:08:57 – Garry Kasparov
1:14:10 – AlphaZero and self-play
1:31:29 – Creativity in AlphaZero
1:35:21 – AlphaZero applications
1:37:59 – Reward functions
1:40:51 – Meaning of life

108 minutes
40
5 years ago

#83 – Nick Bostrom: Simulation and Superintelligence

Lex Fridman Podcast

Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, the ethics of human enhancement, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Nick’s website: https://nickbostrom.com/
Future of Humanity Institute:
– https://twitter.com/fhioxford
– https://www.fhi.ox.ac.uk/
Books:
– Superintelligence: https://amzn.to/2JckX83
Wikipedia:
– https://en.wikipedia.org/wiki/Simulation_hypothesis
– https://en.wikipedia.org/wiki/Principle_of_indifference
– https://en.wikipedia.org/wiki/Doomsday_argument
– https://en.wikipedia.org/wiki/Global_catastrophic_risk

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:48 – Simulation hypothesis and simulation argument
12:17 – Technologically mature civilizations
15:30 – Case 1: if something kills all possible civilizations
19:08 – Case 2: if we lose interest in creating simulations
22:03 – Consciousness
26:27 – Immersive worlds
28:50 – Experience machine
41:10 – Intelligence and consciousness
48:58 – Weighing probabilities of the simulation argument
1:01:43 – Elaborating on Joe Rogan conversation
1:05:53 – Doomsday argument and anthropic reasoning
1:23:02 – Elon Musk
1:25:26 – What’s outside the simulation?
1:29:52 – Superintelligence
1:47:27 – AGI utopia
1:52:41 – Meaning of life

117 minutes
5
5 years ago

#82 – Simon Sinek: Leadership, Hard Work, Optimism and the Infinite Game

Lex Fridman Podcast

Simon Sinek is the author of several books, including Start With Why, Leaders Eat Last, and his latest, The Infinite Game. He is one of the best communicators of what it takes to be a good leader, to inspire, and to build businesses that solve big, difficult challenges.

Support this podcast by signing up with these sponsors:
– MasterClass: https://masterclass.com/lex
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Simon’s Twitter: https://twitter.com/simonsinek
Simon’s Facebook: https://www.facebook.com/simonsinek
Simon’s Website: https://simonsinek.com/
Books:
– Infinite Game: https://amzn.to/2WxBH1i
– Leaders Eat Last: https://amzn.to/2xf70Ds
– Start with Why: https://amzn.to/2WxBH1i

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
0:00 – Introduction
3:50 – Meaning of life as an infinite game
10:13 – Optimism
13:30 – Mortality
17:52 – Hard work
26:38 – Elon Musk, Steve Jobs, and leadership

38 minutes
15
5 years ago

#81 – Anca Dragan: Human-Robot Interaction and Reward Engineering

Lex Fridman Podcast

Anca Dragan is a professor at Berkeley, working on human-robot interaction: algorithms that look beyond the robot’s function in isolation and generate robot behavior that accounts for interaction and coordination with human beings.

Support this podcast by supporting the sponsors and using the special code:
– Download Cash App on the App Store or Google Play & use code “LexPodcast”

EPISODE LINKS:
Anca’s Twitter: https://twitter.com/ancadianadragan
Anca’s Website: https://people.eecs.berkeley.edu/~anca/

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:26 – Interest in robotics
05:32 – Computer science
07:32 – Favorite robot
13:25 – How difficult is human-robot interaction?
32:01 – HRI application domains
34:24 – Optimizing the beliefs of humans
45:59 – Difficulty of driving when humans are involved
1:05:02 – Semi-autonomous driving
1:10:39 – How do we specify good rewards?
1:17:30 – Leaked information from human behavior
1:21:59 – Three laws of robotics
1:26:31 – Book recommendation
1:29:02 – If a doctor gave you 5 years to live…
1:32:48 – Small act of kindness
1:34:31 – Meaning of life

99 minutes
42
5 years ago