Arxiv Insights
  • Videos: 13
  • Views: 2,334,691
AlphaFold and the Grand Challenge to solve protein folding
If you want to support this channel, here is my patreon link:
patreon.com/ArxivInsights --- You are amazing!! ;)
If you have questions you would like to discuss with me personally, you can book a 1-on-1 video call through Pensight: pensight.com/x/xander-steenbrugge
--------------------------------
AlphaFold is DeepMind's latest breakthrough addressing the protein folding problem. Using an advanced Deep Learning architecture that achieves end-to-end learning of protein structures, this work is arguably one of the most influential papers of this decade and is likely to spark enormous advances in computational biology and protein design. This video covers the entire architecture of the model as w...
Views: 60,856

Videos

The Molecular Basis of Life
18K views · 2 years ago
If you want to support this channel, here is my patreon link: patreon.com/ArxivInsights You are amazing!! ;) If you have questions you would like to discuss with me personally, you can book a 1-on-1 video call through Pensight: pensight.com/x/xander-steenbrugge Life is a molecular marvel of astounding complexity. In this video we take a dive into the world of molecular engines, proteins and the...
Editing Faces using Artificial Intelligence
371K views · 4 years ago
Link to Notebooks: drive.google.com/open?id=1LBWcmnUPoHDeaYlRiHokGyjywIdyhAQb Link to the StyleGAN paper: arxiv.org/abs/1812.04948 Link to GAN blogpost: hunterheidenreich.com/blog/gan-objective-functions/ If you want to support this channel, here is my patreon link: patreon.com/ArxivInsights You are amazing!! ;) If you have questions you would like to discuss with me personally, you can book a ...
'How neural networks learn' - Part III: Generalization and Overfitting
42K views · 5 years ago
In this third episode on "How neural nets learn" I dive into a bunch of academic research that tries to explain why neural networks generalize as well as they do. We first look at the remarkable capability of DNNs to simply memorize huge amounts of (random) data. We then see how this picture is more subtle when training on real data and finally dive into some beautiful analysis from the viewpo...
An introduction to Policy Gradient methods - Deep Reinforcement Learning
192K views · 5 years ago
In this episode I introduce Policy Gradient methods for Deep Reinforcement Learning. After a general overview, I dive into Proximal Policy Optimization: an algorithm designed at OpenAI that tries to find a balance between sample efficiency and code complexity. PPO is the algorithm used to train the OpenAI Five system and is also used in a wide range of other challenges like Atari and robotic co...
OpenAI Five: When AI beats professional gamers
25K views · 5 years ago
In this episode I discuss OpenAI Five, a Machine Learning system that was able to defeat professional gamers in the popular video game Dota 2: - How was the system built? - What does this mean for AI progress? - What real world applications can be built on this success? You can find all the OpenAI blogposts here: blog.openai.com/ If you enjoy my videos, all support is super welcome! www.patreon....
Reinforcement Learning with sparse rewards
115K views · 6 years ago
In this video I dive into three advanced papers that address the problem of the sparse reward setting in Deep Reinforcement Learning and pose interesting research directions for mastering unsupervised learning in autonomous agents. Papers discussed: Reinforcement Learning with Unsupervised Auxiliary Tasks - DeepMind: arxiv.org/abs/1611.05397 Curiosity Driven Exploration - UC Berkeley: arxiv.org/...
An introduction to Reinforcement Learning
644K views · 6 years ago
This episode gives a general introduction into the field of Reinforcement Learning: - High level description of the field - Policy gradients - Biggest challenges (sparse rewards, reward shaping, ...) This video forms the basis for a series on RL where I will dive much deeper into technical details of state-of-the-art methods for RL. Links: - "Pong from Pixels - Karpathy": karpathy.github.io/201...
Variational Autoencoders
481K views · 6 years ago
In this episode, we dive into Variational Autoencoders, a class of neural networks that can learn to compress data completely unsupervised! VAEs are a very hot topic right now in unsupervised modelling of latent variables and provide a unique solution to the curse of dimensionality. This video starts with a quick intro into normal autoencoders and then goes into VAEs and disentangled beta-VAE...
'How neural networks learn' - Part II: Adversarial Examples
54K views · 6 years ago
In this episode we dive into the world of adversarial examples: images specifically engineered to fool neural networks into making completely wrong decisions! Link to the first part of this series: ruclips.net/video/McgxRxi2Jqo/видео.html If you want to support this channel, here is my patreon link: patreon.com/ArxivInsights You are amazing!! ;) If you have questions you would like to discuss w...
'How neural networks learn' - Part I: Feature Visualization
105K views · 6 years ago
Interpreting what neural networks are doing is a tricky problem. In this video I dive into the approach of feature visualisation. From simple neuron excitation to the Deep Visualisation Toolbox and the Google DeepDream project, let's open up the black box! Links: Distill.pub post on Feature Visualisation: distill.pub/2017/feature-visualization/ Sander Dieleman post on music recommendation: bena...
Why humans learn so much faster than AI
49K views · 6 years ago
- Link to edited game versions: rach0012.github.io/humanRL_website/ - Link to the Paper: openreview.net/pdf?id=Hk91SGWR- "Why are humans such incredibly fast learners?" This is the core question of this paper. By leveraging powerful prior knowledge about how the world works, humans are able to quickly figure out efficient strategies in new and unseen environments. Current state-of-the-art Reinf...
AlphaGo - How AI mastered the hardest boardgame in history
179K views · 6 years ago
In this episode I dive into the technical details of the AlphaGo Zero paper by Google DeepMind. This AI system uses Reinforcement Learning to beat the world's Go champion using only self-play, a remarkable display of clever engineering on the path to stronger AI systems. DeepMind Blogpost: deepmind.com/blog/alphago-zero-learning-scratch/ AlphaGo Zero paper: storage.googleapis.com/deepmind-media...

Comments

  • @soundninja99
    @soundninja99 2 days ago

    I wanna try pretraining the RL model with supervised learning to see if it can circumvent some of the problems with reward shaping

  • @khansa1436
    @khansa1436 3 days ago

    I'm glad I watched this video

  • @HarutakaShimizu
    @HarutakaShimizu 4 days ago

    Wow, this was a very clearly explained video, thanks!

  • @AryanMathur-gh6df
    @AryanMathur-gh6df 13 days ago

    Thank you so much for this video, helped a lot

  • @sancelot88
    @sancelot88 16 days ago

    You explain something you have mastered. However, in order for other people to understand, you are speaking too fast. And it is even more difficult to understand when English is not your native language.

  • @husseinalmansory7370
    @husseinalmansory7370 26 days ago

    I think without knowing the math, you will be lost at sea

  • @OriginalJetForMe
    @OriginalJetForMe 27 days ago

    You should watch the section on dangers and politics now, six years later. I’d be curious to know your opinions now. 😂

  • @mister_meatloaf
    @mister_meatloaf 1 month ago

    This is brilliant. Thank you.

  • @yinghaohu8784
    @yinghaohu8784 1 month ago

    very good explanations

  • @luxliquidlumenvideoproduct5425
    @luxliquidlumenvideoproduct5425 1 month ago

    One must stress what you say at the end of the video at 28:20: although AlphaFold 2.0 can predict the native conformation of an amino acid sequence, there are other contributing factors, and the algorithm isn't able to answer why, nor how, proteins find their native state out of the vast combinatorial complexity of possible conformations. Levinthal's Paradox.

  • @anishahandique4815
    @anishahandique4815 1 month ago

    After going through most of the RUclips videos on this topic, this one was one of the best of all. Very clear and crisp explanation. Thank you ❤

  • @muhammadhelmy5575
    @muhammadhelmy5575 1 month ago

    4:00

  • @tugrulz
    @tugrulz 2 months ago

    subscribed

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 2 months ago

    1:00

  • @bishnuprasadnayak9520
    @bishnuprasadnayak9520 2 months ago

    Amazing

  • @conlanrios
    @conlanrios 3 months ago

    Great breakdown and links for additional resources

  • @ViewsfromVick
    @ViewsfromVick 3 months ago

    Bro! you were soo ahead of your time! Like Scooby Doo

  • @teegeevee42
    @teegeevee42 3 months ago

    This is so good. Thank you!

  • @noahgsolomon
    @noahgsolomon 3 months ago

    GOAT

  • @lamborghinicentenario2497
    @lamborghinicentenario2497 3 months ago

    12:28 what did you use to connect the machine learning to a 3d model?

  • @bikrammajhi3020
    @bikrammajhi3020 3 months ago

    This is gold!!

  • @azizbekibnhamid642
    @azizbekibnhamid642 3 months ago

    Great work

  • @iwanttobreakfree701
    @iwanttobreakfree701 3 months ago

    6 years ago, and I now use this video as guidance for understanding StableDiffusion

    • @commenterdek3241
      @commenterdek3241 2 months ago

      Can you help me out as well? I have so many questions but no one to answer them.

  • @zzewt
    @zzewt 4 months ago

    This is cool, but after the third random jumpscare sound I couldn't pay attention to what you were saying--all I could think about was when the next one would be. Gave up halfway through since it was stressing me out

  • @sELFhATINGiNDIAN
    @sELFhATINGiNDIAN 4 months ago

    this guy too handsome, italian hands

  • @BooleanDisorder
    @BooleanDisorder 4 months ago

    Rest in peace Tishby

  • @Matthew8473
    @Matthew8473 4 months ago

    This is a marvel. I read a book with similar content, and it was a marvel to behold. "The Art of Saying No: Mastering Boundaries for a Fulfilling Life" by Samuel Dawn

  • @LilliHerveau
    @LilliHerveau 4 months ago

    Feels like beta should be decreased as training progresses, just as the learning rate decreases too. Sounds like hyperparameter tuning though
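
    In a beta-VAE, beta is the weight on the KL term of the loss, so the suggestion amounts to a KL-weight schedule. Below is a minimal sketch of the linear annealing idea; the start/end values and step count are illustrative assumptions, not from the video:

        def beta_schedule(step, total_steps, beta_start=4.0, beta_end=1.0):
            # Linearly anneal the KL weight "beta" downward over training,
            # as the comment proposes; the concrete values are assumptions.
            frac = min(step / total_steps, 1.0)
            return beta_start + frac * (beta_end - beta_start)

        # beta falls from 4.0 toward 1.0 as training progresses
        for step in (0, 5000, 10000):
            print(step, beta_schedule(step, total_steps=10000))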

  • @NoobsDeSroobs
    @NoobsDeSroobs 4 months ago

    Figuratively exploded*

  • @LuisFernandoGaido
    @LuisFernandoGaido 5 months ago

    Five years later, and RL is still a dream. Nothing was really solved in the real world. I think there are practical areas of AI better than that.

  • @p4k7
    @p4k7 5 months ago

    Great video, and the algorithm is finally recognizing it! Come back and produce more videos?

  • @user-xz6ld7nl2l
    @user-xz6ld7nl2l 5 months ago

    This kind of well-articulated explanation of research is a real service to the ML community. Thanks for sharing this.

  • @obensustam3574
    @obensustam3574 5 months ago

    Very good video

  • @erickgomez7775
    @erickgomez7775 5 months ago

    If you don't understand this explanation, the fault is on you.

  • @SurferDudex99
    @SurferDudex99 5 months ago

    Lmao, this must be a joke. Anyone who supports this theory has no understanding of the exponential nature of how AI learns.

  • @alaad1009
    @alaad1009 6 months ago

    Excellent video

  • @infoman6500
    @infoman6500 6 months ago

    Very interesting. It looks like Nature is alive, very much alive.

  • @infoman6500
    @infoman6500 6 months ago

    Glad to see that the human biological neural network is still much more efficient than machines with artificial neural networks.

  • @infoman6500
    @infoman6500 6 months ago

    Excellent educational video on artificial and deep neural network learning.

  • @infoman6500
    @infoman6500 6 months ago

    Excellent video education on bio-molecular technology.

  • @alexanderkurz2409
    @alexanderkurz2409 6 months ago

    Another amazing video ... thanks ... any chance of some new videos coming out on recent papers?

  • @alexanderkurz2409
    @alexanderkurz2409 6 months ago

    5:03 "to test the presence and influence of different kinds of human priors" ... this is pretty cool ...

  • @alexanderkurz2409
    @alexanderkurz2409 6 months ago

    3:12 This reminds me of Chomsky's critique of AI and LLMs. Any comments?

  • @yonistoller1
    @yonistoller1 6 months ago

    Thanks for sharing this! I may be misunderstanding something, but it seems like there might be a mistake in the description. Specifically, the claim at 12:50 that "this is the only region where the unclipped part... has a lower value than the clipped version". I think this claim might be wrong, because there could be another case where the unclipped version would be selected: for example, if the ratio is e.g. 0.5 (and we assume epsilon is 0.2), the ratio would be smaller than the clipped version (which would be 0.8), and it would be selected. Is that not the case?
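
    A minimal numeric check of the commenter's example against the PPO clipped surrogate, min(r * A, clip(r, 1 - eps, 1 + eps) * A); the positive advantage A = 1.0 is an assumed value for illustration:

        import numpy as np

        def ppo_clipped_objective(ratio, advantage, eps=0.2):
            # PPO's clipped surrogate objective:
            # min(r * A, clip(r, 1 - eps, 1 + eps) * A)
            unclipped = ratio * advantage
            clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
            return np.minimum(unclipped, clipped)

        # Commenter's case: ratio = 0.5, eps = 0.2, positive advantage.
        # Unclipped gives 0.5, clipped gives 0.8, so the min does select
        # the unclipped term, which supports the comment's point.
        print(ppo_clipped_objective(0.5, 1.0))  # -> 0.5
        print(ppo_clipped_objective(1.5, 1.0))  # -> 1.2 (clip active here)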

  • @moozzzmann
    @moozzzmann 6 months ago

    Great video!! I just watched 4 hours' worth of lectures in which nothing really became clear to me, and while watching this video everything clicked! Will definitely be checking out your other work

  • @bowenjing3674
    @bowenjing3674 6 months ago

    I didn't forget to subscribe, but you seem to have forgotten to keep updating

  • @hosseinaboutalebi9998
    @hosseinaboutalebi9998 6 months ago

    Why have you stopped making these wonderful tutorials? I wish you had continued your channel.

  • @kaiz6997
    @kaiz6997 7 months ago

    Extremely amazing, thanks for creating this incredible video

  • @negatopoji7
    @negatopoji7 7 months ago

    The term "activation" in the context of neural networks generally refers to the output of a neuron, regardless of whether the network is recognizing a specific pattern. The activation is indeed a numerical value that represents the result of applying the neuron's activation function to the weighted sum of its inputs. Just posting here what ChatGPT told me, because the definition of "activation" in this video confused me
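
    A minimal sketch of that definition for a single neuron; the choice of ReLU as the activation function and the numeric values are illustrative assumptions:

        import numpy as np

        def neuron_activation(x, w, b):
            # The "activation" is the output of the activation function
            # (ReLU here) applied to the weighted input sum plus bias.
            return np.maximum(0.0, np.dot(w, x) + b)

        x = np.array([0.5, -1.0, 2.0])  # inputs
        w = np.array([0.1, 0.4, -0.2])  # weights
        b = 0.05                        # bias
        print(neuron_activation(x, w, b))  # -> 0.0 (ReLU clips the negative sum)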

  • @davidenders9107
    @davidenders9107 7 months ago

    Thank you! This was comprehensive and comprehensible.