
OpenAI CLIP | Machine Learning Coding Series

Become an AI Epiphany patron: https://www.patreon.com/theaiepiphany

Join our Discord community: https://discord.gg/peBrCpheKE

Kicking off a series of videos where I’ll be going through the actual code of many of the papers I’ve covered over the last few years!

In this video I do a code walkthrough of OpenAI’s CLIP model from the “Learning Transferable Visual Models From Natural Language Supervision” paper.
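At its core, CLIP scores an image against a set of text prompts by taking the cosine similarity between their embeddings, scaling by a learned temperature, and applying a softmax. Below is a toy pure-Python sketch of just that scoring step; the embedding vectors are made up for illustration and stand in for the outputs of CLIP's image and text encoders.

```python
import math

# Toy sketch of CLIP's zero-shot scoring step. The vectors below are
# invented stand-ins for encoder outputs; real CLIP embeddings are
# 512-dimensional (ViT-B/32) and come from the image/text towers.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

image_emb = [0.9, 0.1, 0.2]          # stand-in image embedding
text_embs = {                        # stand-in prompt embeddings
    "a photo of a dog": [0.8, 0.2, 0.1],
    "a photo of a cat": [0.1, 0.9, 0.3],
}

# CLIP multiplies the similarities by a learned logit scale
# (temperature) before the softmax.
logit_scale = 100.0
logits = [logit_scale * cosine(image_emb, v) for v in text_embs.values()]
probs = softmax(logits)
best = max(zip(text_embs, probs), key=lambda p: p[1])
print(best[0])  # prompt with the highest probability
```

The large logit scale makes the softmax sharply peaked, which is why zero-shot CLIP predictions often come out near-certain.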

Let me know what you’d like me to cover next!

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

GitHub: https://github.com/openai/CLIP
Paper: https://arxiv.org/abs/2103.00020

Unicode: https://dmitripavlutin.com/what-every-javascript-developer-should-know-about-unicode/

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

Timetable:
00:00:00 Intro
00:02:00 High-level overview: Interacting with CLIP
00:26:11 High-level overview: Prompt engineering for ImageNet
00:40:25 Deep dive starts: vocabulary and byte-pair encoding
00:49:00 Vision Transformer & Text Transformer explained
01:02:00 Tokenization walkthrough
01:09:25 Encoding the image
01:15:15 Encoding the text
01:23:15 Learning a linear probe
01:27:00 Tokenization of the (brain emoji)
01:29:56 Outro
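The byte-pair-encoding segment of the deep dive can be summarized in a few lines: BPE learns a vocabulary by repeatedly merging the most frequent adjacent symbol pair. Here is a minimal pure-Python sketch of that merge-learning loop; the toy corpus and merge count are invented for illustration, and CLIP's actual tokenizer operates on bytes with a much larger learned vocabulary.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Toy byte-pair-encoding trainer: learn merge rules from a word list."""
    # Represent each word as a tuple of single-character symbols.
    corpus = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in corpus.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the winning merge everywhere in the corpus.
        new_corpus = Counter()
        for word, freq in corpus.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_corpus[tuple(merged)] += freq
        corpus = new_corpus
    return merges

merges = learn_bpe(["low", "low", "lower", "newest", "newest"], num_merges=3)
print(merges)
```

Frequent substrings ("lo", then "low") get merged into single tokens first, which is why common words end up as one token while rare words split into several.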

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ BECOME A PATREON OF THE AI EPIPHANY

If these videos, GitHub projects, and blogs help you, consider supporting me on Patreon!

Huge thank you to these AI Epiphany patrons:
Eli Mahler
Kevin Stone
Petar Veličković

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬


#CLIP #contrastive #codewalkthrough