Introducing Multimodal Llama 3.2


Course Overview

What You'll Learn

  • Join our new short course, Introducing Multimodal Llama 3.2, and learn from Amit Sangani, Senior Director of AI Partner Engineering at Meta, all about the latest additions to the Llama 3.1 and 3.2 models, from custom tool calling to multimodality and the new Llama Stack.
  • Open models are a key building block of AI and a key enabler of AI research.
  • With Meta’s family of open models, anyone can download, customize, fine-tune, or build new applications on top of them, driving AI innovation.

The Llama model family now ranges from 1B parameters to the 405B foundation model, supporting a wide range of use cases and applications. In this course, you’ll learn about the new vision capabilities that Llama 3.2 brings to the Llama family, and how to combine them with tool calling and Llama Stack, an open-source orchestration layer for building on top of the Llama family of models.

In detail, you’ll:

1. Learn about the new models, how they were trained, their features, and how they fit into the Llama family.
2. Understand how to do multimodal prompting with Llama and work on advanced image-reasoning use cases such as interpreting warning lights on a car dashboard, adding up the totals of three restaurant receipts, grading written math homework, and more.
3. Learn the different roles (system, user, assistant, ipython) in the Llama 3.1 and 3.2 family and the prompt format that identifies those roles.
4. Understand how Llama uses the tiktoken tokenizer, and how its vocabulary has expanded to 128k tokens, improving encoding efficiency and enabling support for seven non-English languages.
5. Learn how to prompt Llama to call both built-in and custom tools, with examples for web search and solving math equations.
6. Learn about the Llama Stack API, a standardized interface for canonical toolchain components such as fine-tuning and synthetic data generation, used to customize Llama models and build agentic applications.
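To give a feel for the multimodal prompting covered in the course, here is a minimal sketch of a single-turn image-plus-text message. The "content parts" schema below follows the OpenAI-style convention that many Llama serving runtimes accept; the exact field names vary by library, and the image URL is a hypothetical placeholder.

```python
# Sketch of a multimodal chat message for a Llama 3.2 vision model.
# Assumes an OpenAI-style "content parts" schema; check your serving
# library's docs for the exact field names it expects.

def build_image_question(image_url: str, question: str) -> list[dict]:
    """Build a single-turn message list: one image plus a text question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": question},
            ],
        }
    ]

messages = build_image_question(
    "https://example.com/dashboard.jpg",  # hypothetical image URL
    "What does this warning light on the car dashboard mean?",
)
```

The same structure extends to the other image-reasoning use cases in the course, such as receipts or handwritten math, by swapping the image and the question.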
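The roles mentioned in point 3 appear directly in Llama 3's raw prompt format via special tokens. The sketch below assembles that format by hand so the role markers are visible; in practice you would normally rely on the tokenizer's built-in chat template rather than string formatting.

```python
# Minimal sketch of the Llama 3.1/3.2 chat prompt format, showing how
# roles (system, user, assistant, ipython) are delimited by special tokens.

def format_llama_prompt(messages: list[dict]) -> str:
    """Render messages (dicts with 'role' and 'content') into a raw
    Llama 3 prompt string, ending with an open assistant header so the
    model generates the reply next."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open: the model completes from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Add up these receipts: $12.50, $8.25, $30.00."},
])
```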
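Custom tool calling (point 5) typically works by having the model emit a JSON object naming a function and its arguments, which your application parses, executes, and feeds back (in the ipython role). The sketch below assumes that JSON convention; the `solve_linear` tool and the model output string are hypothetical examples, not a real transcript.

```python
import json

# Sketch of dispatching a custom tool call from a Llama model's JSON
# output. The tool and the model output below are hypothetical.

def solve_linear(a: float, b: float) -> float:
    """Hypothetical custom tool: solve a*x + b = 0 for x."""
    return -b / a

TOOLS = {"solve_linear": solve_linear}

# A tool call as the model might emit it (illustrative).
model_output = '{"name": "solve_linear", "parameters": {"a": 2, "b": -8}}'

call = json.loads(model_output)
result = TOOLS[call["name"]](**call["parameters"])
print(result)  # 4.0
```

The result would then be returned to the model in a follow-up message so it can compose its final answer.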
Start building exciting applications on Llama!

Course FAQs

Is this an accredited online course?

Accreditation for 'Introducing Multimodal Llama 3.2' is determined by the provider, DeepLearning.AI. For online college courses or degree programs, we strongly recommend you verify the accreditation status directly on the provider's website to ensure it meets your requirements.

Can this course be used for continuing education credits?

Many of the courses listed on our platform are suitable for professional continuing education. However, acceptance for credit varies by state and licensing board. Please confirm with your board and DeepLearning.AI that this specific course qualifies.

How do I enroll in this online school program?

To enroll, click the 'ENROLL NOW' button on this page. You will be taken to the official page for 'Introducing Multimodal Llama 3.2' on the DeepLearning.AI online class platform, where you can complete your registration.