LlamaIndex Webinar: Efficient Parallel Function Calling Agents with LLMCompiler
LlamaIndex
Official YouTube Channel for LlamaIndex - the platform to build document agents
Video Description
LLMs are great at reasoning and taking actions, but previous frameworks for agentic reasoning (e.g., ReAct) focused primarily on sequential reasoning, leading to higher latency and cost, and even poorer performance due to the lack of long-term planning. LLMCompiler is a new framework by Kim et al. that introduces a compiler for multi-function calling. Given a task, the framework plans out a DAG. This planning both enables long-term thinking (which boosts performance) and determines which steps can be massively parallelized. We're excited to host paper co-authors Sehoon Kim and Amir Gholami to present this paper and discuss the future of agents.

LLMCompiler paper: https://arxiv.org/pdf/2312.04511.pdf
LlamaPack: https://llamahub.ai/l/llama_packs-agents-llm_compiler?from=llama_packs
Notebook: https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/agents/llm_compiler/llm_compiler.ipynb

Timeline:
00:00-34:30 - LLMCompiler Presentation
34:30-37:50 - Short LlamaIndex + LLMCompiler demo
37:50 - Q&A
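The core idea described above (plan a DAG of function calls, then run independent steps in parallel) can be sketched in a few lines. This is a minimal illustration only, not the paper's or LlamaPack's actual implementation; the `run_dag` helper, its task/dependency dictionaries, and the level-by-level scheduler loop are all invented here for clarity:

```python
# Minimal sketch of DAG-scheduled parallel function calling (illustrative,
# not the LLMCompiler implementation). Tasks whose dependencies are all
# satisfied are dispatched together; dependent tasks wait for their inputs.
from concurrent.futures import ThreadPoolExecutor

def run_dag(tasks, deps):
    """tasks: {name: fn(results) -> value}; deps: {name: [prerequisite names]}.
    Returns {name: value} after executing the whole graph."""
    results = {}
    remaining = set(tasks)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # Every task whose prerequisites are done can run in parallel now.
            ready = [t for t in remaining
                     if all(d in results for d in deps.get(t, []))]
            if not ready:
                raise ValueError("cycle detected in task graph")
            futures = {t: pool.submit(tasks[t], dict(results)) for t in ready}
            for t, fut in futures.items():
                results[t] = fut.result()
            remaining -= set(ready)
    return results

# Example: two independent "tool calls" run concurrently, then a join step.
tasks = {
    "search_a": lambda r: 1,
    "search_b": lambda r: 2,
    "combine": lambda r: r["search_a"] + r["search_b"],
}
deps = {"combine": ["search_a", "search_b"]}
print(run_dag(tasks, deps))
```

In a real agent, each task function would be an LLM-planned tool invocation; the point is that the planner's DAG tells the executor which calls it may issue concurrently, which is where the latency savings over sequential ReAct-style loops come from.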