Course onboarding
Executive Summary
This program begins with immediate, hands-on engagement: participants install and run Large Language Models (LLMs) locally using Ollama, skipping lengthy introductions to deliver value from day one.
Key demonstration: Build a free, local AI language tutor (e.g., Spanish/French) replicating commercial apps — showcasing how open-source LLMs can deliver real business value without SaaS costs.
Strategic Implications
Democratizing AI
- Local deployment removes cloud dependencies and licensing fees.
- Enables rapid prototyping and internal innovation.
Cost Efficiency
- Open-source models reduce the recurring costs of paid APIs.
Model Selection as Core Skill
- Experiment with multiple models (Llama, Qwen, Phi, Gemma) to identify task-specific performance and ROI.
Hardware Performance Awareness
- Apple M1 vs. PC emulation impacts model speed and informs infrastructure investment decisions.
Program Overview (8 Weeks)
Week 1
- Explore frontier models (GPT-4o, Claude 3.5, etc.).
- Build a first commercial project using their APIs.
Week 2
- Rapid UI prototyping with Gradio.
- Multimodal assistants: text, audio, image.
Week 3
- Open-source deep dive: Hugging Face pipelines and advanced APIs.
Week 4
- Model benchmarking and selection frameworks.
- Case study: Python-to-C++ translation for speed gains (60,000x).
Week 5
- Build Retrieval-Augmented Generation (RAG) pipelines for internal data Q&A.
Weeks 6–8
- Capstone: agentic AI solution with multi-agent collaboration, internet search, and push notifications.
Immediate Hands-On Setup
Install & Run Ollama
- Install Ollama (Windows/Mac).
- Run Llama 3.2 or a similar model locally.
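Once Ollama is installed, it can be called from Python without any extra packages. The sketch below assumes Ollama's default local endpoint (port 11434) and that the model has already been pulled (e.g. `ollama pull llama3.2`); the helper names are illustrative, not from the course materials.

```python
# Minimal sketch: single-turn chat against a local Ollama server (stdlib only).
# Assumes Ollama is running on its default port and `llama3.2` is pulled.
import json
import urllib.request

def build_chat_request(prompt, model="llama3.2"):
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete response, not a token stream
    }

def ask_ollama(prompt, model="llama3.2", host="http://localhost:11434"):
    """Send a single-turn chat request and return the model's reply text."""
    data = json.dumps(build_chat_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/chat", data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

For example, `ask_ollama("Teach me three Spanish greetings.")` is already enough to start prototyping the language-tutor project below, entirely free and offline.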
First Project
- Build a language tutor in your chosen language.
- Replicate commercial features (chat, flashcards), entirely free and local.
Commercial Value
- Cost Savings: Replace SaaS subscriptions with in-house capabilities.
- Rapid Prototyping: Validate ideas quickly without cloud latency.
- Internal AI Literacy: Build cross-functional skills for engineering, product, and leadership teams.
Environment Setup
Option 1: Anaconda (Recommended)
- Creates an isolated environment, ensuring compatibility with course demos.
conda env create -f environment.yml
conda activate llms
Option 2: Virtualenv (Lightweight Alternative)
- Use python -m venv venv, activate it, then pip install -r requirements.txt.
API Keys & Secrets
- Obtain an OpenAI API key (for frontier models).
- Store it in a .env file at the project root:
OPENAI_API_KEY=sk-xxxxx
- Ensure .env is not committed to Git (.gitignore enabled).
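Course notebooks typically read this key with the python-dotenv package (`load_dotenv()`); as a stand-in, the stdlib-only sketch below shows what that loading step amounts to, so the key never appears in code or in Git.

```python
# Stdlib-only sketch of what python-dotenv's load_dotenv() does:
# read KEY=value lines from .env into os.environ, skipping comments/blanks.
import os
from pathlib import Path

def load_env(path=".env"):
    """Load KEY=value pairs from a .env file into the process environment."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comment lines
        key, _, value = line.partition("=")
        # setdefault: never overwrite a key already set in the real environment
        os.environ.setdefault(key.strip(), value.strip())
```

After `load_env()`, code retrieves the key with `os.getenv("OPENAI_API_KEY")` instead of hard-coding it.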
First LLM Project: Website Summarizer
Goal: Build a “Reader’s Digest” web browser:
- Scrape any website.
- Strip irrelevant elements (scripts, styles).
- Generate a concise Markdown summary via GPT or Ollama.
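The three steps above can be sketched with the standard library alone. The course typically does the scraping with requests + BeautifulSoup and the summary via an OpenAI or Ollama client, so treat this as a minimal stand-in; all function names here are illustrative.

```python
# Sketch of the summarizer pipeline: fetch, strip scripts/styles/tags,
# then build the chat-format prompt a model would summarize.
import re
import urllib.request

def clean_html(html):
    """Strip <script>/<style> blocks and all tags, collapsing whitespace."""
    html = re.sub(r"(?is)<(script|style)\b.*?</\1\s*>", " ", html)
    text = re.sub(r"(?s)<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()

def fetch_page_text(url):
    """Download a page and return its visible text."""
    raw = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    return clean_html(raw)

def summary_messages(page_text, limit=4000):
    """Chat-format messages asking a model for a concise Markdown summary."""
    return [
        {"role": "system",
         "content": "You summarize web pages as concise Markdown."},
        {"role": "user",
         "content": "Summarize this page:\n\n" + page_text[:limit]},
    ]
```

The resulting messages list can be sent unchanged to GPT-4o-mini or to a local Llama 3.2 via Ollama.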
Business Use Cases:
- Summarize industry news.
- Competitive intelligence from public sites.
- Financial report condensing.
- Resume summarization in HR.
Technical Concepts Introduced
- System vs. User Prompts: the foundation of LLM prompting.
- Messages Object (Chat API format): widely adopted across providers.
- Cost Trade-offs: GPT-4o-mini (cheap) vs. Llama 3.2 (free, local).
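These three concepts meet in one data structure: the same messages payload drives both the paid frontier model and the free local one, with only the model name (and endpoint) changing. A minimal sketch, using the model names mentioned above:

```python
# Sketch of the provider-agnostic chat "messages" format.
def chat_payload(system_prompt, user_prompt, model):
    """Build a chat-completion request body in the common messages format."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},  # sets behavior/tone
            {"role": "user", "content": user_prompt},      # the actual request
        ],
    }

paid = chat_payload("Reply in Markdown.", "Summarize our onboarding.", "gpt-4o-mini")
free = chat_payload("Reply in Markdown.", "Summarize our onboarding.", "llama3.2")
```

Swapping `paid` for `free` is the whole cost trade-off: identical prompts, different model and price.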
Upcoming Milestones
- Day 2: Replace OpenAI API calls with local Ollama inference.
- Day 3: Benchmark 6 frontier models for speed and quality.
- Week 2: Prototype multimodal UI (text/audio/image).
- Week 5: RAG pipeline for proprietary data.
- Week 8: Deploy agentic AI capstone project.
Action Items
- Clone the course repository.
- Install Ollama and run llama3.2.
- Create .env with API keys.
- Launch JupyterLab and verify the environment.
- Complete the first summarizer project.
- Prepare for rapid prototyping in Week 2.