Duration: 5 days · 35 contact hours
Dates: 15–19 June 2026
Location: London, United Kingdom
Fee: £3,950 + VAT

Who it's for.

Software engineers ready to ship LLM features.

Backend engineers, ML engineers, and senior developers who build production systems and need to add generative AI capabilities responsibly.

What you'll build.

Day 1 · Foundations & prompt engineering

A practical mental model of how an LLM produces text — and the controls you actually have.

  • Tokenisation, context windows, attention — what changes when you change them
  • Decoding strategies: temperature, top-p, top-k, beam — when each matters
  • System / user / assistant message contracts; structured outputs
  • Few-shot, chain-of-thought, and prompt patterns that survive model updates
Lab: Build a JSON-strict structured-output classifier with explicit validation, retry logic, and a prompt that degrades gracefully across three model providers.
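
To give a flavour of the lab, here is a minimal sketch of the validate-and-retry loop in Python. The `call_model` stub, the label set, and the JSON schema are illustrative assumptions, not the course's fixed API:

```python
import json

ALLOWED_LABELS = {"bug", "feature", "question"}  # illustrative label set

def build_prompt(ticket: str) -> str:
    return (
        "Classify the support ticket below. Respond with ONLY a JSON object "
        'of the form {"label": "bug" | "feature" | "question", '
        '"confidence": <number between 0 and 1>}.\n\n'
        f"Ticket: {ticket}"
    )

def call_model(prompt: str) -> str:
    """Placeholder: swap in your provider SDK (OpenAI, Anthropic, local)."""
    raise NotImplementedError

def classify(ticket: str, max_retries: int = 3) -> dict:
    prompt = build_prompt(ticket)
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            data = None
        # Validate the contract explicitly rather than trusting the model.
        if (
            isinstance(data, dict)
            and data.get("label") in ALLOWED_LABELS
            and isinstance(data.get("confidence"), (int, float))
            and 0 <= data["confidence"] <= 1
        ):
            return data
        # Informed retry: show the model what it got wrong.
        prompt = build_prompt(ticket) + (
            f"\n\nYour previous reply was invalid: {raw!r}. "
            "Reply with the JSON object only."
        )
    raise RuntimeError(f"no schema-valid response after {max_retries} attempts")
```
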
Day 2 · Tools, function calling, and agent loops

Move beyond chatbots: have the model take actions in your systems, safely.

  • Function calling across providers (OpenAI, Anthropic, local) — common ground
  • Tool schemas, argument validation, sandboxing, and audit trails
  • Agent loop architectures: ReAct, plan-and-execute, when not to agentise
  • Cost, latency, and failure-mode analysis for multi-turn agents
Lab: Build a code-review agent that reads a Git diff, calls a linter and a test runner as tools, and produces a structured PR comment with citations to specific lines.
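
A sketch of the safe-dispatch pattern at the heart of this lab: every model-proposed tool call is validated against a declared schema before anything touches the shell. The registry design, and the choice of ruff and pytest as the linter and test runner, are illustrative assumptions:

```python
import json
import subprocess

# Tool registry with explicit argument schemas. ruff and pytest are
# illustrative tool choices, not a fixed course stack.
TOOLS = {
    "run_linter": {
        "args": {"path": str},
        "fn": lambda path: subprocess.run(
            ["ruff", "check", path], capture_output=True, text=True
        ).stdout,
    },
    "run_tests": {
        "args": {"target": str},
        "fn": lambda target: subprocess.run(
            ["pytest", target, "-q"], capture_output=True, text=True
        ).stdout,
    },
}

def dispatch(name: str, raw_args: str) -> str:
    """Validate and execute one tool call; every call leaves an audit line."""
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    try:
        args = json.loads(raw_args)
    except json.JSONDecodeError:
        return "error: arguments were not valid JSON"
    spec = TOOLS[name]
    if not isinstance(args, dict) or set(args) != set(spec["args"]):
        return f"error: expected exactly the arguments {sorted(spec['args'])}"
    for key, typ in spec["args"].items():
        if not isinstance(args[key], typ):
            return f"error: {key!r} must be a {typ.__name__}"
    print(f"AUDIT tool={name} args={args}")  # persist this, not just print it
    return spec["fn"](**args)
```
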
Day 3 · Retrieval-augmented generation in practice

The most common production pattern — done with the rigour it deserves.

  • Embedding models compared; when fine-tuned embeddings actually help
  • Chunking strategies: fixed, semantic, hierarchical — and how to evaluate them
  • Hybrid search (BM25 + dense), reranking, and metadata filtering
  • Citation discipline: forcing the model to ground answers in retrieved passages
Lab: Build a documentation Q&A RAG pipeline over a real open-source codebase. Measure recall@k on a hand-built golden set; iterate the chunker until you beat the baseline.
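
The recall@k measurement is simple enough to sketch in full. The golden-set shape and the retriever signature below are assumptions for illustration:

```python
def recall_at_k(golden: list[dict], retrieve, k: int = 5) -> float:
    """Fraction of golden questions whose known-relevant chunk id
    appears in the top-k retrieved results.

    golden:   [{"question": str, "relevant_id": str}, ...]
    retrieve: callable(question, k) -> list of chunk ids, best first
    """
    hits = sum(
        item["relevant_id"] in retrieve(item["question"], k) for item in golden
    )
    return hits / len(golden)

# Iterate the chunker until the candidate beats the baseline, e.g.:
# print(recall_at_k(golden_set, baseline_index.search, k=5))
# print(recall_at_k(golden_set, semantic_index.search, k=5))
```
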
Day 4 · Evaluation, observability, and cost control

The discipline that separates a demo from a system you can put your name on.

  • Building golden datasets that survive prompt and model changes
  • LLM-as-judge: when it works, when it lies, and how to keep it honest
  • Tracing and observability: spans, prompts, costs, latencies, and replay
  • Caching layers, prompt compression, and choosing the right model per task
Lab: Wire an evaluation harness around yesterday's RAG pipeline. Add tracing, cost-per-query metrics, and a CI check that fails the PR if quality drops below threshold.
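
A minimal sketch of the CI quality gate, assuming a `run_eval_suite()` harness like the one built earlier in the day; the threshold and result shape are illustrative:

```python
import sys
from dataclasses import dataclass

@dataclass
class EvalResult:
    passed: bool
    cost_usd: float

THRESHOLD = 0.85  # quality bar agreed with the team, not a universal number

def run_eval_suite() -> list[EvalResult]:
    """Placeholder: call the evaluation harness built in the morning session."""
    raise NotImplementedError

def main() -> None:
    results = run_eval_suite()
    score = sum(r.passed for r in results) / len(results)
    cost = sum(r.cost_usd for r in results) / len(results)
    print(f"eval score={score:.3f}  mean cost/query=${cost:.4f}")
    if score < THRESHOLD:
        print(f"FAIL: {score:.3f} is below the {THRESHOLD} threshold")
        sys.exit(1)  # non-zero exit fails the CI job, and with it the PR

if __name__ == "__main__":
    main()
```
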
Day 5 · Safety, governance, and capstone

The realities you cannot ship without addressing.

  • Prompt injection & indirect injection; defences that actually work
  • PII detection and redaction in inputs and outputs
  • Abuse detection, rate limiting, and per-tenant isolation
  • Mapping your system to EU AI Act risk categories — practical exercise
Capstone: Take one of your real workplace problems and present an end-to-end design: data flow, prompts, retrieval, evaluation, monitoring, and a one-page risk register.
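
As a taste of the Day 5 material, a minimal regex-based PII redaction pass; the patterns are illustrative placeholders for the layered detectors covered in class:

```python
import re

# Illustrative regexes only; production systems layer NER models and
# provider detectors on top of patterns like these.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_MOBILE": re.compile(r"\b(?:\+44\s?7|07)\d{3}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the text
    reaches a model prompt, a trace, or a log line."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Example using Ofcom's reserved fictional number range:
print(redact("Ring Jo on 07700 900123 or mail jo@example.com"))
```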

Shady Ali

Co-Director · Sadiqoon Technologies

AI architect with deep production experience designing and operating sovereign LLM infrastructure for regulated industries. Teaches the patterns he ships.

Inclusions.

  • 35 hours of live, instructor-led tuition
  • Full lab environment + cloud credits for the week
  • Printed bilingual workbook + digital code repository
  • Daily lunch & refreshments at the venue
  • Welcome dinner on Day 1; cohort dinner on Day 4
  • UK visa support letter on confirmed registration
  • 30 days of post-course Q&A access to the instructor
  • Sadiqoon Institute Certificate of Completion

Reserve your seat.

Submit the form below and we will respond within two working days with a pro forma invoice and the full registration pack.

Or email us directly: [email protected]