LIVEAI Bootcamps · May 2026 · 🇫🇷 CET
Academy · 6-week cohort · Live Q&A · Replays · Templates · 300+ students · 4.7/5
FREE · NEXT COHORT OPENS MAY

HUGGING FACE TRAINING: MASTER AI WITHOUT STARTING FROM SCRATCH

Hack'celeration's Hugging Face training teaches you how to use the largest open-source machine learning platform. Want to integrate AI into your projects without spending months training models? That's exactly what we'll cover together.

Concretely, we'll explore the Models Hub to find and use pre-trained models, master the Transformers library and pipelines, do fine-tuning to adapt models to your needs, deploy demos with Spaces and Gradio, and connect the Inference API to your applications.

Whether you're a developer discovering ML, a data scientist wanting to accelerate your projects, or a product team wanting to add AI without building everything from scratch — this training is for you.

Our approach: 100% practical. No theoretical slides about the history of transformers. You code, you test, you deploy. By the end, you'll be able to use Hugging Face autonomously in your real projects.

300+ students trained
★★★★★ 4.7/5 satisfaction
Hack'celeration Academy

Start learning for free.

✓ 6 weeks · ✓ replays · ✓ live Q&A
Hugging Face Training — live session extract.
★★★★★ 4.7/5 · 300+ students
Format
6 weeks
Self-paced + 1h live Q&A weekly
Modules
06
DISCOVER THE HUGGING FACE ECOSYSTEM · MASTER TRANSFORMERS AND PIPELINES · WORK WITH DATASETS · FINE-TUNE A MODEL · DEPLOY WITH SPACES AND THE INFERENCE API · PRACTICAL CASES AND INTEGRATIONS
Price
FREE
Preview cohort · no commitment
For
Builders
No-code creators & low-code devs
Why this training

Why take a Hugging Face training?

Because Hugging Face has democratized access to AI. Instead of spending weeks training models, you can use state-of-the-art models in just a few lines of code.

The problem? The platform is massive. Over 500,000 models, dozens of libraries, dense technical documentation. Without a method, you waste time searching for the right model, struggle with tokenizers, and don't know how to go from "it works locally" to "it runs in prod".

Here's what you'll master:

  • Navigate the Models Hub: You'll know how to find the right model for your use case (NLP, vision, audio), understand metrics, and avoid abandoned models.
  • Use Transformers and pipelines: You'll learn to load a model, run it with pipelines, and understand tokenizers to adapt inputs.
  • Fine-tune a model: You'll adapt a pre-trained model to your own data with the Trainer API, without needing a GPU cluster.
  • Deploy with Spaces and the Inference API: You'll create demos with Gradio and Spaces, and connect the API to your applications.
  • Manage datasets: You'll use the Datasets library to load, transform, and prepare your data efficiently.
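To give a feel for the "few lines of code" promise, here's a minimal sketch using the Transformers pipeline API. `top_label` is a small helper we added for illustration, and the default sentiment model downloads on the first call, so running the demo needs network access:

```python
def top_label(results):
    """Pick the highest-scoring label from a pipeline's list of predictions."""
    return max(results, key=lambda r: r["score"])["label"]

def classify(text):
    # Local import: transformers is a heavy dependency, and the default
    # English sentiment model is downloaded on the first call.
    from transformers import pipeline
    classifier = pipeline("sentiment-analysis")
    return top_label(classifier(text))

# classify("Hugging Face makes state-of-the-art models easy to use.")
# → typically "POSITIVE" with the default English sentiment model
```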

 

Whether you're starting from scratch or have already tinkered with Hugging Face, we give you the right reflexes to use the platform efficiently and avoid common pitfalls.

Curriculum

What you'll learn in our Hugging Face training

06 Modules · curriculum
01

MODULE 1: DISCOVER THE HUGGING FACE ECOSYSTEM

We start by understanding how Hugging Face is organized. Not just "it's a model platform", but really how everything fits together.

You'll explore the Models Hub: how to search for a model, read a model card, understand metrics (downloads, likes, benchmarks), and identify if a model is maintained or abandoned.

We also see the different libraries: Transformers for language and vision models, Datasets for data, Diffusers for image generation, and how they integrate together.

You create your account, configure your environment (access tokens, local cache), and run your first model in a few lines.

At the end of this module, you know how to navigate the ecosystem and you've run an NLP model and a vision model.

02

MODULE 2: MASTER TRANSFORMERS AND PIPELINES

We dive into the heart of Hugging Face: the Transformers library. It's what allows you to use all these models easily.

You'll understand pipelines: high-level abstractions that let you do classification, NER, translation, Q&A, or text generation in a single line of code. We see how to use them, configure them, and understand their limits.

We then look under the hood: tokenizers. You learn how text is transformed into tokens, why it matters, and how to handle tricky cases (long texts, less common languages).

You also discover how to load a model manually with AutoModel and AutoTokenizer, to have more control when pipelines aren't enough.

At the end of this module, you know how to use pipelines for 90% of cases, and you understand how to go further when needed.
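As an illustration of the long-text case, here's a hedged sketch: `chunk_ids` is our own helper (not a Transformers API) that windows a token-id sequence, and the model name is just a common default:

```python
def chunk_ids(token_ids, max_len, stride=0):
    """Split a long token-id sequence into windows of max_len tokens,
    overlapping by `stride` tokens so context isn't cut mid-sentence."""
    assert 0 <= stride < max_len
    step = max_len - stride
    return [token_ids[i:i + max_len] for i in range(0, len(token_ids), step)]

def encode_long(text, max_len=512):
    # Local import: heavy dependency; the tokenizer downloads on first use.
    from transformers import AutoTokenizer
    tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    ids = tok(text, add_special_tokens=False)["input_ids"]
    return chunk_ids(ids, max_len, stride=64)
```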

03

MODULE 3: WORK WITH DATASETS

Models are great, but without clean data, you can't do anything. We see how to use the Datasets library to manage your data efficiently.

You learn to load datasets from the Hub (there are thousands), but especially to load your own data: CSV, JSON, local files, databases.

We see transformations: filter, map, shuffle, split into train/test. All optimized (lazy loading, cache, Arrow format) to handle millions of rows without exploding your RAM.

You also discover how to prepare your data for fine-tuning: batch tokenization, padding, truncation, and creating DataLoaders compatible with PyTorch.

At the end of this module, you know how to prepare any dataset for training or inference, properly and efficiently.
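The padding step can be sketched without the library at all: `pad_batch` below mirrors what a data collator does, and `load_and_split` shows the shape of a typical CSV flow (the file name is a hypothetical example):

```python
def pad_batch(sequences, pad_id=0):
    """Right-pad variable-length token sequences to the batch's max length."""
    width = max(len(s) for s in sequences)
    return [s + [pad_id] * (width - len(s)) for s in sequences]

def load_and_split(path="tickets.csv"):
    # Local import: heavy dependency. The CSV path is a hypothetical example.
    from datasets import load_dataset
    ds = load_dataset("csv", data_files=path)["train"]
    ds = ds.filter(lambda row: bool(row["text"]))       # drop empty rows
    return ds.train_test_split(test_size=0.2, seed=42)  # reproducible split
```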

04

MODULE 4: FINE-TUNE A MODEL

This is where it gets powerful. You take a pre-trained model and adapt it to your specific use case.

We use the Hugging Face Trainer API: you configure your TrainingArguments (learning rate, batch size, epochs), load your model and data, and launch training.

You learn best practices: how to choose a base model, how much data you need, how to avoid overfitting, and how to evaluate your model with the right metrics.

We also see options for training without a local GPU: Google Colab, Hugging Face AutoTrain, and cloud solutions. Because not everyone has an RTX 4090 under their desk.

You do a complete fine-tuning: text classification on your own categories. At the end of this module, you have a custom model running that you can reuse.
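A sketch of what that Trainer setup looks like. The hyperparameter values are illustrative starting points, not recommendations, and `accuracy` is a hand-rolled metric (in practice you'd often reach for the `evaluate` library):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match their label."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

def build_trainer(model, train_ds, eval_ds):
    # Local import: heavy dependency.
    from transformers import Trainer, TrainingArguments
    args = TrainingArguments(
        output_dir="my-finetune",    # checkpoints land here
        learning_rate=2e-5,          # common starting point for BERT-style models
        per_device_train_batch_size=16,
        num_train_epochs=3,
    )
    return Trainer(model=model, args=args,
                   train_dataset=train_ds, eval_dataset=eval_ds)

# trainer = build_trainer(model, train_ds, eval_ds); trainer.train()
```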

05

MODULE 5: DEPLOY WITH SPACES AND THE INFERENCE API

A model running locally is good. A model accessible in prod is better. We see how to deploy with the Hugging Face ecosystem.

You create a Space with Gradio: a web interface to test your model, shareable with one click. We see how to customize the interface, handle inputs/outputs, and optimize performance.

We explore the Inference API: you connect a model directly to your application via REST API. We see authentication, rate limits, and how to handle errors.

You also discover Inference Endpoints for dedicated deployment: when you need more control, dedicated GPU, or guaranteed latency.

At the end of this module, you have a live demo and you know how to connect Hugging Face to any application.
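The Gradio part really is a handful of lines. Here's a hedged sketch: `format_prediction` is our own helper, and `launch()` is left commented so nothing starts a server by accident:

```python
def format_prediction(result):
    """Turn one pipeline prediction into a short display string."""
    return f"{result['label']} ({result['score']:.2f})"

def build_demo():
    # Local imports: heavy dependencies; the model downloads on first use.
    import gradio as gr
    from transformers import pipeline
    classifier = pipeline("sentiment-analysis")

    def predict(text):
        return format_prediction(classifier(text)[0])

    return gr.Interface(fn=predict, inputs="text", outputs="text")

# build_demo().launch()  # serves locally; push the same file to a Space to share it
```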

06

MODULE 6: PRACTICAL CASES AND INTEGRATIONS

We put everything together with concrete projects adapted to real use cases.

Case 1: Create a support ticket classification system. You fine-tune a model on your categories, deploy an API, connect to your tool (Make, n8n, or direct).

Case 2: Build a semantic search engine. You use embeddings and Sentence Transformers to search by meaning, not just keywords.

Case 3: Integrate text generation. You connect an LLM via the Inference API to your automation workflow.

We also see integrations with LangChain, how to combine multiple models, and best practices for going to production (monitoring, versioning, fallbacks).

At the end of this module, you have functional projects and a method to tackle any AI use case with Hugging Face.
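The core of the semantic search case fits in a few lines. `cosine` and `rank` are plain-Python stand-ins for what vector libraries do for you, and `all-MiniLM-L6-v2` is a commonly used small model, an assumption here rather than a course requirement:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def rank(query_vec, doc_vecs):
    """Indices of documents sorted by similarity to the query, best first."""
    return sorted(range(len(doc_vecs)),
                  key=lambda i: cosine(query_vec, doc_vecs[i]),
                  reverse=True)

def search(query, docs):
    # Local import: heavy dependency; the model downloads on first use.
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("all-MiniLM-L6-v2")
    vectors = model.encode([query] + docs)
    return [docs[i] for i in rank(vectors[0], vectors[1:])]
```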

Why us

Why train with Hack'celeration?

AN EXPERT AGENCY THAT USES HUGGING FACE FOR ITS CLIENTS DAILY

Discover our Hugging Face Agency

Frequently asked questions

01Is it really free?+
Yes. You're among the first to benefit from the program in preview. No hidden fees, no commitment. Just complete access to the 6 modules, replays, and support from our experts.
02How long does it last?+
6 weeks. You progress at your own pace with 2-hour training blocks (videos, exercises, templates). Plus 1 group session of 1 hour per week to ask questions and work on practical cases with our trainers.
03Is it live or recorded?+
Both. The training content is recorded so you can progress whenever you want. Q&A sessions are live, but also recorded if you miss a session.
04How do I register?+
Registration form on this page. Once registered, you receive a confirmation email with platform access, the session schedule, and the first content to get started.
05Do I need to know how to code to follow the Hugging Face training?+
Yes, Python basics are necessary. You must know how to write functions, manipulate lists and dictionaries, and install packages with pip. No need to be an expert, but if you've never coded, start with a Python introduction first. We don't teach Python in this training.
06Hugging Face vs OpenAI: when to choose Hugging Face?+
OpenAI is simple and powerful, but it's a paid black box per usage. Hugging Face is open source: you can fine-tune models, host them yourself, and keep control over your data. Choose Hugging Face if you want flexibility, predictable costs, or if you have confidentiality constraints. Choose OpenAI if you want quick plug-and-play without worrying about infrastructure.
07Can I use Hugging Face without a GPU?+
For inference on small models, yes, your CPU is enough. For fine-tuning or large models (LLMs, Stable Diffusion), you'll need a GPU. In the training, we see how to use Google Colab (free with GPU), Hugging Face Spaces, or the Inference API to work around this limitation.
08What's the difference between Transformers, Diffusers, and Datasets?+
Transformers is for language and vision models (BERT, GPT, ViT). Diffusers is for image generation (Stable Diffusion, DALL-E style). Datasets is for loading and preparing your data. All three are separate Python libraries but work together. We mainly cover Transformers and Datasets in the training, with an intro to Diffusers.
09How to integrate Hugging Face with Make or n8n?+
You use the Hugging Face Inference API, which exposes any model as a REST API. In Make or n8n, you make an HTTP call with your authentication token, send your input (text, image), and retrieve the prediction. We cover this in detail in module 5 and do a complete practical case in module 6.
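In code, that HTTP call looks like this (a hedged sketch: the URL pattern is the classic Inference API endpoint, so check the current docs for your account, and the token is obviously a placeholder):

```python
def build_request(model_id, text, token):
    """Assemble the URL, headers, and JSON payload for an Inference API call."""
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers, {"inputs": text}

def query(model_id, text, token):
    import requests  # network call; needs a valid Hugging Face access token
    url, headers, payload = build_request(model_id, text, token)
    resp = requests.post(url, headers=headers, json=payload, timeout=30)
    resp.raise_for_status()  # surface rate limits and auth errors loudly
    return resp.json()
```

This is exactly what an HTTP module in Make or n8n does for you: same URL, same Authorization header, same JSON body.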
10Does the training cover LLMs and text generation?+
Yes. We see how to use generation models (GPT-2, Mistral, Llama via the Hub), how to configure them (temperature, top_p, max_tokens), and how to connect them to your applications. We don't do LLM fine-tuning (it requires too many resources), but we see how to use them effectively via the Inference API and Endpoints.