I run through OpenAI’s new interface for distillation. It allows you to store responses from a stronger model (e.g. o1) and use them to fine-tune a weaker model, like GPT-4o-mini.
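As a rough sketch of that storage step: the OpenAI chat completions API accepts a `store` flag that saves the response so it shows up in the dashboard for later distillation. The model name, metadata tag, and prompt below are my own illustrative choices, not fixed values from the platform:

```python
import os

def build_store_request(prompt: str) -> dict:
    # Parameters for client.chat.completions.create().
    # store=True persists the completion so it can be reused
    # as distillation data in the OpenAI dashboard.
    return {
        "model": "o1-preview",  # stronger "teacher" model (assumed snapshot name)
        "messages": [{"role": "user", "content": prompt}],
        "store": True,
        # metadata lets you filter stored completions later;
        # the key/value here is just an example tag.
        "metadata": {"purpose": "distillation"},
    }

# Only call the API if a key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        **build_store_request("Explain model distillation in one sentence.")
    )
    print(response.choices[0].message.content)
```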
I also show some results from a comprehensive fine-tuning on distilled data, and then comment on Google’s emerging fine-tuning platform (still quite clunky).
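For the fine-tuning side, a minimal sketch of kicking off a job on the distilled data via the OpenAI SDK might look like the following. The student model snapshot name and the `file-...` training-file ID are placeholders, assuming you have already exported the stored completions to a training file:

```python
import os

def build_finetune_request(training_file_id: str) -> dict:
    # Parameters for client.fine_tuning.jobs.create():
    # fine-tune the weaker "student" model on distilled completions.
    return {
        "model": "gpt-4o-mini-2024-07-18",  # assumed student snapshot name
        "training_file": training_file_id,  # ID of an uploaded JSONL training file
    }

# Only call the API if a key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    # "file-abc123" is a placeholder; use your real uploaded file ID.
    job = client.fine_tuning.jobs.create(**build_finetune_request("file-abc123"))
    print(job.id, job.status)
```

Note that the distillation dashboard also lets you start fine-tuning directly from stored completions without manually uploading a file; the sketch above is just the programmatic route.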
And there’s a Google Colab notebook that takes you through all the steps for fine-tuning with OpenAI.
Let me know if you have any comments, on YouTube or here. Cheers, Ronan
Ronan McGovern