From the course: Responsible Generative AI and Local LLMs
The continuous build binary
- [Instructor] Continuous build is a very good concept to dive into for LLMOps, because you need to figure out how to produce a binary that targets the right architecture, and also how to create an artifact that you can use later. So first up, we have a source repo. This could be where your framework lives, maybe a framework like Rust Candle, and as you make changes to it, it triggers the build system. Now, in the case of LLMOps, where would you actually point this build system? Well, first, you would trigger a cloud host, because the cloud host would have a GPU with the correct drivers installed, for example, the CUDA drivers, and this would be a build host remotely connected to the build system. The build system could be something like GitHub Actions, or, let's say, AWS CodeBuild, or some other cloud-based build system. It really doesn't matter. But once you go through and you compile the…
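To make this concrete, here is a minimal sketch of what such a pipeline could look like as a GitHub Actions workflow. The runner label, binary name, and target triple are illustrative assumptions rather than values from the course; a self-hosted GPU machine with the CUDA drivers plays the role of the remotely connected build host described above.

```yaml
# Hypothetical continuous-build workflow (GitHub Actions).
# Assumptions for illustration: runner label "gpu-cuda", binary "candle-app",
# target "x86_64-unknown-linux-gnu".
name: continuous-build-binary

on:
  push:
    branches: [main]   # changes to the source repo trigger the build system

jobs:
  build:
    # Self-hosted host with a GPU and CUDA drivers installed, acting as the
    # remote build host connected to the build system.
    runs-on: [self-hosted, gpu-cuda]
    steps:
      - uses: actions/checkout@v4

      - name: Build a release binary for the target architecture
        run: cargo build --release --target x86_64-unknown-linux-gnu

      - name: Upload the compiled binary as a reusable artifact
        uses: actions/upload-artifact@v4
        with:
          name: candle-app
          path: target/x86_64-unknown-linux-gnu/release/candle-app
```

The same shape carries over to AWS CodeBuild or any other cloud-based build system: trigger on a commit, compile on a host that has the right drivers for the target architecture, and keep the resulting binary as an artifact you can use later.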
Contents
- Coding ELO in Python (4m 7s)
- Coding ELO in Rust (3m 49s)
- Coding ELO in R (3m 31s)
- Coding ELO in Julia (3m 5s)
- Profit sharing concepts (5m 40s)
- Tragedy of the commons (4m)
- Deploying LLMs with Lorax and SkyPilot (3m 56s)
- Fine-tune Mistral and Ludwig (3m 22s)
- Game theory in generative AI (4m 45s)
- Perfect competition (2m 45s)
- Negative externalities (3m 23s)
- Regulatory entrepreneurship (4m 18s)
- Creating reinforcement bias (3m 59s)
- Getting started with Mozilla llamafile (3m 36s)
- Developing cosmopolitan (4m 29s)
- Building blocks for generative AI with whisper.cpp (2m 53s)
- Transcribing with Whisper (2m 56s)
- Portable phrase CLI (3m 34s)
- Candle hello world (2m 56s)
- Exploring StarCoder in Rust (5m 54s)
- Whisper Candle transcriber (5m 51s)
- Local system metrics (3m 4s)
- Exploring remote development on AWS (2m 15s)
- Rust for large language models (LLMs) (2m)
- The continuous build binary (2m 6s)
- Serverless inference (1m 56s)
- Rust CLI inference (2m 7s)
- Rust chat inference (2m 3s)
- The chat loop with StarCoder (2m 4s)
- Invoke an LLM on an AWS G5 instance, part 1 (4m 36s)
- Invoke an LLM on an AWS G5 instance, part 2 (2m 58s)