Issue #329 - The ML Engineer 🤖
Thank you for being part of over 70,000 ML professionals and enthusiasts who receive weekly articles & tutorials on Machine Learning & MLOps 🤖 You can join the newsletter for free at https://ethical.institute/mle.html ⭐
If you like the content please support the newsletter by sharing with your friends via ✉️ Email, 🐦 Twitter, 💼 LinkedIn and 📕 Facebook!
This week in Machine Learning:
If you're looking for an interesting career opportunity, I'm hiring for a few roles including Data Science Manager (Forecasting), as well as Data Scientist (Forecasting) - check them out and please do share with your network!
Thrilled that our survey is featured in TheNewStack, which dives into the state of production ML and uncovers key insights across challenges, tech stacks, trends and demographics. There are actionable insights for practitioners, such as the observability and monitoring challenges in production ML, as well as the operational challenges of scaling applications with robust Day 1 and Day 2 practices. We continue to see key trends in MLOps, such as practitioners favoring custom-built solutions over vendor tools across their tech stacks, and products such as MLflow leading in model tracking and Airflow in workflow orchestration. Check it out for a refresher on the state of production ML!
This is a fantastic (free) 450-page book on machine learning theory and practice covering the full foundations of the domain: it dives deep into foundational topics such as the bias-variance tradeoff, VC-dimension and PAC learning, and extends to core concepts such as convex optimization, generalization bounds and much more. Whether you are a seasoned practitioner or an interested enthusiast, this is a great resource for exploring the algorithmic paradigms that form the backbone of the field, such as stochastic gradient descent, boosting, support vector machines, and kernel methods, while also covering essential topics like model selection, regularization, and validation techniques.
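To make one of those paradigms concrete, here is a minimal sketch of stochastic gradient descent on a one-parameter least-squares problem. The function name, data and hyperparameters are illustrative choices, not taken from the book:

```python
import random

def sgd_linear(data, lr=0.01, epochs=200, seed=0):
    """Fit y ≈ w * x with stochastic gradient descent on squared error."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        # Sample one data point per step: the "stochastic" in SGD
        x, y = rng.choice(data)
        # Gradient of the per-sample loss (w*x - y)^2 with respect to w
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

# Noise-free data drawn from y = 2x; SGD should converge near w = 2
points = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]
w_hat = sgd_linear(points)
```

The same per-sample update generalizes to the high-dimensional, mini-batched setting used to train deep networks; the book's treatment covers the convergence guarantees behind it.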
The 13 laws of tech come up more often than you may think, so it's definitely worth a quick refresher: 1. Parkinson's law: Work expands to fill the available time. 2. Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law. 3. Brooks' law: Adding manpower to a late software project makes it later. 4. Conway's law: Organizations produce designs which are copies of their communication structures. 5. Cunningham's law: The best way to get the right answer on the internet is to post the wrong answer. 6. Sturgeon's Law: 90% of everything is crap. 7. Zawinski's Law: Programs which cannot expand are replaced by ones that can. 8. Hyrum's Law: With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody. 9. Price's law: 50% of the work is done by the square root of the number of people. 10. Ringelmann effect: The tendency for individual members of a group to become increasingly less productive as the size of their group increases. 11. Goodhart's law: When a measure becomes a target, it ceases to be a good measure. 12. Gilb's law: Anything you need to quantify can be measured in some way that is superior to not measuring it at all. 13. Murphy's Law: Anything that can go wrong will go wrong. This is a great compilation of the laws of software, as they truly do appear more often than one would like on a day-to-day basis.
Meta has released Llama 4! It is quite exciting to see the continued contribution to the ML community, particularly given the increasing competition (e.g. from China). The release comes with two models with 17B active parameters, both leveraging a mixture-of-experts architecture: Llama 4 Scout uses 16 experts, fits on a single NVIDIA H100 GPU (efficiency seems to be a growing trend) and supports a huge 10M-token context window. Llama 4 Maverick uses 128 experts and claims to beat GPT-4.5 across reasoning, coding, and visual benchmarks; however, we don't see comparisons to recent Chinese models such as Tencent's and DeepSeek's. It is also interesting to see the adoption of safety and bias mitigation strategies, and it will be interesting to see what the community is able to build on top of these models.
What better way to get into the field of Reinforcement Learning than by diving into the internals of some of its core foundational concepts: this is a great resource that puts together approachable tutorials across reinforcement learning by building core components from scratch in Python. It is targeted at ML practitioners, but should be approachable by anyone interested in learning more about this important field (which is also powering some of the most innovative GenAI models). The repo is set up as detailed Jupyter notebooks covering everything from basic exploration and tabular methods (like Q-Learning and SARSA) to advanced techniques (such as PPO, DDPG, and multi-agent algorithms).
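As a taste of the tabular methods the notebooks cover, here is a minimal from-scratch sketch of Q-Learning on a toy chain environment. The environment and hyperparameters are invented for illustration and are not taken from the repo:

```python
import random

def q_learning_chain(n_states=4, episodes=300, alpha=0.5, gamma=0.9,
                     epsilon=0.1, seed=0):
    """Tabular Q-Learning on a toy chain MDP: states 0..n-1, actions
    0 = left and 1 = right; reaching the last state yields reward 1
    and ends the episode. Returns the learned Q-table."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy exploration: mostly greedy, sometimes random
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s_next = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Q-Learning update: bootstrap off the best next-state value
            target = r + gamma * max(q[s_next])
            q[s][a] += alpha * (target - q[s][a])
            s = s_next
    return q

q = q_learning_chain()
# Greedy policy for the non-terminal states: should always move right
policy = [0 if qs[0] > qs[1] else 1 for qs in q[:-1]]
```

SARSA differs only in the update target (it bootstraps off the action actually taken next rather than the max), which is exactly the kind of side-by-side comparison the notebooks make easy to see.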
Upcoming MLOps Events
The MLOps ecosystem continues to grow at breakneck speed, making it ever harder for us as practitioners to stay up to date with relevant developments. A fantastic way to keep on top of relevant resources is through the great community and events that the MLOps and Production ML ecosystem offers. This is why we have started curating a list of upcoming events in the space, outlined below.
Upcoming conferences where we're speaking:
Other upcoming MLOps conferences in 2025:
In case you missed our talks:
Open Source MLOps Tools
Check out the fast-growing ecosystem of production ML tools & frameworks at the GitHub repository, which has reached over 10,000 ⭐ stars. We are currently looking for more libraries to add - if you know of any that are not listed, please let us know or feel free to open a PR. Four featured libraries in the GPU acceleration space are outlined below.
If you know of any open source and open community events that are not listed do give us a heads up so we can add them!
OSS: Policy & Guidelines
As AI systems become more prevalent in society, we face bigger and tougher societal challenges. We have seen a large number of resources that aim to tackle these challenges in the form of AI guidelines, principles, ethics frameworks, etc.; however, there are so many that they are hard to navigate. Because of this we started an open source initiative that aims to map the ecosystem and make it simpler to navigate. You can find multiple principles in the repo - some examples include the following:
If you know of any guidelines that are not in the "Awesome AI Guidelines" list, please do give us a heads up or feel free to add a pull request!
About us
The Institute for Ethical AI & Machine Learning is a European research centre that carries out world-class research into responsible machine learning.