I didn't die or get sick this past month and a half; I had some urgent things to get done to prepare for my newest kid. My wife and I welcomed a new family member this past week: a beautiful baby boy, 7 lb 6 oz if memory serves. Every time we have a baby, I remember that I get mild panic attacks thinking they aren't breathing. So I had ChatGPT Deep Research look into whether I could homebrew a setup where a computer constantly monitors breathing. The export of that report is below. In short, there are projects where people have implemented the algorithms, but nobody has packaged it as an open-source project per se. Part 2 will cover my process.
Open-Source Baby Monitoring Projects (Hobbyist DIY Solutions)
Introduction
This report surveys open-source hobbyist projects for monitoring infants, especially those using cameras or microphones to track vital signs (like breathing) or baby activities. We focus on community-supported, actively developed projects rather than commercial or FDA-approved devices. Each project is ranked by community engagement and development activity. We provide a brief summary, key technical details (platform, language, dependencies), a link to the project, plus noted strengths and limitations.
(Note: These DIY projects are not medically certified; they should be used with caution and are not substitutes for professional monitors.)
Top Open-Source Baby Monitoring Projects (Ranked)
1. Baby Buddy (Baby Care Tracker)
- GitHub: babybuddy/babybuddy – Stars: ~2.3k, Contributors: 90+. Highly active (regular commits, latest in 2025) with excellent documentation and community support (translations in 20+ languages).
- Tech Stack: Python (Django web app). Deployable via Docker or Python on any server (SQLite by default).
- Features: Comprehensive logging of baby care activities – sleep, feedings, diaper changes, tummy time, pumping, etc. – with a clean web UI and charts. It helps caregivers visualize patterns and “predict” needs (e.g. next feeding) from recorded data. Multi-user support allows multiple caregivers to input/view data. There is a REST API and even Home Assistant integration (community-supported) for smart home use.
- Strengths: Very mature and well-supported project. Provides a centralized, self-hosted baby journal with rich analytics and a responsive interface (desktop & mobile views). Documentation is thorough, including a demo site and an extensive user guide. The large community means bugs are addressed and new features/languages continuously added. Great for tracking long-term trends in growth, sleep, etc.
- Limitations: Not a real-time monitor – it does not capture live audio/video or automatically detect vitals. All data is manually entered, which can be labor-intensive (though some automations exist via API). Requires self-hosting a web server (though lightweight, it’s an extra step for non-technical users). Focused on tracking and analytics rather than immediate alerts.
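Since Baby Buddy exposes a REST API, entries can be automated instead of typed in by hand. A minimal sketch of logging a bottle feeding: the `/api/feedings/` endpoint, field names, and token-header scheme follow Baby Buddy's documented API, but versions differ, so check `/api/` on your own instance; the URL and token below are placeholders.

```python
"""Build a Baby Buddy feeding entry for its REST API (hedged sketch)."""
from datetime import datetime, timedelta

BASE_URL = "http://babybuddy.local:8000"  # hypothetical self-hosted instance
API_TOKEN = "your-api-token"              # found on the user settings page

def feeding_payload(child_id: int, start: datetime, end: datetime,
                    amount_ml: float) -> dict:
    """JSON body for a POST to /api/feedings/ (field names per the API docs)."""
    return {
        "child": child_id,
        "start": start.isoformat(),
        "end": end.isoformat(),
        "type": "formula",
        "method": "bottle",
        "amount": amount_ml,
    }

def auth_headers(token: str) -> dict:
    """Baby Buddy uses DRF-style token authentication."""
    return {"Authorization": f"Token {token}"}

end = datetime.now()
payload = feeding_payload(1, end - timedelta(minutes=15), end, 120)
# To actually send it (needs the `requests` package and a running server):
# requests.post(f"{BASE_URL}/api/feedings/", json=payload,
#               headers=auth_headers(API_TOKEN))
```

A smart-scale script or Home Assistant automation could call this whenever a bottle is finished, removing the manual-entry burden noted above.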
2. Track & Graph (Android Tracking App)
- GitHub: SamAmco/track-and-graph – Stars: ~500. Active development with 1.5k+ commits and updates as of 2024. Available on Google Play and F-Droid.
- Tech Stack: Android (Kotlin/Java). Runs offline on an Android phone/tablet (no server needed).
- Features: A flexible tracking and graphing app not limited to babies, but commonly used for baby metrics. Users define any parameters to track (e.g. weight, height, sleep hours, feed volume, mood). The app then logs entries with timestamp, value, notes, and produces graphs over time. Essentially, it’s a customizable personal data logger that can serve as a baby tracker (and can track parents’ or siblings’ data in parallel by grouping trackers).
- Strengths: Very flexible and open-source (no ads or cloud): you can track exactly what you want. Supports multiple trackers and displays trends graphically. Being mobile, it’s convenient for on-the-go logging. Works offline and keeps data local (privacy-friendly). The community around it is decent for a general-purpose tracker, and it’s maintained.
- Limitations: Not specifically designed for baby care, so it lacks built-in baby-specific features or presets – everything must be configured by the user. No built-in interpretations (you have to infer patterns from graphs yourself). It also does not support live monitoring (no camera or audio functions). Essentially, it’s a manual data tracker, albeit a powerful one. Users looking for plug-and-play baby apps might find the setup non-intuitive.
Link: Track & Graph on GitHub – Android app (Google Play & F-Droid).
A 3D-printed Fruitnanny baby monitor with IR LED ring for night vision.
3. Fruitnanny (Raspberry Pi Video Monitor)
Fruitnanny is a well-known DIY baby monitor project – essentially a Raspberry Pi based baby cam with environment sensors. It provides a web interface for real-time video/audio streaming and room condition monitoring. Launched in 2017, it garnered a strong following among makers (detailed build guides are available). Key features and health of the project are:
- GitHub: ivadim/fruitnanny – Stars: ~140, Forks: 50+. Developed by Dmitry Ivanov, with last major commits in 2017–2018. Not much recent activity, but many have successfully built it (the repository includes instructions and a Docker setup).
- Tech Stack: Node.js backend with a custom WebRTC implementation for streaming. Runs on Raspberry Pi (originally Pi 3 or Zero W) with a NoIR camera module. Utilizes a Janus WebRTC gateway and Nginx, usually deployed via Docker Compose. Hardware includes infrared LEDs for night vision and a DHT22 sensor for temperature/humidity.
- Features: Streams live video (HD 720p/1080p) and two-way audio with low latency (WebRTC in browser). Works on any device with Chrome/Firefox (PC or mobile) – no special client needed. Monitors ambient temperature and humidity in the baby’s room and displays those in the web UI. Night vision is enabled via IR LEDs and the Pi NoIR camera. The system can play lullabies or sounds via the Pi’s audio (if configured). All streaming is local (or self-hosted if accessed remotely), addressing privacy concerns of cloud cams.
- Strengths: Real-time video/audio monitoring comparable to commercial baby cams, but fully under the user’s control. The use of WebRTC means latency is minimal, which is crucial for monitoring. Multi-platform support (viewable from any OS or phone) is a plus. It also extends beyond a basic camera by logging environmental data (temperature/humidity) to help ensure the nursery conditions are comfortable. The project’s documentation (blog posts and GitHub README) provides a complete DIY guide – from hardware wiring to software setup.
- Limitations: The project is not actively maintained now. While it still works, newer Raspberry Pi models or software updates might need troubleshooting by the user community. Setup is relatively complex: it requires assembling hardware (camera, sensors, possibly a custom case) and dealing with Docker, certificates (for HTTPS WebRTC), etc. There is no advanced analysis of the video feed – it won’t detect motion, breathing, or crying on its own (it’s essentially a high-quality DIY camera system). Also, remote access requires configuring your network or a VPN, since it’s meant to be private by default.
Link: Fruitnanny on GitHub – includes build guide and software.
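Fruitnanny's environment monitoring boils down to reading the DHT22 and deciding whether the nursery is comfortable. A small sketch of that decision step: the 18–22 °C and 40–60% RH comfort bands are commonly cited guidelines, not values from the Fruitnanny code, and the GPIO wiring in the comment is hypothetical.

```python
"""Classify DHT22 nursery readings, in the spirit of Fruitnanny's
temperature/humidity display. Thresholds are illustrative guidelines,
not the project's own values."""

def nursery_status(temp_c: float, humidity: float) -> str:
    """Return a one-word status a web UI could display."""
    if temp_c < 18.0:
        return "too-cold"
    if temp_c > 22.0:
        return "too-warm"
    if humidity < 40.0:
        return "too-dry"
    if humidity > 60.0:
        return "too-humid"
    return "comfortable"

# On a real Pi the reading would come from a DHT driver, e.g. (hypothetical
# wiring on GPIO4 with Adafruit's CircuitPython DHT library):
#   import adafruit_dht, board
#   dht = adafruit_dht.DHT22(board.D4)
#   print(nursery_status(dht.temperature, dht.humidity))
status = nursery_status(20.5, 50.0)  # a reading inside both comfort bands
```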
4. Raspberry Pi Sleep Monitor (Video Streamer)
- GitHub: srinathava/raspberry-pi-sleep-monitor – Stars: ~101, Forks: 33. Last commit was a few years ago (project started around 2018). It’s often cited as a base for other DIY monitors (e.g. other developers call it “awesome” and have forked/modified it).
- Tech & Platform: Uses a mix of JavaScript (likely Node or client JS) and some Python on a Raspberry Pi. It sets up a low-latency MJPEG or similar video pipeline. The focus is on streaming video from the Pi’s camera to a web browser with minimal delay. Likely uses sockets or a lightweight webserver to push frames efficiently.
- Features: Provides a simple live video feed of a baby sleeping, accessible via web. Emphasis on low latency – so parents can see movements almost in real time. The project’s name suggests it’s intended to monitor sleep (i.e., keep an eye on the baby’s position and movements). It doesn’t log data or have fancy UI elements; it’s more of a backend + basic viewer setup.
- Strengths: Simplicity and reliability – by doing one thing well (video streaming), it achieved popularity. The latency is lower than typical DIY solutions, which is important for monitoring subtle movements. Many people have used this as a starting point for their own customizations, which speaks to its solid design. The setup instructions (via the project’s wiki) are straightforward, making it a good choice for those who want a quick DIY camera feed.
- Limitations: It’s quite barebones. For instance, audio streaming is not mentioned (likely not supported out-of-the-box), so it may be video-only. No built-in alert system or analysis – one must watch the feed to notice if baby wakes or cries. As it’s not updated recently, any bugs or compatibility issues with newer Pi OS might need community fixes. Essentially, it’s a foundation to build on (and indeed others have extended it) rather than a full-featured monitor with GUI, but it serves the purpose of a private, always-on video baby monitor.
Link: Raspberry Pi Sleep Monitor on GitHub (setup instructions in repository Wiki).
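The classic way to push low-latency video to a browser without WebRTC is MJPEG over a `multipart/x-mixed-replace` HTTP response, which an `<img>` tag can consume directly. A generic sketch of that framing, not code from the sleep-monitor repo; the frame bytes are placeholders for camera output.

```python
"""Frame JPEG images for a multipart/x-mixed-replace stream (MJPEG push)."""
BOUNDARY = b"frame"

def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG frame in the multipart headers browsers expect."""
    return (
        b"--" + BOUNDARY + b"\r\n"
        b"Content-Type: image/jpeg\r\n"
        b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
        + jpeg_bytes + b"\r\n"
    )

def mjpeg_stream(frames):
    """Generator for an HTTP response body sent with
    Content-Type: multipart/x-mixed-replace; boundary=frame"""
    for frame in frames:
        yield mjpeg_part(frame)

part = mjpeg_part(b"\xff\xd8fakejpeg\xff\xd9")  # stand-in JPEG bytes
```

Because each new part replaces the previous image, latency stays close to one frame interval, which is why this approach keeps showing up in DIY monitors.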
5. DeepInfant (AI Baby Cry Classifier)
- GitHub: skytells-research/DeepInfant – Stars: ~43, Forks: 7. This is a research-driven project (developed at KTH in 2023) and still under active development for academic purposes. It has multiple releases (DeepInfant V2 is the latest, with improved accuracy).
- Tech Stack: Python (TensorFlow/PyTorch). It’s essentially a machine learning model and scripts. No dedicated GUI; intended to be integrated into other applications or run via command line. The repo includes training code, a pre-trained model, and a research paper draft.
- Features: Classifies the reason for a baby’s cry using AI. DeepInfant analyzes acoustic features of an infant’s cry and predicts categories such as hunger, pain, discomfort, tiredness, or burping need. It achieves about 89% accuracy distinguishing five types of cries in testing. In real-time use, it could listen through a microphone and output what the likely cause of crying is. It also detects the presence of crying vs silence with high accuracy. This goes beyond simple “crying/not crying” – it attempts to decode the meaning of cries using a hybrid CNN-LSTM model.
- Strengths: It’s an innovative open-source implementation of something usually only found in advanced commercial products. Rich technical documentation is provided (including an upcoming paper), explaining how certain cry features correlate to needs (e.g. pain cries having higher energy). For tech-savvy parents or developers, it offers the ability to integrate this model into a custom baby monitor or smartphone app for real-time insights. The model has been trained on a large dataset (10k+ cry samples, augmented) for robustness.
- Limitations: As a standalone, it’s not a full monitoring system – there’s no plug-and-play device or interface. One would need to set up a microphone, run the model inference continuously and possibly build a notification system around it. This requires programming skills. Additionally, while accuracy is high in controlled conditions, real-world performance may vary with background noise or different baby behaviors. Community support is relatively small (being a niche research project), so users might be on their own for implementation issues. It also doesn’t address video at all – only the audio aspect of monitoring.
Link: DeepInfant on GitHub – includes model and documentation.
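Any cry classifier starts by turning raw audio into time-frequency features. A toy version of that front end, computing a short-time magnitude spectrogram with NumPy; this is not DeepInfant's actual pipeline (which feeds a CNN-LSTM), just an illustration of the feature-extraction step, with a synthetic tone standing in for recorded audio.

```python
"""Short-time magnitude spectrogram: the kind of time-frequency features
a cry classifier consumes. Simplified sketch, not DeepInfant's pipeline."""
import numpy as np

def spectrogram(signal: np.ndarray, frame_len: int = 512, hop: int = 256):
    """Return |STFT| frames of shape (n_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = [
        np.abs(np.fft.rfft(signal[i * hop : i * hop + frame_len] * window))
        for i in range(n_frames)
    ]
    return np.stack(frames)

# 0.5 s of a 440 Hz tone at 16 kHz stands in for microphone audio
sr = 16000
t = np.arange(sr // 2) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = int(spec.mean(axis=0).argmax())
peak_hz = peak_bin * sr / 512  # frequency resolution = sr / frame_len
```

A real system would convert these frames to log-mel features and batch them into the model; the spectrogram is the common foundation either way.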
6. OpenBabyMonitor (Raspberry Pi Crying Monitor)
- GitHub: lars-frogner/OpenBabyMonitor – Stars: 27, Forks: 6. Developed in 2019 by a new parent, but no longer actively maintained (author acknowledges limited time). It’s still fully open-source and available for tinkering, but users should be aware of its experimental nature.
- Tech Stack: Bash/Python for installation, and Flask (Python) for the web interface. Designed for Raspberry Pi (Zero W or others) with a camera and microphone. Provides a web app that the Pi hosts on the local network (or the Pi can act as a hotspot). Installation was simplified by offering pre-built disk images to flash onto an SD card.
- Features: Audio and video streaming up to 1080p from the Pi’s camera, viewable via any web browser on the same network. You can switch between audio-only mode or audio+video. Crucially, it has built-in baby cry detection: the software can listen passively and notify you when the baby is crying. Cry detection can work in two modes – a simple noise threshold (if sound volume exceeds a level for some time, assume crying) or a more advanced ML mode using a neural network trained on Google’s AudioSet to distinguish actual crying from other sounds. The project can run on battery (and is optimized for low power draw), allowing a portable DIY baby monitor for several hours. Networking is flexible: it can join your Wi-Fi or create its own if you’re outside home.
- Strengths: Turn-key DIY solution – the author provided ready disk images, so one could get a Pi Zero W, camera, USB mic, and have a working smart baby monitor fairly quickly. The web UI means no special app required, and multiple devices can connect. Cry detection using ML is a standout feature that many basic commercial monitors lack. It also logs sound events and can estimate sleep duration by when the baby was quiet or crying (some basic logging of when baby fell asleep/woke, indirectly). The use of a Raspberry Pi makes it cheap (~$30 of hardware) and very customizable.
- Limitations: The project has an official disclaimer that it is not reliable enough to trust for safety-critical monitoring. Being unmaintained, installation hurdles and bugs won't be fixed unless the community steps in. The video feed, while up to 1080p, may not be as smooth or low-latency as specialized solutions (the streaming method is unclear; possibly MJPEG served through Flask, which can add latency). The cry detection, while clever, can produce false positives and negatives and hasn't been extensively validated. No breathing or movement monitoring is included – it's primarily sound-based alerts. Using it as a hotspot (access point) may require Linux networking knowledge if not using the pre-built image.
Link: OpenBabyMonitor on GitHub (includes source and setup guide).
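OpenBabyMonitor's simple mode is just "loudness above a threshold for a sustained stretch". That logic is easy to sketch with an RMS check over fixed chunks; the threshold and hold time below are illustrative values, not the project's defaults.

```python
"""Threshold-based cry detection: flag crying when RMS loudness stays above
a level for a sustained period. Illustrative values, not the project's."""
import numpy as np

def is_crying(samples: np.ndarray, sr: int,
              rms_threshold: float = 0.1, hold_seconds: float = 2.0,
              chunk_seconds: float = 0.25) -> bool:
    """True once hold_seconds of consecutive loud chunks are seen."""
    chunk = int(sr * chunk_seconds)
    needed = int(hold_seconds / chunk_seconds)
    streak = 0
    for i in range(0, len(samples) - chunk + 1, chunk):
        rms = np.sqrt(np.mean(samples[i : i + chunk] ** 2))
        streak = streak + 1 if rms > rms_threshold else 0
        if streak >= needed:
            return True
    return False

sr = 8000
rng = np.random.default_rng(1)
quiet = 0.01 * rng.standard_normal(sr * 5)              # 5 s of near-silence
cry = 0.5 * np.sin(2 * np.pi * 400 * np.arange(sr * 3) / sr)  # 3 s loud tone
recording = np.concatenate([quiet, cry])
```

The ML mode replaces the RMS comparison with a neural-network score, but the sustained-streak logic around it stays much the same.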
7. Eulerian Video Magnification Baby Monitor (Breathing Detection)
- GitHub: miguelfreitas/eulerian-baby-monitor – Stars: 20, Forks: 7. A highly experimental project (circa 2018) using MIT’s Eulerian Video Magnification technique for baby monitoring. It’s a proof-of-concept codebase rather than a user-friendly application. The author explicitly notes it’s for those with programming skills and is not an end-user product.
- Tech Stack: Python scripts and a fork of the Realtime-Video-Magnification library. It requires an IP camera that outputs an RTSP stream and a Linux machine (could be a PC or possibly a Pi with enough power) to run the analysis. Uses ffmpeg for ingesting video, OpenCV for processing, and Keras (TensorFlow) for the neural network model.
- Features: Contact-free breathing monitor via video. It amplifies subtle motions in the video feed – specifically the periodic motion of an infant’s chest rising and falling – using Eulerian Video Magnification. The output is a real-time “motion scalar” that correlates with breathing activity. A neural network then classifies this to a binary state: breathing vs not-breathing. If no breathing is detected for ~30 seconds, an alarm sounds to alert the parent. There’s also a simple web page to allow the user to start/stop the alarm from a phone (since the processing might run on a separate computer). In summary, it tries to do what some high-end monitors do (like detecting if a baby stops moving/breathing) but just by analyzing video frames.
- Strengths: It’s one of the few open-source attempts at vital sign monitoring without wearables. By leveraging Eulerian Video Magnification, it can detect extremely small motions (or even color changes for pulse, in theory) – something a human eye might miss. This is the same principle used in some research to measure pulse or breathing via camera. As a result, it could potentially alert to dangerous situations like apnea. The approach is clever and rooted in published research. For hackers and researchers, this provides a base to experiment further (e.g., improving the model or integrating with a GUI).
- Limitations: Very technical and not plug-and-play. Setting this up involves configuring an IP camera’s RTSP feed, using command-line ffmpeg and ensuring the Python environment with all dependencies is correct. The neural network may need retraining or calibration for different camera positions or babies (the provided model was trained on the author’s data). False alarms can happen if the baby moves out of frame or if there are other motions (like a fan or parent’s hand in frame). With only ~6 commits, the project is more a personal hack shared publicly than a supported tool. No out-of-the-box packaging or continuous community improvements are present. It also does only breathing detection – it doesn’t stream video to you (though you could run another app in parallel for that) and doesn’t detect cries or anything else.
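The project's alarm stage reduces to: watch a per-frame "motion scalar", and alarm if roughly 30 seconds pass with no breathing-like signal. A sketch of that logic; the real project classifies the scalar with a neural network, so the plain variance test here is a stand-in for the classifier, not the author's method.

```python
"""Alarm logic around a per-frame motion scalar: raise the alarm when the
breathing-band signal goes flat for ~30 s. Variance test stands in for the
project's neural-network classifier."""
import numpy as np

def breathing_present(motion: np.ndarray, min_std: float = 0.05) -> bool:
    """Periodic chest motion shows up as variance in the scalar series."""
    return float(np.std(motion)) > min_std

def should_alarm(motion: np.ndarray, fps: int = 10, window_s: int = 30) -> bool:
    """Alarm when the last window_s seconds show no breathing-like motion."""
    window = motion[-fps * window_s:]
    return len(window) == fps * window_s and not breathing_present(window)

fps = 10
t = np.arange(fps * 60) / fps                   # 60 s of frames
breathing = 0.5 * np.sin(2 * np.pi * 0.5 * t)   # ~30 breaths/min chest motion
flatline = np.zeros(fps * 60)                   # no chest movement at all
```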
8. Neonatal Camera Respiration Monitor (Research Algorithm)
- GitHub: cezius/Neonatal-Respiration-Monitoring-Algorithm – Stars: 4. An open-source implementation of an academic algorithm (from an MDPI 2021 paper) for estimating infant respiration rate via camera. Not aimed at hobbyist end-users, but relevant as it shares methodology that could inspire DIY solutions.
- Tech Stack: Python (with Jupyter Notebook). It’s structured for offline analysis: the input is a recorded video (or a sequence of frames in a HDF5 file) of a neonatal infant, and the output is a series of respiration rate values. Utilizes a Convolutional Neural Network to detect the infant’s Region of Interest (ROI) – typically the chest or belly – in each frame, and then computes the respiratory motion from that ROI. The algorithm was tailored for NICU footage (newborns in incubators).
- Features: Given a video of a baby, it will estimate the breathing rate in breaths-per-minute for each time window. It can process in near real-time, but it is not integrated with any camera capture directly (the assumption is you have video files). The method is explained in the associated paper – essentially tracking the subtle up-and-down movement of the chest or abdomen. No GUI; one would run the provided Jupyter notebook or python script on the video file to get a respiration graph or values.
- Strengths: It’s a peer-reviewed algorithm, so it has some validation behind it. For developers interested in infant vital signs, this repository provides a blueprint of how to do it: detect the baby’s torso with a CNN and then apply motion analysis. As such, it could be a starting point for a more real-time monitor. The code being open-source means one can experiment with their own videos or even adapt it to run live (with effort).
- Limitations: Not a turnkey project at all. It’s not designed to run continuously on a camera feed without significant modification. Also, due to privacy and complexity, the sample data is not included, so users would need to gather their own test videos to use it. With only 4 stars, it has a tiny user base; support or community-driven improvements are minimal. In short, it’s more of an academic reference implementation – useful for learning or integrating by advanced users, but not something you deploy as a home monitor unless you significantly extend it.
Link: Neonatal Respiration Algorithm on GitHub (contains references to the research article and code).
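The last step of a camera respiration pipeline like this one is converting the ROI's motion into a breaths-per-minute estimate. A sketch of that step via the dominant FFT frequency; the paper's CNN-based ROI detection is omitted, and this assumes the per-frame chest motion has already been reduced to one number per frame.

```python
"""Estimate breaths-per-minute from a 1-D chest-motion signal by locating
its dominant frequency. Final step only; ROI detection is omitted."""
import numpy as np

def respiration_rate_bpm(motion: np.ndarray, fps: float) -> float:
    """Dominant frequency of the detrended signal, in breaths per minute."""
    motion = motion - motion.mean()               # remove DC offset
    spectrum = np.abs(np.fft.rfft(motion))
    freqs = np.fft.rfftfreq(len(motion), d=1.0 / fps)
    # restrict to a plausible neonatal band (~12-120 breaths/min)
    band = (freqs >= 0.2) & (freqs <= 2.0)
    peak = freqs[band][spectrum[band].argmax()]
    return float(peak * 60.0)

fps = 30.0
rng = np.random.default_rng(0)
t = np.arange(int(fps * 60)) / fps                # one minute of frames
motion = np.sin(2 * np.pi * 0.8 * t) + 0.1 * rng.standard_normal(len(t))
bpm = respiration_rate_bpm(motion, fps)           # 0.8 Hz -> ~48 breaths/min
```

A one-minute window gives 1 bpm frequency resolution; shorter windows respond faster but estimate the rate more coarsely.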
9. Other Notable Open-Source Projects (Honorable Mentions)
- Baby Cry Detector (Audio only): Several smaller projects focus on detecting when a baby is crying using microphones and ML. For example, “baby-cry-detector” by navjotts (25★) provides a trained model and simple web demo to alert if crying is detected. It doesn’t classify the type of cry, but can be used to trigger notifications (similar to what OpenBabyMonitor does, but standalone). These tend to be single-contributor projects and may lack extensive documentation, but they show how to use spectrogram analysis and neural nets to distinguish baby cries from noise.
- Homebrew Monitor with IP Cameras: kvmonitor by Kenton Varda (45★) is a unique project that repurposes existing IP security cameras as baby monitors. It runs a small C++ server that mixes audio from multiple RTSP camera streams and serves them via a webpage. This allows a parent with, say, two cameras (one in each child’s room) to listen to both audio feeds merged, on their phone, without cloud services. It addresses range and privacy issues by using home network cameras. While very niche, it’s actively used by its creator and has inspired others who want multi-room audio monitoring.
- Secure Pi Cam (Self-hosted): danmacnaughtan/baby-monitor (31★) is an example focusing on security/privacy of a Pi-based camera setup. It streams video from a Raspberry Pi to a self-hosted web app with TLS encryption and authentication. Essentially, it’s a DIY equivalent of a Nest cam but fully private. It doesn’t add ML features, but it demonstrates good practices (using encryption, tokens, etc.) for those concerned about spying or hacking of baby cams.
- Others: There are many DIY guides and small codebases available (for instance, older projects like bbPiCam or babycam_pi, etc.). These typically involve a Raspberry Pi with IR camera and maybe some sensors, streaming via protocols like RTSP or using tools like motion or ffmpeg. They often overlap in functionality (video streaming, maybe temperature display). The ones highlighted above are those with relatively more community interest or unique features.
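The core trick in kvmonitor is mixing several rooms' audio into one feed. Its real implementation is a C++ server reading RTSP, but the mixing step itself is simple: sum the PCM sample buffers with saturation so loud moments clip instead of wrapping around. A sketch on int16 arrays:

```python
"""Mix two mono PCM streams into one feed (the kvmonitor idea, reduced to
the mixing step on int16 sample arrays; the real project is C++ over RTSP)."""
import numpy as np

def mix_streams(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Sum two int16 buffers with saturation instead of integer wrap-around."""
    n = min(len(a), len(b))
    mixed = a[:n].astype(np.int32) + b[:n].astype(np.int32)
    return np.clip(mixed, -32768, 32767).astype(np.int16)

room_a = np.array([1000, -2000, 30000], dtype=np.int16)
room_b = np.array([500, -500, 10000], dtype=np.int16)
mixed = mix_streams(room_a, room_b)   # third sample saturates at 32767
```

Widening to int32 before summing is the important detail: adding two int16 buffers directly would silently overflow on loud passages.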
Conclusion
In summary, the open-source ecosystem offers a variety of baby monitoring solutions, from software for logging and analyzing baby care routines (like Baby Buddy, Track & Graph) to DIY hardware-based monitors (various Raspberry Pi camera projects) and even cutting-edge experiments in vital sign detection (Eulerian magnification for breathing, cry analysis with AI). Each project has its own focus and trade-offs. Hobbyist projects benefit from being inexpensive and customizable, with no vendor lock-in or cloud dependencies, but they also require more technical skill to set up and are not medically certified. Community support ranges from very robust (Baby Buddy’s active contributor base) to minimal (one-off research code), so choosing a project may depend on your comfort with technology and which aspect of baby monitoring is most important to you (general tracking, live video, audio alerts, breathing monitoring, etc.).
All these projects underscore that with some ingenuity, parents can build solutions that fit their exact needs – whether it’s a simple webcam baby monitor or an ambitious AI-powered nanny. Just remember that these tools should augment, not replace, safe parenting practices and attentive care. Always test any DIY monitor thoroughly to ensure it works reliably for your situation before counting on it.