DagsHub

Software Development

Everything you need to manage multimodal AI data & models.

About us

DagsHub allows you to curate and annotate multimodal datasets, track experiments, and manage models on a single platform. With DagsHub you can transform petabytes of vision, audio, and LLM data into golden datasets to improve your AI models.

Industry
Software Development
Company size
11-50 employees
Headquarters
San Francisco
Type
Privately Held
Specialties
MLOps, Data Science, Machine Learning, DataOps, Data Labeling, and AI platform

Products

Locations

Employees at DagsHub

Updates

  • View organization page for DagsHub

    8,488 followers

    Enterprises face increasing challenges in bringing AI to production while maintaining security, compliance, and scalability. Many teams work with unstructured data (computer vision, audio, text, and LLM data) and need a solution that operates securely on-premise. DagsHub is now integrated with Red Hat OpenShift and OpenShift AI, providing an end-to-end machine learning platform that covers:
    • Dataset curation and annotation
    • Experiment tracking and model management
    • Secure, scalable MLOps workflows
    With this integration, AI teams can develop, iterate, and deploy models within their own infrastructure without compromising security or performance. Read the full announcement: https://lnkd.in/dPwZffZJ

  • What a RAG system looks like from the inside

    View profile for Dean Pleban

    Co-Founder & CEO at DagsHub | AI Data Development Platform

    What does a RAG (Retrieval-Augmented Generation) system look like from the inside? RAG frameworks combine the strengths of large language models (LLMs) with external knowledge bases. By pairing what #LLMs learned during training with real-time information from external sources, RAG greatly expands what these models can do: drawing on both learned knowledge and fresh external information lets them give more accurate and current responses. This has led to diverse RAG applications and three distinct RAG paradigms:
    1. Naive RAG: combines model text with simple data retrieval.
    2. Advanced RAG: deeply integrates retrieved data for precise responses.
    3. Modular RAG: uses specialized modules for flexible response generation.
    At DagsHub, we enable the development and evaluation of #RAG systems. Our platform provides tools for creating high-quality #datasets, integrating human expertise into the evaluation process, and tracking prompt engineering efforts.
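The retrieval half of the Naive RAG paradigm can be sketched in a few lines. This is a toy illustration, not DagsHub's implementation: the bag-of-words "embedding", the example documents, and the query are stand-ins for a learned embedding model and a real corpus, and the final LLM call is omitted.

```python
from collections import Counter
import math
import re

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts (a real system
    would use a learned embedding model here)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(count * b[token] for token, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Naive RAG: stuff the retrieved context into the LLM prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "DagsHub tracks experiments and manages models.",
    "RAG combines retrieval with language model generation.",
    "Paris is the capital of France.",
]
print(build_prompt("What does RAG combine?", docs))
```

The prompt built here would then be sent to the LLM; Advanced and Modular RAG refine the retrieval and integration steps rather than this basic shape.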

  • Object detection is going to be pretty much everywhere

    View profile for Dean Pleban

    Co-Founder & CEO at DagsHub | AI Data Development Platform

    If you didn’t already know, nearly every action you take in the future will leverage #objectdetection technology. When you drive to the supermarket, your autonomous car will identify traffic signs. Inside the supermarket, cameras will track behavior to analyze customer patterns and product placement. Meanwhile, at home, your security camera will discern whether a potential threat is approaching. This technology will be integral to our #security, economy, and daily lives, and accuracy and speed in object detection are crucial for automating these tasks. Whether you're a data engineer, an enthusiast, or just curious, these models will play a role in your life. Here are the top models for 2024:
    1) YOLO is a popular object detection model that processes images in a single stage, dividing them into grid cells to predict objects and their probabilities.
    2) EfficientDet jointly scales model depth, width, and resolution, enhancing performance within memory and FLOPs limits.
    3) RetinaNet's focal loss addresses class imbalance by down-weighting easy negatives, focusing training on positive and hard examples.
    4) Faster R-CNN's Region of Interest (ROI) pooling segments images for classification and requires fewer training images.
    5) Mask R-CNN builds on Faster R-CNN by adding instance segmentation, using FPN and ROIAlign for precise pixel-level detection.
    DagsHub accelerates your computer vision projects from model selection to deployment, offering end-to-end solutions for object detection and helping you stay ahead in #deeplearning.
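All five detectors above share a common post-processing primitive: non-maximum suppression (NMS), which keeps the highest-scoring box and drops overlapping duplicates. A dependency-free sketch (the [x1, y1, x2, y2] box format and the 0.5 IoU threshold are illustrative assumptions; real pipelines use optimized implementations):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: visit boxes by descending score and keep a box only
    if it does not heavily overlap an already-kept box."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]: the near-duplicate second box is suppressed
```

Single-stage models like YOLO and RetinaNet produce many more raw boxes than two-stage models, which makes this suppression step especially important for them.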

  • DagsHub reposted this

    View profile for Dean Pleban

    Co-Founder & CEO at DagsHub | AI Data Development Platform

    LLMs are versatile tools that require specialized training to reach their full potential. Fine-tuning is the process of adapting a general-purpose LLM to excel at specific tasks or within particular domains. Similar to customizing a recipe with unique spices, fine-tuning infuses an LLM with the knowledge and abilities necessary to meet specific organizational needs. Without fine-tuning, LLMs function as broad knowledge bases, often lacking the depth or focus required for practical applications. This can result in irrelevant, inaccurate, or even harmful outputs. In business settings where precision and reliability are paramount, the consequences of an unrefined #LLM can be severe. DagsHub provides a centralized workspace for #datascientists to manage their entire project lifecycle, from #data to models, while fostering open collaboration.
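Fine-tuning can be pictured with a deliberately tiny stand-in: start from "pretrained" weights and continue gradient descent on task-specific data with a small learning rate. The one-parameter model, data, and learning rate below are illustrative assumptions, not an LLM recipe; the point is that the model adapts from its pretrained starting point rather than learning from scratch.

```python
def loss(w, data):
    """Mean squared error of the linear model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def fine_tune(w, data, lr=0.01, steps=100):
    """Continue gradient descent from pretrained weight w on new data."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

pretrained_w = 1.0                             # weight learned on broad "general" data
domain_data = [(1, 2.0), (2, 4.1), (3, 5.9)]   # the domain task wants w close to 2
before = loss(pretrained_w, domain_data)
tuned_w = fine_tune(pretrained_w, domain_data)
after = loss(tuned_w, domain_data)
print(f"loss before={before:.3f} after={after:.3f}")
```

Real LLM fine-tuning does the same thing across billions of parameters, usually with additional machinery (lower learning rates, parameter-efficient adapters) to avoid destroying the pretrained knowledge.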

  • DagsHub reposted this

    View profile for Nilesh Barla

    Researcher @Adaline | Finding the limits of LLMs

    For quite a long time I have been focused on writing a lengthy, detailed article on different approaches to developing a robust ML model, one of which is continual learning (CL). The idea of CL arises from how humans are able to learn complex subjects while preserving old information. We also leverage that old information to learn new things quickly; we are adaptable. ML systems are not the same: they have to be retrained on each new set of data, which is time-consuming and potentially expensive. In AI, continual learning is the process of adding new information to a trained model while preserving the old information, mimicking human cognitive processes. I got the opportunity to write this article on CL with DagsHub, along with Michał Oleszak and Daniel Tannor, where we explain the elements involved in CL (types, approaches, and challenges) and provide a practical approach to continual learning in PyTorch. You will find plenty of valuable insights in the article; the link is in the comments below.

    • Diagram showing the mapping of each continual learning type with its respective scenario and its associated approaches
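One rehearsal-style approach to continual learning can be sketched without any framework: keep a small replay buffer from the old task and mix it into training on the new task to limit catastrophic forgetting. The toy one-parameter model, data, and learning rate below are illustrative assumptions (the article itself works in PyTorch):

```python
def sgd(w, data, lr=0.05, steps=200):
    """Gradient descent on mean squared error for the model y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1, 2.0), (2, 4.0)]    # task A wants w close to 2
task_b = [(1, -1.0), (2, -2.0)]  # task B wants w close to -1

w = sgd(0.0, task_a)                   # first, learn task A
naive = sgd(w, task_b)                 # sequential training: forgets task A
replay = sgd(w, task_b + task_a[:1])   # rehearsal: replay one task-A example

print(f"task-A loss, naive={mse(naive, task_a):.2f} replay={mse(replay, task_a):.2f}")
```

Even replaying a single old example keeps the model measurably closer to task A; real rehearsal methods balance larger buffers against memory cost, and other CL families (regularization-based, architecture-based) avoid storing old data altogether.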
  • DagsHub reposted this

    View profile for Dean Pleban

    Co-Founder & CEO at DagsHub | AI Data Development Platform

    Why are transformers so good at understanding language? The answer is self-attention. Self-attention lets transformers attend to every part of the input at once instead of one piece at a time. It's like giving the model the ability to understand the big picture by mapping the relationships between all of the little pieces of the data, and this is how transformers pick up on complex patterns and connections. One important detail: self-attention on its own is order-agnostic, so transformers add positional encodings to the inputs, which lets the model reason about the order and spacing of words. In other words, self-attention is not just another tool; it's what unlocks much of the power of modern transformers and LLMs.
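The mechanism reads almost directly off its formula, attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. A plain-Python sketch (real implementations are batched tensor ops; using Q = K = V = X and skipping the learned projection matrices is a simplifying assumption here):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def self_attention(q, k, v):
    """attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = len(k[0])
    scores = matmul(q, [list(col) for col in zip(*k)])  # Q K^T: all-pairs similarity
    weights = [softmax([s / math.sqrt(d_k) for s in row]) for row in scores]
    return matmul(weights, v), weights

# 3 tokens with 2-dim embeddings; every token attends to every token at once.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out, weights = self_attention(x, x, x)
print(weights[0])  # how token 0 distributes its attention over all three tokens
```

Each row of the weight matrix sums to 1, so every output is a weighted mix of all value vectors; that all-pairs mixing is exactly the "big picture at once" behavior described above.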

  • DagsHub reposted this

    View profile for Dean Pleban

    Co-Founder & CEO at DagsHub | AI Data Development Platform

    We've enhanced our experiment tracking to let you see your model's predictions and outputs as they evolve during training. Visual insight into model behavior is critical, yet often overlooked in ML workflows, so we're introducing an integrated experiment artifacts view on DagsHub. Key benefits:
    - Real-time visual feedback: watch your model learn through images, audio, and even 3D visualizations
    - Comprehensive artifact support: view text, model files, and even CSV files alongside metrics
    - Seamless integration: works with the OSS MLflow API you're already using
    - Coming soon: HTML, notebooks, artifact diffing, and more
    How it works:
    1. Use mlflow.log_artifacts() to attach files to your experiment
    2. Go to the Experiments tab in your DagsHub repo
    3. Visualize artifacts directly in the experiment view, no context switching required
    As ML practitioners, we know that numbers alone don't tell the whole story. Now you can literally see your model's progress, catching potential issues early and gaining deeper insights. What other visual tools would enhance your ML workflow? Share your thoughts below! Thanks Tal for building, Anna for design, and the entire team for shipping ⛴️🙏. Also, thanks MLflow for being awesome!
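Step 1 of the workflow might look like the following sketch: write per-epoch outputs to a local directory during training, then attach the directory with mlflow.log_artifacts(). The mlflow call is commented out so the snippet runs without a tracking server; the file names and CSV layout are illustrative assumptions.

```python
import csv
import tempfile
from pathlib import Path

def save_epoch_artifacts(out_dir: Path, epoch: int, predictions):
    """Dump per-epoch predictions as a CSV the artifacts view can render."""
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"predictions_epoch_{epoch}.csv"
    with path.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["sample_id", "prediction"])
        writer.writerows(predictions)
    return path

# Simulate a short training loop that emits one artifact file per epoch.
artifact_dir = Path(tempfile.mkdtemp()) / "artifacts"
for epoch in range(3):
    save_epoch_artifacts(artifact_dir, epoch, [(0, 0.1 * epoch), (1, 0.2 * epoch)])

# With a DagsHub-backed MLflow tracking server configured, attaching the
# directory is a single call:
# import mlflow
# with mlflow.start_run():
#     mlflow.log_artifacts(str(artifact_dir))

print(sorted(p.name for p in artifact_dir.iterdir()))
```

After logging, the files appear under the run in the Experiments tab, where the artifacts view renders them per the post above.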

  • Check out this awesome post about the benefits of image embeddings, industry use cases, and best practices. Thanks Ignacio Peletier Ribera!

  • View organization page for DagsHub


    We're very lucky to be working with the top data scientists at MACSO. Check out the full case study!

    View profile for Dean Pleban

    Co-Founder & CEO at DagsHub | AI Data Development Platform

    I’m really proud to share our amazing partnership results with MACSO. Their ambitious ML team led by Hwan is doing mind-blowing work at the intersection of AI, edge computing, AgTech, and more. From pinpointing sources of air pollution to revolutionizing livestock monitoring, MACSO is proving that huge breakthroughs can happen. I'm proud that DagsHub gets to partner with them on this journey of innovation. By providing intuitive tools for experiment tracking, data management, and seamless collaboration, we've been able to help MACSO: 🚀 Increase experiment speed by 30% 🚀 Reduce data prep time by 50% 🚀 Boost team collaboration efficiency by 30% As Hwan put it: "DagsHub has been a game-changer for us. It not only streamlined our ML workflows but also ignited our team's creative potential, allowing us to experiment fearlessly and innovate rapidly. DagsHub is not just a tool; it's a catalyst for transformation in ML development.” From all of us at DagsHub, we're honored to lock arms 🤝 with the brilliant minds at MACSO. Their ability to reimagine what's possible in AI, AgTech, and edge computing is amazing. Check out the comments for the full case study #machinelearning #mlops #edgeai #agritech #datascience #startup



Funding

DagsHub: 3 total rounds
Last round: Seed