Flume Health is a software company that connects the fragmented healthcare data ecosystem for more efficient health plan administration. Flume's Relay platform is a single, cloud-native integration platform that lets companies easily connect the systems and vendors needed for the efficient data exchange increasingly demanded of the modern health plan. It gives payers, third-party administrators, pharmacy benefit managers, and health solutions the simplicity, speed, and security they need to automate data integration and movement between relevant stakeholders. Relay supports multiple data transmission protocols, data types, and file types. By streamlining data flows between payers and solutions, it opens a world of opportunity to improve access to healthcare.
The Role:
At Flume Health, we’re leveraging AI to reimagine how healthcare data is mapped, validated, and integrated: faster, more reliably, and at scale. We’re looking for a Software Engineer focused on AI workflows to help build and orchestrate pipelines that use foundation models like OpenAI’s or Google’s large language models (LLMs) to generate and iteratively refine data mappings.
You won’t be training models from scratch, but you will be at the forefront of building robust workflows that interface with LLM APIs, run validation checks on outputs, and implement automated loops for refinement. If you enjoy piecing together modular AI processes, designing data-driven feedback loops, and working in a fast-paced, compliance-aware environment, this role is for you.
What You’ll Do:
Architect AI workflows that interpret user-provided specifications and generate data mappings using LLM APIs (e.g., OpenAI, Vertex AI), refining outputs based on validation feedback.
Implement intelligent retry and refinement loops to improve mapping accuracy using techniques like prompt chaining, structured validation, and contextual injection.
Develop prompt engineering strategies with support for domain-specific augmentation (e.g., via vector search or RAG).
Orchestrate pipelines in Python with tools like Airflow, integrating structured QA steps to monitor output quality and flag failures.
Collaborate across Data, Platform, and Product teams to embed these workflows into Flume’s core systems, ensuring performance, traceability, and compliance.
Stay up to date on the evolving LLM ecosystem, understanding tradeoffs in cost, latency, and capability across vendors.
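To make the retry-and-refinement idea concrete, here is a minimal sketch of a validation-feedback loop of the kind described above. All names (`validate_mapping`, `refine_mapping`, the `call_llm` callable, and the required fields) are illustrative assumptions, not Flume's actual API; in practice `call_llm` would wrap an OpenAI or Vertex AI client.

```python
import json

# Hypothetical validation rule: a generated mapping must cover every
# required target field. Field names here are purely illustrative.
REQUIRED_FIELDS = {"member_id", "plan_code"}

def validate_mapping(mapping: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    return [f"missing field: {f}" for f in REQUIRED_FIELDS - mapping.keys()]

def refine_mapping(call_llm, spec: str, max_attempts: int = 3) -> dict:
    """Generate a mapping, feeding validation errors back into the prompt
    (contextual injection) until it passes or the retry budget runs out."""
    prompt = f"Map this spec to the target schema; respond with JSON only: {spec}"
    for _ in range(max_attempts):
        raw = call_llm(prompt)  # in production: an LLM API chat call
        mapping = json.loads(raw)
        errors = validate_mapping(mapping)
        if not errors:
            return mapping
        # Append structured feedback so the next attempt can self-correct.
        prompt += f"\nPrevious attempt failed validation: {errors}. Fix and retry."
    raise ValueError("mapping failed validation after retries")
```

The key design choice is that validation output is structured (a list of errors), so the same feedback can drive both the retry prompt and downstream QA flagging.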
What You’ll Need:
3+ years of software engineering experience, ideally in data, ML, or AI-driven product development.
Demonstrated experience building or orchestrating workflows that integrate with LLM APIs (e.g., OpenAI, Anthropic, Google).
Proficiency with Python and orchestration tools (e.g., Airflow, Dagster, Prefect).
Familiarity with prompt engineering concepts and iterative refinement techniques (e.g., looped retries, RAG, validation-based reruns).
Comfortable working in cloud-native environments (GCP or AWS) using Docker and Kubernetes.
Strong communication skills, with the ability to translate business needs into repeatable, explainable, and traceable AI workflows.
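For candidates less familiar with domain-specific augmentation, the sketch below shows the shape of RAG-style prompt construction mentioned above. It uses naive word-overlap scoring purely for illustration; a real pipeline would use embeddings and a vector store, and every name here is a hypothetical example rather than part of any Flume system.

```python
def retrieve_context(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank candidate snippets by word overlap with the query and keep the
    top k. Stand-in for an embedding-based vector search."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Inject the retrieved domain snippets ahead of the task instruction."""
    context = "\n".join(retrieve_context(query, docs))
    return f"Context:\n{context}\n\nTask: {query}"
```

The point of the pattern is that the model sees relevant domain material inside the prompt, so mapping outputs can be grounded without any model training.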