Paul Dix
InfluxData – CTO & co-founder
paul@influxdata.com
@pauldix
InfluxDB IOx - a new columnar
time series database (update)
© 2021 InfluxData. All rights reserved.
API
• InfluxDB 2.x with Line Protocol
• HTTP Query with JSON, CSV, Print
• Arrow Flight
• Move over to gRPC for management (and CLI)
– Create Databases
– Start defining replication/sharding
– Readme for gRPCurl
• gRPC Health
CLI & Config
• Write Line Protocol from File
• Create Database
• Object Store parameters
Query
• Queries now work across Mutable Buffer & Read Buffer
• DataFusion (features)
– Massive infusion of Postgres string functions (lpad, rpad, ascii, chr, ltrim, etc)
– Support for EXTRACT (e.g. `EXTRACT(hour FROM date_col)`)
• DataFusion (performance)
– Optimized function implementations for scalar values and columns
– Improved join indices, support for more advanced statistics, expression rewriting
Path to OSS Builds
• Not until we think it’s useful/interesting to test
• Dogfood our monitoring
1. In-memory 2.4M values/sec
2. Basic proxied/distributed query
3. Mutable Buffer to Read Buffer lifecycle (basic)
4. WAL Buffering/persistence
5. Subscriptions
6. Parquet Persistence
7. Recovery
• Single Server Steady State
• CLI for configuration
• Documentation
Introduction to DataFusion
An Embeddable Query Engine
Written in Rust
CC BY-SA
Today: IOx Team at InfluxData
Past life 1: Query Optimizer @ Vertica, also
on Oracle DB server
Past life 2: Chief Architect + VP Engineering
roles at some ML startups
Talk Outline
What is a Query Engine
Introduction to DataFusion / Apache Arrow
DataFusion Architectural Overview
Motivation
Data is stored somewhere, and users want to access that data without writing a program. They do so through UIs (visual and textual), and a Query Engine sits between those UIs and the stored data. SQL is the common interface.
DataFusion Use Cases
1. Data engineering / ETL:
a. Construct fast and efficient data pipelines (~ Spark)
2. Data Science
a. Prepare data for ML / other tasks (~ Pandas)
3. Database Systems:
a. E.g. IOx, Ballista, Cloudfuse Buzz, various internal systems
Why DataFusion?
High Performance: memory control (no GC) and speed, leveraging Rust/Arrow
Easy to Connect: Interoperability with other tools via Arrow, Parquet and Flight
Easy to Embed: Can extend data sources, functions, operators
First Class Rust: High quality Query / SQL Engine entirely in Rust
High Quality: Extensive tests and integration tests with Arrow ecosystems
My goal: DataFusion to be *the* choice for any SQL support in Rust
DBMS vs Query Engine
Database Management Systems (DBMS) are full featured systems
● Storage system (stores actual data)
● Catalog (stores metadata about what is in the storage system)
● Query Engine (query and retrieve requested data)
● Access Control and Authorization (users, groups, permissions)
● Resource Management (divide resources between uses)
● Administration utilities (monitor resource usage, set policies, etc)
● Clients for Network connectivity (e.g. implement JDBC, ODBC, etc)
● Multi-node coordination and management
DataFusion is only the Query Engine piece of this list, not a full DBMS.
What is DataFusion?
“DataFusion is an in-memory query engine
that uses Apache Arrow as the memory
model” - crates.io
● In the Apache Arrow GitHub repo
● Apache licensed
● Not part of the Arrow spec, uses Arrow
● Initially implemented and donated by
Andy Grove; design based on How
Query Engines Work
DataFusion + Arrow + Parquet
Related crates in the Apache Arrow repository: arrow, datafusion, parquet, arrow-flight
DataFusion Extensibility 🧰
● User Defined Functions
● User Defined Aggregates
● User Defined Optimizer passes
● User Defined LogicalPlan nodes
● User Defined ExecutionPlan nodes
● User Defined TableProvider for tables
* Built-in data persistence using Parquet and CSV files
What is a Query Engine?
1. Frontend
a. Query Language + Parser
2. Intermediate Query Representation
a. Expression / Type system
b. Query Plan w/ Relational Operators (Data Flow Graph)
c. Rewrites / Optimizations on that graph
3. Concrete Execution Operators
a. Allocate resources (CPU, Memory, etc)
b. Push bytes around, vectorized calculations, etc
DataFusion is a Query Engine!
Each stage maps onto Rust structs:
1. Frontend: SQLStatement
2. Intermediate Query Representation: LogicalPlan, Expr
3. Concrete Execution Operators: ExecutionPlan, which produces RecordBatches
DataFusion Input / Output Diagram
Input is either a SQL Query:
SELECT status, COUNT(1)
FROM http_api_requests_total
WHERE path = '/api/v2/write'
GROUP BY status;
OR a DataFrame:
ctx.read_table("http")?
    .filter(...)?
    .aggregate(..)?;
plus catalog information (tables, schemas, etc). The output in either case is RecordBatches (a sketch of both entry points follows below).
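For concreteness, here is a minimal sketch of both entry points against a hypothetical local Parquet file, assuming the 2021-era DataFusion API (ExecutionContext with a synchronous sql(); later releases renamed and reorganized these calls):

use datafusion::error::Result;
use datafusion::prelude::*; // ExecutionContext, col, lit, count, ...

#[tokio::main]
async fn main() -> Result<()> {
    let mut ctx = ExecutionContext::new();
    // Catalog information: register a Parquet file as a table.
    ctx.register_parquet("http_api_requests_total", "http_api_requests_total.parquet")?;

    // Entry point 1: SQL text.
    let df = ctx.sql(
        "SELECT status, COUNT(1) FROM http_api_requests_total \
         WHERE path = '/api/v2/write' GROUP BY status",
    )?;
    let batches = df.collect().await?; // Vec<RecordBatch>

    // Entry point 2: the DataFrame API, building the equivalent plan.
    let df2 = ctx
        .table("http_api_requests_total")?
        .filter(col("path").eq(lit("/api/v2/write")))?
        .aggregate(vec![col("status")], vec![count(lit(1))])?;
    let batches2 = df2.collect().await?;

    println!("{} and {} record batches", batches.len(), batches2.len());
    Ok(())
}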
DataFusion in Action
DataFusion CLI
> CREATE EXTERNAL TABLE
http_api_requests_total
STORED AS PARQUET
LOCATION
'http_api_requests_total.parquet';
> SELECT status, COUNT(1)
FROM http_api_requests_total
WHERE path = '/api/v2/write'
GROUP BY status;
+--------+-----------------+
| status | COUNT(UInt8(1)) |
+--------+-----------------+
| 4XX    | 73621           |
| 2XX    | 338304          |
+--------+-----------------+
EXPLAIN Plan
Gets a textual representation of the LogicalPlan
> EXPLAIN SELECT status, COUNT(1) FROM http_api_requests_total
WHERE path = '/api/v2/write' GROUP BY status;
+--------------+----------------------------------------------------------+
| plan_type    | plan                                                     |
+--------------+----------------------------------------------------------+
| logical_plan | Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]] |
|              |   Selection: #path Eq Utf8("/api/v2/write")              |
|              |     TableScan: http_api_requests_total projection=None  |
+--------------+----------------------------------------------------------+
Plans as DataFlow graphs
Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]]   ← Step 3: Data is aggregated
  Filter: #path Eq Utf8("/api/v2/write")                   ← Step 2: Predicate is applied
    TableScan: http_api_requests_total projection=None     ← Step 1: Parquet file is read
Data flows up from the leaves to the root of the tree.
More than initially meets the eye
Use EXPLAIN VERBOSE to see the optimizations applied
> EXPLAIN VERBOSE SELECT status, COUNT(1) FROM http_api_requests_total
WHERE path = '/api/v2/write' GROUP BY status;
+----------------------+----------------------------------------------------------------+
| plan_type            | plan                                                           |
+----------------------+----------------------------------------------------------------+
| logical_plan         | Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]]       |
|                      |   Selection: #path Eq Utf8("/api/v2/write")                    |
|                      |     TableScan: http_api_requests_total projection=None         |
| projection_push_down | Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]]       |
|                      |   Selection: #path Eq Utf8("/api/v2/write")                    |
|                      |     TableScan: http_api_requests_total projection=Some([6, 8]) |
| type_coercion        | Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]]       |
|                      |   Selection: #path Eq Utf8("/api/v2/write")                    |
|                      |     TableScan: http_api_requests_total projection=Some([6, 8]) |
...
+----------------------+----------------------------------------------------------------+
The optimizer "pushed" down the projection so only the status and path columns were read from the parquet file.
Data Representation
Array + RecordBatch + Schema
+--------+--------+
| status | COUNT  |
+--------+--------+
| 4XX    | 73621  |
| 2XX    | 338304 |
| 5XX    | 42     |
| 1XX    | 3      |
+--------+--------+
Schema:
fields[0]: "status", Utf8
fields[1]: "COUNT()", UInt64
This table is held as two RecordBatches that share the schema:
RecordBatch 1 cols: StringArray [4XX, 2XX, 5XX], UInt64Array [73621, 338304, 42]
RecordBatch 2 cols: StringArray [1XX], UInt64Array [3]
* The StringArray representation is somewhat misleading, as it actually has a fixed-length portion and the character data in different locations
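A short sketch of building those two RecordBatches with the arrow crate's constructors (the values are the ones from the table above):

use std::sync::Arc;
use arrow::array::{StringArray, UInt64Array};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;

fn main() -> arrow::error::Result<()> {
    // Schema shared by every RecordBatch of this table.
    let schema = Arc::new(Schema::new(vec![
        Field::new("status", DataType::Utf8, false),
        Field::new("COUNT()", DataType::UInt64, false),
    ]));

    // RecordBatch 1: three rows.
    let batch1 = RecordBatch::try_new(
        schema.clone(),
        vec![
            Arc::new(StringArray::from(vec!["4XX", "2XX", "5XX"])),
            Arc::new(UInt64Array::from(vec![73621_u64, 338304, 42])),
        ],
    )?;

    // RecordBatch 2: one more row, same schema.
    let batch2 = RecordBatch::try_new(
        schema,
        vec![
            Arc::new(StringArray::from(vec!["1XX"])),
            Arc::new(UInt64Array::from(vec![3_u64])),
        ],
    )?;

    println!("{} + {} rows", batch1.num_rows(), batch2.num_rows());
    Ok(())
}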
Query Planning
DataFusion Planning Flow
SQL Query
SELECT status, COUNT(1)
FROM http_api_requests_total
WHERE path = '/api/v2/write'
GROUP BY status;
→ Parsing/Planning → LogicalPlan → Optimization → ExecutionPlan → Execution → RecordBatches
Other systems use other names for the same stages: the LogicalPlan is a "Query Plan" (Postgres: "Query Tree"), and the ExecutionPlan is an "Access Plan" or "Operator Tree" (Postgres: "Plan Tree").
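The same flow can be driven step by step from Rust. A minimal sketch, assuming the 2021-era ExecutionContext API (later releases made several of these calls async and renamed the context):

use datafusion::error::Result;
use datafusion::physical_plan::collect;
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> Result<()> {
    let mut ctx = ExecutionContext::new();
    ctx.register_parquet("http_api_requests_total", "http_api_requests_total.parquet")?;

    // Parsing/Planning: SQL text -> LogicalPlan
    let plan = ctx.create_logical_plan(
        "SELECT status, COUNT(1) FROM http_api_requests_total \
         WHERE path = '/api/v2/write' GROUP BY status",
    )?;

    // Optimization: LogicalPlan -> optimized LogicalPlan
    let plan = ctx.optimize(&plan)?;

    // Physical planning: LogicalPlan -> ExecutionPlan
    let physical = ctx.create_physical_plan(&plan)?;

    // Execution: ExecutionPlan -> RecordBatches
    let batches = collect(physical).await?;
    println!("{} record batches", batches.len());
    Ok(())
}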
DataFusion Logical Plan Creation
● Declarative: Describe WHAT you want; system figures out HOW
○ Input: “SQL” text (postgres dialect)
● Procedural: Describe HOW directly
○ Input is a program to build up the plan
○ Two options:
■ Use LogicalPlanBuilder, a Rust-style builder
■ DataFrame - model popularized by Pandas and Spark
SQL → LogicalPlan
SQL Query
SELECT status, COUNT(1)
FROM http_api_requests_total
WHERE path = '/api/v2/write'
GROUP BY status;
→ SQL Parser → Parsed Statement:
Query {
    ctes: [],
    body: Select(
        Select {
            distinct: false,
            top: None,
            projection: [
                UnnamedExpr(
                    Identifier(
                        Ident {
                            value: "status",
                            quote_style: None,
                        },
                    ),
                ),
                ...
→ Planner → LogicalPlan
“DataFrame” → LogicalPlan
Rust Code
let df = ctx
    .read_table("http_api_requests_total")?
    .filter(col("path").eq(lit("/api/v2/write")))?
    .aggregate(vec![col("status")], vec![count(lit(1))])?;
DataFrame (Builder) → LogicalPlan
Supported Logical Plan operators (source link)
Projection
Filter
Aggregate
Sort
Join
Repartition
TableScan
EmptyRelation
Limit
CreateExternalTable
Explain
Extension
Query Optimization Overview
Compute the same (correct) result, only faster
The "Optimizer" is a sequence of passes:
LogicalPlan (input) → Optimizer Pass 1 → LogicalPlan (intermediate) → Optimizer Pass 2 → … other passes … → LogicalPlan (output)
Built in DataFusion Optimizer Passes (source link)
ProjectionPushDown: Minimize the number of columns passed from node to node
to minimize intermediate result size (number of columns)
FilterPushdown (“predicate pushdown”): Push filters as close to scans as possible
to minimize intermediate result size
HashBuildProbeOrder (“join reordering”): Order joins to minimize the intermediate
result size and hash table sizes
ConstantFolding: Partially evaluates expressions at plan time, e.g. ColA && true → ColA
Expression Evaluation
Expression Evaluation
Arrow Compute Kernels typically operate on 1 or 2 arrays and/or scalars.
Partial list of included comparison kernels:
eq Perform left == right operation on two arrays.
eq_scalar Perform left == right operation on an array and a scalar value.
eq_utf8 Perform left == right operation on StringArray / LargeStringArray.
eq_utf8_scalar Perform left == right operation on StringArray / LargeStringArray and a scalar.
and Performs AND operation on two arrays. If either left or right value is null then the result is also null.
is_not_null Returns a non-null BooleanArray with whether each value of the array is not null.
or Performs OR operation on two arrays. If either left or right value is null then the result is also null.
...
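To make this concrete, here is a small sketch that evaluates path = '/api/v2/write' OR path IS NULL directly with these kernels (module paths follow the 2021-era arrow crate and may have moved since; is_null is the counterpart of the is_not_null kernel listed above):

use arrow::array::{BooleanArray, StringArray};
use arrow::compute::kernels::boolean::{is_null, or};
use arrow::compute::kernels::comparison::eq_utf8_scalar;

fn main() -> arrow::error::Result<()> {
    // The "path" column of one RecordBatch.
    let path = StringArray::from(vec![
        "/api/v2/write",
        "/api/v1/write",
        "/api/v2/read",
        "/foo/bar",
    ]);

    // path = '/api/v2/write'  (array vs. scalar comparison kernel)
    let eq: BooleanArray = eq_utf8_scalar(&path, "/api/v2/write")?;

    // path IS NULL (all false here, since this column has no nulls)
    let nulls: BooleanArray = is_null(&path)?;

    // OR the two BooleanArrays together
    let result = or(&eq, &nulls)?;
    println!("{:?}", result); // values: true, false, false, false
    Ok(())
}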
Exprs for evaluating arbitrary expressions
path = '/api/v2/write' OR path IS NULL
BinaryExpr op: Or
  left:  BinaryExpr op: Eq
         left:  Column path
         right: Literal ScalarValue::Utf8 '/api/v2/write'
  right: IsNull
         Column path
Expression Builder API:
col("path")
    .eq(lit("/api/v2/write"))
    .or(col("path").is_null())
Expr Vectorized Evaluation
Evaluating path = '/api/v2/write' OR path IS NULL against one RecordBatch proceeds bottom-up through the Expr tree:
1. Column path resolves to the StringArray for the path column: /api/v2/write, /api/v1/write, /api/v2/read, /api/v2/write, …, /api/v2/write, /foo/bar.
2. The Literal stays a scalar: ScalarValue::Utf8(Some("/api/v2/write")).
3. BinaryExpr op: Eq has an array on the left and a scalar on the right, so it calls the eq_utf8_scalar kernel, producing a BooleanArray: True, False, False, True, …, True, False.
4. IsNull is evaluated against the same StringArray, producing a BooleanArray of all False (this column has no nulls).
5. BinaryExpr op: Or combines the two BooleanArrays into the final BooleanArray: True, False, False, True, …, True, False.
Type Coercion
sqrt(col)
col is Int8, but sqrt is implemented for Float32 or Float64
⇒ Type Coercion: adds a type cast so the implementation can be called:
sqrt(col) → sqrt(CAST(col AS Float32))
Note: Coercion is lossless; if col were Float64, it would not be coerced to Float32
Source Code: coercion.rs
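A rough sketch of the inserted cast using the arrow cast kernel (sqrt itself is a DataFusion math expression, so only the coercion step is shown; module path per the 2021-era arrow crate):

use std::sync::Arc;
use arrow::array::{ArrayRef, Int8Array};
use arrow::compute::kernels::cast::cast;
use arrow::datatypes::DataType;

fn main() -> arrow::error::Result<()> {
    // An Int8 column; sqrt is only implemented for Float32/Float64.
    let col: ArrayRef = Arc::new(Int8Array::from(vec![1, 4, 9]));

    // Type coercion inserts the equivalent of CAST(col AS Float32).
    let coerced = cast(&col, &DataType::Float32)?;
    assert_eq!(coerced.data_type(), &DataType::Float32);
    Ok(())
}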
Execution Plans
Plan Execution Overview
Typically called the “execution engine” in database systems
DataFusion features:
● Async: Mostly avoids blocking I/O
● Vectorized: Process RecordBatch at a time, configurable batch size
● Eager Pull: Data is produced using a pull model, natural backpressure
● Partitioned: each operator produces partitions, in parallel
● Multi-Core*
* Uses async tasks; still some unease about this / if we need another thread pool
Plan Execution
LogicalPlan → (create_physical_plan) → ExecutionPlan → (execute) → SendableRecordBatchStream, one per partition → (collect) → RecordBatches
ExecutionPlan nodes allocate resources (buffers, hash tables, files, etc). execute produces an iterator-style thing that produces Arrow RecordBatches for each partition.
create_physical_plan
LogicalPlan:
Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]]
  Filter: #path Eq Utf8("/api/v2/write")
    TableScan: http_api_requests_total projection=None
→ ExecutionPlan:
HashAggregateExec (1 partition) AggregateMode::Final, SUM(1), GROUP BY status
  MergeExec (1 partition)
    HashAggregateExec (2 partitions) AggregateMode::Partial, COUNT(1), GROUP BY status
      FilterExec (2 partitions) path = "/api/v2/write"
        ParquetExec (2 partitions) files = file1, file2
execute
Calling execute(partition) on each ExecutionPlan node produces one SendableRecordBatchStream per partition:
HashAggregateExec (Final, 1 partition)    → execute(0) → GroupHash AggregateStream
MergeExec (1 partition)                   → execute(0) → MergeStream
HashAggregateExec (Partial, 2 partitions) → execute(0), execute(1) → 2 x GroupHash AggregateStream
FilterExec (2 partitions)                 → execute(0), execute(1) → 2 x FilterExecStream
ParquetExec (2 partitions)                → execute(0), execute(1) → "ParquetStream"* for file1 and for file2
The caller then drives the root stream with next().
* this is actually a channel getting results from a different thread, as the parquet reader is not yet async
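As a concrete illustration of that interface, a minimal sketch of driving one partition of a physical plan by hand, assuming the 2021-era API where ExecutionPlan::execute is async and the stream yields Result-wrapped RecordBatches (futures::StreamExt provides next()):

use std::sync::Arc;
use datafusion::error::Result;
use datafusion::physical_plan::ExecutionPlan;
use futures::StreamExt;

// Drive partition 0 of a physical plan and count the rows it yields.
async fn count_rows(plan: Arc<dyn ExecutionPlan>) -> Result<usize> {
    // execute() returns a SendableRecordBatchStream for this partition:
    // an async Stream of RecordBatch results.
    let mut stream = plan.execute(0).await?;

    let mut rows = 0;
    while let Some(batch) = stream.next().await {
        rows += batch?.num_rows();
    }
    Ok(rows)
}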
For a single partition, the chain of streams is "ParquetStream" (file1) → FilterExecStream → GroupHash AggregateStream. Ready to produce values! 😅
A Rust Stream is an async iterator that produces record batches. Execution of GroupHash starts eagerly (before next() is called on it):
Step 0: a new task is spawned and starts computing its input immediately
Step 1: data is read from parquet and returned as RecordBatches
Step 2: data is filtered
Step 3: data is fed into a hash table
Step 4: hash done, output produced
Step 5: output is requested via next().await
Step 6: a RecordBatch is returned to the caller
Across partitions, the MergeStream sits between the partial GroupHash AggregateStreams and the final one. MergeStream eagerly starts on its own task, with back pressure via bounded channels:
Step 0: new tasks are spawned for MergeStream and for each partial GroupHash AggregateStream, and they start computing their input immediately
Step 1: output is requested from the final GroupHash AggregateStream via next().await
Step 2: eventually a RecordBatch is produced downstream and returned
Step 3: Merge passes on the RecordBatch
Step 4: data is fed into the final hash table
Step 5: hash done, output produced
Step 6: a RecordBatch is returned to the caller
Get Involved
Check out the project Apache Arrow
Join the mailing list (links on project page)
Test out Arrow (crates.io) and DataFusion (crates.io) in your projects
Help out with the docs/code/tickets on GitHub
Thank You!!!!
Ad

More Related Content

What's hot (20)

The Apache Spark File Format Ecosystem
The Apache Spark File Format EcosystemThe Apache Spark File Format Ecosystem
The Apache Spark File Format Ecosystem
Databricks
 
Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...
Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...
Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...
Databricks
 
Spark shuffle introduction
Spark shuffle introductionSpark shuffle introduction
Spark shuffle introduction
colorant
 
InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD...
InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD...InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD...
InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD...
InfluxData
 
Apache Arrow: In Theory, In Practice
Apache Arrow: In Theory, In PracticeApache Arrow: In Theory, In Practice
Apache Arrow: In Theory, In Practice
Dremio Corporation
 
Making Apache Spark Better with Delta Lake
Making Apache Spark Better with Delta LakeMaking Apache Spark Better with Delta Lake
Making Apache Spark Better with Delta Lake
Databricks
 
Delta Lake: Optimizing Merge
Delta Lake: Optimizing MergeDelta Lake: Optimizing Merge
Delta Lake: Optimizing Merge
Databricks
 
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021
StreamNative
 
A Rusty introduction to Apache Arrow and how it applies to a time series dat...
A Rusty introduction to Apache Arrow and how it applies to a  time series dat...A Rusty introduction to Apache Arrow and how it applies to a  time series dat...
A Rusty introduction to Apache Arrow and how it applies to a time series dat...
Andrew Lamb
 
Parquet performance tuning: the missing guide
Parquet performance tuning: the missing guideParquet performance tuning: the missing guide
Parquet performance tuning: the missing guide
Ryan Blue
 
Optimizing Delta/Parquet Data Lakes for Apache Spark
Optimizing Delta/Parquet Data Lakes for Apache SparkOptimizing Delta/Parquet Data Lakes for Apache Spark
Optimizing Delta/Parquet Data Lakes for Apache Spark
Databricks
 
Top 5 Mistakes When Writing Spark Applications
Top 5 Mistakes When Writing Spark ApplicationsTop 5 Mistakes When Writing Spark Applications
Top 5 Mistakes When Writing Spark Applications
Spark Summit
 
A Thorough Comparison of Delta Lake, Iceberg and Hudi
A Thorough Comparison of Delta Lake, Iceberg and HudiA Thorough Comparison of Delta Lake, Iceberg and Hudi
A Thorough Comparison of Delta Lake, Iceberg and Hudi
Databricks
 
Evening out the uneven: dealing with skew in Flink
Evening out the uneven: dealing with skew in FlinkEvening out the uneven: dealing with skew in Flink
Evening out the uneven: dealing with skew in Flink
Flink Forward
 
Iceberg: a fast table format for S3
Iceberg: a fast table format for S3Iceberg: a fast table format for S3
Iceberg: a fast table format for S3
DataWorks Summit
 
Using ClickHouse for Experimentation
Using ClickHouse for ExperimentationUsing ClickHouse for Experimentation
Using ClickHouse for Experimentation
Gleb Kanterov
 
Building a Real-Time Analytics Application with Apache Pulsar and Apache Pinot
Building a Real-Time Analytics Application with  Apache Pulsar and Apache PinotBuilding a Real-Time Analytics Application with  Apache Pulsar and Apache Pinot
Building a Real-Time Analytics Application with Apache Pulsar and Apache Pinot
Altinity Ltd
 
Delta Lake Streaming: Under the Hood
Delta Lake Streaming: Under the HoodDelta Lake Streaming: Under the Hood
Delta Lake Streaming: Under the Hood
Databricks
 
Delta from a Data Engineer's Perspective
Delta from a Data Engineer's PerspectiveDelta from a Data Engineer's Perspective
Delta from a Data Engineer's Perspective
Databricks
 
Apache Arrow Flight: A New Gold Standard for Data Transport
Apache Arrow Flight: A New Gold Standard for Data TransportApache Arrow Flight: A New Gold Standard for Data Transport
Apache Arrow Flight: A New Gold Standard for Data Transport
Wes McKinney
 
The Apache Spark File Format Ecosystem
The Apache Spark File Format EcosystemThe Apache Spark File Format Ecosystem
The Apache Spark File Format Ecosystem
Databricks
 
Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...
Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...
Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...
Databricks
 
Spark shuffle introduction
Spark shuffle introductionSpark shuffle introduction
Spark shuffle introduction
colorant
 
InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD...
InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD...InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD...
InfluxDB IOx Tech Talks: Replication, Durability and Subscriptions in InfluxD...
InfluxData
 
Apache Arrow: In Theory, In Practice
Apache Arrow: In Theory, In PracticeApache Arrow: In Theory, In Practice
Apache Arrow: In Theory, In Practice
Dremio Corporation
 
Making Apache Spark Better with Delta Lake
Making Apache Spark Better with Delta LakeMaking Apache Spark Better with Delta Lake
Making Apache Spark Better with Delta Lake
Databricks
 
Delta Lake: Optimizing Merge
Delta Lake: Optimizing MergeDelta Lake: Optimizing Merge
Delta Lake: Optimizing Merge
Databricks
 
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021
Trino: A Ludicrously Fast Query Engine - Pulsar Summit NA 2021
StreamNative
 
A Rusty introduction to Apache Arrow and how it applies to a time series dat...
A Rusty introduction to Apache Arrow and how it applies to a  time series dat...A Rusty introduction to Apache Arrow and how it applies to a  time series dat...
A Rusty introduction to Apache Arrow and how it applies to a time series dat...
Andrew Lamb
 
Parquet performance tuning: the missing guide
Parquet performance tuning: the missing guideParquet performance tuning: the missing guide
Parquet performance tuning: the missing guide
Ryan Blue
 
Optimizing Delta/Parquet Data Lakes for Apache Spark
Optimizing Delta/Parquet Data Lakes for Apache SparkOptimizing Delta/Parquet Data Lakes for Apache Spark
Optimizing Delta/Parquet Data Lakes for Apache Spark
Databricks
 
Top 5 Mistakes When Writing Spark Applications
Top 5 Mistakes When Writing Spark ApplicationsTop 5 Mistakes When Writing Spark Applications
Top 5 Mistakes When Writing Spark Applications
Spark Summit
 
A Thorough Comparison of Delta Lake, Iceberg and Hudi
A Thorough Comparison of Delta Lake, Iceberg and HudiA Thorough Comparison of Delta Lake, Iceberg and Hudi
A Thorough Comparison of Delta Lake, Iceberg and Hudi
Databricks
 
Evening out the uneven: dealing with skew in Flink
Evening out the uneven: dealing with skew in FlinkEvening out the uneven: dealing with skew in Flink
Evening out the uneven: dealing with skew in Flink
Flink Forward
 
Iceberg: a fast table format for S3
Iceberg: a fast table format for S3Iceberg: a fast table format for S3
Iceberg: a fast table format for S3
DataWorks Summit
 
Using ClickHouse for Experimentation
Using ClickHouse for ExperimentationUsing ClickHouse for Experimentation
Using ClickHouse for Experimentation
Gleb Kanterov
 
Building a Real-Time Analytics Application with Apache Pulsar and Apache Pinot
Building a Real-Time Analytics Application with  Apache Pulsar and Apache PinotBuilding a Real-Time Analytics Application with  Apache Pulsar and Apache Pinot
Building a Real-Time Analytics Application with Apache Pulsar and Apache Pinot
Altinity Ltd
 
Delta Lake Streaming: Under the Hood
Delta Lake Streaming: Under the HoodDelta Lake Streaming: Under the Hood
Delta Lake Streaming: Under the Hood
Databricks
 
Delta from a Data Engineer's Perspective
Delta from a Data Engineer's PerspectiveDelta from a Data Engineer's Perspective
Delta from a Data Engineer's Perspective
Databricks
 
Apache Arrow Flight: A New Gold Standard for Data Transport
Apache Arrow Flight: A New Gold Standard for Data TransportApache Arrow Flight: A New Gold Standard for Data Transport
Apache Arrow Flight: A New Gold Standard for Data Transport
Wes McKinney
 

Similar to InfluxDB IOx Tech Talks: Query Engine Design and the Rust-Based DataFusion in Apache Arrow (20)

Introduction to DataFusion An Embeddable Query Engine Written in Rust
Introduction to DataFusion  An Embeddable Query Engine Written in RustIntroduction to DataFusion  An Embeddable Query Engine Written in Rust
Introduction to DataFusion An Embeddable Query Engine Written in Rust
Andrew Lamb
 
Structuring Spark: DataFrames, Datasets, and Streaming by Michael Armbrust
Structuring Spark: DataFrames, Datasets, and Streaming by Michael ArmbrustStructuring Spark: DataFrames, Datasets, and Streaming by Michael Armbrust
Structuring Spark: DataFrames, Datasets, and Streaming by Michael Armbrust
Spark Summit
 
Structuring Apache Spark 2.0: SQL, DataFrames, Datasets And Streaming - by Mi...
Structuring Apache Spark 2.0: SQL, DataFrames, Datasets And Streaming - by Mi...Structuring Apache Spark 2.0: SQL, DataFrames, Datasets And Streaming - by Mi...
Structuring Apache Spark 2.0: SQL, DataFrames, Datasets And Streaming - by Mi...
Databricks
 
Postgresql Database Administration Basic - Day2
Postgresql  Database Administration Basic  - Day2Postgresql  Database Administration Basic  - Day2
Postgresql Database Administration Basic - Day2
PoguttuezhiniVP
 
Writing Continuous Applications with Structured Streaming Python APIs in Apac...
Writing Continuous Applications with Structured Streaming Python APIs in Apac...Writing Continuous Applications with Structured Streaming Python APIs in Apac...
Writing Continuous Applications with Structured Streaming Python APIs in Apac...
Databricks
 
Data-and-Compute-Intensive processing Use Case: Lucene Domain Index
Data-and-Compute-Intensive processing Use Case: Lucene Domain IndexData-and-Compute-Intensive processing Use Case: Lucene Domain Index
Data-and-Compute-Intensive processing Use Case: Lucene Domain Index
Marcelo Ochoa
 
Using existing language skillsets to create large-scale, cloud-based analytics
Using existing language skillsets to create large-scale, cloud-based analyticsUsing existing language skillsets to create large-scale, cloud-based analytics
Using existing language skillsets to create large-scale, cloud-based analytics
Microsoft Tech Community
 
Oracle DBA Training in Hyderabad
Oracle DBA Training in HyderabadOracle DBA Training in Hyderabad
Oracle DBA Training in Hyderabad
united global soft
 
Oracle DBA Training in Hyderabad
Oracle DBA Training in HyderabadOracle DBA Training in Hyderabad
Oracle DBA Training in Hyderabad
Ugs8008
 
Orcale dba training
Orcale dba trainingOrcale dba training
Orcale dba training
Ugs8008
 
Structuring Spark: DataFrames, Datasets, and Streaming
Structuring Spark: DataFrames, Datasets, and StreamingStructuring Spark: DataFrames, Datasets, and Streaming
Structuring Spark: DataFrames, Datasets, and Streaming
Databricks
 
Large scale, interactive ad-hoc queries over different datastores with Apache...
Large scale, interactive ad-hoc queries over different datastores with Apache...Large scale, interactive ad-hoc queries over different datastores with Apache...
Large scale, interactive ad-hoc queries over different datastores with Apache...
jaxLondonConference
 
The life of a query (oracle edition)
The life of a query (oracle edition)The life of a query (oracle edition)
The life of a query (oracle edition)
maclean liu
 
Writing Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark APIWriting Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark API
Databricks
 
Oracle DBA Training in Hyderabad
Oracle DBA Training in HyderabadOracle DBA Training in Hyderabad
Oracle DBA Training in Hyderabad
united global soft
 
Oracle DBA Online Trainingin India
Oracle DBA Online Trainingin IndiaOracle DBA Online Trainingin India
Oracle DBA Online Trainingin India
united global soft
 
Writing Continuous Applications with Structured Streaming in PySpark
Writing Continuous Applications with Structured Streaming in PySparkWriting Continuous Applications with Structured Streaming in PySpark
Writing Continuous Applications with Structured Streaming in PySpark
Databricks
 
Spark Sql and DataFrame
Spark Sql and DataFrameSpark Sql and DataFrame
Spark Sql and DataFrame
Prashant Gupta
 
Day 1 - Technical Bootcamp azure synapse analytics
Day 1 - Technical Bootcamp azure synapse analyticsDay 1 - Technical Bootcamp azure synapse analytics
Day 1 - Technical Bootcamp azure synapse analytics
Armand272
 
Oracle DBA Online Training in India,Oracle DBA Training in USA,Oracle DBA Tra...
Oracle DBA Online Training in India,Oracle DBA Training in USA,Oracle DBA Tra...Oracle DBA Online Training in India,Oracle DBA Training in USA,Oracle DBA Tra...
Oracle DBA Online Training in India,Oracle DBA Training in USA,Oracle DBA Tra...
united global soft
 
Introduction to DataFusion An Embeddable Query Engine Written in Rust
Introduction to DataFusion  An Embeddable Query Engine Written in RustIntroduction to DataFusion  An Embeddable Query Engine Written in Rust
Introduction to DataFusion An Embeddable Query Engine Written in Rust
Andrew Lamb
 
Structuring Spark: DataFrames, Datasets, and Streaming by Michael Armbrust
Structuring Spark: DataFrames, Datasets, and Streaming by Michael ArmbrustStructuring Spark: DataFrames, Datasets, and Streaming by Michael Armbrust
Structuring Spark: DataFrames, Datasets, and Streaming by Michael Armbrust
Spark Summit
 
Structuring Apache Spark 2.0: SQL, DataFrames, Datasets And Streaming - by Mi...
Structuring Apache Spark 2.0: SQL, DataFrames, Datasets And Streaming - by Mi...Structuring Apache Spark 2.0: SQL, DataFrames, Datasets And Streaming - by Mi...
Structuring Apache Spark 2.0: SQL, DataFrames, Datasets And Streaming - by Mi...
Databricks
 
Postgresql Database Administration Basic - Day2
Postgresql  Database Administration Basic  - Day2Postgresql  Database Administration Basic  - Day2
Postgresql Database Administration Basic - Day2
PoguttuezhiniVP
 
Writing Continuous Applications with Structured Streaming Python APIs in Apac...
Writing Continuous Applications with Structured Streaming Python APIs in Apac...Writing Continuous Applications with Structured Streaming Python APIs in Apac...
Writing Continuous Applications with Structured Streaming Python APIs in Apac...
Databricks
 
Data-and-Compute-Intensive processing Use Case: Lucene Domain Index
Data-and-Compute-Intensive processing Use Case: Lucene Domain IndexData-and-Compute-Intensive processing Use Case: Lucene Domain Index
Data-and-Compute-Intensive processing Use Case: Lucene Domain Index
Marcelo Ochoa
 
Using existing language skillsets to create large-scale, cloud-based analytics
Using existing language skillsets to create large-scale, cloud-based analyticsUsing existing language skillsets to create large-scale, cloud-based analytics
Using existing language skillsets to create large-scale, cloud-based analytics
Microsoft Tech Community
 
Oracle DBA Training in Hyderabad
Oracle DBA Training in HyderabadOracle DBA Training in Hyderabad
Oracle DBA Training in Hyderabad
united global soft
 
Oracle DBA Training in Hyderabad
Oracle DBA Training in HyderabadOracle DBA Training in Hyderabad
Oracle DBA Training in Hyderabad
Ugs8008
 
Orcale dba training
Orcale dba trainingOrcale dba training
Orcale dba training
Ugs8008
 
Structuring Spark: DataFrames, Datasets, and Streaming
Structuring Spark: DataFrames, Datasets, and StreamingStructuring Spark: DataFrames, Datasets, and Streaming
Structuring Spark: DataFrames, Datasets, and Streaming
Databricks
 
Large scale, interactive ad-hoc queries over different datastores with Apache...
Large scale, interactive ad-hoc queries over different datastores with Apache...Large scale, interactive ad-hoc queries over different datastores with Apache...
Large scale, interactive ad-hoc queries over different datastores with Apache...
jaxLondonConference
 
The life of a query (oracle edition)
The life of a query (oracle edition)The life of a query (oracle edition)
The life of a query (oracle edition)
maclean liu
 
Writing Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark APIWriting Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark API
Databricks
 
Oracle DBA Training in Hyderabad
Oracle DBA Training in HyderabadOracle DBA Training in Hyderabad
Oracle DBA Training in Hyderabad
united global soft
 
Oracle DBA Online Trainingin India
Oracle DBA Online Trainingin IndiaOracle DBA Online Trainingin India
Oracle DBA Online Trainingin India
united global soft
 
Writing Continuous Applications with Structured Streaming in PySpark
Writing Continuous Applications with Structured Streaming in PySparkWriting Continuous Applications with Structured Streaming in PySpark
Writing Continuous Applications with Structured Streaming in PySpark
Databricks
 
Spark Sql and DataFrame
Spark Sql and DataFrameSpark Sql and DataFrame
Spark Sql and DataFrame
Prashant Gupta
 
Day 1 - Technical Bootcamp azure synapse analytics
Day 1 - Technical Bootcamp azure synapse analyticsDay 1 - Technical Bootcamp azure synapse analytics
Day 1 - Technical Bootcamp azure synapse analytics
Armand272
 
Oracle DBA Online Training in India,Oracle DBA Training in USA,Oracle DBA Tra...
Oracle DBA Online Training in India,Oracle DBA Training in USA,Oracle DBA Tra...Oracle DBA Online Training in India,Oracle DBA Training in USA,Oracle DBA Tra...
Oracle DBA Online Training in India,Oracle DBA Training in USA,Oracle DBA Tra...
united global soft
 
Ad

More from InfluxData (20)

Announcing InfluxDB Clustered
Announcing InfluxDB ClusteredAnnouncing InfluxDB Clustered
Announcing InfluxDB Clustered
InfluxData
 
Best Practices for Leveraging the Apache Arrow Ecosystem
Best Practices for Leveraging the Apache Arrow EcosystemBest Practices for Leveraging the Apache Arrow Ecosystem
Best Practices for Leveraging the Apache Arrow Ecosystem
InfluxData
 
How Bevi Uses InfluxDB and Grafana to Improve Predictive Maintenance and Redu...
How Bevi Uses InfluxDB and Grafana to Improve Predictive Maintenance and Redu...How Bevi Uses InfluxDB and Grafana to Improve Predictive Maintenance and Redu...
How Bevi Uses InfluxDB and Grafana to Improve Predictive Maintenance and Redu...
InfluxData
 
Power Your Predictive Analytics with InfluxDB
Power Your Predictive Analytics with InfluxDBPower Your Predictive Analytics with InfluxDB
Power Your Predictive Analytics with InfluxDB
InfluxData
 
How Teréga Replaces Legacy Data Historians with InfluxDB, AWS and IO-Base
How Teréga Replaces Legacy Data Historians with InfluxDB, AWS and IO-Base How Teréga Replaces Legacy Data Historians with InfluxDB, AWS and IO-Base
How Teréga Replaces Legacy Data Historians with InfluxDB, AWS and IO-Base
InfluxData
 
Build an Edge-to-Cloud Solution with the MING Stack
Build an Edge-to-Cloud Solution with the MING StackBuild an Edge-to-Cloud Solution with the MING Stack
Build an Edge-to-Cloud Solution with the MING Stack
InfluxData
 
Meet the Founders: An Open Discussion About Rewriting Using Rust
Meet the Founders: An Open Discussion About Rewriting Using RustMeet the Founders: An Open Discussion About Rewriting Using Rust
Meet the Founders: An Open Discussion About Rewriting Using Rust
InfluxData
 
Introducing InfluxDB Cloud Dedicated
Introducing InfluxDB Cloud DedicatedIntroducing InfluxDB Cloud Dedicated
Introducing InfluxDB Cloud Dedicated
InfluxData
 
Gain Better Observability with OpenTelemetry and InfluxDB
Gain Better Observability with OpenTelemetry and InfluxDB Gain Better Observability with OpenTelemetry and InfluxDB
Gain Better Observability with OpenTelemetry and InfluxDB
InfluxData
 
How a Heat Treating Plant Ensures Tight Process Control and Exceptional Quali...
How a Heat Treating Plant Ensures Tight Process Control and Exceptional Quali...How a Heat Treating Plant Ensures Tight Process Control and Exceptional Quali...
How a Heat Treating Plant Ensures Tight Process Control and Exceptional Quali...
InfluxData
 
How Delft University's Engineering Students Make Their EV Formula-Style Race ...
How Delft University's Engineering Students Make Their EV Formula-Style Race ...How Delft University's Engineering Students Make Their EV Formula-Style Race ...
How Delft University's Engineering Students Make Their EV Formula-Style Race ...
InfluxData
 
Introducing InfluxDB’s New Time Series Database Storage Engine
Introducing InfluxDB’s New Time Series Database Storage EngineIntroducing InfluxDB’s New Time Series Database Storage Engine
Introducing InfluxDB’s New Time Series Database Storage Engine
InfluxData
 
Start Automating InfluxDB Deployments at the Edge with balena
Start Automating InfluxDB Deployments at the Edge with balena Start Automating InfluxDB Deployments at the Edge with balena
Start Automating InfluxDB Deployments at the Edge with balena
InfluxData
 
Understanding InfluxDB’s New Storage Engine
Understanding InfluxDB’s New Storage EngineUnderstanding InfluxDB’s New Storage Engine
Understanding InfluxDB’s New Storage Engine
InfluxData
 
Streamline and Scale Out Data Pipelines with Kubernetes, Telegraf, and InfluxDB
Streamline and Scale Out Data Pipelines with Kubernetes, Telegraf, and InfluxDBStreamline and Scale Out Data Pipelines with Kubernetes, Telegraf, and InfluxDB
Streamline and Scale Out Data Pipelines with Kubernetes, Telegraf, and InfluxDB
InfluxData
 
Ward Bowman [PTC] | ThingWorx Long-Term Data Storage with InfluxDB | InfluxDa...
Ward Bowman [PTC] | ThingWorx Long-Term Data Storage with InfluxDB | InfluxDa...Ward Bowman [PTC] | ThingWorx Long-Term Data Storage with InfluxDB | InfluxDa...
Ward Bowman [PTC] | ThingWorx Long-Term Data Storage with InfluxDB | InfluxDa...
InfluxData
 
Scott Anderson [InfluxData] | New & Upcoming Flux Features | InfluxDays 2022
Scott Anderson [InfluxData] | New & Upcoming Flux Features | InfluxDays 2022Scott Anderson [InfluxData] | New & Upcoming Flux Features | InfluxDays 2022
Scott Anderson [InfluxData] | New & Upcoming Flux Features | InfluxDays 2022
InfluxData
 
Steinkamp, Clifford [InfluxData] | Closing Thoughts | InfluxDays 2022
Steinkamp, Clifford [InfluxData] | Closing Thoughts | InfluxDays 2022Steinkamp, Clifford [InfluxData] | Closing Thoughts | InfluxDays 2022
Steinkamp, Clifford [InfluxData] | Closing Thoughts | InfluxDays 2022
InfluxData
 
Steinkamp, Clifford [InfluxData] | Welcome to InfluxDays 2022 - Day 2 | Influ...
Steinkamp, Clifford [InfluxData] | Welcome to InfluxDays 2022 - Day 2 | Influ...Steinkamp, Clifford [InfluxData] | Welcome to InfluxDays 2022 - Day 2 | Influ...
Steinkamp, Clifford [InfluxData] | Welcome to InfluxDays 2022 - Day 2 | Influ...
InfluxData
 
Steinkamp, Clifford [InfluxData] | Closing Thoughts Day 1 | InfluxDays 2022
Steinkamp, Clifford [InfluxData] | Closing Thoughts Day 1 | InfluxDays 2022Steinkamp, Clifford [InfluxData] | Closing Thoughts Day 1 | InfluxDays 2022
Steinkamp, Clifford [InfluxData] | Closing Thoughts Day 1 | InfluxDays 2022
InfluxData
 
Announcing InfluxDB Clustered
Announcing InfluxDB ClusteredAnnouncing InfluxDB Clustered
Announcing InfluxDB Clustered
InfluxData
 
Best Practices for Leveraging the Apache Arrow Ecosystem
Best Practices for Leveraging the Apache Arrow EcosystemBest Practices for Leveraging the Apache Arrow Ecosystem
Best Practices for Leveraging the Apache Arrow Ecosystem
InfluxData
 
How Bevi Uses InfluxDB and Grafana to Improve Predictive Maintenance and Redu...
How Bevi Uses InfluxDB and Grafana to Improve Predictive Maintenance and Redu...How Bevi Uses InfluxDB and Grafana to Improve Predictive Maintenance and Redu...
How Bevi Uses InfluxDB and Grafana to Improve Predictive Maintenance and Redu...
InfluxData
 
Power Your Predictive Analytics with InfluxDB
Power Your Predictive Analytics with InfluxDBPower Your Predictive Analytics with InfluxDB
Power Your Predictive Analytics with InfluxDB
InfluxData
 
How Teréga Replaces Legacy Data Historians with InfluxDB, AWS and IO-Base
How Teréga Replaces Legacy Data Historians with InfluxDB, AWS and IO-Base How Teréga Replaces Legacy Data Historians with InfluxDB, AWS and IO-Base
How Teréga Replaces Legacy Data Historians with InfluxDB, AWS and IO-Base
InfluxData
 
Build an Edge-to-Cloud Solution with the MING Stack
Build an Edge-to-Cloud Solution with the MING StackBuild an Edge-to-Cloud Solution with the MING Stack
Build an Edge-to-Cloud Solution with the MING Stack
InfluxData
 
Meet the Founders: An Open Discussion About Rewriting Using Rust
Meet the Founders: An Open Discussion About Rewriting Using RustMeet the Founders: An Open Discussion About Rewriting Using Rust
Meet the Founders: An Open Discussion About Rewriting Using Rust
InfluxData
 
Introducing InfluxDB Cloud Dedicated
Introducing InfluxDB Cloud DedicatedIntroducing InfluxDB Cloud Dedicated
Introducing InfluxDB Cloud Dedicated
InfluxData
 
Gain Better Observability with OpenTelemetry and InfluxDB
Gain Better Observability with OpenTelemetry and InfluxDB Gain Better Observability with OpenTelemetry and InfluxDB
Gain Better Observability with OpenTelemetry and InfluxDB
InfluxData
 
How a Heat Treating Plant Ensures Tight Process Control and Exceptional Quali...
How a Heat Treating Plant Ensures Tight Process Control and Exceptional Quali...How a Heat Treating Plant Ensures Tight Process Control and Exceptional Quali...
How a Heat Treating Plant Ensures Tight Process Control and Exceptional Quali...
InfluxData
 
How Delft University's Engineering Students Make Their EV Formula-Style Race ...
How Delft University's Engineering Students Make Their EV Formula-Style Race ...How Delft University's Engineering Students Make Their EV Formula-Style Race ...
How Delft University's Engineering Students Make Their EV Formula-Style Race ...
InfluxData
 
Introducing InfluxDB’s New Time Series Database Storage Engine
Introducing InfluxDB’s New Time Series Database Storage EngineIntroducing InfluxDB’s New Time Series Database Storage Engine
Introducing InfluxDB’s New Time Series Database Storage Engine
InfluxData
 
Start Automating InfluxDB Deployments at the Edge with balena
Start Automating InfluxDB Deployments at the Edge with balena Start Automating InfluxDB Deployments at the Edge with balena
Start Automating InfluxDB Deployments at the Edge with balena
InfluxData
 
Understanding InfluxDB’s New Storage Engine
Understanding InfluxDB’s New Storage EngineUnderstanding InfluxDB’s New Storage Engine
Understanding InfluxDB’s New Storage Engine
InfluxData
 
Streamline and Scale Out Data Pipelines with Kubernetes, Telegraf, and InfluxDB
Streamline and Scale Out Data Pipelines with Kubernetes, Telegraf, and InfluxDBStreamline and Scale Out Data Pipelines with Kubernetes, Telegraf, and InfluxDB
Streamline and Scale Out Data Pipelines with Kubernetes, Telegraf, and InfluxDB
InfluxData
 
Ward Bowman [PTC] | ThingWorx Long-Term Data Storage with InfluxDB | InfluxDa...
Ward Bowman [PTC] | ThingWorx Long-Term Data Storage with InfluxDB | InfluxDa...Ward Bowman [PTC] | ThingWorx Long-Term Data Storage with InfluxDB | InfluxDa...
Ward Bowman [PTC] | ThingWorx Long-Term Data Storage with InfluxDB | InfluxDa...
InfluxData
 
Scott Anderson [InfluxData] | New & Upcoming Flux Features | InfluxDays 2022
Scott Anderson [InfluxData] | New & Upcoming Flux Features | InfluxDays 2022Scott Anderson [InfluxData] | New & Upcoming Flux Features | InfluxDays 2022
Scott Anderson [InfluxData] | New & Upcoming Flux Features | InfluxDays 2022
InfluxData
 
Steinkamp, Clifford [InfluxData] | Closing Thoughts | InfluxDays 2022
Steinkamp, Clifford [InfluxData] | Closing Thoughts | InfluxDays 2022Steinkamp, Clifford [InfluxData] | Closing Thoughts | InfluxDays 2022
Steinkamp, Clifford [InfluxData] | Closing Thoughts | InfluxDays 2022
InfluxData
 
Steinkamp, Clifford [InfluxData] | Welcome to InfluxDays 2022 - Day 2 | Influ...
Steinkamp, Clifford [InfluxData] | Welcome to InfluxDays 2022 - Day 2 | Influ...Steinkamp, Clifford [InfluxData] | Welcome to InfluxDays 2022 - Day 2 | Influ...
Steinkamp, Clifford [InfluxData] | Welcome to InfluxDays 2022 - Day 2 | Influ...
InfluxData
 
Steinkamp, Clifford [InfluxData] | Closing Thoughts Day 1 | InfluxDays 2022
Steinkamp, Clifford [InfluxData] | Closing Thoughts Day 1 | InfluxDays 2022Steinkamp, Clifford [InfluxData] | Closing Thoughts Day 1 | InfluxDays 2022
Steinkamp, Clifford [InfluxData] | Closing Thoughts Day 1 | InfluxDays 2022
InfluxData
 
Ad

Recently uploaded (20)

How to Install & Activate ListGrabber - eGrabber
How to Install & Activate ListGrabber - eGrabberHow to Install & Activate ListGrabber - eGrabber
How to Install & Activate ListGrabber - eGrabber
eGrabber
 
Web and Graphics Designing Training in Rajpura
Web and Graphics Designing Training in RajpuraWeb and Graphics Designing Training in Rajpura
Web and Graphics Designing Training in Rajpura
Erginous Technology
 
Does Pornify Allow NSFW? Everything You Should Know
Does Pornify Allow NSFW? Everything You Should KnowDoes Pornify Allow NSFW? Everything You Should Know
Does Pornify Allow NSFW? Everything You Should Know
Pornify CC
 
DevOpsDays SLC - Platform Engineers are Product Managers.pptx
DevOpsDays SLC - Platform Engineers are Product Managers.pptxDevOpsDays SLC - Platform Engineers are Product Managers.pptx
DevOpsDays SLC - Platform Engineers are Product Managers.pptx
Justin Reock
 
GDG Cloud Southlake #42: Suresh Mathew: Autonomous Resource Optimization: How...
GDG Cloud Southlake #42: Suresh Mathew: Autonomous Resource Optimization: How...GDG Cloud Southlake #42: Suresh Mathew: Autonomous Resource Optimization: How...
GDG Cloud Southlake #42: Suresh Mathew: Autonomous Resource Optimization: How...
James Anderson
 
Integrating FME with Python: Tips, Demos, and Best Practices for Powerful Aut...
Integrating FME with Python: Tips, Demos, and Best Practices for Powerful Aut...Integrating FME with Python: Tips, Demos, and Best Practices for Powerful Aut...
Integrating FME with Python: Tips, Demos, and Best Practices for Powerful Aut...
Safe Software
 
Build With AI - In Person Session Slides.pdf
Build With AI - In Person Session Slides.pdfBuild With AI - In Person Session Slides.pdf
Build With AI - In Person Session Slides.pdf
Google Developer Group - Harare
 
Q1 2025 Dropbox Earnings and Investor Presentation
Q1 2025 Dropbox Earnings and Investor PresentationQ1 2025 Dropbox Earnings and Investor Presentation
Q1 2025 Dropbox Earnings and Investor Presentation
Dropbox
 
Bepents tech services - a premier cybersecurity consulting firm
Bepents tech services - a premier cybersecurity consulting firmBepents tech services - a premier cybersecurity consulting firm
Bepents tech services - a premier cybersecurity consulting firm
Benard76
 
Agentic Automation - Delhi UiPath Community Meetup
Agentic Automation - Delhi UiPath Community MeetupAgentic Automation - Delhi UiPath Community Meetup
Agentic Automation - Delhi UiPath Community Meetup
Manoj Batra (1600 + Connections)
 
AsyncAPI v3 : Streamlining Event-Driven API Design
AsyncAPI v3 : Streamlining Event-Driven API DesignAsyncAPI v3 : Streamlining Event-Driven API Design
AsyncAPI v3 : Streamlining Event-Driven API Design
leonid54
 
Canadian book publishing: Insights from the latest salary survey - Tech Forum...
Canadian book publishing: Insights from the latest salary survey - Tech Forum...Canadian book publishing: Insights from the latest salary survey - Tech Forum...
Canadian book publishing: Insights from the latest salary survey - Tech Forum...
BookNet Canada
 
Zilliz Cloud Monthly Technical Review: May 2025
Zilliz Cloud Monthly Technical Review: May 2025Zilliz Cloud Monthly Technical Review: May 2025
Zilliz Cloud Monthly Technical Review: May 2025
Zilliz
 
Cybersecurity Threat Vectors and Mitigation
Cybersecurity Threat Vectors and MitigationCybersecurity Threat Vectors and Mitigation
Cybersecurity Threat Vectors and Mitigation
VICTOR MAESTRE RAMIREZ
 
UiPath Agentic Automation: Community Developer Opportunities
UiPath Agentic Automation: Community Developer OpportunitiesUiPath Agentic Automation: Community Developer Opportunities
UiPath Agentic Automation: Community Developer Opportunities
DianaGray10
 
Reimagine How You and Your Team Work with Microsoft 365 Copilot.pptx
Reimagine How You and Your Team Work with Microsoft 365 Copilot.pptxReimagine How You and Your Team Work with Microsoft 365 Copilot.pptx
Reimagine How You and Your Team Work with Microsoft 365 Copilot.pptx
John Moore
 
How analogue intelligence complements AI
How analogue intelligence complements AIHow analogue intelligence complements AI
How analogue intelligence complements AI
Paul Rowe
 
Kit-Works Team Study_팀스터디_김한솔_nuqs_20250509.pdf
Kit-Works Team Study_팀스터디_김한솔_nuqs_20250509.pdfKit-Works Team Study_팀스터디_김한솔_nuqs_20250509.pdf
Kit-Works Team Study_팀스터디_김한솔_nuqs_20250509.pdf
Wonjun Hwang
 
Enterprise Integration Is Dead! Long Live AI-Driven Integration with Apache C...
Enterprise Integration Is Dead! Long Live AI-Driven Integration with Apache C...Enterprise Integration Is Dead! Long Live AI-Driven Integration with Apache C...
Enterprise Integration Is Dead! Long Live AI-Driven Integration with Apache C...
Markus Eisele
 
Viam product demo_ Deploying and scaling AI with hardware.pdf
Viam product demo_ Deploying and scaling AI with hardware.pdfViam product demo_ Deploying and scaling AI with hardware.pdf
Viam product demo_ Deploying and scaling AI with hardware.pdf
camilalamoratta
 
How to Install & Activate ListGrabber - eGrabber
How to Install & Activate ListGrabber - eGrabberHow to Install & Activate ListGrabber - eGrabber
How to Install & Activate ListGrabber - eGrabber
eGrabber
 
Web and Graphics Designing Training in Rajpura
Web and Graphics Designing Training in RajpuraWeb and Graphics Designing Training in Rajpura
Web and Graphics Designing Training in Rajpura
Erginous Technology
 
Does Pornify Allow NSFW? Everything You Should Know
Does Pornify Allow NSFW? Everything You Should KnowDoes Pornify Allow NSFW? Everything You Should Know
Does Pornify Allow NSFW? Everything You Should Know
Pornify CC
 
DevOpsDays SLC - Platform Engineers are Product Managers.pptx
DevOpsDays SLC - Platform Engineers are Product Managers.pptxDevOpsDays SLC - Platform Engineers are Product Managers.pptx
DevOpsDays SLC - Platform Engineers are Product Managers.pptx
Justin Reock
 
GDG Cloud Southlake #42: Suresh Mathew: Autonomous Resource Optimization: How...
GDG Cloud Southlake #42: Suresh Mathew: Autonomous Resource Optimization: How...GDG Cloud Southlake #42: Suresh Mathew: Autonomous Resource Optimization: How...
GDG Cloud Southlake #42: Suresh Mathew: Autonomous Resource Optimization: How...
James Anderson
 
Integrating FME with Python: Tips, Demos, and Best Practices for Powerful Aut...
Integrating FME with Python: Tips, Demos, and Best Practices for Powerful Aut...Integrating FME with Python: Tips, Demos, and Best Practices for Powerful Aut...
Integrating FME with Python: Tips, Demos, and Best Practices for Powerful Aut...
Safe Software
 
Q1 2025 Dropbox Earnings and Investor Presentation
Q1 2025 Dropbox Earnings and Investor PresentationQ1 2025 Dropbox Earnings and Investor Presentation
Q1 2025 Dropbox Earnings and Investor Presentation
Dropbox
 
Bepents tech services - a premier cybersecurity consulting firm
Bepents tech services - a premier cybersecurity consulting firmBepents tech services - a premier cybersecurity consulting firm
Bepents tech services - a premier cybersecurity consulting firm
Benard76
 
AsyncAPI v3 : Streamlining Event-Driven API Design
AsyncAPI v3 : Streamlining Event-Driven API DesignAsyncAPI v3 : Streamlining Event-Driven API Design
AsyncAPI v3 : Streamlining Event-Driven API Design
leonid54
 
Canadian book publishing: Insights from the latest salary survey - Tech Forum...
Canadian book publishing: Insights from the latest salary survey - Tech Forum...Canadian book publishing: Insights from the latest salary survey - Tech Forum...
Canadian book publishing: Insights from the latest salary survey - Tech Forum...
BookNet Canada
 
Zilliz Cloud Monthly Technical Review: May 2025
Zilliz Cloud Monthly Technical Review: May 2025Zilliz Cloud Monthly Technical Review: May 2025
Zilliz Cloud Monthly Technical Review: May 2025
Zilliz
 
Cybersecurity Threat Vectors and Mitigation
Cybersecurity Threat Vectors and MitigationCybersecurity Threat Vectors and Mitigation
Cybersecurity Threat Vectors and Mitigation
VICTOR MAESTRE RAMIREZ
 
UiPath Agentic Automation: Community Developer Opportunities
UiPath Agentic Automation: Community Developer OpportunitiesUiPath Agentic Automation: Community Developer Opportunities
UiPath Agentic Automation: Community Developer Opportunities
DianaGray10
 
Reimagine How You and Your Team Work with Microsoft 365 Copilot.pptx
Reimagine How You and Your Team Work with Microsoft 365 Copilot.pptxReimagine How You and Your Team Work with Microsoft 365 Copilot.pptx
Reimagine How You and Your Team Work with Microsoft 365 Copilot.pptx
John Moore
 
How analogue intelligence complements AI
How analogue intelligence complements AIHow analogue intelligence complements AI
How analogue intelligence complements AI
Paul Rowe
 
Kit-Works Team Study_팀스터디_김한솔_nuqs_20250509.pdf
Kit-Works Team Study_팀스터디_김한솔_nuqs_20250509.pdfKit-Works Team Study_팀스터디_김한솔_nuqs_20250509.pdf
Kit-Works Team Study_팀스터디_김한솔_nuqs_20250509.pdf
Wonjun Hwang
 
Enterprise Integration Is Dead! Long Live AI-Driven Integration with Apache C...
Enterprise Integration Is Dead! Long Live AI-Driven Integration with Apache C...Enterprise Integration Is Dead! Long Live AI-Driven Integration with Apache C...
Enterprise Integration Is Dead! Long Live AI-Driven Integration with Apache C...
Markus Eisele
 
Viam product demo_ Deploying and scaling AI with hardware.pdf
Viam product demo_ Deploying and scaling AI with hardware.pdfViam product demo_ Deploying and scaling AI with hardware.pdf
Viam product demo_ Deploying and scaling AI with hardware.pdf
camilalamoratta
 

InfluxDB IOx Tech Talks: Query Engine Design and the Rust-Based DataFusion in Apache Arrow

  • 1. Paul Dix InfluxData – CTO & co-founder paul@influxdata.com @pauldix InfluxDB IOx - a new columnar time series database (update)
  • 2. © 2021 InfluxData. All rights reserved. 2 API • InfluxDB 2.x with Line Protocol • HTTP Query with JSON, CSV, Print • Arrow Flight • Move over to gRCP for management (and CLI) – Create Databases – Start to defining replication/sharding – Readme for gRPCurl • gRCP Health
  • 3. © 2021 InfluxData. All rights reserved. 3 CLI & Config • Write Line Protocol from File • Create Database • Object Store parameters
  • 4. © 2021 InfluxData. All rights reserved. 4 Query • Queries now work across Mutable Buffer & Read Buffer • Data Fusion (features) • Massive infusion of postgres string functions (lpad, rpad, ascii, chr, ltrim, etc) • Support for EXTRACT (e.g. `EXTRACT hour from date_col`) • Data Fusion (performance) • Optimized function implementation for scalar values and columns • improved join indicies, support for more advanced statistics, expression rewriting
  • 5. © 2021 InfluxData. All rights reserved. 5 Path to OSS Builds • Not until we think it’s useful/interesting to test • Dogfood our monitoring 1. In-memory 2.4M values/sec 2. Basic proxied/distributed query 3. Mutable Buffer to Read Buffer lifecycle (basic) 4. WAL Buffering/persistence 5. Subscriptions 6. Parquet Persistence 7. Recovery • Single Server Steady State • CLI for configuration • Documentation
  • 6. Introduction to DataFusion An Embeddable Query Engine Written in Rust CC BY-SA
  • 7. Today: IOx Team at InfluxData Past life 1: Query Optimizer @ Vertica, also on Oracle DB server Past life 2: Chief Architect + VP Engineering roles at some ML startups
  • 8. Talk Outline What is a Query Engine Introduction to DataFusion / Apache Arrow DataFusion Architectural Overview
  • 9. Motivation Data is stored somewhere Users who want to access data without writing a program
  • 10. Motivation Users who want to access data without writing a program UIs (visual and textual) Data is stored somewhere
  • 11. Motivation Users who want to access data without writing a program UIs (visual and textual) Data is stored somewhere Query Engine SQL is the common interface
  • 12. DataFusion Use Cases 1. Data engineering / ETL: a. Construct fast and efficient data pipelines (~ Spark) 2. Data Science a. Prepare data for ML / other tasks (~ Pandas) 3. Database Systems: a. E.g. IOx, Ballista, Cloudfuse Buzz, various internal systems
  • 13. Why DataFusion? High Performance: Memory (no GC) and Performance, leveraging Rust/Arrow Easy to Connect: Interoperability with other tools via Arrow, Parquet and Flight Easy to Embed: Can extend data sources, functions, operators First Class Rust: High quality Query / SQL Engine entirely in Rust High Quality: Extensive tests and integration tests with Arrow ecosystems My goal: DataFusion to be *the* choice for any SQL support in Rust
  • 14. DBMS vs Query Engine Database Management Systems (DBMS) are full featured systems ● Storage system (stores actual data) ● Catalog (stores metadata about what is in the storage system) ● Query Engine (query and retrieve requested data) ● Access Control and Authorization (users, groups, permissions) ● Resource Management (divide resources between uses) ● Administration utilities (monitor resource usage, set policies, etc) ● Clients for Network connectivity (e.g. implement JDBC, ODBC, etc) ● Multi-node coordination and management DataFusion
  • 15. What is DataFusion? “DataFusion is an in-memory query engine that uses Apache Arrow as the memory model” - crates.io ● In Apache Arrow github repo ● Apache licensed ● Not part of the Arrow spec, uses Arrow ● Initially implemented and donated by Andy Grove; design based on How Query Engines Work
  • 16. DataFusion + Arrow + Parquet arrow datafusion parquet arrow-flight
  • 17. DataFusion Extensibility 🧰 ● User Defined Functions ● User Defined Aggregates ● User Defined Optimizer passes ● User Defined LogicalPlan nodes ● User Defined ExecutionPlan nodes ● User Defined TableProvider for tables * Built in data persistence using parquet and CSV files
  • 18. What is a Query Engine? 1. Frontend a. Query Language + Parser 2. Intermediate Query Representation a. Expression / Type system b. Query Plan w/ Relational Operators (Data Flow Graph) c. Rewrites / Optimizations on that graph 3. Concrete Execution Operators a. Allocate resources (CPU, Memory, etc) b. Push bytes around, vectorized calculations, etc
  • 19. DataFusion is a Query Engine! SQLStatement 1. Frontend LogicalPlan Expr ExecutionPlan RecordBatches Rust struct 2. Intermediate Query Representation 3. Concrete Execution Operators
  • 20. DataFusion Input / Output Diagram SQL Query SELECT status, COUNT(1) FROM http_api_requests_total WHERE path = '/api/v2/write' GROUP BY status; RecordBatches DataFrame ctx.read_table("http")? .filter(...)? .aggregate(..)?; RecordBatches Catalog information: tables, schemas, etc OR
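To make the two entry points in this diagram concrete, here is a minimal Rust sketch that registers a Parquet file as a table, runs the SQL from the slide, and collects the resulting Arrow RecordBatches. It is an approximation only: the table and file names are taken from the slides, and the context type and which calls are async have changed between DataFusion releases (ExecutionContext was later renamed SessionContext), so the exact signatures below are assumptions.

    use datafusion::error::Result;
    use datafusion::prelude::*;

    #[tokio::main]
    async fn main() -> Result<()> {
        // Register a Parquet file under a table name, as the CLI example later does.
        let mut ctx = ExecutionContext::new();
        ctx.register_parquet("http_api_requests_total", "http_api_requests_total.parquet")?;

        // SQL frontend: text in, DataFrame (wrapping a LogicalPlan) out.
        let df = ctx.sql(
            "SELECT status, COUNT(1) \
             FROM http_api_requests_total \
             WHERE path = '/api/v2/write' \
             GROUP BY status",
        )?;

        // Execution produces a Vec of Arrow RecordBatches.
        let batches = df.collect().await?;
        println!("got {} record batches", batches.len());
        Ok(())
    }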
  • 22. DataFusion CLI > CREATE EXTERNAL TABLE http_api_requests_total STORED AS PARQUET LOCATION 'http_api_requests_total.parquet'; > SELECT status, COUNT(1) FROM http_api_requests_total WHERE path = '/api/v2/write' GROUP BY status; +--------+-----------------+ | status | COUNT(UInt8(1)) | +--------+-----------------+ | 4XX | 73621 | | 2XX | 338304 | +--------+-----------------+
  • 23. EXPLAIN Plan Gets a textual representation of the LogicalPlan > explain SELECT status, COUNT(1) FROM http_api_requests_total WHERE path = '/api/v2/write' GROUP BY status; +--------------+----------------------------------------------------------+ | plan_type | plan | +--------------+----------------------------------------------------------+ | logical_plan | Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]] | | | Selection: #path Eq Utf8("/api/v2/write") | | | TableScan: http_api_requests_total projection=None | +--------------+----------------------------------------------------------+
  • 24. Plans as DataFlow graphs Filter: #path Eq Utf8("/api/v2/write") Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]] TableScan: http_api_requests_total projection=None Step 2: Predicate is applied Step 1: Parquet file is read Step 3: Data is aggregated Data flows up from the leaves to the root of the tree
  • 25. More than initially meets the eye Use EXPLAIN VERBOSE to see optimizations applied > EXPLAIN VERBOSE SELECT status, COUNT(1) FROM http_api_requests_total WHERE path = '/api/v2/write' GROUP BY status; +----------------------+----------------------------------------------------------------+ | plan_type | plan | +----------------------+----------------------------------------------------------------+ | logical_plan | Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]] | | | Selection: #path Eq Utf8("/api/v2/write") | | | TableScan: http_api_requests_total projection=None | | projection_push_down | Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]] | | | Selection: #path Eq Utf8("/api/v2/write") | | | TableScan: http_api_requests_total projection=Some([6, 8]) | | type_coercion | Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]] | | | Selection: #path Eq Utf8("/api/v2/write") | | | TableScan: http_api_requests_total projection=Some([6, 8]) | ... +----------------------+----------------------------------------------------------------+ Optimizer “pushed” down projection so only status and path columns from file were read from parquet
  • 27. Array + Record Batches + Schema +--------+--------+ | status | COUNT | +--------+--------+ | 4XX | 73621 | | 2XX | 338304 | | 5XX | 42 | | 1XX | 3 | +--------+--------+ 4XX 2XX 5XX * StringArray representation is somewhat misleading as it actually has a fixed length portion and the character data in different locations StringArray 1XX StringArray 73621 338304 42 UInt64Array 3 UInt64Array Schema: fields[0]: “status”, Utf8 fields[1]: “COUNT()”, UInt64 RecordBatch cols: schema: RecordBatch cols: schema:
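The arrays, schema, and RecordBatch on this slide map directly onto types in the arrow crate. A minimal sketch using the example values from the slide (only the field names and values come from the slide; the rest is ordinary arrow-rs usage):

    use std::sync::Arc;
    use arrow::array::{StringArray, UInt64Array};
    use arrow::datatypes::{DataType, Field, Schema};
    use arrow::record_batch::RecordBatch;

    fn status_counts_batch() -> RecordBatch {
        // Schema: one Field per column, each with a name and an Arrow DataType.
        let schema = Arc::new(Schema::new(vec![
            Field::new("status", DataType::Utf8, false),
            Field::new("COUNT", DataType::UInt64, false),
        ]));

        // One Arrow array per column; all columns must have the same length.
        let status = StringArray::from(vec!["4XX", "2XX", "5XX", "1XX"]);
        let count = UInt64Array::from(vec![73621u64, 338304, 42, 3]);

        // A RecordBatch ties the column arrays to the schema.
        RecordBatch::try_new(schema, vec![Arc::new(status), Arc::new(count)])
            .expect("column types and lengths match the schema")
    }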
  • 29. DataFusion Planning Flow SQL Query SELECT status, COUNT(1) FROM http_api_requests_total WHERE path = '/api/v2/write' GROUP BY status; LogicalPlan ExecutionPlan RecordBatches Parsing/Planning Optimization Execution “Query Plan” PG: “Query Tree” “Access Plan” “Operator Tree” PG: “Plan Tree”
  • 30. DataFusion Logical Plan Creation ● Declarative: Describe WHAT you want; system figures out HOW ○ Input: “SQL” text (postgres dialect) ● Procedural: Describe HOW directly ○ Input is a program to build up the plan ○ Two options: ■ Use a LogicalPlanBuilder, Rust style builder ■ DataFrame - model popularized by Pandas and Spark
  • 31. SQL → LogicalPlan SQL Parser SQL Query SELECT status, COUNT(1) FROM http_api_requests_total WHERE path = '/api/v2/write' GROUP BY status; Planner Query { ctes: [], body: Select( Select { distinct: false, top: None, projection: [ UnnamedExpr( Identifier( Ident { value: "status", quote_style: None, }, ), ), ... Parsed Statement LogicalPlan
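The parse tree shown here comes from the sqlparser crate (sqlparser-rs), which DataFusion uses as its SQL frontend. A standalone sketch of producing that AST, assuming a reasonably recent sqlparser release where Parser::parse_sql takes a dialect and a &str:

    use sqlparser::dialect::GenericDialect;
    use sqlparser::parser::Parser;

    fn main() {
        let sql = "SELECT status, COUNT(1) \
                   FROM http_api_requests_total \
                   WHERE path = '/api/v2/write' \
                   GROUP BY status";

        // Parse into a Vec<Statement>; pretty-printing the first statement
        // shows the Query { body: Select { .. }, .. } structure from the slide.
        let statements = Parser::parse_sql(&GenericDialect {}, sql).unwrap();
        println!("{:#?}", statements[0]);
    }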
  • 32. “DataFrame” → Logical Plan Rust Code let df = ctx .read_table("http_api_requests_total")? .filter(col("path").eq(lit("/api/v2/write")))? .aggregate([col("status")], [count(lit(1))])?; DataFrame (Builder) LogicalPlan
  • 33. Supported Logical Plan operators (source link) Projection Filter Aggregate Sort Join Repartition TableScan EmptyRelation Limit CreateExternalTable Explain Extension
  • 34. Query Optimization Overview Compute the same (correct) result, only faster Optimizer Pass 1 LogicalPlan (intermediate) “Optimizer” Optimizer Pass 2 LogicalPlan (input) LogicalPlan (output) … Other Passes ...
  • 35. Built in DataFusion Optimizer Passes (source link) ProjectionPushDown: Minimize the number of columns passed from node to node to minimize intermediate result size (number of columns) FilterPushdown (“predicate pushdown”): Push filters as close to scans as possible to minimize intermediate result size HashBuildProbeOrder (“join reordering”): Order joins to minimize the intermediate result size and hash table sizes ConstantFolding: Partially evaluates expressions at plan time. E.g. ColA && true → ColA
  • 37. Expression Evaluation Arrow Compute Kernels typically operate on 1 or 2 arrays and/or scalars. Partial list of included comparison kernels: eq Perform left == right operation on two arrays. eq_scalar Perform left == right operation on an array and a scalar value. eq_utf8 Perform left == right operation on StringArray / LargeStringArray. eq_utf8_scalar Perform left == right operation on StringArray / LargeStringArray and a scalar. and Performs AND operation on two arrays. If either left or right value is null then the result is also null. is_not_null Returns a non-null BooleanArray with whether each value of the array is not null. or Performs OR operation on two arrays. If either left or right value is null then the result is also null. ...
  • 38. Exprs for evaluating arbitrary expressions path = '/api/v2/write' OR path IS NULL Column path Literal ScalarValue::Utf8 '/api/v2/write' Column path IsNull BinaryExpr op: Eq left right BinaryExpr op: Or left right col(“path”) .eq(lit(“/api/v2/write”)) .or(col(“path”).is_null()) Expression Builder API
  • 40. Expr Vectorized Evaluation Literal ScalarValue::Utf8 '/api/v2/write' Column path IsNull BinaryExpr op: Eq BinaryExpr op: Or /api/v2/write /api/v1/write /api/v2/read /api/v2/write … /api/v2/write /foo/bar StringArray
  • 41. Expr Vectorized Evaluation Column path IsNull BinaryExpr op: Eq BinaryExpr op: Or /api/v2/write /api/v1/write /api/v2/read /api/v2/write … /api/v2/write /foo/bar StringArray ScalarValue::Utf8( Some( “/api/v2/write” ) )
  • 42. Expr Vectorized Evaluation Column path IsNull BinaryExpr op: Eq BinaryExpr op: Or /api/v2/write /api/v1/write /api/v2/read /api/v2/write … /api/v2/write /foo/bar StringArray ScalarValue::Utf8( Some( “/api/v2/write” ) ) Call: eq_utf8_scalar
  • 43. Expr Vectorized Evaluation Column path IsNull BinaryExpr op: Or True False False True … True False BooleanArray
  • 44. Expr Vectorized Evaluation IsNull BinaryExpr op: Or True False False True … True False BooleanArray /api/v2/write /api/v1/write /api/v2/read /api/v2/write … /api/v2/write /foo/bar StringArray
  • 45. Expr Vectorized Evaluation BinaryExpr op: Or True False False True … True False BooleanArray False False False False … False False BooleanArray
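Each node in the walkthrough above turns into one call to an Arrow compute kernel over the whole column. A minimal sketch of evaluating path = '/api/v2/write' OR path IS NULL directly against those kernels (kernel names follow the arrow crate of roughly this era; the is_null kernel is assumed to live alongside the is_not_null kernel listed earlier, and module paths may differ between arrow releases):

    use arrow::array::{BooleanArray, StringArray};
    use arrow::compute::kernels::boolean::{is_null, or};
    use arrow::compute::kernels::comparison::eq_utf8_scalar;
    use arrow::error::Result;

    // path = '/api/v2/write' OR path IS NULL, one vectorized kernel per Expr node.
    fn eval_predicate(path: &StringArray) -> Result<BooleanArray> {
        let eq = eq_utf8_scalar(path, "/api/v2/write")?; // BinaryExpr op: Eq
        let null = is_null(path)?;                       // IsNull
        or(&eq, &null)                                   // BinaryExpr op: Or
    }

    fn main() -> Result<()> {
        let path = StringArray::from(vec![
            Some("/api/v2/write"),
            Some("/api/v1/write"),
            None,
            Some("/foo/bar"),
        ]);
        println!("{:?}", eval_predicate(&path)?);
        Ok(())
    }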
  • 47. Type Coercion sqrt(col) sqrt(col) → sqrt(CAST col as Float32) col is Int8, but sqrt implemented for Float32 or Float64 ⇒ Type Coercion: adds typecast cast so the implementation can be called Note: Coercion is lossless; if col was Float64, would not coerce to Float32 Source Code: coercion.rs
  • 49. Plan Execution Overview Typically called the “execution engine” in database systems DataFusion features: ● Async: Mostly avoids blocking I/O ● Vectorized: Process RecordBatch at a time, configurable batch size ● Eager Pull: Data is produced using a pull model, natural backpressure ● Partitioned: each operator produces partitions, in parallel ● Multi-Core* * Uses async tasks; still some unease about this / if we need another thread pool
  • 50. Plan Execution LogicalPlan ExecutionPlan collect SendableRecordBatchStream Partitions ExecutionPlan nodes allocate resources (buffers, hash tables, files, etc) RecordBatches execute produces an iterator-style thing that produces Arrow RecordBatches for each partition create_physical_plan execute
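Spelled out as code, the boxes in this diagram correspond roughly to the calls below. This is a sketch only: it assumes an ExecutionContext with the table already registered, and in newer DataFusion releases the context is called SessionContext, create_physical_plan is async, and collect takes a task context, so the exact signatures differ by version.

    use datafusion::error::Result;
    use datafusion::physical_plan::collect;
    use datafusion::prelude::*;

    async fn run_by_hand(ctx: &mut ExecutionContext, sql: &str) -> Result<()> {
        // Frontend + planning: SQL text -> LogicalPlan (wrapped in a DataFrame).
        let df = ctx.sql(sql)?;

        // Optimize the LogicalPlan, then lower it to an ExecutionPlan, whose
        // nodes allocate the concrete resources (hash tables, buffers, files).
        let logical = ctx.optimize(&df.to_logical_plan())?;
        let physical = ctx.create_physical_plan(&logical)?;

        // collect() executes every partition and gathers the resulting
        // Arrow RecordBatches into memory.
        let batches = collect(physical).await?;
        println!("got {} record batches", batches.len());
        Ok(())
    }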
  • 51. create_physical_plan Filter: #path Eq Utf8("/api/v2/write") Aggregate: groupBy=[[#status]], aggr=[[COUNT(UInt8(1))]] TableScan: http_api_requests_total projection=None HashAggregateExec (1 partition) AggregateMode::Final SUM(1), GROUP BY status HashAggregateExec (2 partitions) AggregateMode::Partial COUNT(1), GROUP BY status FilterExec (2 partitions) path = “/api/v2/write” ParquetExec (2 partitions) files = file1, file2 LogicalPlan ExecutionPlan MergeExec (1 partition)
  • 52. execute ExecutionPlan SendableRecordBatchStream GroupHash AggregateStream GroupHash AggregateStream GroupHash AggregateStream FilterExecStream FilterExecStream “ParquetStream”* For file1 “ParquetStream”* For file2 * this is actually a channel getting results from a different thread, as parquet reader is not yet async HashAggregateExec (1 partition) AggregateMode::Final SUM(1), GROUP BY status HashAggregateExec (2 partitions) AggregateMode::Partial COUNT(1), GROUP BY status FilterExec (2 partitions) path = “/api/v2/write” ParquetExec (2 partitions) files = file1, file2 MergeExec MergeStream execute(0) execute(0) execute(0) execute(0) execute(0) execute(1) execute(1) execute(1)
  • 53. next() SendableRecordBatchStream GroupHash AggregateStream FilterExecStream “ParquetStream”* For file1 Ready to produce values! 😅 Rust Stream: an async iterator that produces record batches Execution of GroupHash starts eagerly (before next() is called on it) next().await next().await RecordBatch RecordBatch Step 2: Data is filtered Step 1: Data read from parquet and returned Step 3: data is fed into a hash table Step 0: new task spawned, starts computing input immediately Step 5: output is requested RecordBatch Step 6: returned to caller Step 4: hash done, output produced
  • 54. next() GroupHash AggregateStream GroupHash AggregateStream GroupHash AggregateStream next().await Step 1: output is requested MergeStream MergeStream eagerly starts on its own task, back pressure via bounded channels Step 0: new task spawned, starts computing input RecordBatch Step 2: eventually RecordBatch is produced from downstream and returned Step 0: new task spawned, starts computing input immediately next().await next().await Step 0: new task spawned, starts computing input next().await Step 4: data is fed into a hash table RecordBatch Step 3: Merge passes on RecordBatch RecordBatch Step 5: hash done, output produced Step 6: returned to caller
  • 55. Get Involved Check out the project Apache Arrow Join the mailing list (links on project page) Test out Arrow (crates.io) and DataFusion (crates.io) in your projects Help out with the docs/code/tickets on GitHub Thank You!!!!