Oracle Database Observability
We just released version 1.2.1 of our metrics exporter for Oracle Database, which is part of our Unified Observability project, so it seems like a good time to talk about what it does, how you can use it, and what's next on the roadmap.
The aim of the project is to provide exporters and samples that enable unified observability for data-centric application development and microservices. What does that mean?
Well, by "unified observability" we mean that you should be able monitor your database in the exact same way that you monitor your microservices, using the same tools and techniques. For example, you might use Prometheus to gather metrics, Grafana to visualize them using dashbaords, Loki to manage logs, and maybe Jaeger or OpenTelemetry for distributed traces. We want to make it possible to use those exact same tools for your database too! And you should be able to query and correlate across metrics, logs and traces, from your database and microservices applications in the same queries. Why not?
As of today, we have a metrics exporter that allows you to export metrics from your Oracle database in the de facto standard Prometheus format understood by most monitoring tools. It includes a set of standard, pre-defined metrics, and it allows you to define your own custom metrics. You can create a metric from basically anything you can write a query for, so custom metrics don't have to be database system metrics - they can also be application-specific measurements that make sense in the context of your applications and data.
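To give you a feel for it, here's a minimal sketch of a custom metric definition in the exporter's TOML format. The orders table and its columns are hypothetical - you'd write whatever query makes sense for your schema:

```toml
# A hypothetical custom metric: order counts, broken down by status.
# "context" becomes part of the metric name, "labels" map to query
# columns, and "metricsdesc" documents each value column.
[[metric]]
context = "app_orders"
labels = ["status"]
metricsdesc = { count = "Number of orders, by status." }
request = "SELECT status, COUNT(*) AS count FROM orders GROUP BY status"
```

A definition like this would surface on the exporter's metrics endpoint as something like oracledb_app_orders_count{status="SHIPPED"}, ready to be scraped and graphed like any other Prometheus metric.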
We are continuing to develop and enhance the metrics exporter, and we have a roadmap published here. Some important upcoming features include refinement and addition of more standard metrics; multiple database support, that is, the ability for one instance of the exporter to collect metrics from multiple databases; the ability to specify how often individual metrics should be collected, so that metrics that are expensive to collect (in terms of resources) can be collected less often than cheap ones; and connection storm protection.
Did I mention custom metrics? Yes, I did. Here's a great example of how you might use them: monitoring the performance of Transactional Event Queues. Check out the link to see how easy it is to define your own metrics and create dashboards using them.
We are also planning to add support for logs, so you can make your Oracle alert logs available to be collected, for example by Promtail, and stored in your log aggregator, for example Loki.
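As a rough sketch of what that could look like, here's a minimal Promtail configuration that tails an alert log and ships it to Loki. The file path is an assumption based on a typical Automatic Diagnostic Repository layout, and the Loki URL is a placeholder, so adjust both for your environment:

```yaml
# promtail-config.yml (sketch) - tail the Oracle alert log and ship it to Loki
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
scrape_configs:
  - job_name: oracle-alert-log
    static_configs:
      - targets: [localhost]
        labels:
          job: oracle_alert_log          # the label you'd query on in Loki
          __path__: /opt/oracle/diag/rdbms/*/*/trace/alert_*.log
```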
We've also added native support for OpenTelemetry in Oracle Database 23c, which means you can trace into the database itself! Distributed traces allow you to follow a single logical "transaction" or service invocation from a client, from the point of entry into your environment (an API Gateway, for example) through however many microservices are involved, and even down into resources like databases and across asynchronous queues. Visualizing a trace this way makes it much easier to see and understand the relationships between components.
We also have the ability for your Java applications to pass their tracing spans into the database through their JDBC connection (see here for more information), or using the OpenTelemetry tracing agent if you prefer. Either way, this allows you to trace deep down into the database and understand what kinds of operations are being run. This is especially interesting if you are using an Object-Relational Mapper (or similar) that generates SQL for you, since it gives you the opportunity to see exactly what is being executed and to work out how to optimize it.
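As a rough illustration of what this looks like from the application side, here's a minimal sketch using the OpenTelemetry Java API to wrap a JDBC query in a span. The connection details, credentials, and query are placeholders, and the actual propagation of the span context into the database is handled by the driver support or the tracing agent described in the links above:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class TracedQuery {
    public static void main(String[] args) throws Exception {
        // Assumes an OpenTelemetry SDK (or the tracing agent) is already configured.
        Tracer tracer = GlobalOpenTelemetry.getTracer("example-app");

        // Start a span covering the database call.
        Span span = tracer.spanBuilder("fetch-order-count").startSpan();
        try (Scope ignored = span.makeCurrent();
             // Placeholder connection string - use your own database details.
             Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//localhost:1521/freepdb1", "app_user", "password");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT COUNT(*) FROM orders")) {
            try (ResultSet rs = stmt.executeQuery()) {
                if (rs.next()) {
                    // Record the result as a span attribute for easy correlation.
                    span.setAttribute("orders.count", rs.getLong(1));
                }
            }
        } finally {
            span.end();
        }
    }
}
```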
If you are using the Oracle Database Operator for Kubernetes, look out for support for deploying the metrics exporter right alongside your databases, coming in the next release!
Unified Observability supports a number of important use cases. It's not just for monitoring your applications. It's also great for troubleshooting, performance tuning, capacity planning, alerting and high availability.
Want to try it out? It's easy! You can just fire up the whole thing using Docker Compose - no need to install or configure anything. See the simple instructions here and you can start exploring right away.
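If you'd rather wire up just the exporter by hand against an existing database, a Compose service might look roughly like this. The image tag, credentials, and connect string below are assumptions for illustration - check the linked instructions for the real values:

```yaml
# docker-compose.yml (sketch) - run the exporter against an existing database
services:
  exporter:
    image: container-registry.oracle.com/database/observability-exporter:1.2.1
    environment:
      DB_USERNAME: pdbadmin
      DB_PASSWORD: Welcome12345        # use a secret for anything beyond a demo
      DB_CONNECT_STRING: mydb:1521/mypdb1
    ports:
      - "9161:9161"                    # Prometheus scrapes the metrics endpoint here
```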
Curious about the impact of running the exporter? Well, you'll be happy to hear that we have a large deployment monitoring over 2,000 database instances, and we have done a lot of work on the memory footprint and performance of the exporter itself. You can read about some of this in the documentation. Of course, if you define custom metrics you do need to be mindful of the queries you write and how often you run them, but overall we think you can rest easy knowing that the exporter has already been put through its paces.
Of course, we'd love to hear from you. Suggestions, contributions, critiques - all welcome. You can get in touch by opening an issue on GitHub.