Ceph or Lustre?

The Lustre® file system is an open-source, parallel file system that supports many requirements of leadership-class HPC simulation environments, whether you are a member of its development community or are evaluating Lustre as a parallel file system solution.


Lustre is purpose-built to provide a coherent, global, POSIX-compliant namespace for very large scale computing infrastructure, including the world's largest supercomputer platforms. It can support hundreds of petabytes of data storage and hundreds of gigabytes per second of simultaneous, aggregate throughput. Some of the largest current installations have individual file systems in excess of fifty petabytes of usable capacity and have reported throughput exceeding one terabyte per second.


Lustre is a parallel file system, which is a type of distributed file system; the difference lies in how data and metadata are stored. Both support a single global namespace, but a parallel file system chunks files into data blocks and spreads them across multiple storage servers, which can be written to and read from in parallel. Metadata is typically stored on a separate metadata server for more efficient file lookup. In contrast, a conventional distributed file system uses standard network file access, and all file data and metadata are managed by a single storage controller. For bandwidth-intensive workloads, this single point of access becomes a performance bottleneck. The Lustre parallel file system does not suffer this single-controller bottleneck, but the architecture required to deliver parallel access is relatively complex, which has limited Lustre deployments to niche applications.

Lustre File System Architecture

The Lustre file system architecture separates metadata services from data services to deliver parallel file access and improve performance. The architecture consists of a set of I/O servers called Object Storage Servers (OSSs) and persistent storage targets, called Object Storage Targets (OSTs), where the physical data resides, typically on spinning disk drives. Metadata is handled separately: file metadata is managed by a Metadata Server (MDS), while the metadata itself is persisted on a Metadata Target (MDT).

The OSSs are responsible for managing the OSTs and handling all I/O requests to them. A single OSS typically manages between two and eight OSTs, after which an additional OSS is required to maintain performance. The OSS requires a local file system to manage file placement on the OST, typically ldiskfs (a modified ext4) or ZFS.

For metadata, the MDS provides metadata services, managing the physical storage locations associated with each file so that I/O requests are directed to the correct set of OSSs and associated OSTs. The metadata server is never in the data path, a key difference from traditional NAS and clustered file systems. The MDS also requires a local file system to manage metadata placement on the MDT.
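
The sketch below illustrates the striping idea in practice, using the standard lfs client tool driven from Python. The mount point, directory, and stripe parameters are illustrative assumptions rather than values from this article; adjust them to the OST count of your own file system.

```python
# Minimal sketch: setting and inspecting a Lustre stripe layout from Python.
# Assumes a Lustre client mount at /mnt/lustre (hypothetical path) and the
# standard `lfs` user-space tool on the client node.
import subprocess

LUSTRE_DIR = "/mnt/lustre/results"   # hypothetical directory on a Lustre mount

# Stripe new files in this directory across 4 OSTs in 4 MiB chunks, so reads
# and writes can be serviced by four OSSs in parallel.
subprocess.run(["lfs", "setstripe", "-c", "4", "-S", "4M", LUSTRE_DIR], check=True)

# Show which OSTs hold the stripes of a given file.
subprocess.run(["lfs", "getstripe", f"{LUSTRE_DIR}/output.dat"], check=True)
```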


Ceph and Lustre are both distributed storage systems commonly used in high-performance computing (HPC) and storage environments, but they have different architectures and target different use cases. Here are some of the key differences between Ceph and Lustre:

  1. Architecture:

  • Ceph: Ceph is a unified distributed storage system that combines object storage, block storage, and file storage. It uses a cluster of storage nodes that communicate with each other to provide a scalable and fault-tolerant storage solution.
  • Lustre: Lustre is a parallel distributed file system designed specifically for high-performance computing. It separates the storage and metadata components, utilizing dedicated servers for metadata management and multiple object storage targets for data storage.

  2. Data Access:

  • Ceph: Ceph provides multiple interfaces for data access: a native object interface built on RADOS (librados), a block device interface (RBD), and a file system interface (CephFS), plus the RADOS Gateway, which exposes S3- and Swift-compatible object APIs. Block and file access can also be exported over standard protocols such as iSCSI and NFS through gateways (a minimal librados sketch follows this comparison list).
  • Lustre: Lustre primarily focuses on providing a high-performance file system interface. It uses its own networking layer, LNet, and the Lustre RPC protocol for client-server communication over TCP/IP or RDMA fabrics, and clients mount the Lustre file system directly.

  3. Scalability and Performance:

  • Ceph: Ceph is designed to scale horizontally by adding more storage nodes to the cluster. It distributes data across multiple OSDs (Object Storage Daemons) and uses data replication or erasure coding for data protection. Ceph's performance can vary depending on the workload and configuration.
  • Lustre: Lustre is optimized for large-scale deployments and can handle massive amounts of data and concurrent clients. It uses distributed metadata servers and parallel data access techniques to achieve high performance in HPC environments.

  4. Data Consistency and Integrity:

  • Ceph: Ceph ensures data consistency and integrity through data replication or erasure coding. It provides mechanisms for data self-healing and recovery in case of failures.
  • Lustre: Lustre relies on the underlying storage (hardware RAID or ZFS on the targets) for data protection and, apart from the optional File Level Redundancy feature in newer releases, does not provide built-in replication or erasure coding. It assumes a trusted network environment and focuses on high-performance data access rather than data protection.

  5. Community and Support:

  • Ceph: Ceph has a large and active open-source community and is backed by companies like Red Hat and SUSE. It has extensive documentation, user forums, and commercial support options.
  • Lustre: Lustre is also an open-source project but is primarily maintained and supported by a smaller group of organizations. It has an active user community and commercial support options from vendors specializing in Lustre deployments.
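
As a concrete illustration of Ceph's native object interface mentioned under "Data Access" above, the following is a minimal sketch using the official Python rados binding. It assumes a reachable cluster, a readable /etc/ceph/ceph.conf with the client.admin keyring, and an existing pool named "demo" (a hypothetical name).

```python
# Minimal librados sketch: store and read back one object in a Ceph pool.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("demo")           # I/O context for the pool
    ioctx.write_full("greeting", b"hello ceph")  # write an object
    print(ioctx.read("greeting"))                # read it back
    ioctx.close()
finally:
    cluster.shutdown()
```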





If you are looking for a fast and reliable way to store and access your big data, you might be wondering how Ceph and Lustre, two popular distributed file systems, differ. The following overview summarizes their main features and differences so you can decide which one suits your needs better.


Ceph is an open-source object store that can also provide block and file storage interfaces. It uses a scalable and fault-tolerant architecture that allows you to store and retrieve data from multiple nodes in parallel. Ceph can also use erasure coding or replication to ensure data durability and availability. Ceph is designed to be self-healing and self-managing, which reduces the operational complexity and cost of running a large-scale storage system.


Lustre is an open-source parallel file system that is widely used in high-performance computing (HPC) environments. It enables fast and concurrent access to large files across multiple servers and clients. Lustre can also handle very high I/O throughput and low latency, which are critical for HPC workloads. Lustre does not provide any redundancy or data protection mechanisms by itself, so it relies on external solutions such as RAID or backup systems.


As you can see, Ceph and Lustre have different strengths and weaknesses, depending on your use case and requirements. Ceph offers more flexibility and functionality, but it may have lower performance than Lustre for some HPC applications. Lustre offers more speed and scalability, but it may have higher maintenance and hardware costs than Ceph for some big data scenarios. You should carefully evaluate your needs and test both systems before choosing one for your project.




Lustre is a powerful distributed file system designed for high-performance computing (HPC) and large-scale storage environments. Here are some reasons why Lustre may be needed in certain scenarios:


1. High Performance: Lustre is known for its high-performance capabilities, making it suitable for HPC workloads that require fast I/O and low-latency access to data. It can scale to handle large amounts of data and support concurrent access by multiple clients.


2. Scalability: Lustre is designed to scale horizontally, allowing you to expand storage capacity and performance by adding more storage servers and clients to the system. This makes it well-suited for environments that require large-scale storage deployments.


3. Parallel File System: Lustre provides a parallel file system architecture that allows multiple clients to access the same file simultaneously. This is particularly useful for applications that require shared file access across multiple compute nodes, such as those used in scientific research or data analytics.


4. HPC Workloads: Lustre is widely used in HPC environments, where it is essential to provide fast, reliable, and scalable storage solutions for compute clusters. It is often used in scientific research, weather forecasting, oil and gas exploration, and other computationally intensive fields.


5. Lustre Ecosystem: Lustre has an active and dedicated community, including Lustre vendors and organizations, that provide support, development, and enhancements to the file system. This ensures ongoing improvements, bug fixes, and compatibility with the latest hardware and software technologies.

Lustre is commonly used in a variety of use cases that require high-performance, scalable, and parallel file system solutions. Here are some notable use cases where Lustre is often employed:

  1. High-Performance Computing (HPC): Lustre is extensively used in HPC environments, including research institutions, national laboratories, and universities. It provides the parallel file system capabilities needed for running complex simulations, modeling, data analysis, and scientific research applications.
  2. Big Data Analytics: Lustre is well-suited for big data analytics workloads that require fast data processing and analysis. It enables multiple compute nodes to access large datasets concurrently, allowing for efficient data processing and reduced time-to-insight.
  3. Media and Entertainment: The media and entertainment industry deals with large-scale storage and streaming of high-resolution video, audio, and graphics files. Lustre's ability to handle high bandwidth and concurrent access makes it suitable for media production, post-production, and content delivery workflows.
  4. Oil and Gas Exploration: The oil and gas industry deals with massive amounts of seismic data and requires fast and reliable storage solutions for processing and analyzing this data. Lustre's parallel file system architecture and scalability make it an ideal choice for seismic data processing and reservoir simulation applications.
  5. Genomics and Bioinformatics: Lustre is used in genomics and bioinformatics research, where large-scale genomic data analysis and DNA sequencing are performed. Its parallel file system capabilities enable efficient data access and processing for genomics workflows and large-scale genome assemblies.
  6. Financial Services: Lustre is utilized in financial institutions for high-performance storage of financial data, risk modeling, algorithmic trading, and backtesting. Its ability to handle large volumes of data and provide low-latency access is crucial for time-sensitive financial applications.






There are several reasons why Ceph is widely used and considered beneficial in many scenarios:


1. Distributed and Scalable Storage: Ceph provides a distributed storage system that allows you to store and manage vast amounts of data across multiple nodes. It scales horizontally, meaning you can easily add more storage nodes as your data needs grow. This scalability makes Ceph suitable for handling petabytes or even exabytes of data.


2. High Performance: Ceph is designed to deliver high-performance storage. It uses parallel data access and distribution across multiple storage nodes, allowing for efficient data retrieval and processing. Ceph's ability to distribute data across multiple devices in parallel improves read and write speeds, making it suitable for demanding workloads.


3. Fault Tolerance and Data Redundancy: Ceph ensures data reliability and availability through its fault-tolerant architecture. It replicates data across multiple nodes, providing data redundancy and protecting against hardware failures. If a storage node fails, Ceph automatically redistributes data and maintains the system's integrity and availability.
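
As a rough illustration of how replication is configured, the sketch below drives the standard ceph CLI from Python to inspect and set a pool's replica count. It assumes admin access to the cluster and an existing pool named "demo" (hypothetical).

```python
# Minimal sketch: check and set the replication factor of a Ceph pool.
import subprocess

POOL = "demo"  # hypothetical pool name

# Current number of replicas kept for each object in the pool.
subprocess.run(["ceph", "osd", "pool", "get", POOL, "size"], check=True)

# Keep three copies of every object; Ceph re-replicates automatically
# when an OSD or node fails.
subprocess.run(["ceph", "osd", "pool", "set", POOL, "size", "3"], check=True)

# Overall cluster health, including any recovery or backfill activity.
subprocess.run(["ceph", "status"], check=True)
```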


4. Object, Block, and File Storage: Ceph supports different storage interfaces, including object storage (RADOS Gateway), block storage (RBD - RADOS Block Device), and file storage (CephFS). This flexibility allows you to choose the appropriate storage interface based on your application requirements, making Ceph versatile and adaptable to various use cases.


5. Unified Storage Cluster: Ceph provides a unified storage cluster, meaning you can manage object, block, and file storage within a single infrastructure. This simplifies storage management and reduces the need for separate storage systems for different data types.


6. Open Source and Community Support: Ceph is an open-source project with an active and vibrant community. This ensures ongoing development, frequent updates, and access to a wealth of community knowledge and support. It also allows for customization and integration with other open-source technologies.


7. Cost-Effective: Ceph's distributed and scalable nature, coupled with its open-source availability, can lead to cost savings compared to proprietary storage solutions. It enables organizations to build highly available and scalable storage infrastructures using commodity hardware.


Overall, Ceph is an attractive choice for organizations that require scalable, fault-tolerant, and high-performance storage solutions. It is particularly well-suited for use cases involving large-scale data storage, cloud infrastructure, virtualization environments, and data-intensive applications.



Ceph is a versatile storage system that can be used in various use cases. Here are some common use cases for Ceph:


1. Object Storage: Ceph's RADOS Gateway provides a highly scalable and distributed object storage solution. It is compatible with the S3 and Swift APIs, making it ideal for building private or public cloud storage services, content delivery networks (CDNs), and web-scale applications that require efficient and scalable object storage.
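
Because the RADOS Gateway speaks the S3 API, ordinary S3 tooling works against it. The sketch below uses boto3; the endpoint URL and credentials are placeholders and would come from the RGW user created for your deployment.

```python
# Minimal sketch: using the Ceph RADOS Gateway through its S3-compatible API.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",  # hypothetical RGW endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from rgw")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```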


2. Block Storage: Ceph's RADOS Block Device (RBD) enables the creation of block storage volumes that can be mounted and used by virtual machines or applications. This makes Ceph suitable for virtualization environments, such as OpenStack, where high-performance, shared, and scalable block storage is required.
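
For block storage, the sketch below uses Ceph's Python bindings (rados and rbd) to create a small image and write to it. It assumes an existing pool named "rbd" and admin access; the image name and size are illustrative.

```python
# Minimal sketch: create an RBD image and write a few bytes to it.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")                    # pool holding images
    rbd.RBD().create(ioctx, "vm-disk-01", 4 * 1024**3)   # 4 GiB image
    image = rbd.Image(ioctx, "vm-disk-01")
    image.write(b"bootstrap data", 0)                    # write at offset 0
    image.close()
    ioctx.close()
finally:
    cluster.shutdown()
```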


3. File Storage: CephFS is a distributed file system that provides a POSIX-compliant file interface. It allows multiple clients to access and share files across a distributed storage cluster. CephFS is useful for scenarios where shared file storage is required, such as scientific computing, data analytics, media processing, and collaboration platforms.
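
A typical way to consume CephFS is the Linux kernel client. The sketch below mounts it from Python and then uses it as an ordinary POSIX path; the monitor address, mount point, and secret file are placeholders, and the mount itself requires root privileges.

```python
# Minimal sketch: mount CephFS with the kernel client, then use it as POSIX.
import subprocess

subprocess.run(
    [
        "mount", "-t", "ceph",
        "mon1.example.com:6789:/",                        # hypothetical monitor
        "/mnt/cephfs",                                    # hypothetical mount point
        "-o", "name=admin,secretfile=/etc/ceph/admin.secret",
    ],
    check=True,
)

# Any client with the mount sees the same shared namespace.
with open("/mnt/cephfs/shared/results.txt", "a") as f:
    f.write("written through CephFS\n")
```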


4. Big Data and Analytics: Ceph's ability to handle massive amounts of data and deliver high-performance storage makes it well-suited for big data and analytics workloads. It can be integrated with popular big data frameworks like Apache Hadoop, Apache Spark, and Apache Kafka, providing scalable and reliable storage for data processing, analytics, and real-time streaming applications.


5. Backup and Disaster Recovery: Ceph's distributed and fault-tolerant architecture makes it an excellent choice for backup and disaster recovery solutions. By replicating data across multiple nodes, Ceph ensures data durability and availability. It allows organizations to create scalable backup repositories and implement off-site data replication for disaster recovery purposes.


6. Media and Entertainment: Ceph is widely used in the media and entertainment industry for storing and managing large media files, such as videos, images, and audio. It provides high-performance, scalable, and reliable storage for content distribution, video editing, rendering, and streaming applications.


7. Private Cloud Infrastructure: Ceph is a popular choice for building private cloud infrastructures due to its scalability, fault tolerance, and support for multiple storage interfaces. It can serve as the underlying storage platform for private cloud deployments, enabling organizations to provide self-service storage to their users and manage their cloud storage resources efficiently.


These are just a few examples of the many use cases where Ceph can be beneficial. Its flexibility, scalability, and ability to handle diverse storage workloads make it suitable for a wide range of applications and industries.







On the other hand, Lustre is widely used in scientific computing applications that require high-performance storage for handling large volumes of data. Here are some examples of Lustre's use in scientific applications:


1. High-Performance Computing (HPC): Lustre is commonly deployed in HPC environments where it provides a parallel file system that can deliver the high bandwidth and low-latency storage performance required by scientific simulations, modeling, and data analysis. Lustre's distributed architecture allows for efficient access to data from multiple compute nodes simultaneously.


2. Genomics and Bioinformatics: Lustre is well-suited for genomics and bioinformatics applications that involve processing and analyzing large-scale genomic datasets. It provides the scalability and performance necessary for tasks such as DNA sequencing, genome assembly, variant calling, and comparative genomics.


3. Climate and Weather Modeling: Lustre is utilized in climate and weather modeling applications, which involve running complex simulations to understand and predict weather patterns, climate change, and atmospheric phenomena. These simulations generate massive amounts of data that require high-performance storage for efficient processing and analysis.


4. Particle Physics and Astronomy: Scientists working in the field of particle physics and astronomy often rely on Lustre for storing and analyzing the vast amounts of data produced by particle accelerators, telescopes, and other observational instruments. Lustre's parallel file system architecture enables efficient data access and processing for these data-intensive applications.


5. Computational Chemistry and Molecular Dynamics: Lustre is utilized in computational chemistry and molecular dynamics simulations, which involve studying the behavior and interactions of atoms and molecules at a molecular level. These simulations generate extensive datasets that Lustre can handle efficiently, enabling researchers to analyze and visualize complex molecular systems.


In these scientific applications, Lustre's scalability, performance, and parallel file system architecture play a critical role in facilitating data-intensive computations, enabling researchers to process, analyze, and extract insights from large-scale datasets efficiently.



Here are a few examples of Lustre deployments in real-world scenarios:


1. Oak Ridge National Laboratory (ORNL): ORNL, one of the world's leading research institutions, uses Lustre in its supercomputing environment. Lustre provides the high-performance storage infrastructure required for running complex simulations and data-intensive scientific workflows. It enables researchers to efficiently store, access, and analyze massive amounts of data generated by scientific experiments and simulations.


2. CERN: The European Organization for Nuclear Research (CERN) utilizes Lustre for its particle physics experiments, including the famous Large Hadron Collider (LHC). Lustre provides the necessary storage capacity and performance to handle the massive amount of data produced by particle collisions. It enables researchers to store and analyze petabytes of data, facilitating discoveries in the field of high-energy physics.


3. Human Brain Project (HBP): The HBP is a large-scale neuroscience project that aims to simulate the human brain. Lustre is used to manage and analyze the vast amounts of brain imaging and simulation data generated by the project. The high-speed storage infrastructure provided by Lustre allows researchers to perform complex computations and data analysis on this valuable brain data.


4. National Energy Research Scientific Computing Center (NERSC): NERSC, a major computing facility for the U.S. Department of Energy, relies on Lustre for its high-performance storage needs. Lustre is used to support a wide range of scientific applications, including climate modeling, astrophysics simulations, and materials science research. It enables NERSC users to efficiently store and process large-scale scientific datasets.


5. Oil and Gas Exploration: In the oil and gas industry, Lustre is employed to manage seismic data used in exploration and reservoir modeling. The seismic data, collected through geophysical surveys, is stored in Lustre file systems, enabling geoscientists to analyze and interpret the data for optimizing oil and gas exploration efforts.


These examples highlight the diverse range of applications where Lustre is utilized, demonstrating its effectiveness in handling large-scale scientific datasets, supporting high-performance computing, and facilitating breakthrough scientific research across various domains.



Here are a few examples of real-world deployments of Ceph:

  1. CERN: CERN, the European Organization for Nuclear Research, uses Ceph as the storage backend for its OpenStack-based cloud infrastructure. Ceph provides highly scalable and distributed storage for virtual machine images, object storage, and block devices. It enables CERN to handle the massive amount of data generated by the Large Hadron Collider (LHC) and other scientific experiments.
  2. University of California, Santa Cruz: UC Santa Cruz uses Ceph for its high-performance computing (HPC) cluster, providing shared storage for research data and scientific simulations. Ceph's distributed architecture allows multiple nodes to contribute storage capacity and performance, ensuring efficient data access and reliability for demanding HPC workloads.
  3. Deutsche Telekom: Deutsche Telekom, a telecommunications company, leverages Ceph to build its scalable and flexible cloud storage services. Ceph's object storage capabilities enable Deutsche Telekom to offer cost-effective and highly available storage solutions to its customers, supporting a variety of use cases such as data backup, archival, and content distribution.
  4. CERN Open Data Portal: CERN also utilizes Ceph for its Open Data Portal, which provides public access to datasets from CERN experiments. Ceph's scalability and fault tolerance allow CERN to store and serve petabytes of open data to researchers and the general public, facilitating scientific collaboration and discovery.
  5. Academic and Research Institutions: Many academic and research institutions worldwide deploy Ceph to meet their storage needs. Ceph's ability to scale horizontally, its data redundancy features, and its support for multiple storage interfaces make it suitable for managing large-scale research data, collaborative projects, and data-intensive applications across various disciplines.

These examples showcase the versatility of Ceph and its ability to address diverse storage requirements, ranging from high-performance computing to cloud storage, big data analytics, and scientific research. Ceph's distributed architecture, fault tolerance, and scalability make it a popular choice for organizations seeking reliable and scalable storage solutions.




The Lustre Infrastructure

When a Lustre client wants to access a file, it sends a request to the MDS, which in turn accesses the associated MDT. The location of the data is returned to the client, which then directly accesses the OSS and associated OSTs. In real production environments the infrastructure is significantly more complex, as the storage hardware used for MDTs and OSTs typically requires additional hardware RAID devices or ZFS as the back-end file system, which provides software-based RAID protection on JBOD (just a bunch of disks) storage.
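
The read path described above can be pictured with a small toy model (plain Python, not Lustre code): a single metadata lookup returns the file's layout, and the client then fetches the stripes from the OSTs in parallel and reassembles them.

```python
# Conceptual toy model of the Lustre read path: one MDS lookup, then
# parallel reads from the OSTs named in the file's layout.
from concurrent.futures import ThreadPoolExecutor

# Pretend MDS: maps a path to a striping layout (stripe size + OST indices).
METADATA = {"/mnt/lustre/data.bin": {"stripe_size": 4 << 20, "osts": [0, 3, 5, 7]}}

# Pretend OSTs: each holds the object (stripe data) for one part of the file.
OST_OBJECTS = {0: b"chunk-A", 3: b"chunk-B", 5: b"chunk-C", 7: b"chunk-D"}

def read_file(path: str) -> bytes:
    layout = METADATA[path]                     # single request to the MDS
    with ThreadPoolExecutor() as pool:          # then parallel OST reads
        stripes = pool.map(OST_OBJECTS.__getitem__, layout["osts"])
    return b"".join(stripes)                    # reassembled, RAID-0 style

print(read_file("/mnt/lustre/data.bin"))
```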

The following diagram (figure 1) provides a simplified view of the physical configuration of a Lustre environment. It is important to note that installing Lustre is not like installing a NAS appliance; it is composed of several different services that are deployed on separate server infrastructure.

Following the diagram below, the environment requires a Lustre management service (MGS) server whose job is to support the file systems running in the Lustre environment. Clients and servers connect to the MGS on startup to establish the file system configuration and to receive notifications of changes. While not compute- or storage-intensive, the MGS can pose a single point of failure: if the host fails, client and server services will be unavailable while the MGS is offline. To alleviate this, the MGS should be deployed in a high-availability configuration. As with other Lustre services, the MGS requires a local file system to manage the physical data storage on the management target (MGT).

The second set of server infrastructure is composed of metadata servers (MDSs). As described earlier, these servers manage the namespace for the Lustre file system. MDT storage is typically configured as RAID 1 (mirror), and MDS servers are deployed in high-availability pairs so that metadata remains accessible in the event of a server failure. Careful consideration is needed to determine how many metadata servers are required per file system and how much memory is required to support the number of Lustre clients accessing the MDS. For I/O-intensive applications, getting these ratios correct can be a significant administrative challenge, and it is recommended to consult the Lustre tuning and sizing guide.

The third major component of server infrastructure is the object storage servers that house the file data. The OSSs typically make up the majority of the infrastructure, as they manage the data services for individual files. Each OSS communicates with the OSTs that persist the physical data blocks. OSS nodes are connected over multiple paths to the physical media, protected with a RAID controller or with ZFS software RAID. Each RAID or ZFS stripe is an OST. Several strategies are used to manage storage services, including layering file services on top of SAN infrastructure for greater storage scale.

The final component of a Lustre environment is a set of gateway servers that provide Linux and Windows user access, for example via NFS or SMB re-export. These services are optional and are absent from many HPC environments, but they are required if individual user access to the file system data is needed.




[Figure 1: Simplified physical configuration of a Lustre environment]








Why Lustre is not well suited for AI/ML I/O-intensive workloads

The Lustre file system, and other parallel file systems such as IBM Spectrum Scale (also known as GPFS), separate data and metadata into distinct services, allowing HPC clients to communicate directly with the storage servers. While separating data and metadata services was a significant improvement for large-file I/O performance, it created a scenario where the metadata service becomes the bottleneck. Newer workloads such as IoT and analytics are small-file (4 KB and lower) and metadata-intensive; consequently, the metadata server is often the performance bottleneck in Lustre deployments.

When setting up a Lustre environment, the MDS has to be sized with the workload in mind. The most important factor is the average size of the files to be stored on the file system. Based on the average file size, the number of anticipated inodes is calculated; the default inode size is 1024 bytes on the MDT and 512 bytes on the OST. A file system must have at least twice as many inodes on the MDT as on the OSTs, or it will run out of inodes and report as full even though physical storage capacity is still available. Every file created in Lustre consumes one inode on the MDT, whether the file is large or small. Many AI workloads produce billions of tiny files which quickly consume inodes, with the result that the file system is "full" even though the physical storage capacity is only partially used. A Lustre forum thread provides a good example in which an application generates huge numbers of small files and the file system reports as "full" while using only 30% of the disk space.
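
A back-of-the-envelope sketch of this sizing exercise is shown below: estimate the MDT inode requirement from the expected file count, then compare against what the live file system reports with lfs df. The file count, the 2x safety factor, and the inode size simply follow the discussion above and are assumptions, not values from the Lustre manual.

```python
# Rough inode-sizing sketch plus a live check of inode vs. capacity usage.
import subprocess

expected_files = 2_000_000_000   # e.g. an AI pipeline producing ~2 billion tiny files
safety_factor = 2                # headroom so the MDT is not the limiting factor
mdt_inode_size = 1024            # bytes per MDT inode (default cited above)

needed_inodes = expected_files * safety_factor
mdt_bytes = needed_inodes * mdt_inode_size
print(f"MDT needs ~{needed_inodes:,} inodes "
      f"(~{mdt_bytes / 1024**4:.1f} TiB of inode metadata alone)")

# `lfs df -i` reports inode usage per MDT/OST, `lfs df -h` reports capacity;
# a file system can be 100% full in the first view at only 30% in the second.
subprocess.run(["lfs", "df", "-i"], check=True)
subprocess.run(["lfs", "df", "-h"], check=True)
```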

Managing inode count to match an unpredictable workload is one of the major problems of using Lustre in an environment where file count and file size are unpredictable. Another challenge is poor metadata performance. AI and machine learning workloads require small-file access with low latency, whereas the Lustre file system was built for large-file access, where the initial file request is the only latency experienced because subsequent I/O streams directly from the persistent media. Comparing Lustre metadata performance to a local file system such as XFS or ext4 has shown Lustre delivering only about 26% of the local file system's metadata capability. There are many strategies to alleviate metadata bottlenecks; NASA's Pleiades Lustre documentation provides an excellent knowledge base of best practices, including:

  • Limit the number of files in a directory
  • Avoid putting small files on Lustre in the first place; store them on an NFS mount instead
  • Put small files in a dedicated directory so that only one OST is needed for each file (see the striping sketch after this list)
  • Avoid "stat" operations
  • Avoid repetitive open and close operations to minimize metadata operations
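
The "one OST per small file" practice above translates into setting a stripe count of 1 on a dedicated directory, as in the sketch below; the path is hypothetical and a Lustre client mount is assumed.

```python
# Minimal sketch: a dedicated small-file directory with a stripe count of 1,
# so each small file lives on exactly one OST.
import subprocess

SMALL_FILES_DIR = "/mnt/lustre/project/small_files"   # hypothetical path

subprocess.run(["mkdir", "-p", SMALL_FILES_DIR], check=True)
subprocess.run(["lfs", "setstripe", "-c", "1", SMALL_FILES_DIR], check=True)

# Verify the default layout that new files in the directory will inherit.
subprocess.run(["lfs", "getstripe", "-d", SMALL_FILES_DIR], check=True)
```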

All of these measures require administrators to intervene and enforce best practices, or require changes to applications to minimize metadata operations. In either case, maintaining performance for metadata-intensive applications is an ongoing challenge.

