Object Storage
Written by Djuno Support

Overview

Object storage is a scalable data storage architecture that manages data as objects rather than as files in a hierarchical file system or as blocks in a storage area network. Each object includes the data itself, metadata for easier identification, and a unique identifier, allowing for efficient storage and retrieval. Object storage is commonly used for various applications, including backups, archiving, content delivery, and big data analytics, as it can handle vast amounts of unstructured data. This type of storage is highly durable and available, enabling users to access their data from anywhere on the web. Additionally, it often supports features like access control, versioning, and lifecycle management to optimize data handling and costs.

Buckets

The Buckets tab in object storage provides an overview of your storage containers, called buckets, which are used to store and organize data. Each bucket holds objects (files). From this tab, you can manage your buckets, apply lifecycle rules to automate data management (such as transitioning or deleting objects after a certain time), and create new buckets as needed for organizing and storing data. This interface helps you efficiently manage your cloud storage resources.

Create bucket

The Create Bucket page allows you to set up a new storage container, called a bucket, in your object storage system. You can specify the name of the bucket, which must follow certain rules for uniqueness and formatting. Other options include enabling versioning to track changes to files, object locking to prevent files from being deleted or overwritten, and setting quotas to limit storage capacity. You can also configure excluded folders or prefixes to omit certain data from specific rules. This setup page gives you flexibility to control storage behavior and ensure data management according to your needs.
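Since the service exposes an S3-compatible API (the S3 actions and ARNs mentioned elsewhere in this article suggest as much), bucket creation can also be scripted. Below is a minimal sketch using Python and boto3; the endpoint, credentials, and bucket name are placeholders, not real values.

```python
import boto3

# Placeholder endpoint and credentials; substitute your own.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Object locking must be enabled at creation time; enabling it also
# turns on versioning for the bucket.
s3.create_bucket(Bucket="my-bucket", ObjectLockEnabledForBucket=True)

# Versioning can also be toggled explicitly on an existing bucket.
s3.put_bucket_versioning(
    Bucket="my-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```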

Lifecycle

When you select a bucket in your object storage interface, the Lifecycle button becomes active, allowing you to manage the lifecycle policies for that specific bucket.

Create Lifecycle

The Create Lifecycle modal in object storage allows you to configure automated rules for managing the lifecycle of objects in one or more buckets. You can set rules for expiry, which dictate when objects should be deleted, and for transitioning objects to different storage classes based on their age. Key parameters include defining expiry days (after which objects are deleted), non-current expiration days for previous versions, and using prefixes or tags to target specific objects. The modal also features options for replicating lifecycle rules across multiple buckets and handling delete markers for versioned objects. Once you’ve configured the desired settings, you can create the rules to streamline data management, optimize storage costs, and ensure compliance with retention policies.
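The same rules can be expressed through the S3 lifecycle API. Here is a hedged sketch with boto3 of an expiry rule plus noncurrent-version cleanup; the bucket name, prefix, and day counts are illustrative only, and the client is assumed to be configured as in the earlier example.

```python
import boto3

s3 = boto3.client("s3")  # endpoint/credentials assumed configured

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},  # target only this prefix
                # Delete current objects 90 days after creation.
                "Expiration": {"Days": 90},
                # Clean up previous versions 30 days after they become noncurrent.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ]
    },
)
```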

Create Multiple Bucket Replication

The Multiple Bucket Replication modal enables you to configure the replication of multiple buckets to a remote storage endpoint, enhancing data redundancy and backup capabilities. You can specify the local buckets to replicate and provide the necessary credentials (access key and secret key) for authentication with the remote storage.

Browse

When you click Browse or select a bucket row, you access a page displaying the contents of that bucket. It shows the bucket's name and lists all stored objects, allowing you to create new folders and upload files.

When you click on a file in the bucket's contents page, you are taken to a detailed view of that file. This view displays the file name and provides various actions, such as downloading, sharing, or previewing the file. You can also manage legal holds, retention policies, and tags associated with the file.
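These actions map directly onto the S3 API. The sketch below (illustrative names, client configuration assumed) uploads a file, downloads it back, and produces a presigned URL, which is the programmatic counterpart of the Share action.

```python
import boto3

s3 = boto3.client("s3")  # endpoint/credentials assumed configured

# Upload a local file into the bucket (the Upload action).
s3.upload_file("report.pdf", "my-bucket", "reports/report.pdf")

# Download it back (the Download action).
s3.download_file("my-bucket", "reports/report.pdf", "report-copy.pdf")

# The Share action corresponds to a presigned URL; this one expires in an hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "reports/report.pdf"},
    ExpiresIn=3600,
)
print(url)
```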

Actions modals:

  • Share modal

  • Preview modal

  • Legal Hold modal

  • Retention modal

  • Tags modal

  • Inspect modal

  • Display Object Versions

Create new path: opens a modal for creating a new folder (prefix) inside the bucket.

Rewind:

The Rewind modal allows you to restore a bucket to a previous state by selecting a specific date and time.
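Rewind relies on versioning: every object keeps a version history, so a past state can be reconstructed by picking each object's newest version at or before the chosen timestamp. A rough sketch of that idea, with illustrative names:

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")  # endpoint/credentials assumed configured
rewind_to = datetime(2024, 1, 1, tzinfo=timezone.utc)

# For each key, keep the newest version at or before the target time.
state = {}
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket="my-bucket"):
    for version in page.get("Versions", []):
        if version["LastModified"] <= rewind_to:
            best = state.get(version["Key"])
            if best is None or version["LastModified"] > best["LastModified"]:
                state[version["Key"]] = version

for key, version in state.items():
    print(key, version["VersionId"])  # the version each object had at that time
```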

Settings

When you click Settings from the dropdown menu, you access a page that outlines the configuration options for the selected bucket. Here, you can manage features like deleting the bucket, setting up event notifications, configuring replication, and applying lifecycle policies. The page also displays the bucket's access policies, encryption status, usage, and object locking settings. You can manage tags, quotas, and check the versioning and retention status. Overall, this page provides a comprehensive view for efficiently managing the bucket's settings and features.

Settings Events

The Events tab helps you stay informed about changes and manage the bucket effectively.

Subscribe to event:

The Subscribe modal for bucket events allows you to configure notifications for specific activities related to the bucket. You must provide an ARN (Amazon Resource Name) to specify where notifications will be sent, which is a required field. You can also set filters using Prefix and Suffix fields to limit notifications to certain objects. Additionally, you can select from various events to subscribe to, such as uploads (PUT), accesses (GET), deletions (DELETE), replications (REPLICA), lifecycle transitions (ILM), and alerts for excessive versions or sub-folders (SCANNER). You have the option to Cancel without saving changes or Set to confirm your subscription settings. This modal enables tailored event notifications to help you effectively manage your bucket's activities.
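The same subscription can be written through the S3 notification API. In the sketch below, the ARN is a made-up example of the kind of value the modal expects, and the prefix/suffix filter narrows notifications to JPEG uploads under images/:

```python
import boto3

s3 = boto3.client("s3")  # endpoint/credentials assumed configured

s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:minio:sqs::primary:webhook",  # hypothetical ARN
                "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "images/"},
                            {"Name": "suffix", "Value": ".jpg"},
                        ]
                    }
                },
            }
        ]
    },
)
```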

Replications

The Replications tab allows you to manage and configure replication settings for the selected bucket. Here, you can add new replication rules by clicking Add Replication.

Add replication:

The Add Replication modal allows you to configure replication settings for objects in the selected bucket.

Lifecycle

The Lifecycle tab allows you to manage and create lifecycle rules for objects in the selected bucket. You can view existing rules and click on Add Lifecycle Rule to create new ones that automate data management tasks.

Add lifecycle rule:

The Add Lifecycle Rule modal allows you to manage the lifecycle of objects in a bucket through two primary types of rules: Expiry and Transition. The Expiry type enables automatic deletion of objects after a specified number of days, helping to efficiently manage storage by removing outdated data. In contrast, the Transition type allows you to automate the movement of objects to different storage tiers based on their age, optimizing costs by ensuring that less frequently accessed data is stored in more economical tiers. Both types provide options to filter which objects the rules apply to using prefixes or tags, allowing for precise control over data management and facilitating the automation of storage optimization strategies.

  • Expiry type: automatically deletes objects after the configured number of days.

  • Transition type: moves objects to a different storage tier after the configured number of days (see the sketch below).
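A Transition rule looks much like the Expiry sketch shown earlier, but names a target tier instead of a deletion deadline. Here "COLD-TIER" is a hypothetical stand-in for a tier you have already created (see the Tier section below):

```python
import boto3

s3 = boto3.client("s3")  # endpoint/credentials assumed configured

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-cold",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                # After 30 days, move objects under archive/ to the cold tier.
                "Transitions": [{"Days": 30, "StorageClass": "COLD-TIER"}],
            }
        ]
    },
)
```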

Access logs

In the Policies tab of the Access Logs section, predefined policies are listed to manage access control for users interacting with the storage system. Each policy specifies the allowed actions and the resources it applies to, following a JSON-based structure for defining permissions (an example of this structure is sketched after the list):

  • consoleAdmin: Grants full administrative access to all resources, including actions related to admin, KMS (Key Management Service), and S3 operations.

  • diagnostics: Allows access to diagnostic tools such as bandwidth monitoring, console logs, server info, and other diagnostics-related functions.

  • readonly: Provides read-only access, allowing users to get the bucket's location and retrieve objects but not modify or delete them.

  • readwrite: Grants full read and write access to the S3 bucket, allowing users to perform all actions on objects.

  • writeonly: Restricts access to only uploading objects, allowing users to write data but not read or modify existing objects.

These policies allow for fine-tuned access control, helping to manage user permissions efficiently.
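For illustration, a readonly-style policy document has roughly the following shape, shown here as a Python dict ready to serialize to JSON; the exact text of the built-in policies may differ on your deployment:

```python
import json

readonly_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Read-only: locate buckets and fetch objects, nothing else.
            "Action": ["s3:GetBucketLocation", "s3:GetObject"],
            "Resource": ["arn:aws:s3:::*"],
        }
    ],
}

print(json.dumps(readonly_policy, indent=2))
```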

Anonymous

The Anonymous section allows you to define and manage access rules for unauthenticated users, enabling control over what anonymous users can do within a storage bucket. You can set rules based on a specific Prefix to limit access to certain objects or paths, and assign different levels of access, such as readonly. This feature is useful for enabling public access to certain data while ensuring other parts of the storage remain protected or restricted from anonymous access.

Add Actions Rule: opens a modal where you specify a prefix and the access level for a new anonymous access rule.
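The effect of such a rule resembles an anonymous-read bucket policy scoped to a prefix. A hedged sketch follows (bucket name and prefix are placeholders; the console manages this for you, so treat it as background rather than a required step):

```python
import json

import boto3

s3 = boto3.client("s3")  # endpoint/credentials assumed configured

public_read = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["*"]},  # anyone, including unauthenticated users
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::my-bucket/public/*"],  # only this prefix
        }
    ],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(public_read))
```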

Access Keys

The Access Keys tab is where you manage the keys that allow secure access to your object storage through APIs. Access keys are used to authenticate and authorize programmatic access to your storage, typically for automating tasks or integrating with other services.

Create access key:
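Once a key pair has been created, programmatic access looks like the following sketch; the endpoint and key values are placeholders for the ones you generate:

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # hypothetical endpoint
    aws_access_key_id="GENERATED_ACCESS_KEY",
    aws_secret_access_key="GENERATED_SECRET_KEY",
)

# A quick sanity check that the keys authenticate correctly.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```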

Events

The Events tab allows you to set up integrations between your storage bucket and various external systems, enabling real-time notifications and data processing when certain events occur. You can configure a variety of event destinations such as message queues (Kafka, AMQP, MQTT, Redis, NATS, NSQ), databases (PostgreSQL, MySQL, Elasticsearch), and other services like Webhooks for triggering external actions via HTTP.

By configuring these event destinations, you can automate workflows, trigger alerts, or synchronize data across different systems based on bucket events like object uploads, deletions, or modifications. This helps enhance automation, data management, and system integrations, making it easier to handle complex operations in real time.
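As a concrete example of the Webhook destination, the receiving end can be as small as the sketch below: a tiny HTTP server that accepts the JSON notifications POSTed when bucket events fire. The port and the exact payload handling are illustrative assumptions, modeled on S3-style notifications that carry a "Records" list.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # S3-style notifications describe each event in a "Records" list.
        for record in event.get("Records", []):
            key = record.get("s3", {}).get("object", {}).get("key")
            print(record.get("eventName"), key)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EventHandler).serve_forever()
```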

Logs

In the Logs tab, you can view a record of recent actions performed on the bucket. These logs display details about different activities such as DELETE, PUT, and POST actions, along with timestamps and the entities affected. Each log entry provides a quick overview of what happened, the entity affected, and the associated data, helping track changes or modifications made to the bucket for auditing or troubleshooting purposes.

Tier

The Tiers tab allows you to manage different storage tiers for your bucket. Storage tiers enable data to be distributed across various cloud services, such as MinIO, Google Cloud Storage, Amazon S3, and Azure, to optimize storage and costs based on data access patterns.

Create tier:

In the Create Tier section, you can configure a new storage tier by selecting a cloud provider and entering details such as the endpoint, access credentials (e.g., access key and secret key), bucket name, prefix, and region. Each cloud service has slightly different requirements, like a JSON file for Google Cloud credentials or storage class options for Amazon S3. After saving the configuration, the new tier becomes available for managing data across multiple cloud platforms. This helps optimize data management by distributing storage across different providers based on needs.

Plans

To check out flexible pricing, see this link.
