| Original author(s) | Inktank Storage (Sage Weil, Yehuda Sadeh Weinraub, Gregory Farnum, Josh Durgin, Samuel Just, Wido den Hollander) |
| --- | --- |
| Developer(s) | Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE |
| Stable release | 11.2.0 "Kraken" / 20 January 2017 |
| Repository | git |
| Written in | C++, Python |
| Operating system | Linux, FreeBSD |
| Type | Distributed object store |
| License | LGPL 2.1 |
| Website | ceph.com |
In computing, Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Ceph's main goals are completely distributed operation without a single point of failure, scalability to the exabyte level, and free availability.
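As an illustration, the object layer can be driven directly from the python-rados bindings that ship with Ceph. The sketch below assumes a reachable cluster, a configuration file at the default /etc/ceph/ceph.conf path, and an existing pool named `data` (the pool name is an assumption for illustration):

```python
# Minimal sketch using the python-rados bindings (librados).
# Assumes a running cluster, /etc/ceph/ceph.conf, and a pool named "data".
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('data')  # pool name is an assumption
    try:
        ioctx.write_full('hello-object', b'Hello, RADOS!')  # store one object
        print(ioctx.read('hello-object'))                   # read it back
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```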
Ceph replicates data, making it fault-tolerant; it uses commodity hardware and requires no specific hardware support. As a result of its design, the system is both self-healing and self-managing, aiming to minimize administration time and other costs.
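Replication is configured per pool. As a sketch of how this looks in practice, a pool's replica count can be changed through the monitors' command interface, the programmatic equivalent of `ceph osd pool set data size 3` on the command line; the pool name `data` and the count of 3 are assumptions:

```python
# Sketch: raise a pool's replica count via the monitor command interface.
# Pool name "data" and replica count 3 are assumptions for illustration.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    cmd = json.dumps({'prefix': 'osd pool set',
                      'pool': 'data', 'var': 'size', 'val': '3'})
    ret, out, errs = cluster.mon_command(cmd, b'')  # returns (ret, outbuf, outs)
    print(ret, errs)
finally:
    cluster.shutdown()
```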
On April 21, 2016, the Ceph development team released "Jewel", the first Ceph release in which CephFS is considered stable. The CephFS repair and disaster-recovery tools are feature-complete, although some functionality, such as snapshots and multiple active metadata servers, remains disabled by default.
Ceph employs four distinct kinds of daemons:

- Cluster monitors (ceph-mon), which keep track of active and failed cluster nodes
- Metadata servers (ceph-mds), which store the metadata of inodes and directories
- Object storage devices (ceph-osd), which store the actual data
- Representational state transfer (RESTful) gateways (ceph-rgw), which expose the object storage layer as an interface compatible with the Amazon S3 and OpenStack Swift APIs
All of these are fully distributed, and may run on the same set of servers. Clients directly interact with all of them.
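Because clients speak to the daemons directly (first to a monitor to obtain the cluster map, then to the OSDs themselves), no intermediary server sits in the data path. A small sketch of a client querying cluster-wide usage through python-rados:

```python
# Sketch: a librados client obtains the cluster map from a monitor and then
# talks to the daemons directly, so it can query cluster-wide usage itself.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    stats = cluster.get_cluster_stats()  # dict with kb, kb_used, kb_avail, num_objects
    print('objects: %(num_objects)d, kB used: %(kb_used)d' % stats)
finally:
    cluster.shutdown()
```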
Ceph stripes individual files across multiple nodes to achieve higher throughput, much as RAID 0 stripes data across multiple hard drives. Adaptive load balancing is supported, whereby frequently accessed objects are replicated over more nodes. As of December 2014, XFS is the recommended underlying filesystem for production environments, while Btrfs is recommended for non-production environments. ext4 filesystems are not recommended because of the limitations they impose on the maximum RADOS object length.
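For a sense of what striping means at the object level, the toy sketch below splits a byte string into fixed-size chunks and stores each chunk as its own RADOS object, so that reads and writes spread across many OSDs. This is a simplified illustration, not Ceph's actual striper (that lives in libradosstriper and in the RBD and CephFS clients); the 4 MiB stripe unit, the pool name `data`, and the object-naming scheme are assumptions:

```python
# Simplified illustration of striping: split data into fixed-size chunks and
# store each chunk as its own RADOS object. NOT Ceph's actual striper; the
# 4 MiB unit, pool name "data", and naming scheme are assumed for illustration.
import rados

STRIPE_UNIT = 4 * 1024 * 1024  # 4 MiB per object (assumed)

def write_striped(ioctx, name, data):
    """Write `data` as a series of objects name.0000000000, name.0000000001, ..."""
    for i in range(0, len(data), STRIPE_UNIT):
        ioctx.write_full('%s.%010d' % (name, i // STRIPE_UNIT),
                         data[i:i + STRIPE_UNIT])

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('data')
    try:
        write_striped(ioctx, 'bigfile', b'x' * (10 * 1024 * 1024))  # ~10 MiB
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```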