CNCF to Host the Rook Project to Further Cloud-Native Storage Capabilities
Today, the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee (TOC) voted to accept Rook as the 15th hosted project, alongside Kubernetes, Prometheus, OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Envoy, Jaeger, Notary and TUF.
Rook has been accepted as an inception-level project under the CNCF Graduation Criteria v1.0. The CNCF assigns every project a maturity level of inception, incubating or graduated. At a minimum, an inception-level project is required to add value to cloud native computing and be aligned with the CNCF charter.
Rook brings File, Block and Object storage systems into the Kubernetes cluster, running them seamlessly alongside other applications and services that are consuming the storage. By doing so, the cloud-native cluster becomes self-sufficient and portable across public cloud and on-premise deployments. The project has been developed to enable organizations to modernize their data centers with dynamic application orchestration for distributed storage systems running in on-premise and public cloud environments.
“Storage is one of the most important components of cloud native computing, yet persistent storage systems typically run outside cloud native environments today,” said Chris Aniszczyk, COO of the Cloud Native Computing Foundation. “Rook was one of the early adopters of the Kubernetes operator pattern and we’re excited to bring in Rook as an inception-level project to advance the state of cloud native storage.”
Instead of building an entirely new storage system, which would require many years to mature, Rook focuses on turning existing, battle-tested storage systems like Ceph into a set of cloud-native services that run seamlessly on top of Kubernetes. Rook integrates deeply into Kubernetes, providing a seamless experience for security, policies, quotas, lifecycle management, and resource management.
In this Software Engineering Daily podcast, Bassam Tabbara, CEO of Upbound and creator of Rook, said: “Rook is essentially using the operator pattern to extend Kubernetes to support storage systems. We’ve added a concept of a storage cluster, a storage pool, an object store and a file system. Those are all new abstractions that we’ve used to extend Kubernetes.”
An alpha version of Rook (release 0.6) is available now, with beta and production-ready versions to follow in the first half of 2018. Key features include:
- Software-defined storage running on commodity hardware
- File, block and object storage presentations integrated with Ceph
- Hyper-scale or hyper-converged storage options
- Elastic storage that can easily scale up or down
- Zero-touch management
- Integrated data protection with snapshot, cloning and versioning
- Deployable on Kubernetes
The latest release of Kubernetes 1.9 introduced a CSI alpha implementation that makes installing new volume plugins as easy as deploying a pod, and enables third-party storage providers to develop their solutions without adding to the core Kubernetes codebase. Rook will expose storage through CSI to Kubernetes.
“It’s a natural fit to run a storage cluster on Kubernetes. It makes perfect sense to bring it into the fold and keep the unified management interface,” said Dan Kerns, Senior Director at Quantum, the initial sponsor of the Rook project. “With Rook, we wanted to create a software-defined storage cluster that could run really well in modern cloud-native environments, and the storage cluster becomes even more resilient with an orchestrator like Kubernetes.”
Community support for Rook is growing rapidly as companies and users deploy Rook in their cloud-native environments (on-premise and public cloud). Companies and organizations like HBO, UCSD Nautilus Project, Norwegian Welfare, Verne Global, FlexShopper, and Acaleph have implemented Rook as part of their storage platforms. The project's stats to date:
- 47 contributors
- 1,935 GitHub stars
- 13 releases
- 1,463 commits
- 1.25M+ container downloads
“We used Rook underneath our Prometheus servers at HBO, running on Kubernetes and deployed on AWS,” said Illya Chekrygin, former senior staff engineer at HBO and founding member of Upbound. “Rook made a significant improvement on the Prometheus pod restart time, virtually eliminating downtime and metrics scrape gaps. We are looking forward to Rook being in a production-ready state.”
As a CNCF hosted project, Rook will be part of a neutral foundation aligned with technical interests, receive help with project governance and be provided marketing support to reach a wider audience.
“Operating storage in cloud-native environments is a significantly more difficult task than stateless containers,” said Benjamin Hindman, co-founder of Mesosphere, CNCF TOC representative and project sponsor. “We’re thrilled to have Rook as the first CNCF inception project that begins to address the difficult problem of storage orchestration.”
For more, read the Rook blog, Quantum’s recent announcement on the momentum of the project, and Upbound’s blog, and listen to The New Stack’s Makers podcast or Software Engineering Daily featuring Bassam Tabbara discussing Rook and storage on Kubernetes.
The post CNCF to Host the Rook Project to Further Cloud-Native Storage Capabilities appeared first on The Linux Foundation.
Make ISO from DVD
In this case I had an OS install disc that was needed on a virtual node with no optical drive, so I had to transfer an image of it to the server to create the VM.
Find out which device the DVD is: lsblk
Output:
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 465.8G  0 disk
├─sda1            8:1    0     1G  0 part /boot
└─sda2            8:2    0 464.8G  0 part
  ├─centos-root 253:0    0    50G  0 lvm  /
  ├─centos-swap 253:1    0  11.8G  0 lvm  [SWAP]
  └─centos-home 253:2    0   403G  0 lvm  /home
sdb               8:16   1  14.5G  0 disk /mnt
sr0              11:0    1   4.1G  0 rom  /run/media/rick/CCSA_X64FRE_EN-US_DV5
Therefore /dev/sr0 is the device, or disc, to be made into an ISO.
I prefer simplicity, and sometimes deal with the fallout after the fact; however, I've repeated this countless times with success: dd if=/dev/sr0 of=win10.iso
where if= is the input file and of= is the output file.
I chill out and do something else while the image is being copied/created, and the final output:
8555456+0 records in
8555456+0 records out
4380393472 bytes (4.4 GB) copied, 331.937 s, 13.2 MB/s
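The same dd invocation works with any input, so the copy-and-verify flow can be sketched without an actual optical drive. In this minimal example a scratch file stands in for /dev/sr0 (the filenames and the bs=1M block size are my own choices; a block size larger than dd's 512-byte default usually speeds up the copy considerably):

```shell
# Create a small scratch "disc" to stand in for /dev/sr0.
dd if=/dev/zero of=fake-disc.img bs=1M count=4 2>/dev/null

# Image it, just like: dd if=/dev/sr0 of=win10.iso
# bs=1M reads and writes in 1 MiB chunks instead of the 512-byte default.
dd if=fake-disc.img of=test.iso bs=1M 2>/dev/null

# Verify the image is byte-for-byte identical to the source.
cmp fake-disc.img test.iso && echo "image matches source"
```

Running cmp (or comparing checksums) against the real device after the copy finishes is a cheap way to confirm the ISO is good before the disc goes back in a drawer.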
Recreate PostgreSQL template database with a new encoding
First, allow template1 to be modified by unmarking it as a template: UPDATE pg_database SET datistemplate = FALSE WHERE datname = 'template1';
Now we can drop it: DROP DATABASE template1;
Create the database from template0, with a new default encoding:
CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'UNICODE';
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template1';
\c template1
VACUUM FREEZE;
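A minimal sketch tying the steps above together: collecting them into a script file that can be reviewed and then run in one shot with psql. The filename rebuild_template1.sql and the psql invocation are my own assumptions; note you must be connected to a different database (such as postgres), since template1 cannot be dropped while you are connected to it:

```shell
# Collect the template1 rebuild steps into one reviewable script.
# Run it later with e.g.: psql -U postgres -d postgres -f rebuild_template1.sql
cat > rebuild_template1.sql <<'SQL'
UPDATE pg_database SET datistemplate = FALSE WHERE datname = 'template1';
DROP DATABASE template1;
CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'UNICODE';
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template1';
SQL
wc -l < rebuild_template1.sql
```

Keeping the statements in a file makes it easy to double-check the DROP before it runs; the final \c template1 and VACUUM FREEZE can then be done interactively.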