Rubber, Meet Road: X-Project Collaboration Demos at ONS
Linux Foundation Networking (LFN) was launched earlier this year to increase collaboration and operational efficiency across open source networking projects. While each project retains its technical independence and roadmap, the LFN structure integrates governance, improves operational efficiency, and simplifies member engagement. It also provides a vehicle to strengthen and develop the myriad cross-project collaboration points across the open networking stack. Nowhere is this clearer than in the demo lineup for the LFN booth at Open Networking Summit, where several demos incorporate the work of multiple networking projects and peel off the line to show the integration, automation, and acceleration made possible by open source.
Founding LFN projects FD.io, OpenDaylight, ONAP, and OPNFV are all represented, while PNDA and SNAS are considering a multi-project analytics demo for Open Networking Summit Europe this fall. A great example of inter-project collaboration is the Cross-Community Continuous Integration (XCI) demo, which establishes a new framework for telco use cases that simplifies end-to-end testing and integration by giving developers full control over which components and versions to use when creating various combinations of an integrated stack. End users will also benefit from learning how the OPNFV Doctor project has built something completely new in open source: a framework that performs infrastructure maintenance and upgrades with zero virtual network function (VNF) downtime by scaling down applications and utilizing the capabilities of the underlying compute nodes, with OPNFV software running on OCP hardware.
Cloud-native, containerized VNFs are an industry hot topic and will be on display in multiple demos. Folks from the FD.io project will show how to create cloud-native VNFs using Ligato and compare their performance and scale to VNFs deployed in other environments. Containers are a crucial vehicle on the path toward edge computing and the folks from the Container4NFV project in OPNFV will present a high-performance container cloud solution for edge computing on the Arm platform.
Automation and orchestration will also take center stage as ONAP Amsterdam is used to design, orchestrate, and manage VoLTE — a complex end-to-end real-world service composed of multiple VNFs from different vendors, based on the ETSI NFV architecture. ONAP and Kubernetes will also be demoed together in the CNCF booth, showing the best of network automation and cloud-native orchestration by enabling ONAP deployments to any public, private, or hybrid cloud.
Listed below is the full networking demo lineup:
- NFV Use Cases Deployed and Tested by OPNFV XCI (OPNFV, OpenDaylight, Open vSwitch, OpenStack)
- ONAP Amsterdam VoLTE Use Case (ONAP)
- Accelerated Cloud Native VNFs in Kubernetes with FD.io/VPP and Ligato (FD.io, k8s, Ligato, Contiv)
- OPNFV Verified: NFVI Platform Verification (OPNFV)
- Containerized VNF Running on High-performance Kubernetes for Edge Computing on Arm Platform (OPNFV, k8s, DPDK)
- Infrastructure Maintenance & Upgrade: Zero VNF Downtime with OPNFV Doctor on OCP Hardware (OPNFV, OCP)
- Networking for Hybrid Cloud and DCI with OpenDaylight EVPN (OpenDaylight)
- World’s Tiniest OPNFV Pod (OPNFV)
- Intro to the CNCF Cross-cloud CI Project (k8s, ONAP), in the CNCF booth
To all the demo managers down in pit row with the greasy hands, thanks for your hard work pulling these together. To everyone else, we’ll see you at the race.
Sign up to get the latest updates on ONS NA 2018!
If you haven’t already registered for ONS, use code ONSNA18COM15 for 15% off. Hurry, standard registration expires March 10.
Make ISO from DVD
In this case I had an OS install disc that needed to be available on a virtual node with no optical drive, so I needed to transfer an image to the server to create the VM.
Find out which device the DVD is:

lsblk

Output:

NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 465.8G  0 disk
├─sda1            8:1    0     1G  0 part /boot
└─sda2            8:2    0 464.8G  0 part
  ├─centos-root 253:0    0    50G  0 lvm  /
  ├─centos-swap 253:1    0  11.8G  0 lvm  [SWAP]
  └─centos-home 253:2    0   403G  0 lvm  /home
sdb               8:16   1  14.5G  0 disk /mnt
sr0              11:0    1   4.1G  0 rom  /run/media/rick/CCSA_X64FRE_EN-US_DV5
Therefore /dev/sr0 is the location, i.e. the disc to be made into an ISO.
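If you want to find the optical drive from a script instead of eyeballing the tree, lsblk labels optical drives with TYPE "rom". A minimal sketch (it simply prints nothing on machines with no optical drive):

```shell
# Print the device path of every optical (rom) device lsblk knows about.
# On a machine with a DVD drive this would print e.g. /dev/sr0; on a
# machine without one, the output is empty.
lsblk --noheadings -o NAME,TYPE | awk '$2 == "rom" {print "/dev/" $1}'
```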
I prefer simplicity, and sometimes deal with the fallout after the fact; however, I've repeated this countless times with success:

dd if=/dev/sr0 of=win10.iso

where if= is the input file and of= is the output file.
I chill out and do something else while the image is being copied/created. The final output:

8555456+0 records in
8555456+0 records out
4380393472 bytes (4.4 GB) copied, 331.937 s, 13.2 MB/s
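It's worth verifying the image before deleting or ejecting anything. A minimal sketch of the same copy-and-verify pattern, demonstrated with an ordinary file (src.bin) so it runs anywhere; on the real system the input would be /dev/sr0 and the output win10.iso, as above:

```shell
# Create a 1 MiB stand-in for the DVD device, copy it with dd, then
# checksum both sides. If every block was copied intact, the two
# sha256 sums match.
head -c 1M /dev/urandom > src.bin
dd if=src.bin of=copy.iso bs=64K status=none
sha256sum src.bin copy.iso
```

With a real disc, dd's status=progress option is also handy for watching the copy instead of waiting blind.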
Recreate PostgreSQL database template encoding

First, mark template1 as a non-template so that it can be dropped:

UPDATE pg_database SET datistemplate = FALSE WHERE datname = 'template1';
Now we can drop it:

DROP DATABASE template1;
Create the database from template0 with a new default encoding, mark it as a template again, then connect to it and freeze it:

CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'UNICODE';
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template1';
\c template1
VACUUM FREEZE;
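The whole sequence above can also be run non-interactively. A sketch only, not runnable without a live cluster; the postgres user and maintenance database are assumptions, and note that 'UNICODE' is simply PostgreSQL's alias for UTF8:

```
# Assumes a running cluster and a superuser named "postgres" (both assumptions).
# DROP DATABASE cannot run inside a transaction block, but psql's default
# autocommit mode runs each statement separately, so this works as-is.
psql -U postgres -d postgres <<'SQL'
UPDATE pg_database SET datistemplate = FALSE WHERE datname = 'template1';
DROP DATABASE template1;
CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'UNICODE';
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template1';
SQL
psql -U postgres -d template1 -c 'VACUUM FREEZE;'
```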