This repository has been archived by the owner on Jul 22, 2024. It is now read-only.

Commit 970142a
Merge pull request #271 from IBM/release_v2.0.0
Release v2.0.0 merge to master
shay-berman authored Dec 19, 2018
2 parents f45eba3 + 05ef190 commit 970142a
Showing 59 changed files with 3,403 additions and 2,741 deletions.
4 changes: 2 additions & 2 deletions Dockerfile.Flex
@@ -7,8 +7,8 @@ COPY . .
 RUN CGO_ENABLED=1 GOOS=linux go build -tags netgo -v -a --ldflags '-w -linkmode external -extldflags "-static"' -installsuffix cgo -o ubiquity-k8s-flex cmd/flex/main/cli.go
 
 
-FROM alpine:3.7
-RUN apk --no-cache add ca-certificates=20171114-r0
+FROM alpine:3.8
+RUN apk --no-cache add ca-certificates=20171114-r3
 ENV UBIQUITY_PLUGIN_VERIFY_CA=/var/lib/ubiquity/ssl/public/ubiquity-trusted-ca.crt
 WORKDIR /root/
 COPY --from=0 /go/src/github.com/IBM/ubiquity-k8s/ubiquity-k8s-flex .
4 changes: 2 additions & 2 deletions Dockerfile.Provisioner
@@ -7,8 +7,8 @@ COPY . .
 RUN CGO_ENABLED=1 GOOS=linux go build -tags netgo -v -a --ldflags '-w -linkmode external -extldflags "-static"' -installsuffix cgo -o ubiquity-k8s-provisioner cmd/provisioner/main/main.go
 
 
-FROM alpine:3.7
-RUN apk --no-cache add ca-certificates=20171114-r0
+FROM alpine:3.8
+RUN apk --no-cache add ca-certificates=20171114-r3
 ENV UBIQUITY_PLUGIN_VERIFY_CA=/var/lib/ubiquity/ssl/public/ubiquity-trusted-ca.crt
 WORKDIR /root/
 COPY --from=0 /go/src/github.com/IBM/ubiquity-k8s/ubiquity-k8s-provisioner .
14 changes: 8 additions & 6 deletions README.md
@@ -7,24 +7,26 @@ This project includes components for managing [Kubernetes persistent storage](ht
 - Ubiquity Dynamic Provisioner for creating and deleting persistent volumes
 - Ubiquity FlexVolume Driver CLI for attaching and detaching persistent volumes
 
-Currently, the following storage systems use Ubiquity:
+The IBM official solution for Kubernetes, based on the Ubiquity project, is referred to as IBM Storage Enabler for Containers. You can download the installation package and its documentation from [IBM Fix Central](http://www.ibm.com/support/fixcentral/swg/quickorder?parent=Software%20defined%20storage&product=ibm/StorageSoftware/IBM+Storage+Enabler+for+Containers&release=All&platform=All&function=all&source=fc).
+
+Ubiquity supports the following storage systems:
 * IBM block storage.
 
-The IBM block storage is supported for Kubernetes via IBM Spectrum Connect. Ubiquity communicates with the IBM storage systems through Spectrum Connect. Spectrum Connect creates a storage profile (for example, gold, silver or bronze) and makes it available for Kubernetes. For details about supported storage systems, refer to the latest Spectrum Connect release notes.
+IBM block storage is supported for Kubernetes via IBM Spectrum Connect. Ubiquity communicates with the IBM storage systems through Spectrum Connect. Spectrum Connect creates a storage profile (for example, gold, silver or bronze) and makes it available for Kubernetes.
 
-The IBM official solution for Kubernetes, based on the Ubiquity project, is referred to as IBM Storage Enabler for Containers. You can download the installation package and its documentation from [IBM Fix Central](https://www.ibm.com/support/fixcentral/swg/selectFixes?parent=Software%2Bdefined%2Bstorage&product=ibm/StorageSoftware/IBM+Spectrum+Connect&release=All&platform=Linux&function=all). For details on the IBM Storage Enabler for Containers, see the relevant sections in the Spectrum Connect user guide.
+* IBM Spectrum Scale
 
-* IBM Spectrum Scale, for testing only.
+IBM Spectrum Scale file storage is supported for Kubernetes. Ubiquity communicates with IBM Spectrum Scale system directly via IBM Spectrum Scale management API v2.
 
 The code is provided as is, without warranty. Any issue will be handled on a best-effort basis.
 
 ## Solution overview
 
 ![Ubiquity Overview](images/ubiquity_architecture_draft_for_github.jpg)
 
-Deployment description:
+Main deployment description:
 * Ubiquity Kubernetes Dynamic Provisioner (ubiquity-k8s-provisioner) runs as a Kubernetes deployment with replica=1.
-* Ubiquity Kubernetes FlexVolume (ubiquity-k8s-flex) runs as a Kubernetes daemonset on all the worker and master nodes.
+* Ubiquity Kubernetes FlexVolume (ubiquity-k8s-flex) runs as a Kubernetes daemonset in all the worker and master nodes.
 * Ubiquity (ubiquity) runs as a Kubernetes deployment with replica=1.
 * Ubiquity database (ubiquity-db) runs as a Kubernetes deployment with replica=1.
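
As a sanity check of the topology listed in the README diff above, the named components can be queried with client-go. This is a minimal sketch, not part of the project's code: the `ubiquity` namespace is an assumption (adjust it to your installation), and the `Get` signatures assume client-go v0.18 or later.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster configuration, the same approach the provisioner uses.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ns := "ubiquity" // assumed namespace
	ctx := context.Background()

	// The three single-replica deployments described above.
	for _, name := range []string{"ubiquity-k8s-provisioner", "ubiquity", "ubiquity-db"} {
		d, err := clientset.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("deployment %s: %v\n", name, err)
			continue
		}
		fmt.Printf("deployment %s: %d/%d replicas ready\n",
			name, d.Status.ReadyReplicas, d.Status.Replicas)
	}

	// The flex daemonset, which should have a pod on every worker and master node.
	ds, err := clientset.AppsV1().DaemonSets(ns).Get(ctx, "ubiquity-k8s-flex", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("daemonset ubiquity-k8s-flex: %d/%d pods ready\n",
		ds.Status.NumberReady, ds.Status.DesiredNumberScheduled)
}
```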

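The Spectrum Scale management API v2 mentioned in the README changes is a REST interface. Below is a hedged sketch of a direct query against it, assuming the `/scalemgmt/v2/filesystems` endpoint; the hostname, basic-auth credentials, and the disabled certificate verification are placeholders only, and a real deployment would verify the server certificate.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumed endpoint of the Spectrum Scale management API v2.
	const scaleMgmtURL = "https://scale-mgmt-node:443/scalemgmt/v2/filesystems"

	// Demo only: skip certificate verification for the self-signed case.
	tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
	client := &http.Client{Transport: tr}

	req, err := http.NewRequest(http.MethodGet, scaleMgmtURL, nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("admin", "password") // placeholder credentials

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status %s\n%s\n", resp.Status, body)
}
```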
20 changes: 9 additions & 11 deletions cmd/provisioner/main/main.go
@@ -19,6 +19,7 @@ package main
 import (
 	"fmt"
 
+	"flag"
 	k8sresources "github.com/IBM/ubiquity-k8s/resources"
 	k8sutils "github.com/IBM/ubiquity-k8s/utils"
 	"github.com/IBM/ubiquity-k8s/volume"
@@ -28,22 +29,24 @@ import (
 	"k8s.io/apimachinery/pkg/util/wait"
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/rest"
-	"k8s.io/client-go/tools/clientcmd"
 	"os"
 )
 
 var (
 	provisioner = k8sresources.ProvisionerName
-	configFile  = os.Getenv("KUBECONFIG")
 )
 
 func main() {
 
+	/* this is fixing an existing issue with glog in kubernetes in version 1.9
+	if we ever move to a newer code version this can be removed.
+	*/
+	flag.CommandLine.Parse([]string{})
+
 	ubiquityConfig, err := k8sutils.LoadConfig()
 	if err != nil {
 		panic(fmt.Errorf("Failed to load config %#v", err))
 	}
-	fmt.Printf("Starting ubiquity plugin with %s config file\n", configFile)
 
 	err = os.MkdirAll(ubiquityConfig.LogPath, 0640)
 	if err != nil {
@@ -57,14 +60,9 @@ func main() {
 
 	var config *rest.Config
 
-	if configFile != "" {
-		logger.Printf("Uses k8s configuration file name %s", configFile)
-		config, err = clientcmd.BuildConfigFromFlags("", configFile)
-	} else {
-		config, err = rest.InClusterConfig()
-	}
+	config, err = rest.InClusterConfig()
 	if err != nil {
-		panic(fmt.Sprintf("Failed to create config: %v", err))
+		panic(fmt.Sprintf("Failed to create k8s InClusterConfig: %v", err))
 	}
 	clientset, err := kubernetes.NewForConfig(config)
 	if err != nil {
@@ -91,7 +89,7 @@ func main() {
 	flexProvisioner, err := volume.NewFlexProvisioner(logger, remoteClient, ubiquityConfig)
 	if err != nil {
 		logger.Printf("Error starting provisioner: %v", err)
-		panic("Error starting ubiquity client")
+		panic("Error starting ubiquity provisioner")
 	}
 
 	// Start the provision controller which will dynamically provision Ubiquity PVs
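The net effect of the main.go change is that the provisioner no longer falls back to a KUBECONFIG file and can only start inside a pod. A minimal standalone sketch of that pattern, not the project's code:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// rest.InClusterConfig reads the pod's service-account token and the
	// KUBERNETES_SERVICE_HOST/PORT environment variables, so outside a
	// cluster it fails with rest.ErrNotInCluster.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(fmt.Sprintf("Failed to create k8s InClusterConfig: %v", err))
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(fmt.Sprintf("Failed to create clientset: %v", err))
	}

	fmt.Printf("API server: %s, clientset ready: %v\n", config.Host, clientset != nil)
}
```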