How to install Kubeflow on Ubuntu 18.04

Moon-Kee Bahk(박문기)
10 min read · Oct 3, 2020

[How to install kubeflow on ubuntu 18.04 with minikube(kubernetes) and Docker]

[Note]
As of October 3, 2020, kfctl v1.1.0 has been released at https://github.com/kubeflow/kfctl/releases, and I first tried installing that version, but the istio, ml-pipeline, mysql, and related pods stayed stuck in an Init state, so I gave up for now and downgraded to v1.0. I will update this document once v1.1.0 or a newer version installs correctly.

The following describes how to install Kubeflow in a single-machine environment.

[Installation environment]

OS: Ubuntu 18.04 LTS
~# uname -a
Linux GC8440_106-246-237-171 5.4.0-48-generic #52~18.04.1-Ubuntu SMP Thu Sep 10 12:50:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Docker: 19.03.13
~# docker --version
Docker version 19.03.13, build 4484c46d9

Kubernetes: v1.15.0
~# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

minikube: v1.2.0
~# minikube version
minikube version: v1.2.0

[Caution]
All commands below were run with superuser (root) privileges.

[Update the operating system to the latest packages]

apt-get update -y && apt-get dist-upgrade -y
apt-get autoremove -y
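
If the upgrade pulled in a new kernel, the machine may need a reboot before continuing; on Ubuntu this can be checked with a flag file (a minimal sketch):

# Ubuntu creates this flag file when an installed update requires a reboot
if [ -f /var/run/reboot-required ]; then
  echo "Reboot required by:"
  cat /var/run/reboot-required.pkgs
fi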

[Install Docker]

On my first attempt I installed the docker.io package, but it did not integrate smoothly with the other components, so I installed docker-ce instead.

apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io

After the installation completes, run the command below to check that the Docker environment is working properly.

~# docker run hello-world

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
Digest: sha256:4cf9c47f86df71d48364001ede3a4fcd85ae80ce02ebad74156906caff5378bc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/

~# docker images | grep hello

hello-world latest bf756fb1ae65 9 months ago 13.3kB
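
The docker-ce package normally starts the daemon and enables it at boot automatically; if you want to be sure it comes back after a reboot, the following should work on Ubuntu 18.04 (a sketch):

systemctl enable --now docker        # start now and enable at boot
systemctl status docker --no-pager   # confirm the daemon is active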

[Install Kubernetes (kubectl)]

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl

Run the command below to verify the installation.

~# kubectl version

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
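
Optionally, shell completion makes kubectl much easier to use interactively; a minimal sketch for bash (assumes the bash-completion package):

apt-get install -y bash-completion
kubectl completion bash > /etc/bash_completion.d/kubectl
# open a new shell (or source the file above) for completion to take effect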

[Note]
Before installing with minikube, I first tried MicroK8s, the simplest installation method, following the official Ubuntu documentation at https://ubuntu.com/kubeflow/install, but it failed.

[Install minikube]

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.2.0/minikube-linux-amd64

chmod +x minikube
cp minikube /usr/local/bin/
rm minikube

To verify the installation, run:

~# minikube status

host: Stopped
kubelet:
apiserver:
kubectl:
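
Before starting the cluster, it is worth checking how many CPUs, how much memory, and how much disk the host actually has, so the start options below can be sized accordingly (a sketch):

nproc                        # number of CPU cores
free -g                      # memory in GiB
df -h /var/lib/docker /mnt   # free disk space for images and the PVs created later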

Run the command below to bring up a Kubernetes environment with minikube. It takes about two minutes.

minikube start --vm-driver=none --cpus 20 --memory 256000 --disk-size=120g --extra-config=apiserver.authorization-mode=RBAC --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf --extra-config=kubeadm.ignore-preflight-errors=SystemVerification

Adjust the --cpus, --memory, and --disk-size options to match your environment. Messages like the following will be printed.

* minikube v1.2.0 on linux (amd64)
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Restarting existing none VM for "minikube" …
* Waiting for SSH access …
* Configuring environment for Kubernetes v1.15.0 on Docker 19.03.13
  - apiserver.authorization-mode=RBAC
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
  - kubeadm.ignore-preflight-errors=SystemVerification
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Relaunching Kubernetes v1.15.0 using kubeadm …
* Configuring local host environment …

! The 'none' driver provides limited isolation and may reduce system security and reliability.
! For more information, see:
https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md

! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may
! need to relocate them. For example, to overwrite your own settings:

  - sudo mv /root/.kube /root/.minikube $HOME
  - sudo chown -R $USER $HOME/.kube $HOME/.minikube

* This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
* Verifying: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"

To confirm everything is running, try the command below.

~# kubectl get pods -n kube-system

kube-system coredns-5c98db65d4-5wd65 1/1 Running 1 73m
kube-system coredns-5c98db65d4-bvbxl 1/1 Running 1 73m
kube-system etcd-minikube 1/1 Running 1 72m
kube-system kube-addon-manager-minikube 1/1 Running 1 74m
kube-system kube-apiserver-minikube 1/1 Running 1 72m
kube-system kube-controller-manager-minikube 1/1 Running 1 72m
kube-system kube-proxy-tjjkc 1/1 Running 1 73m
kube-system kube-scheduler-minikube 1/1 Running 1 73m

<Related commands>
To stop the Kubernetes service temporarily:

~# minikube stop

* Stopping "minikube" in none …
* "minikube" stopped

To wipe all settings and remove the Kubernetes environment:

~# minikube delete

[Install Kubeflow (kfctl)]

Here we install kfctl v1.0.2. Once you have learned the installation procedure from this document, try installing a newer release later from https://github.com/kubeflow/kfctl/releases.

mkdir -p /root/kubeflow/v1.0.2
cd /root/kubeflow/v1.0.2
wget https://github.com/kubeflow/kfctl/releases/download/v1.0.2/kfctl_v1.0.2-0-ga476281_linux.tar.gz
tar -xvf kfctl_v1.0.2-0-ga476281_linux.tar.gz
mv kfctl /usr/bin

Run the command below to verify the installation.

kfctl version

kfctl v1.0.2-0-ga476281

[PV storage that Kubernetes provides to Kubeflow]

Most installation guides published on the internet omit this part (or assume it is already provisioned in Kubernetes by default), so katib-mysql, metadata-mysql, minio-pv-claim, mysql-pv-claim, and so on later remain in the Pending state and the installation stalls. I include the steps below for readers who are not yet familiar with Docker or Kubernetes.

Create the following .yaml file using whichever editor you prefer (vi, vim, nano, etc.).

nano sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate

Use the commands below to create the StorageClass, verify it, and mark it as the default StorageClass.

kubectl apply -f sc.yaml
kubectl get sc
kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl describe sc

The output should look similar to the following.

~# kubectl get sc
NAME PROVISIONER AGE
local-storage (default) kubernetes.io/no-provisioner 86m

~# kubectl describe sc
Name: local-storage
IsDefaultClass: Yes

Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"Immediate"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
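
A quick way to double-check that the default-class annotation really took effect (a sketch):

kubectl get sc local-storage -o yaml | grep is-default-class
# expected output: storageclass.kubernetes.io/is-default-class: "true"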

Mount a disk with plenty of free space under /mnt and create about five PVs on it.

First, create the directories.

sudo mkdir /mnt/pv{1..5}

Use your editor to create the following .yaml file.

nano pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume1
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pv1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume2
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pv2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume3
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pv3"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume4
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pv4"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume5
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pv5"
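
Since the five manifests differ only in the volume name, size, and host path, the same pv.yaml could also be generated with a short shell loop instead of being typed by hand; a sketch assuming the sizes used above:

# generate pv.yaml for pv-volume1..5 (10Gi for 1-3, 20Gi for 4-5)
> pv.yaml
for i in 1 2 3 4 5; do
  size=10Gi; [ "$i" -ge 4 ] && size=20Gi
  cat >> pv.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume${i}
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: ${size}
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/pv${i}"
---
EOF
done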

Apply it with the command below.

kubectl apply -f pv.yaml

Check the result with the command below. By default four of the PVs end up in use and one remains available.

kubectl get pv

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-volume1 10Gi RWO Retain Bound kubeflow/metadata-mysql local-storage 91m
pv-volume2 10Gi RWO Retain Bound kubeflow/katib-mysql local-storage 91m
pv-volume3 10Gi RWO Retain Available local-storage 91m
pv-volume4 20Gi RWO Retain Bound kubeflow/minio-pv-claim local-storage 91m
pv-volume5 20Gi RWO Retain Bound kubeflow/mysql-pv-claim local-storage 91m

For reference, once all the Kubeflow services have been created you can run the command below to see which Kubeflow service uses which persistent volume. The binding between each service and its storage should be confirmed.

~# kubectl -n kubeflow get pvc

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
katib-mysql Bound pv-volume2 10Gi RWO local-storage 84m
metadata-mysql Bound pv-volume1 10Gi RWO local-storage 84m
minio-pv-claim Bound pv-volume4 20Gi RWO local-storage 84m
mysql-pv-claim Bound pv-volume5 20Gi RWO local-storage 84m
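
If any of these claims stay in Pending instead of Bound, the events on the claim usually explain why (for example, no Available PV of a matching size); a sketch:

kubectl -n kubeflow get pvc | grep -v Bound   # list claims that are not Bound
kubectl -n kubeflow describe pvc katib-mysql  # check the Events section
kubectl get pv                                # confirm Available PVs of the right size exist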

[Install Kubeflow with the kfctl command]

Export the environment variables.

export PATH=$PATH:/root/kubeflow/v1.0.2
export KF_NAME=mzc-kubeflow
export BASE_DIR=/root/kubeflow/v1.0.2
export KF_DIR=${BASE_DIR}/${KF_NAME}
export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/v1.0.0-branch/kfdef/kfctl_k8s_istio.v1.0.0.yaml"

Using these environment variables, create the working directory, then use the installation .yaml to create the many pods on Docker and Kubernetes that make up the Kubeflow services.

mkdir -p ${KF_DIR}
cd ${KF_DIR}
kfctl apply -V -f ${CONFIG_URI}

Installation took about 20 minutes, although this will vary with your system's performance. While it runs, execute the command below in another terminal to watch the progress.

watch -n 5 kubectl get pods --all-namespaces

Every 1.0s: kubectl get pods --all-namespaces GC8440_106-246-237-171: Sat Oct 3 20:58:04 2020

NAMESPACE NAME READY STATUS RESTARTS AGE
cert-manager cert-manager-5d849b9888-dbgx4 1/1 Running 1 98m
cert-manager cert-manager-cainjector-dccb4d7f-z46gb 1/1 Running 1 98m
cert-manager cert-manager-webhook-695df7dbb-pf8r4 1/1 Running 1 98m
istio-system cluster-local-gateway-7bf56777fb-d8f2g 1/1 Running 1 98m
istio-system grafana-86f89dbd84-gd6zm 1/1 Running 1 98m
istio-system istio-citadel-74966f47d6-kvgfb 1/1 Running 1 98m
istio-system istio-cleanup-secrets-1.1.6-ljcgl 0/1 Completed 0 98m
istio-system istio-egressgateway-5c64d575bc-gsqrw 1/1 Running 1 98m
istio-system istio-galley-784b9f6d75-hsqs8 1/1 Running 1 98m
istio-system istio-grafana-post-install-1.1.6-f29qc 0/1 Completed 0 98m
istio-system istio-ingressgateway-589ff776dd-xt72s 1/1 Running 1 98m
istio-system istio-pilot-677df6b6d4-md4r5 2/2 Running 2 98m
istio-system istio-policy-6f74d9d95d-fvkxf 2/2 Running 8 98m
istio-system istio-security-post-install-1.1.6-nr7hh 0/1 Completed 0 98m
istio-system istio-sidecar-injector-866f4b98c7-zxjz2 1/1 Running 1 98m
istio-system istio-telemetry-549c8f9dcb-9ngtq 2/2 Running 7 98m
istio-system istio-tracing-555cf644d-srlwc 1/1 Running 1 98m
istio-system kiali-7db44d6dfb-wq8cq 1/1 Running 1 98m
istio-system prometheus-d44645598-k9z9w 1/1 Running 1 98m
knative-serving activator-6dc4884-spv8r 2/2 Running 6 95m
knative-serving autoscaler-69bcc99c79-6zpnx 2/2 Running 7 95m
knative-serving autoscaler-hpa-68cc87bfb9-lg6q6 1/1 Running 1 95m
knative-serving controller-95dc7f8bd-p62t8 1/1 Running 1 95m
knative-serving networking-istio-5b8c5c6cff-kzbt7 1/1 Running 1 95m
knative-serving webhook-67847fb4b5-zmx75 1/1 Running 1 95m
kube-system coredns-5c98db65d4-5wd65 1/1 Running 1 111m
kube-system coredns-5c98db65d4-bvbxl 1/1 Running 1 111m
kube-system etcd-minikube 1/1 Running 1 110m
kube-system kube-addon-manager-minikube 1/1 Running 1 111m
kube-system kube-apiserver-minikube 1/1 Running 1 110m
kube-system kube-controller-manager-minikube 1/1 Running 1 110m
kube-system kube-proxy-tjjkc 1/1 Running 1 111m
kube-system kube-scheduler-minikube 1/1 Running 1 110m
kubeflow admission-webhook-bootstrap-stateful-set-0 1/1 Running 1 95m
kubeflow admission-webhook-deployment-569558c8b6-wqxjc 1/1 Running 0 39m
kubeflow application-controller-stateful-set-0 1/1 Running 1 98m
kubeflow argo-ui-7ffb9b6577-zl5rq 1/1 Running 1 95m
kubeflow centraldashboard-659bd78c-866bs 1/1 Running 1 95m
kubeflow jupyter-web-app-deployment-679d5f5dc4-hwxh6 1/1 Running 1 95m
kubeflow katib-controller-7f58569f7d-f6nx5 1/1 Running 2 95m
kubeflow katib-db-manager-54b66f9f9d-lczdt 1/1 Running 2 95m
kubeflow katib-mysql-dcf7dcbd5-6vwf4 1/1 Running 1 95m
kubeflow katib-ui-6f97756598-w9jjt 1/1 Running 1 95m
kubeflow kfserving-controller-manager-0 2/2 Running 3 95m
kubeflow metacontroller-0 1/1 Running 1 95m
kubeflow metadata-db-65fb5b695d-2mpsq 1/1 Running 1 95m
kubeflow metadata-deployment-65ccddfd4c-5wp9x 1/1 Running 1 95m
kubeflow metadata-envoy-deployment-7754f56bff-kgmg7 1/1 Running 1 95m
kubeflow metadata-grpc-deployment-75f9888cbf-rnwcg 1/1 Running 3 95m
kubeflow metadata-ui-7c85545947-qm7km 1/1 Running 1 95m
kubeflow minio-69b4676bb7-pvthq 1/1 Running 1 95m
kubeflow ml-pipeline-5cddb75848-72fgk 1/1 Running 1 95m
kubeflow ml-pipeline-ml-pipeline-visualizationserver-7f6fcb68c8-9fl7n 1/1 Running 1 95m
kubeflow ml-pipeline-persistenceagent-6ff9fb86dc-gmgpr 1/1 Running 3 95m
kubeflow ml-pipeline-scheduledworkflow-7f84b54646-lkhs7 1/1 Running 1 95m
kubeflow ml-pipeline-ui-6758f58868-6zhlq 1/1 Running 1 95m
kubeflow ml-pipeline-viewer-controller-deployment-745dbb444d-rnws6 1/1 Running 1 95m
kubeflow mysql-6bcbfbb6b8-pnqtr 1/1 Running 1 95m
kubeflow notebook-controller-deployment-5c55f5845b-z82b7 1/1 Running 1 95m
kubeflow profiles-deployment-78f694bffb-62dvc 2/2 Running 2 95m
kubeflow pytorch-operator-cf8c5c497-wr9zk 1/1 Running 1 95m
kubeflow seldon-controller-manager-6b4b969447-8bczw 1/1 Running 1 95m
kubeflow spark-operatorcrd-cleanup-kmk5t 0/2 Completed 0 95m
kubeflow spark-operatorsparkoperator-76dd5f5688-cl5rw 1/1 Running 1 95m
kubeflow spartakus-volunteer-5dc96f4447-n4brp 1/1 Running 1 95m
kubeflow tensorboard-5f685f9d79-lmcwf 1/1 Running 1 95m
kubeflow tf-job-operator-5fb85c5fb7-qnmfj 1/1 Running 1 95m
kubeflow workflow-controller-689d6c8846-wz8jh 1/1 Running 1 95m

All of the Kubernetes and Kubeflow services should be in the Running or Completed state.
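
To list only the pods that are not yet healthy (Completed pods report the Succeeded phase), something like the following can be used (a sketch):

kubectl get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded
kubectl -n kubeflow describe pod <pod-name>   # the Events section explains why a pod is stuck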

[Access the management dashboard]

Run the command below and check the istio-ingressgateway ports, 80:31380/TCP and 443:31390/TCP. Port 31380 is the one you will use to connect from outside via the public IP.

~# kubectl get service -n istio-system

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cluster-local-gateway ClusterIP 10.111.252.88 <none> 80/TCP,443/TCP,31400/TCP,15011/TCP,8060/TCP,15029/TCP,15030/TCP,15031/TCP,15032/TCP 99m
grafana ClusterIP 10.102.198.32 <none> 3000/TCP 99m
istio-citadel ClusterIP 10.100.8.228 <none> 8060/TCP,15014/TCP 99m
istio-egressgateway ClusterIP 10.102.187.43 <none> 80/TCP,443/TCP,15443/TCP 99m
istio-galley ClusterIP 10.96.251.122 <none> 443/TCP,15014/TCP,9901/TCP 99m
istio-ingressgateway NodePort 10.96.63.167 <none> 15020:31462/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32235/TCP,15030:31184/TCP,15031:32031/TCP,15032:30239/TCP,15443:31355/TCP 99m
istio-pilot ClusterIP 10.106.117.168 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 99m
istio-policy ClusterIP 10.110.199.184 <none> 9091/TCP,15004/TCP,15014/TCP 99m
istio-sidecar-injector ClusterIP 10.110.161.15 <none> 443/TCP 99m
istio-telemetry ClusterIP 10.107.127.53 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 99m
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 99m
jaeger-collector ClusterIP 10.104.13.82 <none> 14267/TCP,14268/TCP 99m
jaeger-query ClusterIP 10.96.60.230 <none> 16686/TCP 99m
kiali ClusterIP 10.110.114.176 <none> 20001/TCP 99m
prometheus ClusterIP 10.98.24.55 <none> 9090/TCP 99m
tracing ClusterIP 10.107.58.31 <none> 80/TCP 99m
zipkin ClusterIP 10.96.241.25 <none> 9411/TCP

Connect with a browser to the host's public IP (not the NodePort or ClusterIP) on port 31380.

http://<public IP>:31380/

On the initial setup screen, select the anonymous namespace and click Finish.
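
If port 31380 cannot be reached from outside (for example, because of a firewall), kubectl port-forward can expose the same gateway on a local port instead; a sketch:

# forward local port 8080 to port 80 of the istio ingress gateway
kubectl -n istio-system port-forward svc/istio-ingressgateway 8080:80
# then browse to http://localhost:8080/ (or tunnel the port over SSH)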

[Resetting everything]

In most cases you will need this when you want to change the Kubeflow version or delete all of your work; follow the steps below.

(Reinstall with a different Kubeflow version)

minikube stop

docker rmi -f $(docker image ls -aq)   # remove all local container images

rm -rf /root/.kube

rm -rf /root/kubeflow

rm -rf /root/.minikube

After running the commands above, repeat the steps from "[Install minikube]" onward.
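
If you only want to remove the Kubeflow deployment while keeping the minikube cluster, kfctl also provides a delete subcommand; a sketch assuming the same KF_DIR and CONFIG_URI exported earlier:

cd ${KF_DIR}
kfctl delete -V -f ${CONFIG_URI}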

(Reset everything)

Perform the "(Reinstall with a different Kubeflow version)" steps directly above, and then:

rm /usr/bin/kfctl
rm /usr/local/bin/kubectl
apt-get remove docker-ce docker-ce-cli containerd.io -y

[Post-installation notes]

  • Pipelines were confirmed to work correctly.
  • The Jupyter notebook server does not work properly yet.

I will update this document with further working/not-working status in the future.

Thank you.
