KubeEdge - 6: AI Collaboration Subproject - Sedna

What is Sedna

  • Sedna is an edge-cloud collaborative AI project incubated in KubeEdge SIG AI. Building on the edge-cloud collaboration capabilities provided by KubeEdge, Sedna enables collaborative training and collaborative inference across edge and cloud, such as joint inference, incremental learning, federated learning, and lifelong learning. Sedna supports popular AI frameworks such as TensorFlow, PyTorch, PaddlePaddle, and MindSpore.
  • Sedna can enable edge-cloud collaboration for existing training and inference scripts with only simple changes, bringing the benefits of lower cost, improved model performance, and data privacy protection.

Installing Sedna

Environment preparation

    1. 1 VM
    2. 2 CPUs (4 CPUs personally recommended)
    3. 2 GB+ memory (4 GB+ recommended)
    4. 10 GB+ free disk space
    5. Internet connection (Docker Hub, GitHub, etc.)
    6. Linux platform, such as Ubuntu/CentOS
    7. Docker 17.06+
  • Note: if an example seems to hang while running, first check the host's CPU and memory usage; this is why 4 CPUs and 4 GB+ of memory are recommended (a few quick check commands follow this list).
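  • A few quick host checks before you start (standard Linux/Docker commands, nothing Sedna-specific):
    nproc                                          # number of CPUs (4+ recommended)
    free -h                                        # memory (4 GB+ recommended)
    df -h /                                        # free disk space (10 GB+ recommended)
    docker version --format '{{.Server.Version}}'  # Docker 17.06+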

Installing the Sedna cluster

Environment:

    1. Kubernetes installed
    2. Kubernetes version >= 1.16
    3. KubeEdge version >= 1.8
    4. EdgeMesh deployed and installed (verification commands follow this list)
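  • A minimal sketch for verifying these requirements, assuming EdgeMesh was deployed to the default kubeedge namespace:
    kubectl version                   # Kubernetes server version should be >= 1.16
    kubectl get nodes -o wide         # the edge node joined via KubeEdge should be Ready
    kubectl get pods -n kubeedge      # EdgeMesh agent pods, if deployed to that namespace
    # On the edge node itself, check the KubeEdge version (>= 1.8):
    edgecore --version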

Installation when GitHub access is difficult

  • First, install via the manual setup described under "Normal installation" below.
  • If the script install fails or times out, it is mostly because pulling the YAML files from GitHub failed; simply download them in advance and move them to the expected location (a download sketch follows the modified function below).
    The files go into this directory:
    /opt/sedna/build/crds
    # YAML location: https://github.com/kubeedge/sedna/tree/main/build/crds
    # These are the main YAML files to pull; there is also a gm folder - if you cannot pull that either, handle it the same way
    sedna.io_datasets.yaml
    sedna.io_federatedlearningjobs.yaml
    sedna.io_incrementallearningjobs.yaml
    sedna.io_jointinferenceservices.yaml
    sedna.io_lifelonglearningjobs.yaml
    sedna.io_models.yaml
  • You need to manually download the install.sh script from https://raw.githubusercontent.com/kubeedge/sedna/main/scripts/installation/install.sh and modify its `download_yamls` function:
    download_yamls() {
      yaml_files=(
        sedna.io_datasets.yaml
        sedna.io_federatedlearningjobs.yaml
        sedna.io_incrementallearningjobs.yaml
        sedna.io_jointinferenceservices.yaml
        sedna.io_lifelonglearningjobs.yaml
        sedna.io_models.yaml
      )
      # Just comment out this line, then run the modified script with bash
      # _download_yamls build/crds
      yaml_files=(
        gm.yaml
      )
      _download_yamls build/gm/rbac
    }
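  • A sketch of the pre-download step described above, assuming this host can reach raw.githubusercontent.com (directly or via a proxy/mirror):
    # Fetch the CRD YAMLs into the directory install.sh expects
    mkdir -p /opt/sedna/build/crds
    cd /opt/sedna/build/crds
    for f in sedna.io_datasets.yaml \
             sedna.io_federatedlearningjobs.yaml \
             sedna.io_incrementallearningjobs.yaml \
             sedna.io_jointinferenceservices.yaml \
             sedna.io_lifelonglearningjobs.yaml \
             sedna.io_models.yaml; do
      wget "https://raw.githubusercontent.com/kubeedge/sedna/main/build/crds/${f}"
    done
    # gm.yaml is pulled from build/gm/rbac by the function above and can be prepared the same way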

Normal installation

One-line install:

curl https://raw.githubusercontent.com/kubeedge/sedna/main/scripts/installation/install.sh | SEDNA_ACTION=create bash -

Manual setup install (download the script from the link below and run it locally; a sketch follows the link):

https://raw.githubusercontent.com/kubeedge/sedna/main/scripts/installation/install.sh 
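  • A minimal sketch of the manual route, assuming the upstream script's SEDNA_ACTION environment variable (create/delete) and the default sedna namespace:
    # Download the script so it can be edited (e.g. the download_yamls change above)
    wget https://raw.githubusercontent.com/kubeedge/sedna/main/scripts/installation/install.sh
    # Install the Sedna components (SEDNA_ACTION=delete removes them again)
    SEDNA_ACTION=create bash install.sh
    # Verify that the Sedna control components are running
    kubectl get pods -n sedna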

Running the "Using Joint Inference Service in Helmet Detection Scenario" Demo

  • Basically just follow the official guide; the only caveat is that when you Create joint inference service, you need to add dnsPolicy: ClusterFirstWithHostNet to the cloudWorker pod spec, as highlighted in the manifest below (a usage sketch follows the manifest):
    apiVersion: sedna.io/v1alpha1
    kind: JointInferenceService
    metadata:
      name: helmet-detection-inference-example
      namespace: default
    spec:
      edgeWorker:
        model:
          name: "helmet-detection-inference-little-model"
        hardExampleMining:
          name: "IBT"
          parameters:
            - key: "threshold_img"
              value: "0.9"
            - key: "threshold_box"
              value: "0.9"
        template:
          spec:
            nodeName: $EDGE_NODE
            containers:
              - image: kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0
                imagePullPolicy: IfNotPresent
                name: little-model
                env: # user defined environments
                  - name: input_shape
                    value: "416,736"
                  - name: "video_url"
                    value: "rtsp://localhost/video"
                  - name: "all_examples_inference_output"
                    value: "/data/output"
                  - name: "hard_example_cloud_inference_output"
                    value: "/data/hard_example_cloud_inference_output"
                  - name: "hard_example_edge_inference_output"
                    value: "/data/hard_example_edge_inference_output"
                resources: # user defined resources
                  requests:
                    memory: 64M
                    cpu: 100m
                  limits:
                    memory: 2Gi
                volumeMounts:
                  - name: outputdir
                    mountPath: /data/
            volumes: # user defined volumes
              - name: outputdir
                hostPath:
                  # user must create the directory in host
                  path: /joint_inference/output
                  type: Directory

      cloudWorker:
        model:
          name: "helmet-detection-inference-big-model"
        template:
          spec:
            nodeName: $CLOUD_NODE
            dnsPolicy: ClusterFirstWithHostNet # <----------- LOOK AT HERE!!!
            containers:
              - image: kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0
                name: big-model
                imagePullPolicy: IfNotPresent
                env: # user defined environments
                  - name: "input_shape"
                    value: "544,544"
                resources: # user defined resources
                  requests:
                    memory: 2Gi
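  • A short usage sketch for the manifest above (the file name joint_inference_service.yaml is just an example; substitute your real node names for $EDGE_NODE and $CLOUD_NODE before applying):
    # On the edge node: the hostPath directory must exist before the pod starts
    mkdir -p /joint_inference/output
    # On the cloud side: create the service and check its status
    kubectl create -f joint_inference_service.yaml
    kubectl get jointinferenceservices.sedna.io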