{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "7a982d9b", "metadata": {}, "source": [ "# Using FATE Built-In Dataset\n", "\n", "In FATE-1.10, three data sets of table, nlp_tokenizer and image are provided to meet the basic needs of table data, text data and image data" ] }, { "attachments": {}, "cell_type": "markdown", "id": "0a57e6ec", "metadata": {}, "source": [ "## TableDataset\n", "\n", "TableDataset is provided under [table.py](../../../../python/federatedml/nn/dataset/table.py), which is used to process data in csv format, and will automatically parse the id and label from the data. Here is some source code to understand the use of this dataset class:" ] }, { "cell_type": "code", "execution_count": 2, "id": "e80fa81b", "metadata": {}, "outputs": [], "source": [ "class TableDataset(Dataset):\n", "\n", " \"\"\"\n", " A Table Dataset, load data from a give csv path, or transform FATE DTable\n", "\n", " Parameters\n", " ----------\n", " label_col str, name of label column in csv, if None, will automatically take 'y' or 'label' or 'target' as label\n", " feature_dtype dtype of feature, supports int, long, float, double\n", " label_dtype: dtype of label, supports int, long, float, double\n", " label_shape: list or tuple, the shape of label\n", " flatten_label: bool, flatten extracted label column or not, default is False\n", " \"\"\"\n", "\n", " def __init__(\n", " self,\n", " label_col=None,\n", " feature_dtype='float',\n", " label_dtype='float',\n", " label_shape=None,\n", " flatten_label=False):" ] }, { "attachments": {}, "cell_type": "markdown", "id": "78b44382", "metadata": {}, "source": [ "### TokenizerDataset\n", "\n", "TokenizerDataset is provided under [nlp_tokenizer.py](../../../../python/federatedml/nn/dataset/nlp_tokenizer.py), which is developed based on Transformer's BertTokenizer, which can read strings from csv, and at the same time automatically segment the text and convert it into word ids." 
] }, { "cell_type": "code", "execution_count": null, "id": "1c41c8b6", "metadata": {}, "outputs": [], "source": [ "class TokenizerDataset(Dataset):\n", " \"\"\"\n", " A Dataset for basic NLP tasks; it automatically transforms raw text into word indices\n", " using BertTokenizer from the transformers library,\n", " see https://huggingface.co/docs/transformers/model_doc/bert?highlight=berttokenizer for details of BertTokenizer\n", "\n", " Parameters\n", " ----------\n", " truncation: bool, truncate word sequence to 'text_max_length'\n", " text_max_length: int, max length of word sequences\n", " tokenizer_name_or_path: str, name of bert tokenizer (see transformers official for details) or path to local\n", " transformer tokenizer folder\n", " return_label: bool, return label or not, this option is for host dataset, when running hetero-NN\n", " \"\"\"\n", "\n", " def __init__(self, truncation=True, text_max_length=128,\n", " tokenizer_name_or_path=\"bert-base-uncased\",\n", " return_label=True):" ] }, { "attachments": {}, "cell_type": "markdown", "id": "6b348177", "metadata": {}, "source": [ "## ImageDataset\n", "\n", "ImageDataset is provided under [image.py](../../../../python/federatedml/nn/dataset/image.py). It offers simple processing for image data and is built on torchvision's ImageFolder. Its parameters are shown below:" ] }, { "cell_type": "code", "execution_count": null, "id": "571eed05", "metadata": {}, "outputs": [], "source": [ "class ImageDataset(Dataset):\n", "\n", " \"\"\"\n", "\n", " A basic Image Dataset built on pytorch ImageFolder, supports simple image transform\n", " Given a folder path, ImageDataset will load images from this folder; images in this\n", " folder need to be organized in a Torch-ImageFolder format, see\n", " https://pytorch.org/vision/main/generated/torchvision.datasets.ImageFolder.html for details.\n", "\n", " Image name will be automatically taken as the sample id.\n", "\n", " Parameters\n", " ----------\n", " center_crop: bool, use center crop transformer\n", " center_crop_shape: tuple or list\n", " generate_id_from_file_name: bool, whether to take image name as sample id\n", " file_suffix: str, default is '.jpg', if generate_id_from_file_name is True, will remove this suffix from file name,\n", " result will be the sample id\n", " return_label: bool, return label or not, this option is for host dataset, when running hetero-NN\n", " float64: bool, returned image tensors will be transformed to double precision\n", " label_dtype: str, long, float, or double, the dtype of returned label\n", " \"\"\"\n", "\n", " def __init__(self, center_crop=False, center_crop_shape=None,\n", " generate_id_from_file_name=True, file_suffix='.jpg',\n", " return_label=True, float64=False, label_dtype='long'):" ] }, { "attachments": {}, "cell_type": "markdown", "id": "168c6b20", "metadata": {}, "source": [ "## Use Built-In Dataset\n", "\n", "Using a built-in FATE dataset is exactly the same as using a user-customized dataset. 
As an example, here we again solve the MNIST handwritten-digit recognition task, this time using the built-in image dataset and a new model with conv layers.\n", "\n", "If you don't have the MNIST dataset, you can refer to the previous tutorial to download it:\n", " - [Customize your Dataset](Homo-NN-Customize-your-Dataset.ipynb)" ] }, { "cell_type": "code", "execution_count": 1, "id": "a0000f40", "metadata": {}, "outputs": [], "source": [ "from federatedml.nn.dataset.image import ImageDataset" ] }, { "cell_type": "code", "execution_count": 2, "id": "28fcd702", "metadata": {}, "outputs": [], "source": [ "! ls ../examples/data/mnist/ " ] }, { "cell_type": "code", "execution_count": 5, "id": "8bf4d85e", "metadata": {}, "outputs": [], "source": [ "dataset = ImageDataset()\n", "dataset.load('../../../../examples/data/mnist/') " ] }, { "cell_type": "code", "execution_count": 6, "id": "a558125e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1309" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(dataset)" ] }, { "cell_type": "code", "execution_count": 7, "id": "8fdd37eb", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(tensor([[[0.0000, 0.0275, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " [0.0118, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " ...,\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],\n", " \n", " [[0.0000, 0.0275, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " [0.0118, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " ...,\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]],\n", " \n", " [[0.0000, 0.0275, 0.0000, ..., 0.0000, 
0.0000, 0.0000],\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " [0.0118, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " ...,\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n", " [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]]]),\n", " tensor(3))" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dataset[400] " ] }, { "cell_type": "code", "execution_count": 35, "id": "898046e0", "metadata": {}, "outputs": [], "source": [ "from torch import nn\n", "import torch as t\n", "from torch.nn import functional as F\n", "from pipeline.component.nn.backend.torch.operation import Flatten\n", "\n", "# a new model with conv layers; it works with our ImageDataset\n", "model = t.nn.Sequential(\n", " nn.Conv2d(in_channels=3, out_channels=12, kernel_size=5),\n", " nn.MaxPool2d(kernel_size=3),\n", " nn.Conv2d(in_channels=12, out_channels=12, kernel_size=3),\n", " nn.AvgPool2d(kernel_size=3),\n", " Flatten(start_dim=1),\n", " nn.Linear(48, 32),\n", " nn.ReLU(),\n", " nn.Linear(32, 10),\n", " nn.Softmax(dim=1)\n", " )\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "0311ed01", "metadata": {}, "source": [ "## Local Test\n", "\n", "**When testing locally, all federation steps are skipped and the model will not perform federated averaging.**" ] }, { "cell_type": "code", "execution_count": 36, "id": "c53366f3", "metadata": {}, "outputs": [], "source": [ "from federatedml.nn.homo.trainer.fedavg_trainer import FedAVGTrainer\n", "trainer = FedAVGTrainer(epochs=5, batch_size=256, shuffle=True, data_loader_worker=8, pin_memory=False) # trainer parameters\n", "trainer.set_model(model)" ] }, { "cell_type": "code", "execution_count": 37, "id": "711ef7fa", "metadata": {}, "outputs": [], "source": [ "trainer.local_mode() " ] }, { "cell_type": "code", "execution_count": 38, "id": "0d65f9b8", "metadata": {}, "outputs": [ { "name": "stderr", 
"output_type": "stream", "text": [ "epoch is 0\n", "100%|██████████| 6/6 [00:00<00:00, 7.49it/s]\n", "epoch loss is 2.6923995983336515\n", "epoch is 1\n", "100%|██████████| 6/6 [00:00<00:00, 7.78it/s]\n", "epoch loss is 2.636708398735915\n", "epoch is 2\n", "100%|██████████| 6/6 [00:00<00:00, 7.75it/s]\n", "epoch loss is 2.4953262410699364\n", "epoch is 3\n", "100%|██████████| 6/6 [00:00<00:00, 7.79it/s]\n", "epoch loss is 2.3616474521715647\n", "epoch is 4\n", "100%|██████████| 6/6 [00:00<00:00, 8.26it/s]\n", "epoch loss is 2.2441106669496635\n" ] } ], "source": [ "optimizer = t.optim.Adam(model.parameters(), lr=0.01)\n", "loss = t.nn.CrossEntropyLoss()\n", "trainer.train(train_set=dataset,optimizer=optimizer, loss=loss)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "e08ed729", "metadata": {}, "source": [ "It can work, now good to go to federated task!" ] }, { "attachments": {}, "cell_type": "markdown", "id": "413aefa9", "metadata": {}, "source": [ "## A Homo-NN Task with Built-in Dataset" ] }, { "cell_type": "code", "execution_count": 27, "id": "1518af62", "metadata": {}, "outputs": [], "source": [ "import torch as t\n", "from torch import nn\n", "from pipeline import fate_torch_hook\n", "from pipeline.component import HomoNN\n", "from pipeline.backend.pipeline import PipeLine\n", "from pipeline.component import Reader, Evaluation, DataTransform\n", "from pipeline.interface import Data, Model\n", "\n", "t = fate_torch_hook(t)\n" ] }, { "cell_type": "code", "execution_count": 28, "id": "d900c35a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'namespace': 'experiment', 'table_name': 'mnist_host'}" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import os\n", "# bind data path to name & namespace\n", "fate_project_path = os.path.abspath('../../../../')\n", "host = 10000\n", "guest = 9999\n", "arbiter = 10000\n", "pipeline = PipeLine().set_initiator(role='guest', 
party_id=guest).set_roles(guest=guest, host=host,\n", " arbiter=arbiter)\n", "\n", "data_0 = {\"name\": \"mnist_guest\", \"namespace\": \"experiment\"}\n", "data_1 = {\"name\": \"mnist_host\", \"namespace\": \"experiment\"}\n", "\n", "data_path_0 = fate_project_path + '/examples/data/mnist'\n", "data_path_1 = fate_project_path + '/examples/data/mnist'\n", "pipeline.bind_table(name=data_0['name'], namespace=data_0['namespace'], path=data_path_0)\n", "pipeline.bind_table(name=data_1['name'], namespace=data_1['namespace'], path=data_path_1)" ] }, { "cell_type": "code", "execution_count": 29, "id": "d3af79ff", "metadata": {}, "outputs": [], "source": [ "# define the reader\n", "reader_0 = Reader(name=\"reader_0\")\n", "reader_0.get_party_instance(role='guest', party_id=guest).component_param(table=data_0)\n", "reader_0.get_party_instance(role='host', party_id=host).component_param(table=data_1)" ] }, { "cell_type": "code", "execution_count": 39, "id": "de9917a7", "metadata": {}, "outputs": [], "source": [ "from pipeline.component.homo_nn import DatasetParam, TrainerParam \n", "\n", "# a new model with conv layers; it works with our ImageDataset\n", "model = t.nn.Sequential(\n", " nn.Conv2d(in_channels=3, out_channels=12, kernel_size=5),\n", " nn.MaxPool2d(kernel_size=3),\n", " nn.Conv2d(in_channels=12, out_channels=12, kernel_size=3),\n", " nn.AvgPool2d(kernel_size=3),\n", " Flatten(start_dim=1),\n", " nn.Linear(48, 32),\n", " nn.ReLU(),\n", " nn.Linear(32, 10),\n", " nn.Softmax(dim=1)\n", " )\n", "\n", "nn_component = HomoNN(name='nn_0',\n", " model=model, # model\n", " loss=t.nn.CrossEntropyLoss(), # loss\n", " optimizer=t.optim.Adam(model.parameters(), lr=0.01), # optimizer\n", " dataset=DatasetParam(dataset_name='image', label_dtype='long'), # dataset\n", " trainer=TrainerParam(trainer_name='fedavg_trainer', epochs=2, batch_size=1024, validation_freqs=1),\n", " torch_seed=100 # random seed\n", " )" ] }, { "cell_type": "code", "execution_count": 40, "id": "62361f34", 
"metadata": {}, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 40, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pipeline.add_component(reader_0)\n", "pipeline.add_component(nn_component, data=Data(train_data=reader_0.output.data))\n", "pipeline.add_component(Evaluation(name='eval_0', eval_type='multi'), data=Data(data=nn_component.output.data))" ] }, { "cell_type": "code", "execution_count": 46, "id": "1fa46219", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "\u001b[32m2022-12-19 17:31:15.709\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m83\u001b[0m - \u001b[1mJob id is 202212191731149354320\n", "\u001b[0m\n", "\u001b[32m2022-12-19 17:31:15.732\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m98\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KJob is still waiting, time elapse: 0:00:00\u001b[0m\n", "\u001b[0mm2022-12-19 17:31:16.813\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m125\u001b[0m - \u001b[1m\n", "\u001b[32m2022-12-19 17:31:16.815\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component reader_0, time elapse: 0:00:01\u001b[0m\n", "\u001b[32m2022-12-19 17:31:17.847\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component reader_0, time elapse: 0:00:02\u001b[0m\n", "\u001b[32m2022-12-19 17:31:18.874\u001b[0m | \u001b[1mINFO \u001b[0m | 
\u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component reader_0, time elapse: 0:00:03\u001b[0m\n", "\u001b[32m2022-12-19 17:31:19.944\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component reader_0, time elapse: 0:00:04\u001b[0m\n", "\u001b[32m2022-12-19 17:31:20.978\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component reader_0, time elapse: 0:00:05\u001b[0m\n", "\u001b[32m2022-12-19 17:31:22.021\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component reader_0, time elapse: 0:00:06\u001b[0m\n", "\u001b[32m2022-12-19 17:31:23.072\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component reader_0, time elapse: 0:00:07\u001b[0m\n", "\u001b[32m2022-12-19 17:31:24.114\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component reader_0, time elapse: 0:00:08\u001b[0m\n", "\u001b[32m2022-12-19 17:31:25.144\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component reader_0, time elapse: 0:00:09\u001b[0m\n", "\u001b[0mm2022-12-19 17:31:27.250\u001b[0m | 
\u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m125\u001b[0m - \u001b[1m\n", "\u001b[32m2022-12-19 17:31:27.256\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:11\u001b[0m\n", "\u001b[32m2022-12-19 17:31:28.288\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:12\u001b[0m\n", "\u001b[32m2022-12-19 17:31:29.378\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:13\u001b[0m\n", "\u001b[32m2022-12-19 17:31:30.708\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:14\u001b[0m\n", "\u001b[32m2022-12-19 17:31:31.771\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:16\u001b[0m\n", "\u001b[32m2022-12-19 17:31:32.864\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:17\u001b[0m\n", "\u001b[32m2022-12-19 17:31:33.906\u001b[0m | \u001b[1mINFO \u001b[0m | 
\u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:18\u001b[0m\n", "\u001b[32m2022-12-19 17:31:34.945\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:19\u001b[0m\n", "\u001b[32m2022-12-19 17:31:35.997\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:20\u001b[0m\n", "\u001b[32m2022-12-19 17:31:37.038\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:21\u001b[0m\n", "\u001b[32m2022-12-19 17:31:38.085\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:22\u001b[0m\n", "\u001b[32m2022-12-19 17:31:39.145\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:23\u001b[0m\n", "\u001b[32m2022-12-19 17:31:40.189\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:24\u001b[0m\n", "\u001b[32m2022-12-19 17:31:41.287\u001b[0m | \u001b[1mINFO \u001b[0m | 
\u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:25\u001b[0m\n", "\u001b[32m2022-12-19 17:31:42.321\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:26\u001b[0m\n", "\u001b[32m2022-12-19 17:31:43.395\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:27\u001b[0m\n", "\u001b[32m2022-12-19 17:31:44.515\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:28\u001b[0m\n", "\u001b[32m2022-12-19 17:31:45.552\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:29\u001b[0m\n", "\u001b[32m2022-12-19 17:31:46.670\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:30\u001b[0m\n", "\u001b[32m2022-12-19 17:31:47.717\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:32\u001b[0m\n", "\u001b[32m2022-12-19 17:31:48.824\u001b[0m | \u001b[1mINFO \u001b[0m | 
\u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:33\u001b[0m\n", "\u001b[32m2022-12-19 17:31:50.015\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:34\u001b[0m\n", "\u001b[32m2022-12-19 17:31:51.117\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:35\u001b[0m\n", "\u001b[32m2022-12-19 17:31:52.211\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:36\u001b[0m\n", "\u001b[32m2022-12-19 17:31:53.299\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:37\u001b[0m\n", "\u001b[32m2022-12-19 17:31:54.444\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:38\u001b[0m\n", "\u001b[32m2022-12-19 17:31:55.488\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:39\u001b[0m\n", "\u001b[32m2022-12-19 17:31:56.547\u001b[0m | \u001b[1mINFO \u001b[0m | 
\u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:40\u001b[0m\n", "\u001b[32m2022-12-19 17:31:57.642\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:41\u001b[0m\n", "\u001b[32m2022-12-19 17:31:58.679\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component nn_0, time elapse: 0:00:42\u001b[0m\n", "\u001b[0mm2022-12-19 17:32:00.872\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m125\u001b[0m - \u001b[1m\n", "\u001b[32m2022-12-19 17:32:00.874\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:45\u001b[0m\n", "\u001b[32m2022-12-19 17:32:01.946\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:46\u001b[0m\n", "\u001b[32m2022-12-19 17:32:03.013\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:47\u001b[0m\n", "\u001b[32m2022-12-19 17:32:04.096\u001b[0m | \u001b[1mINFO \u001b[0m | 
\u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:48\u001b[0m\n", "\u001b[32m2022-12-19 17:32:05.175\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:49\u001b[0m\n", "\u001b[32m2022-12-19 17:32:06.217\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:50\u001b[0m\n", "\u001b[32m2022-12-19 17:32:07.258\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:51\u001b[0m\n", "\u001b[32m2022-12-19 17:32:08.313\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:52\u001b[0m\n", "\u001b[32m2022-12-19 17:32:09.372\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:53\u001b[0m\n", "\u001b[32m2022-12-19 17:32:10.445\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:54\u001b[0m\n", "\u001b[32m2022-12-19 17:32:11.491\u001b[0m | \u001b[1mINFO 
\u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:55\u001b[0m\n", "\u001b[32m2022-12-19 17:32:12.570\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:56\u001b[0m\n", "\u001b[32m2022-12-19 17:32:13.763\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:58\u001b[0m\n", "\u001b[32m2022-12-19 17:32:14.822\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:00:59\u001b[0m\n", "\u001b[32m2022-12-19 17:32:15.872\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m127\u001b[0m - \u001b[1m\u001b[80D\u001b[1A\u001b[KRunning component eval_0, time elapse: 0:01:00\u001b[0m\n", "\u001b[32m2022-12-19 17:32:18.078\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m89\u001b[0m - \u001b[1mJob is success!!! 
Job id is 202212191731149354320\u001b[0m\n", "\u001b[32m2022-12-19 17:32:18.081\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mpipeline.utils.invoker.job_submitter\u001b[0m:\u001b[36mmonitor_job_status\u001b[0m:\u001b[36m90\u001b[0m - \u001b[1mTotal time: 0:01:02\u001b[0m\n" ] } ], "source": [ "pipeline.compile()\n", "pipeline.fit()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.8.13 ('venv': venv)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13 (default, Mar 28 2022, 11:38:47) \n[GCC 7.5.0]" }, "vscode": { "interpreter": { "hash": "d29574a2ab71ec988cdcd4d29c58400bd2037cad632b9528d973466f7fb6f853" } } }, "nbformat": 4, "nbformat_minor": 5 }