
Add projects

Shellmiao committed 2 years ago · commit 9279d1873b
100 changed files with 12632 additions and 0 deletions
  1. FATE-Flow/.github/ISSUE_TEMPLATE/00-bug-report.md (+33, -0)
  2. FATE-Flow/.github/ISSUE_TEMPLATE/00-feature-request.md (+20, -0)
  3. FATE-Flow/.github/workflows/doc.yml (+46, -0)
  4. FATE-Flow/.gitignore (+33, -0)
  5. FATE-Flow/.gitmodules (+0, -0)
  6. FATE-Flow/.readthedocs.yml (+12, -0)
  7. FATE-Flow/LICENSE (+201, -0)
  8. FATE-Flow/README.md (+34, -0)
  9. FATE-Flow/README.zh.md (+34, -0)
  10. FATE-Flow/RELEASE.md (+232, -0)
  11. FATE-Flow/bin/service.sh (+185, -0)
  12. FATE-Flow/conf/casbin_model.conf (+11, -0)
  13. FATE-Flow/conf/component_registry.json (+27, -0)
  14. FATE-Flow/conf/incompatible_version.yaml (+5, -0)
  15. FATE-Flow/conf/job_default_config.yaml (+29, -0)
  16. FATE-Flow/conf/template_info.yaml (+10, -0)
  17. FATE-Flow/doc/cli/checkpoint.md (+84, -0)
  18. FATE-Flow/doc/cli/checkpoint.zh.md (+84, -0)
  19. FATE-Flow/doc/cli/data.md (+282, -0)
  20. FATE-Flow/doc/cli/data.zh.md (+285, -0)
  21. FATE-Flow/doc/cli/job.md (+277, -0)
  22. FATE-Flow/doc/cli/job.zh.md (+275, -0)
  23. FATE-Flow/doc/cli/key.md (+101, -0)
  24. FATE-Flow/doc/cli/key.zh.md (+101, -0)
  25. FATE-Flow/doc/cli/model.md (+376, -0)
  26. FATE-Flow/doc/cli/model.zh.md (+375, -0)
  27. FATE-Flow/doc/cli/privilege.md (+150, -0)
  28. FATE-Flow/doc/cli/privilege.zh.md (+163, -0)
  29. FATE-Flow/doc/cli/provider.md (+179, -0)
  30. FATE-Flow/doc/cli/provider.zh.md (+180, -0)
  31. FATE-Flow/doc/cli/resource.md (+89, -0)
  32. FATE-Flow/doc/cli/resource.zh.md (+89, -0)
  33. FATE-Flow/doc/cli/server.md (+111, -0)
  34. FATE-Flow/doc/cli/server.zh.md (+111, -0)
  35. FATE-Flow/doc/cli/table.md (+320, -0)
  36. FATE-Flow/doc/cli/table.zh.md (+318, -0)
  37. FATE-Flow/doc/cli/tag.md (+89, -0)
  38. FATE-Flow/doc/cli/tag.zh.md (+89, -0)
  39. FATE-Flow/doc/cli/task.md (+38, -0)
  40. FATE-Flow/doc/cli/task.zh.md (+38, -0)
  41. FATE-Flow/doc/cli/tracking.md (+604, -0)
  42. FATE-Flow/doc/cli/tracking.zh.md (+604, -0)
  43. FATE-Flow/doc/configuration_instruction.md (+412, -0)
  44. FATE-Flow/doc/configuration_instruction.zh.md (+419, -0)
  45. FATE-Flow/doc/document_navigation.md (+40, -0)
  46. FATE-Flow/doc/document_navigation.zh.md (+52, -0)
  47. FATE-Flow/doc/faq.md (+102, -0)
  48. FATE-Flow/doc/faq.zh.md (+102, -0)
  49. FATE-Flow/doc/fate_flow.md (+110, -0)
  50. FATE-Flow/doc/fate_flow.zh.md (+110, -0)
  51. FATE-Flow/doc/fate_flow_authority_management.md (+152, -0)
  52. FATE-Flow/doc/fate_flow_authority_management.zh.md (+153, -0)
  53. FATE-Flow/doc/fate_flow_client.md (+164, -0)
  54. FATE-Flow/doc/fate_flow_client.zh.md (+164, -0)
  55. FATE-Flow/doc/fate_flow_component_registry.md (+19, -0)
  56. FATE-Flow/doc/fate_flow_component_registry.zh.md (+19, -0)
  57. FATE-Flow/doc/fate_flow_data_access.md (+136, -0)
  58. FATE-Flow/doc/fate_flow_data_access.zh.md (+129, -0)
  59. FATE-Flow/doc/fate_flow_http_api.md (+17, -0)
  60. FATE-Flow/doc/fate_flow_http_api.zh.md (+30, -0)
  61. FATE-Flow/doc/fate_flow_http_api_call_demo.md (+640, -0)
  62. FATE-Flow/doc/fate_flow_http_api_call_demo.zh.md (+640, -0)
  63. FATE-Flow/doc/fate_flow_job_scheduling.md (+702, -0)
  64. FATE-Flow/doc/fate_flow_job_scheduling.zh.md (+702, -0)
  65. FATE-Flow/doc/fate_flow_model_migration.md (+213, -0)
  66. FATE-Flow/doc/fate_flow_model_migration.zh.md (+213, -0)
  67. FATE-Flow/doc/fate_flow_model_registry.md (+78, -0)
  68. FATE-Flow/doc/fate_flow_model_registry.zh.md (+203, -0)
  69. FATE-Flow/doc/fate_flow_monitoring.md (+5, -0)
  70. FATE-Flow/doc/fate_flow_monitoring.zh.md (+6, -0)
  71. FATE-Flow/doc/fate_flow_permission_management.md (+48, -0)
  72. FATE-Flow/doc/fate_flow_permission_management.zh.md (+48, -0)
  73. FATE-Flow/doc/fate_flow_resource_management.md (+102, -0)
  74. FATE-Flow/doc/fate_flow_resource_management.zh.md (+103, -0)
  75. FATE-Flow/doc/fate_flow_server_operation.md (+13, -0)
  76. FATE-Flow/doc/fate_flow_server_operation.zh.md (+13, -0)
  77. FATE-Flow/doc/fate_flow_service_registry.md (+32, -0)
  78. FATE-Flow/doc/fate_flow_service_registry.zh.md (+32, -0)
  79. FATE-Flow/doc/fate_flow_tracking.md (+49, -0)
  80. FATE-Flow/doc/fate_flow_tracking.zh.md (+49, -0)
  81. FATE-Flow/doc/images/fate_arch.png (BIN)
  82. FATE-Flow/doc/images/fate_deploy_directory.png (BIN)
  83. FATE-Flow/doc/images/fate_flow_arch.png (BIN)
  84. FATE-Flow/doc/images/fate_flow_authorization.png (BIN)
  85. FATE-Flow/doc/images/fate_flow_component_dsl.png (BIN)
  86. FATE-Flow/doc/images/fate_flow_component_registry.png (BIN)
  87. FATE-Flow/doc/images/fate_flow_dag.png (BIN)
  88. FATE-Flow/doc/images/fate_flow_detector.png (BIN)
  89. FATE-Flow/doc/images/fate_flow_dsl.png (BIN)
  90. FATE-Flow/doc/images/fate_flow_inputoutput.png (BIN)
  91. FATE-Flow/doc/images/fate_flow_logical_arch.png (BIN)
  92. FATE-Flow/doc/images/fate_flow_major_feature.png (BIN)
  93. FATE-Flow/doc/images/fate_flow_model_storage.png (BIN)
  94. FATE-Flow/doc/images/fate_flow_pipelined_model.png (BIN)
  95. FATE-Flow/doc/images/fate_flow_resource_process.png (BIN)
  96. FATE-Flow/doc/images/fate_flow_scheduling_arch.png (BIN)
  97. FATE-Flow/doc/images/federated_learning_pipeline.png (BIN)
  98. FATE-Flow/doc/index.md (+4, -0)
  99. FATE-Flow/doc/index.zh.md (+4, -0)
  100. FATE-Flow/doc/mkdocs/README.md (+79, -0)

+ 33 - 0
FATE-Flow/.github/ISSUE_TEMPLATE/00-bug-report.md

@@ -0,0 +1,33 @@
+---
+
+name: Bug Report
+about: Use this template for reporting a bug
+labels: 'type:bug'
+
+---
+
+**System information**
+
+- Have I written custom code (yes/no):
+- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
+- FATE Flow version (use command: python fate_flow_server.py --version):
+- Python version (use command: python --version):
+
+**Describe the current behavior**
+
+**Describe the expected behavior**
+
+**Other info / logs** Include any logs or source code that would be helpful to
+diagnose the problem. If including tracebacks, please include the full
+traceback. Large logs and files should be attached.
+
+- fateflow/logs/$job_id/fate_flow_schedule.log: scheduling log for a single job
+- fateflow/logs/$job_id/* : all logs for a job
+- fateflow/logs/fate_flow/fate_flow_stat.log: server status log
+- fateflow/logs/fate_flow/fate_flow_schedule.log: scheduling log for the startup of all jobs
+- fateflow/logs/fate_flow/fate_flow_detect.log: detection log for the startup of all jobs
+
+**Contributing**
+
+- Do you want to contribute a PR? (yes/no):
+- Briefly describe your candidate solution (if contributing):

+ 20 - 0
FATE-Flow/.github/ISSUE_TEMPLATE/00-feature-request.md

@@ -0,0 +1,20 @@
+---
+name: Feature Request
+about: Use this template for raising a feature request
+labels: 'type:feature'
+
+---
+
+**System information**
+
+- FATE Flow version (use command: python fate_flow_server.py --version):
+- Python version (use command: python --version):
+- Are you willing to contribute it (yes/no):
+
+**Describe the feature and the current behavior/state.**
+
+**Will this change the current api? How?**
+
+**Who will benefit from this feature?**
+
+**Any other info.**

+ 46 - 0
FATE-Flow/.github/workflows/doc.yml

@@ -0,0 +1,46 @@
+name: generate doc
+
+on:
+  push:
+    branches:
+      - 'main'
+      - 'develop-[0-9]+.[0-9]+.[0-9]+'
+
+  schedule:
+    - cron: '0 8 * * *'
+
+  workflow_dispatch: {}
+
+concurrency:
+  group: doc_generator_${{ github.ref_name }}
+  cancel-in-progress: true
+
+jobs:
+  doc_generator:
+    name: generate doc on branch ${{ github.ref_name }}
+    runs-on: ubuntu-latest
+    steps:
+      - name: check out the repo
+        uses: actions/checkout@v2
+
+      - name: fetch gh-pages
+        continue-on-error: true
+        run: git fetch origin gh-pages --depth=1
+
+      - name: configure a git user
+        run: |
+          git config user.name github-actions[bot]
+          git config user.email 41898282+github-actions[bot]@users.noreply.github.com
+
+      - name: install python packages
+        run: pip install -Ur doc/mkdocs/requirements.txt
+
+      - name: build doc via mike
+        shell: bash
+        run: |
+          VERSION='${{ github.ref_name }}'
+          [ "$VERSION" == main ] && { VERSION=latest; ALIAS='main master'; }
+          VERSION="${VERSION#develop-}"
+
+          mike deploy --push --update-aliases "$VERSION" $ALIAS
+          mike set-default --push latest
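
For context, the `mike` calls in this workflow maintain a versioned MkDocs site on the `gh-pages` branch: each branch deploys its docs under a version number, with `main` published as `latest`. A minimal local sketch of the same flow (the version number `1.9` is illustrative, not from this commit):

```bash
# install mike alongside the mkdocs toolchain
pip install mike

# build the docs and commit them to gh-pages under version "1.9",
# also pointing the "latest" alias at this version
mike deploy --push --update-aliases 1.9 latest

# serve "latest" at the site root by default
mike set-default --push latest

# inspect the deployed versions and aliases
mike list
```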

+ 33 - 0
FATE-Flow/.gitignore

@@ -0,0 +1,33 @@
+# common file patterns
+.DS_STORE
+.idea
+*.iml
+*.pyc
+__pycache__
+*.jar
+*.class
+.project
+*.prefs
+_build
+venv
+
+# excluded paths
+/data/
+/logs/
+/jobs/
+/audit/
+.vscode/*
+/temp/
+/tmp
+/worker/
+/provider_registrar/
+/model_local_cache/
+*.db
+*.db-journal
+*.whl
+/conf/local.*.yaml
+/cluster-deploy/FATE_install_*
+/python/component_plugins/
+
+# doc
+/site/

+ 0 - 0
FATE-Flow/.gitmodules


+ 12 - 0
FATE-Flow/.readthedocs.yml

@@ -0,0 +1,12 @@
+version: 2
+
+mkdocs:
+  configuration: mkdocs.yml
+  fail_on_warning: false
+
+formats: all
+
+python:
+  version: 3.7
+  install:
+    - requirements: doc/mkdocs/requirements.txt

+ 201 - 0
FATE-Flow/LICENSE

@@ -0,0 +1,201 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

+ 34 - 0
FATE-Flow/README.md

@@ -0,0 +1,34 @@
+[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![CodeStyle](https://img.shields.io/badge/Check%20Style-Google-brightgreen)](https://checkstyle.sourceforge.io/google_style.html) [![Style](https://img.shields.io/badge/Check%20Style-Black-black)](https://github.com/psf/black)
+
+[中文](./README.zh.md)
+
+FATE Flow is a multi-party federated task security scheduling platform for end-to-end federated learning pipelines, based on:
+
+- [Shared-State Scheduling Architecture](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/41684.pdf)
+- Secure Multi-Party Communication Across Data Centers
+
+Providing production-level service capabilities:
+
+- Data Access
+- Component Registry
+- Federated Job & Task Scheduling
+- Multi-Party Resource Coordination
+- Data Flow Tracking
+- Real-Time Job Monitoring
+- Multi-Party Federated Model Registry
+- Multi-Party Cooperation Authority Management
+- High Availability
+- CLI, REST API, Python API
+
+For a detailed introduction, please refer to [FATE Flow Overall Design](https://federatedai.github.io/FATE-Flow/latest/fate_flow/#overall-design)
+
+## Deployment
+
+Please refer to [FATE](https://github.com/FederatedAI/FATE)
+
+## Documentation
+
+The official FATE Flow documentation is available at [https://federatedai.github.io/FATE-Flow/latest/](https://federatedai.github.io/FATE-Flow/latest/)
+
+## License
+[Apache License 2.0](LICENSE)

+ 34 - 0
FATE-Flow/README.zh.md

@@ -0,0 +1,34 @@
+[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![CodeStyle](https://img.shields.io/badge/Check%20Style-Google-brightgreen)](https://checkstyle.sourceforge.io/google_style.html) [![Style](https://img.shields.io/badge/Check%20Style-Black-black)](https://github.com/psf/black)
+
+[English](./README.md)
+
+FATE Flow是一个联邦学习端到端全流程的多方联合任务安全调度平台, 基于:
+
+- [共享状态调度架构](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/41684.pdf)
+- 跨数据中心的多方安全通信
+
+提供生产级服务能力:
+
+- 数据接入
+- 任务组件注册中心
+- 联合作业&任务调度
+- 多方资源协调
+- 数据流动追踪
+- 作业实时监测
+- 联合模型注册中心
+- 多方合作权限管理
+- 系统高可用
+- CLI、REST API、Python API
+
+详细介绍请参考[FATE Flow整体设计](https://federatedai.github.io/FATE-Flow/latest/zh/fate_flow/)
+
+## 部署
+
+请查阅[FATE](https://github.com/FederatedAI/FATE)
+
+## 文档
+
+FATE Flow官方文档在这里[https://federatedai.github.io/FATE-Flow/latest/zh/](https://federatedai.github.io/FATE-Flow/latest/zh/)
+
+## License
+[Apache License 2.0](LICENSE)

+ 232 - 0
FATE-Flow/RELEASE.md

@@ -0,0 +1,232 @@
+# Release 1.10.1
+## Major Features and Improvements
+* Optimize table info API
+
+
+# Release 1.10.0
+## Major Features and Improvements
+* Add connection test API
+* Support configuring the gRPC message size limit
+## Bug Fixes
+* Fix module duplication issue in models
+
+# Release 1.9.1
+## Bug Fixes
+* Fix parameter inheritance when loading non-model modules from ModelLoader 
+* Fix job inheritance after adding or removing roles from training configuration
+* Fix delimiter error in uploaded/downloaded data
+* Fix anonymous feature name renewal
+
+# Release 1.9.0
+## Major Features and Improvements
+* Support high availability and load balancing to improve system availability and stability
+* Added site authentication and dataset authorization, with a hook mode so users can customize authentication schemes
+* Component registration optimization: participants may use different versions of algorithm components
+* Upload and reader support anonymous features and specifying the id column
+* Scheduling optimization: time-consuming operations are now asynchronous, improving component scheduling performance by more than 5x, with obvious benefits for multi-component tasks
+* Added the ApiReader component to fetch feature data by id
+* Model storage optimization: support model data synchronization between local and other storage
+* The scheduler can now obtain error information from other participants' algorithm components
+
+# Release 1.8.0
+## Major Features and Improvements
+* Optimize the model migration function to reduce user operation steps;
+* Add version compatibility check in component center to support multiple parties to use different versions;
+* Add data table disable/enable function, and support batch delete disable table
+
+# Release 1.7.2
+## Major Features and Improvements
+* Separate the base connection address of data storage tables from the table information, while remaining compatible with historical versions;
+* Optimize the component output data download interface.
+
+# Release 1.7.1
+## Major Features and Improvements
+* Added the writer component, which supports exporting data to MySQL and saving data as a new table;
+* Added job reuse, which lets new jobs reuse successfully completed components from historical jobs;
+* Reduce the time taken to submit and to stop tasks;
+* Component registration supports automatic setting of PYTHONPATH.
+
+## Bug Fixes
+* Fix OOM when uploading HDFS tables;
+* Fix incompatibility with older versions of serving;
+* Set the toy test's partitions parameter to 4 and add a timeout prompt.
+
+# Release 1.7.0
+
+## Major Features and Improvements
+
+* Independent repository instead of all code in the main FATE repository
+* Component registry, which can hot load many different versions of component packages at the same time
+* Hot update of component parameters, component-specific reruns, automatic reruns
+* Model Checkpoint to support task hot start, model deployment and more
+* Data, Model and Cache can be reused between jobs
+* Reader component supports more data sources, such as MySQL, Hive
+* Real-time recording of dataset usage and derivation routes
+* Multi-party permission control for datasets
+* Automatic push to reliable storage on model deployment; supports Tencent Cloud COS, MySQL, Redis
+
+## Bug Fixes
+
+# Release 1.6.1
+## Major Features and Improvements
+* Support MySQL storage engine;
+* Added service registry interface;
+* Added service query interface;
+* Support FATE on WeDataSphere mode
+* Add lock when writing `model_local_cache`
+* Register the model download URLs to ZooKeeper
+
+## Bug Fixes
+* Fix job id length no more than 25 limitation
+
+
+# Release 1.5.2
+## Major Features and Improvements
+* Read data from MySQL with the `table bind` command, mapping a source table to a FATE table
+* Support pushing models from one FATE cluster to multiple FATE Serving clusters in one party
+
+## Bug Fixes
+* Fix job id length no more than 25 limitation
+
+
+# Release 1.5.1
+## Major Features and Improvements
+* Optimize the model center: reconstructed model publishing, supporting deploy, load, bind and migrate operations, and added new interfaces such as model info
+* Improve identity authentication and resource authorization: support party identity verification and authorization of participating roles and components
+* Optimize and fix the resource manager; add the task_cores job parameter to adapt to different computing engines
+
+## Deploy
+* Support 1.5.0 retain data upgrade to 1.5.1
+
+## Bug Fixes
+* Fix job clean CLI
+
+
+# Release 1.5.0(LTS)
+## Major Features and Improvements
+* Brand new scheduling framework based on global state and optimistic concurrency control, supporting multiple schedulers
+* Upgraded task scheduling: multi-model output for components, parallel component execution, component rerun
+* Add new DSL v2, which significantly improves the user experience compared to DSL v1 and supports several syntax error detection functions; DSL v1 and v2 are both compatible with the current FATE version
+* Enhanced resource scheduling: removed the limit on the number of jobs; scheduling is based on cores, memory and worker nodes according to what each computing engine supports
+* Add model registry; supports model query, import/export and model transfer between clusters
+* Add Reader component: automatically dumps input data to a FATE-compatible format and the cluster storage engine; currently supports data from HDFS
+* Refactor the job submission configuration parameters; different parties may use different job parameters when using DSL v2.
+
+## Client
+* Brand new CLI v2 with easy independent installation, user-friendly programming syntax & command-line prompt
+* Support the FLOW Python SDK
+
+
+# Release 1.4.4
+## Major Features and Improvements
+* Task Executor supports monkey patch
+* Add forward API
+
+
+# Release 1.4.2
+## Major Features and Improvements
+* Distinguish between user stop job and system stop job;
+* Optimized some logs;
+* Optimize zookeeper configuration
+* The model supports persistent storage to mysql
+* Push the model to the online service to support the specified storage address (local file and FATEFlowServer interface)
+
+
+# Release 1.4.1
+## Major Features and Improvements
+* Allow the host to stop the job
+* Optimize the task queue
+* Automatically align the input table partitions of all participants when the job is running
+* Optimize large file upload in the FATE Flow client
+* Fixed some bugs with abnormal status
+
+
+# Release 1.4.0
+## Major Features and Improvements
+* Refactored model management: native file directory storage with a more flexible structure and richer information
+* Support model import and export, store and restore with a reliable distributed system (Redis is currently supported)
+* Using MySQL instead of Redis to implement Job Queue, reducing system complexity
+* Support for uploading client local files
+* Automatically detects the existence of the table and provides the destroy option
+* Separate system, algorithm, scheduling command log, scheduling command log can be independently audited
+
+
+# Release 1.3.1
+## Major Features and Improvements
+## Deploy
+* Support deploying by MacOS
+* Support using external db
+* Deploy JDK and Python environments on demand
+* Improve MySQL and FATE Flow service.sh
+* Support more custom deployment configurations in default_configurations.sh, such as ssh_port, mysql_port and so on.
+
+# Release 1.3.0
+## Major Features and Improvements
+* Add clean job CLI for cleaning output and intermediate results, including data, metrics and sessions
+* Support for obtaining table namespace and name of output data via CLI
+* Fix KillJob unsuccessful execution in some special cases
+* Improve log system, add more exception and run time status prompts
+
+
+# Release 1.2.0
+## Major Features and Improvements
+* Add a data management module for recording uploaded data tables and model outputs of running jobs, with CLI support for querying and cleanup.
+* Support registration center for simplifying communication configuration between FATEFlow and FATEServing
+* Restructure model release logic: FATE-Flow pushes models directly to FATE-Serving. Decouple FATE-Serving and Eggroll; the offline and online architectures are connected only by FATE-Flow.
+* Provide CLI to query data upload record
+* Upload and download data support progress statistics by line
+* Add some abnormal diagnosis tips
+* Support adding note information to job
+
+## Deploy
+* Fix bugs in the EggRoll startup script; add MySQL and Redis startup options.
+* Disable host name resolution configuration for the MySQL service.
+* Module version numbers in the packaging script are now obtained automatically.
+
+
+# Release 1.1.1
+## Major Features and Improvements
+* Add cluster deployment support based on the Ubuntu operating system.
+* Support intermediate data cleanup after the task ends
+* Optimize the deployment process
+
+
+## Bug Fixes
+* Fix a bug in download api
+* Fix bugs of spark-backend
+
+
+# Release 1.1
+## Major Features and Improvements
+* Upload and Download support CLI for querying job status
+* Support for canceling waiting job
+* Support for setting job timeout
+* Support for storing a job scheduling log in the job log folder
+* Add authentication control Beta version, including component, command, role
+
+
+# Release 1.0.2
+## Major Features and Improvements
+* Python and JDK environments are required only for running quick experiments with the standalone version
+* Support docker deployment of the cluster version
+* Add deployment guide in Chinese
+* Standalone jobs for quick experiments are supported when the cluster version is deployed.
+* Python service logs are now retained for 14 days.
+
+
+# Release 1.0.1
+## Bug Fixes
+* Support uploading files with the version argument
+* Support getting serviceRoleName from configuration
+
+
+# Release 1.0
+## Major Features and Improvements
+* DAG defines Pipeline
+* Federated Multi-party asymmetric DSL parser
+* Federated Learning lifecycle management
+* Federated Task collaborative scheduling
+* Tracking for data, metric, model and so on
+* Federated Multi-party model management

+ 185 - 0
FATE-Flow/bin/service.sh

@@ -0,0 +1,185 @@
+#!/bin/bash
+
+#
+#  Copyright 2019 The FATE Authors. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+#
+
+if [[ -z "${FATE_PROJECT_BASE}" ]]; then
+    PROJECT_BASE=$(cd "$(dirname "$0")";cd ../;cd ../;pwd)
+else
+    PROJECT_BASE="${FATE_PROJECT_BASE}"
+fi
+FATE_FLOW_BASE=${PROJECT_BASE}/fateflow
+echo "PROJECT_BASE: "${PROJECT_BASE}
+
+# source init_env.sh
+INIT_ENV_SCRIPT=${PROJECT_BASE}/bin/init_env.sh
+if test -f "${INIT_ENV_SCRIPT}"; then
+  source "${INIT_ENV_SCRIPT}"
+  echo "PYTHONPATH: "${PYTHONPATH}
+else
+  echo "file not found: ${INIT_ENV_SCRIPT}"
+  exit 1
+fi
+
+log_dir=${FATE_FLOW_BASE}/logs
+
+module=fate_flow_server.py
+
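+# parse_yaml FILE PREFIX: flatten a two-space-indented YAML file into shell
+# variable assignments named PREFIX<nested>_<key>="<value>". With prefix
+# "service_config_", a fateflow.http_port entry yields a variable like
+# service_config_fateflow_http_port="9380" (the port value is illustrative).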
+parse_yaml() {
+   local prefix=$2
+   local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
+   sed -ne "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
+        -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p"  $1 |
+   awk -F$fs '{
+      indent = length($1)/2;
+      vname[indent] = $2;
+      for (i in vname) {if (i > indent) {delete vname[i]}}
+      if (length($3) > 0) {
+         vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
+         printf("%s%s%s=\"%s\"\n", "'$prefix'",vn, $2, $3);
+      }
+   }'
+}
+
+getport() {
+    service_conf_path=${PROJECT_BASE}/conf/service_conf.yaml
+    if test -f "${service_conf_path}"; then
+      echo "found service conf: ${service_conf_path}"
+      eval $(parse_yaml ${service_conf_path} "service_config_")
+      echo "fate flow http port: ${service_config_fateflow_http_port}, grpc port: ${service_config_fateflow_grpc_port}"
+      echo
+    else
+      echo "service conf not found: ${service_conf_path}"
+      exit 1
+    fi
+}
+
+getport
+
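+# getpid: find the server process by inspecting which PIDs listen on the HTTP
+# and gRPC ports; pid is set only when both ports are owned by the same
+# process, and cleared when neither port is in use.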
+getpid() {
+    echo "check process by http port and grpc port"
+    pid1=`lsof -i:${service_config_fateflow_http_port} | grep 'LISTEN' | awk 'NR==1 {print $2}'`
+    pid2=`lsof -i:${service_config_fateflow_grpc_port} | grep 'LISTEN' | awk 'NR==1 {print $2}'`
+    if [[ -n ${pid1} && "x"${pid1} = "x"${pid2} ]];then
+        pid=$pid1
+    elif [[ -z ${pid1} && -z ${pid2} ]];then
+        pid=
+    fi
+}
+
+mklogsdir() {
+    if [[ ! -d $log_dir ]]; then
+        mkdir -p $log_dir
+    fi
+}
+
+status() {
+    getpid
+    if [[ -n ${pid} ]]; then
+        echo "status:`ps aux | grep ${pid} | grep -v grep`"
+        lsof -i:${service_config_fateflow_http_port} | grep 'LISTEN'
+        lsof -i:${service_config_fateflow_grpc_port} | grep 'LISTEN'
+    else
+        echo "service not running"
+    fi
+}
+
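+# start [front]: run fate_flow_server.py in the foreground via exec (used by
+# the "starting" action) or in the background via nohup, then poll the ports
+# for up to ~10s (100 x 0.1s) to confirm the server is listening.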
+start() {
+    getpid
+    if [[ ${pid} == "" ]]; then
+        mklogsdir
+        if [[ $1x == "front"x ]];then
+          export FATE_PROJECT_BASE=${PROJECT_BASE}
+          exec python ${FATE_FLOW_BASE}/python/fate_flow/fate_flow_server.py >> "${log_dir}/console.log" 2>>"${log_dir}/error.log"
+          unset FATE_PROJECT_BASE
+        else
+          export FATE_PROJECT_BASE=${PROJECT_BASE}
+          nohup python ${FATE_FLOW_BASE}/python/fate_flow/fate_flow_server.py >> "${log_dir}/console.log" 2>>"${log_dir}/error.log" &
+          unset FATE_PROJECT_BASE
+        fi
+        for((i=1;i<=100;i++));
+        do
+            sleep 0.1
+            getpid
+            if [[ -n ${pid} ]]; then
+                echo "service started successfully. pid: ${pid}"
+                return
+            fi
+        done
+        if [[ -n ${pid} ]]; then
+           echo "service started successfully. pid: ${pid}"
+        else
+           echo "service start failed, please check ${log_dir}/error.log and ${log_dir}/console.log"
+        fi
+    else
+        echo "service already started. pid: ${pid}"
+    fi
+}
+
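+# stop: send SIGTERM repeatedly (up to 100 times, 0.1s apart) until the
+# process exits, falling back to SIGKILL if it is still alive afterwards.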
+stop() {
+    getpid
+    if [[ -n ${pid} ]]; then
+        echo "killing: `ps aux | grep ${pid} | grep -v grep`"
+        for((i=1;i<=100;i++));
+        do
+            sleep 0.1
+            kill ${pid}
+            getpid
+            if [[ ! -n ${pid} ]]; then
+                echo "killed by SIGTERM"
+                return
+            fi
+        done
+        kill -9 ${pid}
+        if [[ $? -eq 0 ]]; then
+            echo "killed by SIGKILL"
+        else
+            echo "kill error"
+        fi
+    else
+        echo "service not running"
+    fi
+}
+
+
+case "$1" in
+    start)
+        start
+        status
+        ;;
+
+    starting)
+        start front
+        ;;
+
+    stop)
+        stop
+        ;;
+
+    status)
+        status
+        ;;
+
+    restart)
+        stop
+        sleep 10
+        start
+        status
+        ;;
+    *)
+        echo "usage: $0 {start|stop|status|restart}"
+        exit 1
+esac

+ 11 - 0
FATE-Flow/conf/casbin_model.conf

@@ -0,0 +1,11 @@
+[request_definition]
+r = party_id, type, value
+
+[policy_definition]
+p = party_id, type, value
+
+[policy_effect]
+e = some(where (p.eft == allow))
+
+[matchers]
+m = r.party_id == p.party_id && r.type == p.type && r.value == p.value
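
Under this Casbin model, a request is allowed only when an identical `(party_id, type, value)` triple exists as a policy rule. A hypothetical policy set for illustration (the party ID and values below are not from this commit; FATE Flow stores policies via an adapter rather than a file):

```csv
p, 9999, component, reader
p, 9999, dataset, experiment#breast_hetero_guest
```

A request `(9999, component, reader)` matches the first rule exactly and is allowed; any triple without an exact policy match is denied.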

+ 27 - 0
FATE-Flow/conf/component_registry.json

@@ -0,0 +1,27 @@
+{
+  "components": {
+  },
+  "providers": {
+  },
+  "default_settings": {
+    "fate_flow":{
+      "default_version_key": "FATEFlow"
+    },
+    "fate": {
+      "default_version_key": "FATE"
+    },
+    "class_path": {
+      "interface": "components.components.Components",
+      "feature_instance": "feature.instance.Instance",
+      "feature_vector": "feature.sparse_vector.SparseVector",
+      "model": "protobuf.generated",
+      "model_migrate": "protobuf.model_migrate.model_migrate",
+      "homo_model_convert": "protobuf.homo_model_convert.homo_model_convert",
+      "anonymous_generator": "util.anonymous_generator_util.Anonymous",
+      "data_format": "util.data_format_preprocess.DataFormatPreProcess",
+      "hetero_model_merge": "protobuf.model_merge.merge_hetero_models.hetero_model_merge",
+      "extract_woe_array_dict": "protobuf.model_migrate.binning_model_migrate.extract_woe_array_dict",
+      "merge_woe_array_dict": "protobuf.model_migrate.binning_model_migrate.merge_woe_array_dict"
+    }
+  }
+}

+ 5 - 0
FATE-Flow/conf/incompatible_version.yaml

@@ -0,0 +1,5 @@
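+# A plausible reading of this table (our assumption, not documented in this
+# commit): each entry under "FATE" maps a version to the version ranges it is
+# incompatible with, e.g. 1.7 is incompatible with <1.7.0 and >=1.8.0.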
+FATE:
+  1.7: <1.7.0, >=1.8.0
+  1.7.2: <=1.7.0, 1.7.1, 1.7.1.1, >=1.8.0
+  1.8: <1.8.0, >=1.9.0
+  1.9: <1.9.0

+ 29 - 0
FATE-Flow/conf/job_default_config.yaml

@@ -0,0 +1,29 @@
+# component provider, relative path to get_fate_python_directory
+default_component_provider_path: federatedml
+
+# resource
+total_cores_overweight_percent: 1  # 1 means no overweight
+total_memory_overweight_percent: 1  # 1 means no overweight
+task_parallelism: 1
+task_cores: 4
+task_memory: 0  # mb
+max_cores_percent_per_job: 1  # 1 means total
+
+# scheduling
+job_timeout: 259200 # s
+remote_request_timeout: 30000  # ms
+federated_command_trys: 3
+end_status_job_scheduling_time_limit: 300000 # ms
+end_status_job_scheduling_updates: 1
+auto_retries: 0
+auto_retry_delay: 1  #seconds
+# It can also be specified in the job configuration using the federated_status_collect_type parameter
+federated_status_collect_type: PUSH
+detect_connect_max_retry_count: 3
+detect_connect_long_retry_count: 2
+
+# upload
+upload_block_max_bytes: 104857600 # bytes
+
+#component output
+output_data_summary_count_limit: 100

+ 10 - 0
FATE-Flow/conf/template_info.yaml

@@ -0,0 +1,10 @@
+# base dir: fateflow
+template_path:
+  fate_examples: ../examples
+  fateflow_examples: examples
+template_data:
+  base_dir: ../examples/data
+  min_data: ['breast_hetero_guest.csv', 'breast_hetero_host.csv', 'default_credit_hetero_guest.csv', 'default_credit_hetero_host.csv']
+
+delete_path:
+  fateflow_examples: [data]

+ 84 - 0
FATE-Flow/doc/cli/checkpoint.md

@@ -0,0 +1,84 @@
+## Checkpoint
+
+### list
+
+List checkpoints.
+
+```bash
+flow checkpoint list --model-id <model_id> --model-version <model_version> --role <role> --party-id <party_id> --component-name <component_name>
+```
+
+**Options**
+
+| Parameter      | Short Flag | Long Flag          | Optional | Description    |
+| -------------- | ---------- | ------------------ | -------- | -------------- |
+| model_id       |            | `--model-id`       | No       | Model ID       |
+| model_version  |            | `--model-version`  | No       | Model version  |
+| role           | `-r`       | `--role`           | No       | Party role     |
+| party_id       | `-p`       | `--party-id`       | No       | Party ID       |
+| component_name | `-cpn`     | `--component-name` | No       | Component name |
+
+**Example**
+
+```json
+{
+  "retcode": 0,
+  "retmsg": "success",
+  "data": [
+    {
+      "create_time": "2021-11-07T02:34:54.683015",
+      "step_index": 0,
+      "step_name": "step_name",
+      "models": {
+        "HeteroLogisticRegressionMeta": {
+          "buffer_name": "LRModelMeta",
+          "sha1": "6871508f6e6228341b18031b3623f99a53a87147"
+        },
+        "HeteroLogisticRegressionParam": {
+          "buffer_name": "LRModelParam",
+          "sha1": "e3cb636fc93675684bff27117943f5bfa87f3029"
+        }
+      }
+    }
+  ]
+}
+```
+
+### get
+
+Get checkpoint information.
+
+```bash
+flow checkpoint get --model-id <model_id> --model-version <model_version> --role <role> --party-id <party_id> --component-name <component_name> --step-index <step_index>
+```
+
+
+**Options**
+
+| Parameter      | Short Flag | Long Flag          | Optional | Description                                 |
+| -------------- | ---------- | ------------------ | -------- | ------------------------------------------- |
+| model_id       |            | `--model-id`       | No       | Model ID                                    |
+| model_version  |            | `--model-version`  | No       | Model version                               |
+| role           | `-r`       | `--role`           | No       | Party role                                  |
+| party_id       | `-p`       | `--party-id`       | No       | Party ID                                    |
+| component_name | `-cpn`     | `--component-name` | No       | Component name                              |
+| step_index     |            | `--step-index`     | Yes      | Step index, cannot be used with `step_name` |
+| step_name      |            | `--step-name`      | Yes      | Step name, cannot be used with `step_index` |
+
+**Example**
+
+```json
+{
+  "retcode": 0,
+  "retmsg": "success",
+  "data": {
+    "create_time": "2021-11-07T02:34:54.683015",
+    "step_index": 0,
+    "step_name": "step_name",
+    "models": {
+      "HeteroLogisticRegressionMeta": "CgJMMhEtQxzr4jYaPxkAAAAAAADwPyIHcm1zcHJvcDD///////////8BOTMzMzMzM8M/QApKBGRpZmZYAQ==",
+      "HeteroLogisticRegressionParam": "Ig0KAng3EW1qASu+uuO/Ig0KAng0EcNi7a65ReG/Ig0KAng4EbJbl4gvVea/Ig0KAng2EcZwlVZTkOu/Ig0KAngwEVpG8dCbGvG/Ig0KAng5ESJNTx5MLve/Ig0KAngzEZ88H9P8qfO/Ig0KAng1EVfWP8JJv/K/Ig0KAngxEVS0xVXoTem/Ig0KAngyEaApgW32Q/K/KSiiE8AukPs/MgJ4MDICeDEyAngyMgJ4MzICeDQyAng1MgJ4NjICeDcyAng4MgJ4OUj///////////8B"
+    }
+  }
+}
+```

+ 84 - 0
FATE-Flow/doc/cli/checkpoint.zh.md

@@ -0,0 +1,84 @@
+## Checkpoint
+
+### list
+
+获取 Checkpoint 模型列表。
+
+```bash
+flow checkpoint list --model-id <model_id> --model-version <model_version> --role <role> --party-id <party_id> --component-name <component_name>
+```
+
+**选项**
+
+| 参数           | 短格式 | 长格式             | 可选参数 | 说明       |
+| -------------- | ------ | ------------------ | -------- | ---------- |
+| model_id       |        | `--model-id`       | 否       | 模型 ID    |
+| model_version  |        | `--model-version`  | 否       | 模型版本   |
+| role           | `-r`   | `--role`           | 否       | Party 角色 |
+| party_id       | `-p`   | `--party-id`       | 否       | Party ID   |
+| component_name | `-cpn` | `--component-name` | 否       | 组件名     |
+
+**样例**
+
+```json
+{
+  "retcode": 0,
+  "retmsg": "success",
+  "data": [
+    {
+      "create_time": "2021-11-07T02:34:54.683015",
+      "step_index": 0,
+      "step_name": "step_name",
+      "models": {
+        "HeteroLogisticRegressionMeta": {
+          "buffer_name": "LRModelMeta",
+          "sha1": "6871508f6e6228341b18031b3623f99a53a87147"
+        },
+        "HeteroLogisticRegressionParam": {
+          "buffer_name": "LRModelParam",
+          "sha1": "e3cb636fc93675684bff27117943f5bfa87f3029"
+        }
+      }
+    }
+  ]
+}
+```
+
+### get
+
+获取 Checkpoint 模型信息。
+
+```bash
+flow checkpoint get --model-id <model_id> --model-version <model_version> --role <role> --party-id <party_id> --component-name <component_name> --step-index <step_index>
+```
+
+
+**选项**
+
+| 参数           | 短格式 | 长格式             | 可选参数 | 说明                                  |
+| -------------- | ------ | ------------------ | -------- | ------------------------------------- |
+| model_id       |        | `--model-id`       | 否       | 模型 ID                               |
+| model_version  |        | `--model-version`  | 否       | 模型版本                              |
+| role           | `-r`   | `--role`           | 否       | Party 角色                            |
+| party_id       | `-p`   | `--party-id`       | 否       | Party ID                              |
+| component_name | `-cpn` | `--component-name` | 否       | 组件名                                |
+| step_index     |        | `--step-index`     | 是       | Step index,不可与 step_name 同时使用 |
+| step_name      |        | `--step-name`      | 是       | Step name,不可与 step_index 同时使用 |
+
+**样例**
+
+```json
+{
+  "retcode": 0,
+  "retmsg": "success",
+  "data": {
+    "create_time": "2021-11-07T02:34:54.683015",
+    "step_index": 0,
+    "step_name": "step_name",
+    "models": {
+      "HeteroLogisticRegressionMeta": "CgJMMhEtQxzr4jYaPxkAAAAAAADwPyIHcm1zcHJvcDD///////////8BOTMzMzMzM8M/QApKBGRpZmZYAQ==",
+      "HeteroLogisticRegressionParam": "Ig0KAng3EW1qASu+uuO/Ig0KAng0EcNi7a65ReG/Ig0KAng4EbJbl4gvVea/Ig0KAng2EcZwlVZTkOu/Ig0KAngwEVpG8dCbGvG/Ig0KAng5ESJNTx5MLve/Ig0KAngzEZ88H9P8qfO/Ig0KAng1EVfWP8JJv/K/Ig0KAngxEVS0xVXoTem/Ig0KAngyEaApgW32Q/K/KSiiE8AukPs/MgJ4MDICeDEyAngyMgJ4MzICeDQyAng1MgJ4NjICeDcyAng4MgJ4OUj///////////8B"
+    }
+  }
+}
+```

+ 282 - 0
FATE-Flow/doc/cli/data.md

@@ -0,0 +1,282 @@
+## Data
+
+### upload
+
+Used to upload input data for modeling tasks to a storage system supported by fate
+
+```bash
+flow data upload -c ${conf_path}
+```
+
+Note: conf_path is the path to the parameter file; the specific parameters are as follows
+
+**Options**
+
+| parameter name | required | type | description |
+| :------------------ | :--- | :----- | ---------------------------------------------------------------------------------------------------------- |
+| file | yes | string | data storage path |
+| id_delimiter | yes | string | data delimiter, e.g. "," |
+| head | no | int | whether the data has a header row |
+| partition | yes | int | number of data partitions |
+| storage_engine | no | string | storage engine type, default "EGGROLL"; also supports "HDFS", "LOCALFS", "HIVE", etc. |
+| namespace | yes | string | table namespace |
+| table_name | yes | string | table name |
+| storage_address | no | object | storage address required by the corresponding storage engine |
+| use_local_data | no | int | default 1, meaning use data from the client's machine; 0 means use data from the fate flow service's machine |
+| drop | no | int | whether to overwrite on upload |
+| extend_sid | no | bool | whether to add a new uuid id column, default False |
+| auto_increasing_sid | no | bool | whether the new id column is auto-increasing (only takes effect if extend_sid is True), default False |
+
+**meta information**
+
+| parameter name | required | type | description |
+|:---------------------|:----|:-------|-------------------------------------------|
+| input_format | no | string | format of the data (dense, svmlight, tag:value), used to determine how the data is parsed |
+| delimiter | no | string | data delimiter, default "," |
+| tag_with_value | no | bool | valid for the tag data format; whether tags carry a value |
+| tag_value_delimiter | no | string | tag:value data delimiter, default ":" |
+| with_match_id | no | bool | whether to carry a match id |
+| id_list | no | object | names of the id columns, effective when extend_sid is enabled, e.g. ["email", "phone"] |
+| id_range | no | object | for tag/svmlight format data, which columns are ids |
+| exclusive_data_type | no | string | format of special-type data columns |
+| data_type | no | string | column data type, default "float64" |
+| with_label | no | bool | whether the data has a label, default False |
+| label_name | no | string | name of the label, default "y" |
+| label_type | no | string | label type, default "int" |
+
+**In version 1.9.0 and later, passing the meta parameter generates anonymous information for the features (see the meta example below).**
+
+**Example**
+
+- eggroll
+
+  ```json
+  {
+      "file": "examples/data/breast_hetero_guest.csv",
+      "id_delimiter": ",",
+      "head": 1,
+      "partition": 10,
+      "namespace": "experiment",
+      "table_name": "breast_hetero_guest",
+      "storage_engine": "EGGROLL"
+  }
+  ```
+
+- hdfs
+
+  ```json
+  {
+      "file": "examples/data/breast_hetero_guest.csv",
+      "id_delimiter": ",",
+      "head": 1,
+      "partition": 10,
+      "namespace": "experiment",
+      "table_name": "breast_hetero_guest",
+      "storage_engine": "HDFS"
+  }
+  ```
+
+- localfs
+
+  ```json
+  {
+      "file": "examples/data/breast_hetero_guest.csv",
+      "id_delimiter": ",",
+      "head": 1,
+      "partition": 4,
+      "namespace": "experiment",
+      "table_name": "breast_hetero_guest",
+      "storage_engine": "LOCALFS"
+  }
+  ```
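+
+- with meta (a hedged sketch: the meta fields follow the table above, but the
+  `with_meta` switch and `meta` key are assumptions drawn from FATE-Flow 1.9
+  usage, not from this commit's examples)
+
+  ```json
+  {
+      "file": "examples/data/breast_hetero_guest.csv",
+      "id_delimiter": ",",
+      "head": 1,
+      "partition": 10,
+      "namespace": "experiment",
+      "table_name": "breast_hetero_guest",
+      "storage_engine": "EGGROLL",
+      "with_meta": true,
+      "meta": {
+          "delimiter": ",",
+          "with_label": true,
+          "label_name": "y",
+          "label_type": "int"
+      }
+  }
+  ```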
+
+**return parameters** 
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| jobId | string | job id |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | object | return data |
+
+**Example** 
+
+```json
+{
+    "data": {
+        "board_url": "http://xxx.xxx.xxx.xxx:8080/index.html#/dashboard?job_id=202111081218319075660&role=local&party_id=0",
+        "code": 0,
+        "dsl_path": "/data/projects/fate/jobs/202111081218319075660/job_dsl.json",
+        "job_id": "202111081218319075660",
+        "logs_directory": "/data/projects/fate/logs/202111081218319075660",
+        "message": "success",
+        "model_info": {
+            "model_id": "local-0#model",
+            "model_version": "202111081218319075660"
+        },
+        "namespace": "experiment",
+        "pipeline_dsl_path": "/data/projects/fate/jobs/202111081218319075660/pipeline_dsl.json",
+        "runtime_conf_on_party_path": "/data/projects/fate/jobs/202111081218319075660/local/0/job_runtime_on_party_conf.json",
+        "runtime_conf_path":"/data/projects/fate/jobs/202111081218319075660/job_runtime_conf.json",
+        "table_name": "breast_hetero_host",
+        "train_runtime_conf_path":"/data/projects/fate/jobs/202111081218319075660/train_runtime_conf.json"
+    },
+    "jobId": "202111081218319075660",
+    "retcode": 0,
+    "retmsg": "success"
+}
+
+```
+
+### upload-history
+
+Used to query upload table history.
+
+```
+flow data upload-history -l 20
+flow data upload-history --job-id $JOB_ID
+```
+
+**Options**
+
+| parameter name | required | type   | description                                |
+| :------------- | :------- | :----- | ------------------------------------------ |
+| -l --limit     | no       | int    | Number of records to return. (default: 10) |
+| -j --job_id    | no       | string | Job ID                                     |
+
+### download
+
+**Brief description:** 
+
+Used to download data from the fate storage engine to a local file
+
+```bash
+flow data download -c ${conf_path}
+```
+
+Note: conf_path is the path to the parameter file; the specific parameters are as follows
+
+**Options**
+
+| parameter name | required | type | description |
+| :---------- | :--- | :----- | -------------- |
+| output_path | yes | string | download path |
+| table_name | yes | string | fate table name |
+| namespace | yes | string | fate table namespace |
+
+Example:
+
+```json
+{
+  "output_path": "/data/projects/fate/breast_hetero_guest.csv",
+  "namespace": "experiment",
+  "table_name": "breast_hetero_guest"
+}
+```
+
+**return parameters** 
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| jobId | string | job id |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | object | return data |
+
+**Example** 
+
+```json
+{
+    "data": {
+        "board_url": "http://xxx.xxx.xxx.xxx:8080/index.html#/dashboard?job_id=202111081457135282090&role=local&party_id=0",
+        "code": 0,
+        "dsl_path": "/data/projects/fate/jobs/202111081457135282090/job_dsl.json",
+        "job_id": "202111081457135282090",
+        "logs_directory": "/data/projects/fate/logs/202111081457135282090",
+        "message": "success",
+        "model_info": {
+            "model_id": "local-0#model",
+            "model_version": "202111081457135282090"
+        },
+        "pipeline_dsl_path": "/data/projects/fate/jobs/202111081457135282090/pipeline_dsl.json",
+        "runtime_conf_on_party_path": "/data/projects/fate/jobs/202111081457135282090/local/0/job_runtime_on_party_conf.json",
+        "runtime_conf_path": "/data/projects/fate/jobs/202111081457135282090/job_runtime_conf.json",
+        "train_runtime_conf_path": "/data/projects/fate/jobs/202111081457135282090/train_runtime_conf.json"
+    },
+    "jobId": "202111081457135282090",
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### writer
+
+**Brief description:** 
+
+Export data from the fate storage engine to an external storage engine, or save it as a new fate table
+
+```bash
+flow data writer -c ${conf_path}
+```
+
+Note: conf_path is the path to a configuration file containing the following parameters
+
+**Options** 
+
+| parameter name   | required | type   | description                             |
+| :--------------- | :------- | :----- | --------------------------------------- |
+| table_name       | yes      | string | fate table name                         |
+| namespace        | yes      | string | fate table namespace                    |
+| storage_engine   | no       | string | storage type, e.g. MYSQL                |
+| address          | no       | object | storage address of the external engine  |
+| output_namespace | no       | string | namespace of the new fate table         |
+| output_name      | no       | string | name of the new fate table              |
+
+**Note:** `storage_engine` and `address` must be provided together to export data to the specified external engine; likewise, `output_namespace` and `output_name` must be provided together to save the data as a new table in the same engine.
+
+Example:
+
+```json
+{
+  "table_name": "name1",
+  "namespace": "namespace1",
+  "output_name": "name2",
+  "output_namespace": "namespace2"
+}
+```
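+
+If the data should instead be exported to an external engine, `storage_engine` and `address` are supplied together. A minimal sketch for a MySQL target follows; the exact fields inside `address` are illustrative assumptions and depend on how the target engine is configured:
+
+```json
+{
+  "table_name": "name1",
+  "namespace": "namespace1",
+  "storage_engine": "MYSQL",
+  "address": {
+    "user": "fate",
+    "passwd": "fate_dev",
+    "host": "127.0.0.1",
+    "port": 3306,
+    "db": "experiment",
+    "name": "name1"
+  }
+}
+```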
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| jobId | string | job id |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | object | return data |
+
+**Example** 
+
+```json
+{
+    "data": {
+        "board_url": "http://xxx.xxx.xxx.xxx:8080/index.html#/dashboard?job_id=202201121235115028490&role=local&party_id=0",
+        "code": 0,
+        "dsl_path": "/data/projects/fate/fateflow/jobs/202201121235115028490/job_dsl.json",
+        "job_id": "202201121235115028490",
+        "logs_directory": "/data/projects/fate/fateflow/logs/202201121235115028490",
+        "message": "success",
+        "model_info": {
+            "model_id": "local-0#model",
+            "model_version": "202201121235115028490"
+        },
+        "pipeline_dsl_path": "/data/projects/fate/fateflow/jobs/202201121235115028490/pipeline_dsl.json",
+        "runtime_conf_on_party_path": "/data/projects/fate/fateflow/jobs/202201121235115028490/local/0/job_runtime_on_party_conf.json",
+        "runtime_conf_path":"/data/projects/fate/fateflow/jobs/202201121235115028490/job_runtime_conf.json",
+        "train_runtime_conf_path": "/data/projects/fate/fateflow/jobs/202201121235115028490/train_runtime_conf.json"
+    },
+    "jobId": "202201121235115028490",
+    "retcode": 0,
+    "retmsg": "success"
+}
+```

+ 285 - 0
FATE-Flow/doc/cli/data.zh.md

@@ -0,0 +1,285 @@
+## Data
+
+### upload
+
+用于上传建模任务的输入数据到fate所支持的存储系统
+
+```bash
+flow data upload -c ${conf_path}
+```
+
+注: conf_path为参数路径,具体参数如下
+
+**选项** 
+
+| 参数名                 | 必选  | 类型     | 说明                                              |
+|:--------------------|:----|:-------|-------------------------------------------------|
+| file                | 是   | string | 数据存储路径                                          |
+| id_delimiter        | 是   | string | 数据分隔符,如","                                      |
+| head                | 否   | int    | 数据是否有表头                                         |
+| partition           | 是   | int    | 数据分区数                                           |
+| storage_engine      | 否   | string | 存储引擎类型,默认"EGGROLL",还支持"HDFS","LOCALFS", "HIVE"等 |
+| namespace           | 是   | string | 表命名空间                                           |
+| table_name          | 是   | string | 表名                                              |
+| storage_address     | 否   | object | 需要填写对应存储引擎的存储地址                                 |
+| use_local_data      | 否   | int    | 默认1,代表使用client机器的数据;0代表使用fate flow服务所在机器的数据     |
+| drop                | 否   | int    | 是否覆盖上传                                          |
+| extend_sid          | 否   | bool   | 是否新增一列uuid id,默认False                           |
+| auto_increasing_sid | 否   | bool   | 新增的id列是否自增(extend_sid为True才会生效), 默认False        |
+| with_meta           | 否   | bool   | 是否携带meta数据, 默认False                             |
+| meta                | 否   | object | 元数据, 默认为空,with_meta为true生效                      |
+
+**meta信息**
+
+| 参数名                  | 必选  | 类型     | 说明                                        |
+|:---------------------|:----|:-------|-------------------------------------------|
+| input_format         | 否   | string | 数据格式(dense、svmlight、tag:value),用来判断       |
+| delimiter            | 否   | string | 数据分隔符,默认","                               |
+| tag_with_value       | 否   | bool   | 对tag的数据格式生效,是否携带value                     |
+| tag_value_delimiter  | 否   | string | tag:value数据分隔符,默认":"                      |
+| with_match_id        | 否   | bool   | 是否携带match id                              |
+| id_list              | 否   | object | id列名称,开启extend_sid下生效,如:["imei", "phone"] |
+| id_range             | 否   | object | 对于tag/svmlight格式数据,哪几列为id                 |
+| exclusive_data_type  | 否   | string | 特殊类型数据列格式                                 |
+| data_type            | 否   | string | 列数据类型,默认"float64"                          |
+| with_label           | 否   | bool   | 是否有标签,默认False                             |
+| label_name           | 否   | string | 标签名,默认"y"                                 |
+| label_type           | 否   | string | 标签类型, 默认"int"                             |
+
+**注意:在1.9.0及之后的版本中,若传入meta参数,会生成特征的匿名信息。**
+
+**样例** 
+
+- eggroll
+
+  ```json
+  {
+      "file": "examples/data/breast_hetero_guest.csv",
+      "id_delimiter": ",",
+      "head": 1,
+      "partition": 10,
+      "namespace": "experiment",
+      "table_name": "breast_hetero_guest",
+      "storage_engine": "EGGROLL"
+  }
+  ```
+
+- hdfs
+
+  ```json
+  {
+      "file": "examples/data/breast_hetero_guest.csv",
+      "id_delimiter": ",",
+      "head": 1,
+      "partition": 10,
+      "namespace": "experiment",
+      "table_name": "breast_hetero_guest",
+      "storage_engine": "HDFS"
+  }
+  ```
+
+- localfs
+
+  ```json
+  {
+      "file": "examples/data/breast_hetero_guest.csv",
+      "id_delimiter": ",",
+      "head": 1,
+      "partition": 4,
+      "namespace": "experiment",
+      "table_name": "breast_hetero_guest",
+      "storage_engine": "LOCALFS"
+  }
+  ```
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| jobId   | string | 任务id   |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | object | 返回数据 |
+
+**样例** 
+
+```json
+{
+    "data": {
+        "board_url": "http://xxx.xxx.xxx.xxx:8080/index.html#/dashboard?job_id=202111081218319075660&role=local&party_id=0",
+        "code": 0,
+        "dsl_path": "/data/projects/fate/jobs/202111081218319075660/job_dsl.json",
+        "job_id": "202111081218319075660",
+        "logs_directory": "/data/projects/fate/logs/202111081218319075660",
+        "message": "success",
+        "model_info": {
+            "model_id": "local-0#model",
+            "model_version": "202111081218319075660"
+        },
+        "namespace": "experiment",
+        "pipeline_dsl_path": "/data/projects/fate/jobs/202111081218319075660/pipeline_dsl.json",
+        "runtime_conf_on_party_path": "/data/projects/fate/jobs/202111081218319075660/local/0/job_runtime_on_party_conf.json",
+        "runtime_conf_path": "/data/projects/fate/jobs/202111081218319075660/job_runtime_conf.json",
+        "table_name": "breast_hetero_host",
+        "train_runtime_conf_path": "/data/projects/fate/jobs/202111081218319075660/train_runtime_conf.json"
+    },
+    "jobId": "202111081218319075660",
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### upload-history
+
+用于查询上传历史
+
+```
+flow data upload-history -l 20
+flow data upload-history --job-id $JOB_ID
+```
+
+**选项**
+
+| 参数名      | 必选 | 类型   | 说明                |
+| :---------- | :--- | :----- | ------------------- |
+| -l, --limit  | no   | int    | 返回数量 (默认: 10) |
+| -j, --job_id | no   | string | 任务ID              |
+
+### download
+
+**简要描述:** 
+
+用于下载fate存储引擎内的数据到文件格式数据
+
+```bash
+flow data download -c ${conf_path}
+```
+
+注: conf_path为参数路径,具体参数如下
+
+**选项** 
+
+| 参数名      | 必选 | 类型   | 说明           |
+| :---------- | :--- | :----- | -------------- |
+| output_path | 是   | string | 下载路径       |
+| table_name  | 是   | string | fate表名       |
+| namespace   | 是   | string | fate表命名空间 |
+
+样例:
+
+```json
+{
+  "output_path": "/data/projects/fate/breast_hetero_guest.csv",
+  "namespace": "experiment",
+  "table_name": "breast_hetero_guest"
+}
+```
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| jobId | string | 任务id |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | object | 返回数据 |
+
+**样例**
+
+```json
+{
+    "data": {
+        "board_url": "http://xxx.xxx.xxx.xxx:8080/index.html#/dashboard?job_id=202111081457135282090&role=local&party_id=0",
+        "code": 0,
+        "dsl_path": "/data/projects/fate/jobs/202111081457135282090/job_dsl.json",
+        "job_id": "202111081457135282090",
+        "logs_directory": "/data/projects/fate/logs/202111081457135282090",
+        "message": "success",
+        "model_info": {
+            "model_id": "local-0#model",
+            "model_version": "202111081457135282090"
+        },
+        "pipeline_dsl_path": "/data/projects/fate/jobs/202111081457135282090/pipeline_dsl.json",
+        "runtime_conf_on_party_path": "/data/projects/fate/jobs/202111081457135282090/local/0/job_runtime_on_party_conf.json",
+        "runtime_conf_path": "/data/projects/fate/jobs/202111081457135282090/job_runtime_conf.json",
+        "train_runtime_conf_path": "/data/projects/fate/jobs/202111081457135282090/train_runtime_conf.json"
+    },
+    "jobId": "202111081457135282090",
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### writer
+
+**简要描述:** 
+
+用于下载fate存储引擎内的数据到外部引擎或者将数据另存为新表
+
+```bash
+flow data writer -c ${conf_path}
+```
+
+注: conf_path为参数路径,具体参数如下
+
+**选项** 
+
+| 参数名      | 必选 | 类型   | 说明           |
+| :---------- | :--- | :----- | -------------- |
+| table_name  | 是   | string | fate表名       |
+| namespace   | 是   | string | fate表命名空间 |
+| storage_engine  | 否   | string    | 存储类型,如:MYSQL |
+| address   | 否   | object    | 存储地址 |
+| output_namespace   | 否   | string    | 另存为fate的表命名空间 |
+| output_name   | 否   | string    | 另存为fate的表名 |
+
+**注: storage_engine、address是组合参数,提供存储到指定引擎的功能;
+output_namespace、output_name也是组合参数,提供另存为同种引擎的新表功能**
+
+样例:
+
+```json
+{
+  "table_name": "name1",
+  "namespace": "namespace1",
+  "output_name": "name2",
+  "output_namespace": "namespace2"
+}
+```
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| jobId | string | 任务id |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | object | 返回数据 |
+
+**样例**
+
+```json
+{
+    "data": {
+        "board_url": "http://xxx.xxx.xxx.xxx:8080/index.html#/dashboard?job_id=202201121235115028490&role=local&party_id=0",
+        "code": 0,
+        "dsl_path": "/data/projects/fate/fateflow/jobs/202201121235115028490/job_dsl.json",
+        "job_id": "202201121235115028490",
+        "logs_directory": "/data/projects/fate/fateflow/logs/202201121235115028490",
+        "message": "success",
+        "model_info": {
+            "model_id": "local-0#model",
+            "model_version": "202201121235115028490"
+        },
+        "pipeline_dsl_path": "/data/projects/fate/fateflow/jobs/202201121235115028490/pipeline_dsl.json",
+        "runtime_conf_on_party_path": "/data/projects/fate/fateflow/jobs/202201121235115028490/local/0/job_runtime_on_party_conf.json",
+        "runtime_conf_path": "/data/projects/fate/fateflow/jobs/202201121235115028490/job_runtime_conf.json",
+        "train_runtime_conf_path": "/data/projects/fate/fateflow/jobs/202201121235115028490/train_runtime_conf.json"
+    },
+    "jobId": "202201121235115028490",
+    "retcode": 0,
+    "retmsg": "success"
+}
+```

+ 277 - 0
FATE-Flow/doc/cli/job.md

@@ -0,0 +1,277 @@
+## Job
+
+### submit
+
+Build a federated learning job with two configuration files: job dsl and job conf, and submit it to the scheduler for execution
+
+```bash
+flow job submit [options]
+```
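+
+For example, pointing at the two configuration files directly (using this document's `${...}` placeholder convention):
+
+```bash
+flow job submit -d ${dsl_path} -c ${conf_path}
+```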
+
+**Options**
+
+| parameter name  | required | type   | description     |
+| :-------------- | :------- | :----- | --------------- |
+| -d, --dsl-path  | yes      | string | path to job dsl |
+| -c, --conf-path | yes      | string | job conf's path |
+
+**Returns**
+
+| parameter name                  | type   | description                                                                                                           |
+| :------------------------------ | :----- | --------------------------------------------------------------------------------------------------------------------- |
+| retcode                         | int    | return code                                                                                                           |
+| retmsg                          | string | return message                                                                                                    |
+| jobId                           | string | Job ID                                                                                                                |
+| data                            | dict   | return data                                                                                                           |
+| data.dsl_path                   | string | The path to the actual running dsl configuration generated by the system based on the submitted dsl content           |
+| data.runtime_conf_on_party_path | string | The system-generated path to the actual running conf configuration for each party based on the submitted conf content |
+| data.board_url                  | string | fateboard view address                                                                                                |
+| data.model_info                 | dict   | Model identification information                                                                                      |
+
+**Example**
+
+```json
+{
+    "data": {
+        "board_url": "http://127.0.0.1:8080/index.html#/dashboard?job_id=202111061608424372620&role=guest&party_id=9999",
+        "code": 0,
+        "dsl_path": "$FATE_PROJECT_BASE/jobs/202111061608424372620/job_dsl.json",
+        "job_id": "202111061608424372620",
+        "logs_directory": "$FATE_PROJECT_BASE/logs/202111061608424372620",
+        "message": "success",
+        "model_info": {
+            "model_id": "arbiter-10000#guest-9999#host-10000#model",
+            "model_version": "202111061608424372620"
+        },
+        "pipeline_dsl_path": "$FATE_PROJECT_BASE/jobs/202111061608424372620/pipeline_dsl.json",
+        "runtime_conf_on_party_path": "$FATE_FATE_PROJECT_BASE/jobs/202111061608424372620/guest/9999/job_runtime_on_party_conf.json",
+        "runtime_conf_path":"$FATE_PROJECT_BASE/jobs/202111061608424372620/job_runtime_conf.json",
+        "train_runtime_conf_path": "$FATE_PROJECT_BASE/jobs/202111061608424372620/train_runtime_conf.json"
+    },
+    "jobId": "202111061608424372620",
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### rerun
+
+Rerun a job
+
+```bash
+flow job rerun [options]
+```
+
+**Options**
+
+| parameter name | required | type | description |
+| :------------- | :------- | :----- | ----------- |
+| -j, --job-id | yes | string | job id |
+| -cpn, --component-name | no | string | Specifies which component to rerun from; components with no upstream dependency on the specified component will not be executed. If not specified, the entire job is rerun |
+| --force | no | bool | The job will be rerun even if it succeeds; if not specified, the job will be skipped if it succeeds |
+
+**Returns**
+
+| parameter name | type   | description        |
+| :------------- | :----- | ------------------ |
+| retcode        | int    | return code        |
+| retmsg         | string | return message |
+| jobId          | string | Job ID             |
+| data           | dict   | return data        |
+
+**Example**
+
+```bash
+flow job rerun -j 202111031100369723120
+```
+
+```bash
+flow job rerun -j 202111031100369723120 -cpn hetero_lr_0
+```
+
+```bash
+flow job rerun -j 202111031100369723120 -cpn hetero_lr_0 --force 
+```
+
+### parameter-update
+
+Update the job parameters
+
+```bash
+flow job parameter-update [options]
+```
+
+**Options**
+
+| parameter-name  | required | type   | description                                                                                                        |
+| :-------------- | :------- | :----- | ------------------------------------------------------------------------------------------------------------------ |
+| -j, --job-id    | yes      | string | job id                                                                                                             |
+| -c, --conf-path | yes      | string | The contents of the job conf that needs to be updated, no need to fill in parameters that don't need to be updated |
+
+**Returns**
+
+| parameter name | type   | description                  |
+| :------------- | :----- | ---------------------------- |
+| retcode        | int    | return code                  |
+| retmsg         | string | return message           |
+| jobId          | string | Job ID                       |
+| data           | dict   | Returns the updated job conf |
+
+**Example**
+
+Suppose we want to update some execution parameters of the hetero_lr_0 component; the configuration file is as follows.
+```json
+{
+  "job_parameters": {
+  },
+  "component_parameters": {
+    "common": {
+      "hetero_lr_0": {
+        "alpha": 0.02,
+        "max_iter": 5
+      }
+    }
+  }
+}
+```
+
+Run the following command to apply the update.
+
+```bash
+flow job parameter-update -j 202111061957421943730 -c examples/other/update_parameters.json
+```
+
+Then run the following command to rerun the job.
+
+```bash
+flow job rerun -j 202111061957421943730 -cpn hetero_lr_0 --force 
+```
+
+### stop
+
+Cancels or terminates the specified job
+
+**Options**
+
+| number | parameters | short format | long format | required parameters | parameter description |
+| ------ | ---------- | ------------ | ----------- | ------------------- | --------------------- |
+| 1      | job_id     | `-j`         | `--job_id`  | yes                 | Job ID                |
+
+**Example**
+
+``` bash
+flow job stop -j $JOB_ID
+```
+
+### query
+
+Retrieve job information.
+**Options**
+
+| number | parameters | short-format | long-format  | required parameters | parameter description |
+| ------ | ---------- | ------------ | ------------ | ------------------- | --------------------- |
+| 1      | job_id     | `-j`         | `--job_id`   | no                  | Job ID                |
+| 2      | role       | `-r`         | `--role`     | no                  | role                  |
+| 3      | party_id   | `-p`         | `--party_id` | no                  | Party ID              |
+| 4      | status     | `-s`         | `--status`   | no                  | Task status           |
+
+**Example**
+
+``` bash
+flow job query -r guest -p 9999 -s complete
+flow job query -j $JOB_ID
+```
+
+### view
+
+Retrieve the job data view.
+**Options**
+
+| number | parameters | short-format | long-format  | required parameters | parameter description |
+| ------ | ---------- | ------------ | ------------ | ------------------- | --------------------- |
+| 1      | job_id     | `-j`         | `--job_id`   | yes                 | Job ID                |
+| 2      | role       | `-r`         | `--role`     | no                  | role                  |
+| 3      | party_id   | `-p`         | `--party_id` | no                  | Party ID              |
+| 4      | status     | `-s`         | `--status`   | no                  | Task status           |
+
+**Example**
+
+``` bash
+flow job view -j $JOB_ID -s complete
+```
+
+### config
+
+Download the configuration file for the specified job to the specified directory.
+
+**Options**
+
+| number | parameters  | short-format | long-format     | required parameters | parameter description |
+| ------ | ----------- | ------------ | --------------- | ------------------- | --------------------- |
+| 1      | job_id      | `-j`         | `--job_id`      | yes                 | Job ID                |
+| 2      | role        | `-r`         | `--role`        | yes                 | role                  |
+| 3      | party_id    | `-p`         | `--party_id`    | yes                 | Party ID              |
+| 4      | output_path | `-o`         | `--output-path` | yes                 | output directory      |
+
+**Example**
+
+``` bash
+flow job config -j $JOB_ID -r host -p 10000 --output-path ./examples/
+```
+
+### log
+
+Download the log file of the specified job to the specified directory.
+**Options**
+
+| number | parameters  | short-format | long-format     | required parameters | parameter description |
+| ------ | ----------- | ------------ | --------------- | ------------------- | --------------------- |
+| 1      | job_id      | `-j`         | `--job_id`      | yes                 | Job ID                |
+| 2      | output_path | `-o`         | `--output-path` | yes                 | output directory      |
+
+**Example**
+
+``` bash
+flow job log -j $JOB_ID --output-path ./examples/
+```
+
+### list
+
+Show the list of jobs.
+**Options**
+
+| number | parameters | short-format | long-format | required parameters | parameter description                  |
+| ------ | ---------- | ------------ | ----------- | ------------------- | -------------------------------------- |
+| 1      | limit      | `-l`         | `--limit`   | no                  | Maximum number of records to return (default: 10) |
+
+**Example**
+
+``` bash
+flow job list
+flow job list -l 30
+```
+
+### dsl
+
+Predict DSL file generator.
+**Options**
+
+| number | parameters     | short-format | long-format       | required parameters | parameter description                                        |
+| ------ | -------------- | ------------ | ----------------- | ------------------- | ------------------------------------------------------------ |
+| 1      | cpn_list       |              | `--cpn-list`       | no                  | List of user-specified component names                       |
+| 2      | cpn_path       |              | `--cpn-path`       | no                  | User-specified path to a file with a list of component names |
+| 3      | train_dsl_path |              | `--train-dsl-path` | yes                 | path to the training dsl file                                |
+| 4      | output_path    | `-o`         | `--output-path`   | no                  | output directory path                                        |
+
+**Example**
+
+``` bash
+flow job dsl --cpn-path fate_flow/examples/component_list.txt --train-dsl-path fate_flow/examples/test_hetero_lr_job_dsl.json
+
+flow job dsl --cpn-path fate_flow/examples/component_list.txt --train-dsl-path fate_flow/examples/test_hetero_lr_job_dsl.json -o fate_flow/examples/
+
+flow job dsl --cpn-list "dataio_0, hetero_feature_binning_0, hetero_feature_selection_0, evaluation_0" --train-dsl-path fate_flow/examples/test_hetero_lr_job_dsl.json -o fate_flow/examples/
+
+flow job dsl --cpn-list [dataio_0,hetero_feature_binning_0,hetero_feature_selection_0,evaluation_0] --train-dsl-path fate_flow/examples/test_hetero_lr_job_dsl.json -o fate_flow/examples/
+```

+ 275 - 0
FATE-Flow/doc/cli/job.zh.md

@@ -0,0 +1,275 @@
+## Job
+
+### submit
+
+通过两个配置文件:job dsl和job conf构建一个联邦学习作业,提交到调度系统执行
+
+```bash
+flow job submit [options]
+```
+
+**选项**
+
+| 参数名          | 必选 | 类型   | 说明           |
+| :-------------- | :--- | :----- | -------------- |
+| -d, --dsl-path  | 是   | string | job dsl的路径  |
+| -c, --conf-path | 是   | string | job conf的路径 |
+
+**返回**
+
+| 参数名                          | 类型   | 说明                                                                  |
+| :------------------------------ | :----- | --------------------------------------------------------------------- |
+| retcode                         | int    | 返回码                                                                |
+| retmsg                          | string | 返回信息                                                              |
+| jobId                           | string | 作业ID                                                                |
+| data                            | dict   | 返回数据                                                              |
+| data.dsl_path                   | string | 依据提交的dsl内容,由系统生成的实际运行dsl配置的存放路径              |
+| data.runtime_conf_on_party_path | string | 依据提交的conf内容,由系统生成的在每个party实际运行conf配置的存放路径 |
+| data.board_url                  | string | fateboard查看地址                                                     |
+| data.model_info                 | dict   | 模型标识信息                                                          |
+
+**样例** 
+
+```json
+{
+    "data": {
+        "board_url": "http://127.0.0.1:8080/index.html#/dashboard?job_id=202111061608424372620&role=guest&party_id=9999",
+        "code": 0,
+        "dsl_path": "$FATE_PROJECT_BASE/jobs/202111061608424372620/job_dsl.json",
+        "job_id": "202111061608424372620",
+        "logs_directory": "$FATE_PROJECT_BASE/logs/202111061608424372620",
+        "message": "success",
+        "model_info": {
+            "model_id": "arbiter-10000#guest-9999#host-10000#model",
+            "model_version": "202111061608424372620"
+        },
+        "pipeline_dsl_path": "$FATE_PROJECT_BASE/jobs/202111061608424372620/pipeline_dsl.json",
+        "runtime_conf_on_party_path": "$FATE_FATE_PROJECT_BASE/jobs/202111061608424372620/guest/9999/job_runtime_on_party_conf.json",
+        "runtime_conf_path": "$FATE_PROJECT_BASE/jobs/202111061608424372620/job_runtime_conf.json",
+        "train_runtime_conf_path": "$FATE_PROJECT_BASE/jobs/202111061608424372620/train_runtime_conf.json"
+    },
+    "jobId": "202111061608424372620",
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### rerun
+
+重新运行某个作业
+
+```bash
+flow job rerun [options]
+```
+
+**选项**
+
+| 参数名                 | 必选 | 类型   | 说明                                                                                                  |
+| :--------------------- | :--- | :----- | ----------------------------------------------------------------------------------------------------- |
+| -j, --job-id           | 是   | string | job id                                                                                                |
+| -cpn, --component-name | 否   | string | 指定从哪个组件重跑,没被指定的组件若与指定组件没有上游依赖关系则不会执行;若不指定该参数则整个作业重跑 |
+| --force                | 否   | bool   | 作业即使成功也重跑;若不指定该参数,作业如果成功,则跳过重跑                                           |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| jobId   | string | 作业ID   |
+| data    | dict   | 返回数据 |
+
+**样例** 
+
+```bash
+flow job rerun -j 202111031100369723120
+```
+
+```bash
+flow job rerun -j 202111031100369723120 -cpn hetero_lr_0
+```
+
+```bash
+flow job rerun -j 202111031100369723120 -cpn hetero_lr_0 --force 
+```
+
+### parameter-update
+
+更新作业参数
+
+```bash
+flow job parameter-update [options]
+```
+
+**选项**
+
+| 参数名          | 必选 | 类型   | 说明                                                 |
+| :-------------- | :--- | :----- | ---------------------------------------------------- |
+| -j, --job-id    | 是   | string | job id                                               |
+| -c, --conf-path | 是   | string | 需要更新的job conf的内容,不需要更新的参数不需要填写 |
+
+**返回**
+
+| 参数名  | 类型   | 说明                 |
+| :------ | :----- | -------------------- |
+| retcode | int    | 返回码               |
+| retmsg  | string | 返回信息             |
+| jobId   | string | 作业ID               |
+| data    | dict   | 返回更新后的job conf |
+
+**样例** 
+
+假设更新job中hetero_lr_0这个组件的部分执行参数,配置文件如下:
+```json
+{
+  "job_parameters": {
+  },
+  "component_parameters": {
+    "common": {
+      "hetero_lr_0": {
+        "alpha": 0.02,
+        "max_iter": 5
+      }
+    }
+  }
+}
+```
+
+执行如下命令生效:
+
+```bash
+flow job parameter-update -j 202111061957421943730 -c examples/other/update_parameters.json
+```
+
+执行如下命令重跑:
+
+```bash
+flow job rerun -j 202111061957421943730 -cpn hetero_lr_0 --force 
+```
+
+### stop
+
+取消或终止指定任务
+
+**选项**
+
+| 编号 | 参数   | 短格式 | 长格式     | 必要参数 | 参数介绍 |
+| ---- | ------ | ------ | ---------- | -------- | -------- |
+| 1    | job_id | `-j`   | `--job_id` | 是       | Job ID   |
+
+**样例**
+
+``` bash
+flow job stop -j $JOB_ID
+```
+
+### query
+
+检索任务信息。
+**选项**
+
+| 编号 | 参数     | 短格式 | 长格式       | 必要参数 | 参数介绍 |
+| ---- | -------- | ------ | ------------ | -------- | -------- |
+| 1    | job_id   | `-j`   | `--job_id`   | 否       | Job ID   |
+| 2    | role     | `-r`   | `--role`     | 否       | 角色     |
+| 3    | party_id | `-p`   | `--party_id` | 否       | Party ID |
+| 4    | status   | `-s`   | `--status`   | 否       | 任务状态 |
+
+**样例**:
+
+``` bash
+flow job query -r guest -p 9999 -s complete
+flow job query -j $JOB_ID
+```
+
+### view
+
+检索任务数据视图。
+**选项**
+
+| 编号 | 参数     | 短格式 | 长格式       | 必要参数 | 参数介绍 |
+| ---- | -------- | ------ | ------------ | -------- | -------- |
+| 1    | job_id   | `-j`   | `--job_id`   | 是       | Job ID   |
+| 2    | role     | `-r`   | `--role`     | 否       | 角色     |
+| 3    | party_id | `-p`   | `--party_id` | 否       | Party ID |
+| 4    | status   | `-s`   | `--status`   | 否       | 任务状态 |
+
+**样例**:
+
+``` bash
+flow job view -j $JOB_ID -s complete
+```
+
+### config
+
+下载指定任务的配置文件到指定目录。
+**选项**
+
+| 编号 | 参数        | 短格式 | 长格式          | 必要参数 | 参数介绍 |
+| ---- | ----------- | ------ | --------------- | -------- | -------- |
+| 1    | job_id      | `-j`   | `--job_id`      | 是       | Job ID   |
+| 2    | role        | `-r`   | `--role`        | 是       | 角色     |
+| 3    | party_id    | `-p`   | `--party_id`    | 是       | Party ID |
+| 4    | output_path | `-o`   | `--output-path` | 是       | 输出目录 |
+
+**样例**:
+
+``` bash
+flow job config -j $JOB_ID -r host -p 10000 --output-path ./examples/
+```
+
+### log
+
+下载指定任务的日志文件到指定目录。
+**选项**
+
+| 编号 | 参数        | 短格式 | 长格式          | 必要参数 | 参数介绍 |
+| ---- | ----------- | ------ | --------------- | -------- | -------- |
+| 1    | job_id      | `-j`   | `--job_id`      | 是       | Job ID   |
+| 2    | output_path | `-o`   | `--output-path` | 是       | 输出目录 |
+
+**样例**:
+
+``` bash
+flow job log -j JOB_ID --output-path ./examples/
+```
+
+### list
+
+展示任务列表。
+**选项**
+
+| 编号 | 参数  | 短格式 | 长格式    | 必要参数 | 参数介绍                 |
+| ---- | ----- | ------ | --------- | -------- | ------------------------ |
+| 1    | limit | `-l`   | `--limit` | 否       | 返回数量限制(默认:10) |
+
+**样例**:
+
+``` bash
+flow job list
+flow job list -l 30
+```
+
+### dsl
+
+预测DSL文件生成器。
+**选项**
+
+| 编号 | 参数           | 短格式 | 长格式             | 必要参数 | 参数介绍                         |
+| ---- | -------------- | ------ | ------------------ | -------- | -------------------------------- |
+| 1    | cpn_list       |        | `--cpn-list`       | 否       | 用户指定组件名列表               |
+| 2    | cpn_path       |        | `--cpn-path`       | 否       | 用户指定带有组件名列表的文件路径 |
+| 3    | train_dsl_path |        | `--train-dsl-path` | 是       | 训练dsl文件路径                  |
+| 4    | output_path    | `-o`   | `--output-path`    | 否       | 输出目录路径                     |
+
+**样例**:
+
+``` bash
+flow job dsl --cpn-path fate_flow/examples/component_list.txt --train-dsl-path fate_flow/examples/test_hetero_lr_job_dsl.json
+
+flow job dsl --cpn-path fate_flow/examples/component_list.txt --train-dsl-path fate_flow/examples/test_hetero_lr_job_dsl.json -o fate_flow/examples/
+
+flow job dsl --cpn-list "dataio_0, hetero_feature_binning_0, hetero_feature_selection_0, evaluation_0" --train-dsl-path fate_flow/examples/test_hetero_lr_job_dsl.json -o fate_flow/examples/
+
+flow job dsl --cpn-list [dataio_0,hetero_feature_binning_0,hetero_feature_selection_0,evaluation_0] --train-dsl-path fate_flow/examples/test_hetero_lr_job_dsl.json -o fate_flow/examples/
+```

+ 101 - 0
FATE-Flow/doc/cli/key.md

@@ -0,0 +1,101 @@
+## Key
+
+### query
+
+Query the public key information of the local or a partner fate site
+
+```bash
+flow key query -p 9999
+```
+**Options** 
+
+| parameters | short-format | long-format | required | type | description |
+| :-------- | :-----| :-----| :-----| :-----| -------------- |
+| party_id | `-p` | `--party-id` | yes | string | site id |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | object | return data |
+
+**Example**
+
+```json
+{
+  "data": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzxgbxa3cfhvwbu0AFfY/\ nkm7uFZ17J0EEDgaIWlrLakds7XboU5iOT0eReQp/KG3R0fVM9rBtdj8NcBcArtZ9\n2242Atls3jiuza/MPPo9XACnedGW7O+ VAfvVmq2sdmKZMX5l7krEXYN645UZAd8b\nhIh+xf0qGW6IgxyKvqF13VxxB7OMUzUwyY/ZcN2rW1urfdXsCNoQ1cFl3KaarkHl\nn/ gBMcCDvACXoKysFnFE7L4E7CGglYaDBJrfIyti+sbSVNxUDx2at2VXqj/PohTa\nkBKfrgK7sT85gz1sc9uRwhwF4nOY7izq367S7t/W8BJ75gWsr+lhhiIfE19RBbBQ\n /wIDAQAB\n-----END PUBLIC KEY-----",
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+### save
+
+Save the public key of another fate site, i.e. establish cooperation with that site
+
+```bash
+flow key save -c fateflow/examples/key/save_public_key.json
+```
+
+**Options** 
+
+| parameters | short format | long format | required | type | description |
+| :-------- | :-----| :-----| :-----| :----- | -------------- |
+| conf_path | `-c` | `--conf-path` | yes | string | configuration file path |
+
+Note: conf_path is the path to a configuration file containing the following parameters
+
+| parameter name | required | type | description |
+|:---------------| :--- | :----- |---------------------------------|
+| party_id | yes | string | site id |
+| key | yes | string | site public key |
+
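+A minimal sketch of what `save_public_key.json` might contain, using the two documented fields and a truncated placeholder for the key:
+
+```json
+{
+  "party_id": "9999",
+  "key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A...\n-----END PUBLIC KEY-----"
+}
+```
+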
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+
+
+**Example**
+
+```json
+{
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### delete
+
+Delete a partner site's public key, i.e. cancel the partnership
+
+```bash
+flow key delete -p 9999
+```
+
+**Options** 
+
+| parameters | short-format | long-format | required | type | description |
+| :------ | :----- | :-----| :-----| :-----| -------- |
+| party_id | `-p` | `--party-id` | yes | string | site id |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+
+
+**Example**
+
+```json
+{
+    "retcode": 0,
+    "retmsg": "success"
+}
+```

+ 101 - 0
FATE-Flow/doc/cli/key.zh.md

@@ -0,0 +1,101 @@
+## Key
+
+### query
+
+用于查询本方或合作方fate站点公钥信息
+
+```bash
+flow key query -p 9999
+```
+**选项** 
+
+| 参数    | 短格式 | 长格式 | 必选 | 类型   | 说明           |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| party_id | `-p` | `--party-id` |是   | string | 站点id |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | object | 返回数据 |
+
+样例
+
+```json
+{
+  "data": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzxgbxa3cfhvwbu0AFfY/\nkm7uFZ17J0EEDgaIWlrLakds7XboU5iOT0eReQp/KG3R0fVM9rBtdj8NcBcArtZ9\n2242Atls3jiuza/MPPo9XACnedGW7O+VAfvVmq2sdmKZMX5l7krEXYN645UZAd8b\nhIh+xf0qGW6IgxyKvqF13VxxB7OMUzUwyY/ZcN2rW1urfdXsCNoQ1cFl3KaarkHl\nn/gBMcCDvACXoKysFnFE7L4E7CGglYaDBJrfIyti+sbSVNxUDx2at2VXqj/PohTa\nkBKfrgK7sT85gz1sc9uRwhwF4nOY7izq367S7t/W8BJ75gWsr+lhhiIfE19RBbBQ\n/wIDAQAB\n-----END PUBLIC KEY-----",
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+### save
+
+用于保存其它fate站点公钥信息,即为和其他站点合作
+
+```bash
+flow key save -c fateflow/examples/key/save_public_key.json
+```
+
+**选项** 
+
+| 参数    | 短格式 | 长格式 | 必选 | 类型   | 说明           |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| conf_path | `-c`   |`--conf-path`   |是   | string | 配置路径  |
+
+注: conf_path为参数路径,具体参数如下
+
+| 参数名            | 必选 | 类型   | 说明                              |
+|:---------------| :--- | :----- |---------------------------------|
+| party_id       | 是   | string | 站点id                            |
+| key            | 是   | string | 站点公钥                            |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+
+
+样例
+
+```json
+{
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### delete
+
+删除合作方站点公钥,即为取消合作关系
+
+```bash
+flow key delete -p 9999
+```
+
+**选项** 
+
+| 参数    | 短格式 | 长格式 | 必选 | 类型   | 说明           |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| party_id | `-p` | `--party-id` |是   | string | 站点id |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+
+
+样例
+
+```json
+{
+    "retcode": 0,
+    "retmsg": "success"
+}
+```

+ 376 - 0
FATE-Flow/doc/cli/model.md

@@ -0,0 +1,376 @@
+## Model
+
+### load
+
+Load a model generated by `deploy` to Fate-Serving.
+
+```bash
+flow model load -c examples/model/publish_load_model.json
+flow model load -j <job_id>
+```
+
+**Options**
+
+| Parameter | Short Flag | Long Flag     | Optional | Description      |
+| --------- | ---------- | ------------- | -------- | ---------------- |
+| conf_path | `-c`       | `--conf-path` | Yes      | Config file path |
+| job_id    | `-j`       | `--job-id`    | Yes      | Job ID           |
+
+**Example**
+
+```json
+{
+  "data": {
+    "detail": {
+      "guest": {
+        "9999": {
+          "retcode": 0,
+          "retmsg": "success"
+        }
+      },
+      "host": {
+        "10000": {
+          "retcode": 0,
+          "retmsg": "success"
+        }
+      }
+    },
+    "guest": {
+      "9999": 0
+    },
+    "host": {
+      "10000": 0
+    }
+  },
+  "jobId": "202111091122168817080",
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+### bind
+
+Bind a model generated by `deploy` to Fate-Serving.
+
+```bash
+flow model bind -c examples/model/bind_model_service.json
+flow model bind -c examples/model/bind_model_service.json -j <job_id>
+```
+
+**Options**
+
+| Parameter | Short Flag | Long Flag     | Optional | Description      |
+| --------- | ---------- | ------------- | -------- | ---------------- |
+| conf_path | `-c`       | `--conf-path` | No       | Config file path |
+| job_id    | `-j`       | `--job-id`    | Yes      | Job ID           |
+
+**Example**
+
+```json
+{
+  "retcode": 0,
+  "retmsg": "service id is 123"
+}
+```
+
+### import
+
+Import the model from a file or storage engine.
+
+```bash
+flow model import -c examples/model/import_model.json
+flow model import -c examples/model/restore_model.json --from-database
+```
+
+**Options**
+
+| Parameter     | Short Flag | Long Flag         | Optional | Description                          |
+| ------------- | ---------- | ----------------- | -------- | ------------------------------------ |
+| conf_path     | `-c`       | `--conf-path`     | No       | Config file path                     |
+| from_database |            | `--from-database` | Yes      | Import the model from storage engine |
+
+**Example**
+
+```json
+{
+  "data": {
+    "job_id": "202208261102212849780",
+    "model_id": "arbiter-10000#guest-9999#host-10000#model",
+    "model_version": "foobar",
+    "party_id": "9999",
+    "role": "guest"
+  },
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+### export
+
+Export the model to a file or storage engine.
+
+```bash
+flow model export -c examples/model/export_model.json
+flow model export -c examples/model/store_model.json --to-database
+```
+
+**Options**
+
+| Parameter   | Short Flag | Long Flag       | Optional | Description                        |
+| ----------- | ---------- | --------------- | -------- | ---------------------------------- |
+| conf_path   | `-c`       | `--conf-path`   | No       | Config file path                   |
+| to_database |            | `--to-database` | Yes      | Export the model to storage engine |
+
+**Example**
+
+```json
+{
+  "data": {
+    "board_url": "http://127.0.0.1:8080/index.html#/dashboard?job_id=202111091124582110490&role=local&party_id=0",
+    "code": 0,
+    "dsl_path": "/root/Codes/FATE-Flow/jobs/202111091124582110490/job_dsl.json",
+    "job_id": "202111091124582110490",
+    "logs_directory": "/root/Codes/FATE-Flow/logs/202111091124582110490",
+    "message": "success",
+    "model_info": {
+      "model_id": "local-0#model",
+      "model_version": "202111091124582110490"
+    },
+    "pipeline_dsl_path": "/root/Codes/FATE-Flow/jobs/202111091124582110490/pipeline_dsl.json",
+    "runtime_conf_on_party_path": "/root/Codes/FATE-Flow/jobs/202111091124582110490/local/0/job_runtime_on_party_conf.json",
+    "runtime_conf_path": "/root/Codes/FATE-Flow/jobs/202111091124582110490/job_runtime_conf.json",
+    "train_runtime_conf_path": "/root/Codes/FATE-Flow/jobs/202111091124582110490/train_runtime_conf.json"
+  },
+  "jobId": "202111091124582110490",
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+### migrate
+
+Migrate the model.
+
+```bash
+flow model migrate -c examples/model/migrate_model.json
+```
+
+**Options**
+
+| Parameter | Short Flag | Long Flag     | Optional | Description      |
+| --------- | ---------- | ------------- | -------- | ---------------- |
+| conf_path | `-c`       | `--conf-path` | No       | Config file path |
+
+**Example**
+
+```json
+{
+  "data": {
+    "arbiter": {
+      "10000": 0
+    },
+    "detail": {
+      "arbiter": {
+        "10000": {
+          "retcode": 0,
+          "retmsg": "Migrating model successfully. The Config of model has been modified automatically. New model id is: arbiter-100#guest-99#host-100#model, model version is: 202111091127392613050. Model files can be found at '/root/Codes/FATE-Flow/temp/fate_flow/arbiter#100#arbiter-100#guest-99#host-100#model_202111091127392613050.zip'."
+        }
+      },
+      "guest": {
+        "9999": {
+          "retcode": 0,
+          "retmsg": "Migrating model successfully. The Config of model has been modified automatically. New model id is: arbiter-100#guest-99#host-100#model, model version is: 202111091127392613050. Model files can be found at '/root/Codes/FATE-Flow/temp/fate_flow/guest#99#arbiter-100#guest-99#host-100#model_202111091127392613050.zip'."
+        }
+      },
+      "host": {
+        "10000": {
+          "retcode": 0,
+          "retmsg": "Migrating model successfully. The Config of model has been modified automatically. New model id is: arbiter-100#guest-99#host-100#model, model version is: 202111091127392613050. Model files can be found at '/root/Codes/FATE-Flow/temp/fate_flow/host#100#arbiter-100#guest-99#host-100#model_202111091127392613050.zip'."
+        }
+      }
+    },
+    "guest": {
+      "9999": 0
+    },
+    "host": {
+      "10000": 0
+    }
+  },
+  "jobId": "202111091127392613050",
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+### tag-list
+
+List tags of the model.
+
+``` bash
+flow model tag-list -j <job_id>
+```
+
+**Options**
+
+| Parameter | Short Flag | Long Flag  | Optional | Description |
+| --------- | ---------- | ---------- | -------- | ----------- |
+| job_id    | `-j`       | `--job_id` | No       | Job ID      |
+
+### tag-model
+
+Add or remove a tag from the model.
+
+```bash
+flow model tag-model -j <job_id> -t <tag_name>
+flow model tag-model -j <job_id> -t <tag_name> --remove
+```
+
+**Options**
+
+| Parameter     | Short Flag | Long Flag       | Optional | Description           |
+| -------- | ------ | ------------ | -------- | -------------- |
+| job_id   | `-j`   | `--job_id`   | No       | Job ID        |
+| tag_name | `-t`   | `--tag-name` | No       | Tag name         |
+| remove   |        | `--remove`   | Yes       | Remove the tag |
+
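+**Example**
+
+A concrete run, reusing the job id from the `load` example above; the tag name is illustrative:
+
+```bash
+flow model tag-model -j 202111091122168817080 -t release_candidate
+flow model tag-model -j 202111091122168817080 -t release_candidate --remove
+```
+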
+### deploy
+
+Configure predict DSL.
+
+```bash
+flow model deploy --model-id <model_id> --model-version <model_version>
+```
+
+**Options**
+
+| Parameter      | Short Flag | Long Flag          | Optional | Description                                                  |
+| -------------- | ---------- | ------------------ | -------- | ------------------------------------------------------------ |
+| model_id       |            | `--model-id`       | No       | Model ID                                                     |
+| model_version  |            | `--model-version`  | No       | Model version                                                |
+| cpn_list       |            | `--cpn-list`       | Yes      | Components list                                              |
+| cpn_path       |            | `--cpn-path`       | Yes      | Load components list from a file                             |
+| dsl_path       |            | `--dsl-path`       | Yes      | Predict DSL file path                                        |
+| cpn_step_index |            | `--cpn-step-index` | Yes      | Specify a checkpoint model to replace the pipeline model<br />Use `:` to separate component name and step index<br />E.g. `--cpn-step-index cpn_a:123` |
+| cpn_step_name  |            | `--cpn-step-name`  | Yes      | Specify a checkpoint model to replace the pipeline model.<br />Use `:` to separate component name and step name<br />E.g. `--cpn-step-name cpn_b:foobar` |
+
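+For instance, to deploy only part of the training pipeline, a component list can be passed inline. The command below reuses the model id and version from the example response that follows; the component selection is illustrative:
+
+```bash
+flow model deploy --model-id arbiter-9999#guest-10000#host-9999#model --model-version 202111032227378766180 --cpn-list "dataio_0, hetero_lr_0"
+```
+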
+**Example**
+
+```json
+{
+  "retcode": 0,
+  "retmsg": "success",
+  "data": {
+    "model_id": "arbiter-9999#guest-10000#host-9999#model",
+    "model_version": "202111032227378766180",
+    "arbiter": {
+      "party_id": 9999
+    },
+    "guest": {
+      "party_id": 10000
+    },
+    "host": {
+      "party_id": 9999
+    },
+    "detail": {
+      "arbiter": {
+        "party_id": {
+          "retcode": 0,
+          "retmsg": "deploy model of role arbiter 9999 success"
+        }
+      },
+      "guest": {
+        "party_id": {
+          "retcode": 0,
+          "retmsg": "deploy model of role guest 10000 success"
+        }
+      },
+      "host": {
+        "party_id": {
+          "retcode": 0,
+          "retmsg": "deploy model of role host 9999 success"
+        }
+      }
+    }
+  }
+}
+```
+
+### get-predict-dsl
+
+Get predict DSL of the model.
+
+```bash
+flow model get-predict-dsl --model-id <model_id> --model-version <model_version> -o ./examples/
+```
+
+**Options**
+
+| Parameter     | Short Flag | Long Flag         | Optional | Description   |
+| ------------- | ---------- | ----------------- | -------- | ------------- |
+| model_id      |            | `--model-id`      | No       | Model ID      |
+| model_version |            | `--model-version` | No       | Model version |
+| output_path   | `-o`       | `--output-path`   | No       | Output path   |
+
+### get-predict-conf
+
+Get the template of predict config.
+
+```bash
+flow model get-predict-conf --model-id <model_id> --model-version <model_version> -o ./examples/
+```
+
+**Options**
+
+| Parameter     | Short Flag | Long Flag         | Optional | Description   |
+| ------------- | ---------- | ----------------- | -------- | ------------- |
+| model_id      |            | `--model-id`      | No       | Model ID      |
+| model_version |            | `--model-version` | No       | Model version |
+| output_path   | `-o`       | `--output-path`   | No       | Output path   |
+
+### get-model-info
+
+Get model information.
+
+```bash
+flow model get-model-info --model-id <model_id> --model-version <model_version>
+flow model get-model-info --model-id <model_id> --model-version <model_version> --detail
+```
+
+**Options**
+
+| Parameter     | Short Flag | Long Flag         | Optional | Description                  |
+| ------------- | ---------- | ----------------- | -------- | ---------------------------- |
+| model_id      |            | `--model-id`      | No       | Model ID                     |
+| model_version |            | `--model-version` | No       | Model version                |
+| role          | `-r`       | `--role`          | Yes      | Party role                   |
+| party_id      | `-p`       | `--party-id`      | Yes      | Party ID                     |
+| detail        |            | `--detail`        | Yes      | Display detailed information |
+
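+**Example**
+
+Inspecting the guest side of a model in detail; the model id and version here are placeholders borrowed from the `import` example above:
+
+```bash
+flow model get-model-info --model-id arbiter-10000#guest-9999#host-10000#model --model-version foobar -r guest -p 9999 --detail
+```
+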
+### homo-convert
+
+Convert a trained homogeneous model to the format of another ML framework.
+
+```bash
+flow model homo-convert -c examples/model/homo_convert_model.json
+```
+
+**Options**
+
+| Parameter | Short Flag | Long Flag     | Optional | Description      |
+| --------- | ---------- | ------------- | -------- | ---------------- |
+| conf_path | `-c`       | `--conf-path` | No       | Config file path |
+
+### homo-deploy
+
+Deploy a trained homogeneous model to a target online serving system. Currently, the supported serving system is KFServing.
+
+```bash
+flow model homo-deploy -c examples/model/homo_deploy_model.json
+```
+
+**Options**
+
+| Parameter | Short Flag | Long Flag     | Optional | Description      |
+| --------- | ---------- | ------------- | -------- | ---------------- |
+| conf_path | `-c`       | `--conf-path` | No       | Config file path |

+ 375 - 0
FATE-Flow/doc/cli/model.zh.md

@@ -0,0 +1,375 @@
+## Model
+
+### load
+
+向 Fate-Serving 加载 `deploy` 生成的模型。
+
+```bash
+flow model load -c examples/model/publish_load_model.json
+flow model load -j <job_id>
+```
+
+**选项**
+
+| 参数      | 短格式 | 长格式        | 可选参数 | 说明     |
+| --------- | ------ | ------------- | -------- | -------- |
+| conf_path | `-c`   | `--conf-path` | 是       | 配置文件 |
+| job_id    | `-j`   | `--job-id`    | 是       | 任务 ID  |
+
+**样例**
+
+```json
+{
+  "data": {
+    "detail": {
+      "guest": {
+        "9999": {
+          "retcode": 0,
+          "retmsg": "success"
+        }
+      },
+      "host": {
+        "10000": {
+          "retcode": 0,
+          "retmsg": "success"
+        }
+      }
+    },
+    "guest": {
+      "9999": 0
+    },
+    "host": {
+      "10000": 0
+    }
+  },
+  "jobId": "202111091122168817080",
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+### bind
+
+向 Fate-Serving 绑定 `deploy` 生成的模型。
+
+```bash
+flow model bind -c examples/model/bind_model_service.json
+flow model bind -c examples/model/bind_model_service.json -j <job_id>
+```
+
+**选项**
+
+| 参数      | 短格式 | 长格式        | 可选参数 | 说明     |
+| --------- | ------ | ------------- | -------- | -------- |
+| conf_path | `-c`   | `--conf-path` | 否       | 配置文件 |
+| job_id    | `-j`   | `--job-id`    | 是       | 任务 ID  |
+
+**样例**
+
+```json
+{
+  "retcode": 0,
+  "retmsg": "service id is 123"
+}
+```
+
+### import
+
+从本地或存储引擎中导入模型。
+
+```bash
+flow model import -c examples/model/import_model.json
+flow model import -c examples/model/restore_model.json --from-database
+```
+
+**选项**
+
+| 参数          | 短格式 | 长格式            | 可选参数 | 说明                 |
+| ------------- | ------ | ----------------- | -------- | -------------------- |
+| conf_path     | `-c`   | `--conf-path`     | 否       | 配置文件             |
+| from_database |        | `--from-database` | 是       | 从存储引擎中导入模型 |
+
+**样例**
+
+```json
+{
+  "data": {
+    "job_id": "202208261102212849780",
+    "model_id": "arbiter-10000#guest-9999#host-10000#model",
+    "model_version": "foobar",
+    "party_id": "9999",
+    "role": "guest"
+  },
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+### export
+
+导出模型到本地或存储引擎中。
+
+```bash
+flow model export -c examples/model/export_model.json
+flow model export -c examples/model/store_model.json --to-database
+```
+
+**选项**
+
+| 参数        | 短格式 | 长格式          | 可选参数 | 说明                   |
+| ----------- | ------ | --------------- | -------- | ---------------------- |
+| conf_path   | `-c`   | `--conf-path`   | 否       | 配置文件               |
+| to_database |        | `--to-database` | 是       | 将模型导出到存储引擎中 |
+
+**样例**
+
+```json
+{
+  "data": {
+    "board_url": "http://127.0.0.1:8080/index.html#/dashboard?job_id=202111091124582110490&role=local&party_id=0",
+    "code": 0,
+    "dsl_path": "/root/Codes/FATE-Flow/jobs/202111091124582110490/job_dsl.json",
+    "job_id": "202111091124582110490",
+    "logs_directory": "/root/Codes/FATE-Flow/logs/202111091124582110490",
+    "message": "success",
+    "model_info": {
+      "model_id": "local-0#model",
+      "model_version": "202111091124582110490"
+    },
+    "pipeline_dsl_path": "/root/Codes/FATE-Flow/jobs/202111091124582110490/pipeline_dsl.json",
+    "runtime_conf_on_party_path": "/root/Codes/FATE-Flow/jobs/202111091124582110490/local/0/job_runtime_on_party_conf.json",
+    "runtime_conf_path": "/root/Codes/FATE-Flow/jobs/202111091124582110490/job_runtime_conf.json",
+    "train_runtime_conf_path": "/root/Codes/FATE-Flow/jobs/202111091124582110490/train_runtime_conf.json"
+  },
+  "jobId": "202111091124582110490",
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+### migrate
+
+迁移模型。
+
+```bash
+flow model migrate -c examples/model/migrate_model.json
+```
+
+**选项**
+
+| 参数      | 短格式 | 长格式        | 可选参数 | 说明     |
+| --------- | ------ | ------------- | -------- | -------- |
+| conf_path | `-c`   | `--conf-path` | 否       | 配置文件 |
+
+**样例**
+
+```json
+{
+  "data": {
+    "arbiter": {
+      "10000": 0
+    },
+    "detail": {
+      "arbiter": {
+        "10000": {
+          "retcode": 0,
+          "retmsg": "Migrating model successfully. The configuration of model has been modified automatically. New model id is: arbiter-100#guest-99#host-100#model, model version is: 202111091127392613050. Model files can be found at '/root/Codes/FATE-Flow/temp/fate_flow/arbiter#100#arbiter-100#guest-99#host-100#model_202111091127392613050.zip'."
+        }
+      },
+      "guest": {
+        "9999": {
+          "retcode": 0,
+          "retmsg": "Migrating model successfully. The configuration of model has been modified automatically. New model id is: arbiter-100#guest-99#host-100#model, model version is: 202111091127392613050. Model files can be found at '/root/Codes/FATE-Flow/temp/fate_flow/guest#99#arbiter-100#guest-99#host-100#model_202111091127392613050.zip'."
+        }
+      },
+      "host": {
+        "10000": {
+          "retcode": 0,
+          "retmsg": "Migrating model successfully. The configuration of model has been modified automatically. New model id is: arbiter-100#guest-99#host-100#model, model version is: 202111091127392613050. Model files can be found at '/root/Codes/FATE-Flow/temp/fate_flow/host#100#arbiter-100#guest-99#host-100#model_202111091127392613050.zip'."
+        }
+      }
+    },
+    "guest": {
+      "9999": 0
+    },
+    "host": {
+      "10000": 0
+    }
+  },
+  "jobId": "202111091127392613050",
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+### tag-list
+
+获取模型的标签列表。
+
+``` bash
+flow model tag-list -j <job_id>
+```
+
+**选项**
+
+| 参数   | 短格式 | 长格式     | 可选参数 | 说明    |
+| ------ | ------ | ---------- | -------- | ------- |
+| job_id | `-j`   | `--job_id` | 否       | 任务 ID |
+
+### tag-model
+
+从模型中添加或删除标签。
+
+```bash
+flow model tag-model -j <job_id> -t <tag_name>
+flow model tag-model -j <job_id> -t <tag_name> --remove
+```
+
+**选项**
+
+| 参数     | 短格式 | 长格式       | 可选参数 | 说明           |
+| -------- | ------ | ------------ | -------- | -------------- |
+| job_id   | `-j`   | `--job_id`   | 否       | 任务 ID        |
+| tag_name | `-t`   | `--tag-name` | 否       | 标签名         |
+| remove   |        | `--remove`   | 是       | 移除指定的标签 |
+
+### deploy
+
+Configure the predict DSL.
+
+```bash
+flow model deploy --model-id <model_id> --model-version <model_version>
+```
+
+**Options**
+
+| parameter | short format | long format | optional | description |
+| -------------- | ------ | ------------------ | -------- | ------------------------------------------------------------ |
+| model_id       |        | `--model-id`       | no       | model ID                                                     |
+| model_version  |        | `--model-version`  | no       | model version                                                |
+| cpn_list       |        | `--cpn-list`       | yes      | component list                                               |
+| cpn_path       |        | `--cpn-path`       | yes      | read the component list from a file                          |
+| dsl_path       |        | `--dsl-path`       | yes      | predict DSL file                                             |
+| cpn_step_index |        | `--cpn-step-index` | yes      | replace the pipeline model with the specified checkpoint model<br />separate the component name and step index with `:`<br />e.g. `--cpn-step-index cpn_a:123` |
+| cpn_step_name  |        | `--cpn-step-name`  | yes      | replace the pipeline model with the specified checkpoint model<br />separate the component name and step name with `:`<br />e.g. `--cpn-step-name cpn_b:foobar` |
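+
+For instance, a hypothetical invocation that also replaces the pipeline model of a component with its checkpoint at step index 5 (the component name and step index are illustrative; the model ID and version match the sample output below):
+
+```bash
+flow model deploy --model-id arbiter-9999#guest-10000#host-9999#model --model-version 202111032227378766180 --cpn-step-index hetero_lr_0:5
+```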
+
+**Sample**
+
+```json
+{
+  "retcode": 0,
+  "retmsg": "success",
+  "data": {
+    "model_id": "arbiter-9999#guest-10000#host-9999#model",
+    "model_version": "202111032227378766180",
+    "arbiter": {
+      "party_id": 9999
+    },
+    "guest": {
+      "party_id": 10000
+    },
+    "host": {
+      "party_id": 9999
+    },
+    "detail": {
+      "arbiter": {
+        "party_id": {
+          "retcode": 0,
+          "retmsg": "deploy model of role arbiter 9999 success"
+        }
+      },
+      "guest": {
+        "party_id": {
+          "retcode": 0,
+          "retmsg": "deploy model of role guest 10000 success"
+        }
+      },
+      "host": {
+        "party_id": {
+          "retcode": 0,
+          "retmsg": "deploy model of role host 9999 success"
+        }
+      }
+    }
+  }
+}
+```
+
+### get-predict-dsl
+
+Get the predict DSL.
+
+```bash
+flow model get-predict-dsl --model-id <model_id> --model-version <model_version> -o ./examples/
+```
+
+**Options**
+
+| parameter | short format | long format | optional | description |
+| ------------- | ------ | ----------------- | -------- | ------------- |
+| model_id      |        | `--model-id`      | no       | model ID      |
+| model_version |        | `--model-version` | no       | model version |
+| output_path   | `-o`   | `--output-path`   | no       | output path   |
+
+### get-predict-conf
+
+Get the predict config template of a model.
+
+```bash
+flow model get-predict-conf --model-id <model_id> --model-version <model_version> -o ./examples/
+```
+
+**Options**
+
+| parameter | short format | long format | optional | description |
+| ------------- | ------ | ----------------- | -------- | ------------- |
+| model_id      |        | `--model-id`      | no       | model ID      |
+| model_version |        | `--model-version` | no       | model version |
+| output_path   | `-o`   | `--output-path`   | no       | output path   |
+
+### get-model-info
+
+Get model information.
+
+```bash
+flow model get-model-info --model-id <model_id> --model-version <model_version>
+flow model get-model-info --model-id <model_id> --model-version <model_version> --detail
+```
+
+**Options**
+
+| parameter | short format | long format | optional | description |
+| ------------- | ------ | ----------------- | -------- | ------------ |
+| model_id      |        | `--model-id`      | no       | model ID     |
+| model_version |        | `--model-version` | no       | model version |
+| role          | `-r`   | `--role`          | yes      | party role   |
+| party_id      | `-p`   | `--party-id`      | yes      | party ID     |
+| detail        |        | `--detail`        | yes      | show detailed information |
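+
+For instance, to show the detailed information of the guest-side model from the deploy sample above (an illustrative invocation):
+
+```bash
+flow model get-model-info --model-id arbiter-9999#guest-10000#host-9999#model --model-version 202111032227378766180 -r guest -p 10000 --detail
+```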
+
+### homo-convert
+
+Generate model files for other ML frameworks from a homogeneously (horizontally) trained model.
+
+```bash
+flow model homo-convert -c examples/model/homo_convert_model.json
+```
+
+**Options**
+
+| parameter | short format | long format   | optional | description        |
+| --------- | ------------ | ------------- | -------- | ------------------ |
+| conf_path | `-c`         | `--conf-path` | no       | configuration file |
+
+### homo-deploy
+
+Deploy a model that was trained homogeneously (horizontally) and converted with `homo-convert` to an online inference system. Currently supports creating KFServing-based inference services.
+
+```bash
+flow model homo-deploy -c examples/model/homo_deploy_model.json
+```
+
+**Options**
+
+| parameter | short format | long format   | optional | description        |
+| --------- | ------------ | ------------- | -------- | ------------------ |
+| conf_path | `-c`         | `--conf-path` | no       | configuration file |

+ 150 - 0
FATE-Flow/doc/cli/privilege.md

@@ -0,0 +1,150 @@
+## Privilege
+
+### grant
+
+Add privileges
+
+```bash
+flow privilege grant -c fateflow/examples/permission/grant.json
+```
+
+**Options**
+
+| parameter | short format | long format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | ------------------ |
+| conf_path | `-c` | `--conf-path` | yes | string | configuration path |
+
+Note: conf_path is the path to a configuration file with the following parameters
+
+| parameter name | required | type | description                                                                          |
+|:----------|:----|:-------|--------------------------------------------------------------------------------------|
+| party_id  | yes | string | site id                                                                              |
+| component | no  | string | component name; multiple components can be separated by ",", "*" means all components |
+| dataset   | no  | object | list of datasets                                                                     |
+
+
+**Sample**
+```json
+{
+  "party_id": 10000,
+  "component": "reader,dataio",
+  "dataset": [
+    {
+      "namespace": "experiment",
+      "name": "breast_hetero_guest"
+    },
+    {
+      "namespace": "experiment",
+      "name": "breast_hetero_host"
+    }
+  ]
+}
+```
+
+**Returns**
+
+| parameter name | type | description |
+| ------- | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+
+**Sample**
+
+```shell
+{
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### delete
+
+Delete privileges
+
+```bash
+flow privilege delete -c fateflow/examples/permission/delete.json
+```
+**Options**
+
+| parameter | short format | long format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | ------------------ |
+| conf_path | `-c` | `--conf-path` | yes | string | configuration path |
+
+Note: conf_path is the path to a configuration file with the following parameters
+
+| parameter name | required | type | description |
+|:----------|:----|:-------|--------------------------|
+| party_id  | yes | string | site id |
+| component | no  | string | component name; multiple components can be separated by ",", "*" means all components |
+| dataset   | no  | object | list of datasets, "*" means all datasets |
+
+**Sample**
+```json
+{
+  "party_id": 10000,
+  "component": "reader,dataio",
+  "dataset": [
+    {
+      "namespace": "experiment",
+      "name": "breast_hetero_guest"
+    },
+    {
+      "namespace": "experiment",
+      "name": "breast_hetero_host"
+    }
+  ]
+}
+```
+
+**Returns**
+
+| parameter name | type | description |
+| ------- | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+
+**Sample**
+
+```shell
+{
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### query
+
+Query privileges
+
+```bash
+flow privilege query -p 10000
+```
+
+**Options**
+
+| parameter | short format | long format | required | type | description |
+| :-------- |:-----|:-------------| :--- | :----- |------|
+| party_id | `-p` | `--party-id` | yes | string | site id |
+
+**Returns**
+
+| parameter name | type | description |
+| ------- | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | object | return data |
+
+**Sample**
+
+```json
+{
+    "data": {
+        "component": [
+            "reader",
+            "dataio"
+        ],
+        "dataset": [
+            {
+                "name": "breast_hetero_guest",
+                "namespace": "experiment"
+            },
+            {
+                "name": "breast_hetero_host",
+                "namespace": "experiment"
+            }
+        ]
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+
+```

+ 163 - 0
FATE-Flow/doc/cli/privilege.zh.md

@@ -0,0 +1,163 @@
+## Privilege
+
+### grant
+
+Add privileges
+
+```bash
+flow privilege grant -c fateflow/examples/permission/grant.json
+```
+
+**Options**
+
+| parameter | short format | long format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | ------------------ |
+| conf_path | `-c` | `--conf-path` | yes | string | configuration path |
+
+Note: conf_path is the path to a configuration file with the following parameters
+
+| parameter name | required | type | description |
+|:----------|:----|:-------|----------|
+| party_id  | yes | string | site id |
+| component | no  | string | component name; multiple components can be separated by ",", "*" means all components |
+| dataset   | no  | object | list of datasets |
+
+
+**Sample**
+```json
+{
+  "party_id": 10000,
+  "component": "reader,dataio",
+  "dataset": [
+    {
+      "namespace": "experiment",
+      "name": "breast_hetero_guest"
+    },
+    {
+      "namespace": "experiment",
+      "name": "breast_hetero_host"
+    }
+  ]
+}
+```
+
+**Returns**
+
+| parameter name | type | description |
+| ------- | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+
+**Sample**
+
+```shell
+{
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### delete
+
+Delete privileges
+
+```bash
+flow privilege delete -c fateflow/examples/permission/delete.json
+```
+**Options**
+
+| parameter | short format | long format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | ------------------ |
+| conf_path | `-c` | `--conf-path` | yes | string | configuration path |
+
+
+Note: conf_path is the path to a configuration file with the following parameters
+
+| parameter name | required | type | description |
+|:----------|:----|:-------|--------------------------|
+| party_id  | yes | string | site id |
+| component | no  | string | component name; multiple components can be separated by ",", "*" means all components |
+| dataset   | no  | object | list of datasets, "*" means all datasets |
+
+**Sample**
+```json
+{
+  "party_id": 10000,
+  "component": "reader,dataio",
+  "dataset": [
+    {
+      "namespace": "experiment",
+      "name": "breast_hetero_guest"
+    },
+    {
+      "namespace": "experiment",
+      "name": "breast_hetero_host"
+    }
+  ]
+}
+```
+
+**Returns**
+
+| parameter name | type | description |
+| ------- | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+
+**Sample**
+
+```shell
+{
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### query
+
+Query privileges
+
+```bash
+flow privilege query -p 10000
+```
+
+**Options**
+
+| parameter | short format | long format | required | type | description |
+| :-------- |:-----|:-------------| :--- | :----- |------|
+| party_id | `-p` | `--party-id` | yes | string | site id |
+
+**Returns**
+
+
+| parameter name | type | description |
+| ------- | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | object | return data |
+
+**Sample**
+
+```json
+{
+    "data": {
+        "component": [
+            "reader",
+            "dataio"
+        ],
+        "dataset": [
+            {
+                "name": "breast_hetero_guest",
+                "namespace": "experiment"
+            },
+            {
+                "name": "breast_hetero_host",
+                "namespace": "experiment"
+            }
+        ]
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+
+```

+ 179 - 0
FATE-Flow/doc/cli/provider.md

@@ -0,0 +1,179 @@
+## Provider
+
+### list
+
+List all current component providers and information about the components they provide
+
+```bash
+flow provider list [options]
+```
+
+**Options**
+
+None
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | dict   | return data |
+
+**Example**
+
+output:
+
+```json
+{
+    "data": {
+        "fate": {
+            "1.9.0": {
+                "class_path": {
+                    "anonymous_generator": "util.anonymous_generator_util.Anonymous",
+                    "data_format": "util.data_format_preprocess.DataFormatPreProcess",
+                    "feature_instance": "feature.instance.Instance",
+                    "feature_vector": "feature.sparse_vector.SparseVector",
+                    "hetero_model_merge": "protobuf.model_merge.merge_hetero_models.hetero_model_merge",
+                    "homo_model_convert": "protobuf.homo_model_convert.homo_model_convert",
+                    "interface": "components.components.Components",
+                    "model": "protobuf.generated",
+                    "model_migrate": "protobuf.model_migrate.model_migrate"
+                },
+                "components": [
+                    "heterodatasplit",
+                    "psi",
+                    "heterofastsecureboost",
+                    "heterofeaturebinning",
+                    "scorecard",
+                    "sampleweight",
+                    "homosecureboost",
+                    "onehotencoder",
+                    "secureinformationretrieval",
+                    "homoonehotencoder",
+                    "datatransform",
+                    "dataio",
+                    "heterosshelinr",
+                    "intersection",
+                    "homofeaturebinning",
+                    "secureaddexample",
+                    "union",
+                    "datastatistics",
+                    "columnexpand",
+                    "homonn",
+                    "labeltransform",
+                    "heterosecureboost",
+                    "heterofeatureselection",
+                    "heterolr",
+                    "feldmanverifiablesum",
+                    "heteropoisson",
+                    "evaluation",
+                    "federatedsample",
+                    "homodatasplit",
+                    "ftl",
+                    "localbaseline",
+                    "featurescale",
+                    "featureimputation",
+                    "heteropearson",
+                    "heterokmeans",
+                    "heteronn",
+                    "heterolinr",
+                    "spdztest",
+                    "heterosshelr",
+                    "homolr"
+                ],
+                "path": "${FATE_PROJECT_BASE}/python/federatedml",
+                "python": ""
+            },
+            "default": {
+                "version": "1.9.0"
+            }
+        },
+        "fate_flow": {
+            "1.9.0": {
+                "class_path": {
+                    "anonymous_generator": "util.anonymous_generator_util.Anonymous",
+                    "data_format": "util.data_format_preprocess.DataFormatPreProcess",
+                    "feature_instance": "feature.instance.Instance",
+                    "feature_vector": "feature.sparse_vector.SparseVector",
+                    "hetero_model_merge": "protobuf.model_merge.merge_hetero_models.hetero_model_merge",
+                    "homo_model_convert": "protobuf.homo_model_convert.homo_model_convert",
+                    "interface": "components.components.Components",
+                    "model": "protobuf.generated",
+                    "model_migrate": "protobuf.model_migrate.model_migrate"
+                },
+                "components": [
+                    "writer",
+                    "modelrestore",
+                    "upload",
+                    "apireader",
+                    "modelstore",
+                    "cacheloader",
+                    "modelloader",
+                    "download",
+                    "reader"
+                ],
+                "path": "${FATE_FLOW_BASE}/python/fate_flow",
+                "python": ""
+            },
+            "default": {
+                "version": "1.9.0"
+            }
+        }
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+Contains the component provider's `name`, `version number`, `code path`, and `list of provided components`
+
+### register
+
+Register a component provider
+
+```bash
+flow provider register [options]
+```
+
+**Options**
+
+| parameter name | required | type | description |
+| :--------------------- | :--- | :----- | ------------------------------|
+| -c, --conf-path | yes | string | configuration path |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+
+**Example**
+
+```bash
+flow provider register -c $FATE_FLOW_BASE/examples/other/register_provider.json
+```
+
+conf:
+
+```json
+{
+  "name": "fate",
+  "version": "1.7.1",
+  "path": "${FATE_FLOW_BASE}/python/component_plugins/fateb/python/federatedml"
+}
+```
+
+output:
+
+```json
+{
+    "data": {
+        "flow-xxx-9380": {
+            "retcode": 0,
+            "retmsg": "success"
+        }
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```

+ 180 - 0
FATE-Flow/doc/cli/provider.zh.md

@@ -0,0 +1,180 @@
+## Provider
+
+### list
+
+List all current component providers and information about the components they provide
+
+```bash
+flow provider list [options]
+```
+
+**Options**
+
+None
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | dict   | return data |
+
+**Sample**
+
+Output:
+
+```json
+{
+    "data": {
+        "fate": {
+            "1.9.0": {
+                "class_path": {
+                    "anonymous_generator": "util.anonymous_generator_util.Anonymous",
+                    "data_format": "util.data_format_preprocess.DataFormatPreProcess",
+                    "feature_instance": "feature.instance.Instance",
+                    "feature_vector": "feature.sparse_vector.SparseVector",
+                    "hetero_model_merge": "protobuf.model_merge.merge_hetero_models.hetero_model_merge",
+                    "homo_model_convert": "protobuf.homo_model_convert.homo_model_convert",
+                    "interface": "components.components.Components",
+                    "model": "protobuf.generated",
+                    "model_migrate": "protobuf.model_migrate.model_migrate"
+                },
+                "components": [
+                    "heterodatasplit",
+                    "psi",
+                    "heterofastsecureboost",
+                    "heterofeaturebinning",
+                    "scorecard",
+                    "sampleweight",
+                    "homosecureboost",
+                    "onehotencoder",
+                    "secureinformationretrieval",
+                    "homoonehotencoder",
+                    "datatransform",
+                    "dataio",
+                    "heterosshelinr",
+                    "intersection",
+                    "homofeaturebinning",
+                    "secureaddexample",
+                    "union",
+                    "datastatistics",
+                    "columnexpand",
+                    "homonn",
+                    "labeltransform",
+                    "heterosecureboost",
+                    "heterofeatureselection",
+                    "heterolr",
+                    "feldmanverifiablesum",
+                    "heteropoisson",
+                    "evaluation",
+                    "federatedsample",
+                    "homodatasplit",
+                    "ftl",
+                    "localbaseline",
+                    "featurescale",
+                    "featureimputation",
+                    "heteropearson",
+                    "heterokmeans",
+                    "heteronn",
+                    "heterolinr",
+                    "spdztest",
+                    "heterosshelr",
+                    "homolr"
+                ],
+                "path": "${FATE_PROJECT_BASE}/python/federatedml",
+                "python": ""
+            },
+            "default": {
+                "version": "1.9.0"
+            }
+        },
+        "fate_flow": {
+            "1.9.0": {
+                "class_path": {
+                    "anonymous_generator": "util.anonymous_generator_util.Anonymous",
+                    "data_format": "util.data_format_preprocess.DataFormatPreProcess",
+                    "feature_instance": "feature.instance.Instance",
+                    "feature_vector": "feature.sparse_vector.SparseVector",
+                    "hetero_model_merge": "protobuf.model_merge.merge_hetero_models.hetero_model_merge",
+                    "homo_model_convert": "protobuf.homo_model_convert.homo_model_convert",
+                    "interface": "components.components.Components",
+                    "model": "protobuf.generated",
+                    "model_migrate": "protobuf.model_migrate.model_migrate"
+                },
+                "components": [
+                    "writer",
+                    "modelrestore",
+                    "upload",
+                    "apireader",
+                    "modelstore",
+                    "cacheloader",
+                    "modelloader",
+                    "download",
+                    "reader"
+                ],
+                "path": "${FATE_FLOW_BASE}/python/fate_flow",
+                "python": ""
+            },
+            "default": {
+                "version": "1.9.0"
+            }
+        }
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+Contains the component provider's `name`, `version number`, `code path`, and `list of provided components`
+
+### register
+
+Register a component provider
+
+```bash
+flow provider register [options]
+```
+
+**Options**
+
+| parameter name | required | type | description |
+| :--------------------- | :--- | :----- | ------------------------------|
+| -c, --conf-path | yes | string | configuration path |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+
+**Sample**
+
+```bash
+flow provider register -c $FATE_FLOW_BASE/examples/other/register_provider.json
+```
+
+Conf:
+
+```json
+{
+  "name": "fate",
+  "version": "1.7.1",
+  "path": "${FATE_FLOW_BASE}/python/component_plugins/fateb/python/federatedml"
+}
+```
+
+Output:
+
+```json
+{
+    "data": {
+        "flow-xxx-9380": {
+            "retcode": 0,
+            "retmsg": "success"
+        }
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+
+```

+ 89 - 0
FATE-Flow/doc/cli/resource.md

@@ -0,0 +1,89 @@
+## Resource
+
+### query
+
+Query FATE system resources.
+
+```bash
+flow resource query
+```
+
+**Options**
+
+None
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | object | return data |
+
+**Example**
+
+```json
+{
+    "data": {
+        "computing_engine_resource": {
+            "f_cores": 32,
+            "f_create_date": "2021-09-21 19:32:59",
+            "f_create_time": 1632223979564,
+            "f_engine_config": {
+                "cores_per_node": 32,
+                "nodes": 1
+            },
+            "f_engine_entrance": "fate_on_eggroll",
+            "f_engine_name": "EGGROLL",
+            "f_engine_type": "computing",
+            "f_memory": 0,
+            "f_nodes": 1,
+            "f_remaining_cores": 32,
+            "f_remaining_memory": 0,
+            "f_update_date": "2021-11-08 16:56:38",
+            "f_update_time": 1636361798812
+        },
+        "use_resource_job": []
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### return
+
+Return the resources occupied by a job
+
+```bash
+flow resource return [options]
+```
+
+**Options**
+
+| parameter name | required | type | description |
+| :----- | :--- | :----- | ------ |
+| job_id | yes | string | job id |
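+
+For instance, assuming `job_id` is passed with the same `-j` flag used by the other commands in this CLI (the job ID is taken from the sample output below):
+
+```bash
+flow resource return -j 202111081612427726750
+```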
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | object | return data |
+
+**Example**
+
+```json
+{
+    "data": [
+        {
+            "job_id": "202111081612427726750",
+            "party_id": "8888",
+            "resource_in_use": true,
+            "resource_return_status": true,
+            "role": "guest"
+        }
+    ],
+    "retcode": 0,
+    "retmsg": "success"
+}
+```

+ 89 - 0
FATE-Flow/doc/cli/resource.zh.md

@@ -0,0 +1,89 @@
+## Resource
+
+### query
+
+Query FATE system resources.
+
+```bash
+flow resource query
+```
+
+**Options**
+
+None
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | object | return data |
+
+**Sample**
+
+```json
+{
+    "data": {
+        "computing_engine_resource": {
+            "f_cores": 32,
+            "f_create_date": "2021-09-21 19:32:59",
+            "f_create_time": 1632223979564,
+            "f_engine_config": {
+                "cores_per_node": 32,
+                "nodes": 1
+            },
+            "f_engine_entrance": "fate_on_eggroll",
+            "f_engine_name": "EGGROLL",
+            "f_engine_type": "computing",
+            "f_memory": 0,
+            "f_nodes": 1,
+            "f_remaining_cores": 32,
+            "f_remaining_memory": 0,
+            "f_update_date": "2021-11-08 16:56:38",
+            "f_update_time": 1636361798812
+        },
+        "use_resource_job": []
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### return
+
+Return the resources occupied by a job
+
+```bash
+flow resource return [options]
+```
+
+**Options**
+
+| parameter name | required | type | description |
+| :----- | :--- | :----- | ------ |
+| job_id | yes  | string | job id |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | object | return data |
+
+**Sample**
+
+```json
+{
+    "data": [
+        {
+            "job_id": "202111081612427726750",
+            "party_id": "8888",
+            "resource_in_use": true,
+            "resource_return_status": true,
+            "role": "guest"
+        }
+    ],
+    "retcode": 0,
+    "retmsg": "success"
+}
+```

+ 111 - 0
FATE-Flow/doc/cli/server.md

@@ -0,0 +1,111 @@
+## Server
+
+### versions
+
+List all relevant system version numbers
+
+```bash
+flow server versions
+```
+
+**Options**
+
+None
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | dict | return data |
+| jobId | string | job id |
+
+**Example**
+
+```bash
+flow server versions
+```
+
+Output:
+
+```json
+{
+    "data": {
+        "API": "v1",
+        "CENTOS": "7.2",
+        "EGGROLL": "2.4.0",
+        "FATE": "1.7.0",
+        "FATEBoard": "1.7.0",
+        "FATEFlow": "1.7.0",
+        "JDK": "8",
+        "MAVEN": "3.6.3",
+        "PYTHON": "3.6.5",
+        "SPARK": "2.4.1",
+        "UBUNTU": "16.04"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### reload
+
+The following configuration items take effect again after a `reload`:
+
+  - all configurations after `# engine services` in $FATE_PROJECT_BASE/conf/service_conf.yaml
+  - all configurations in $FATE_FLOW_BASE/python/fate_flow/job_default_config.yaml
+
+```bash
+flow server reload
+```
+
+**Options**
+
+None
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | dict | return data |
+| jobId | string | job id |
+
+**Example**
+
+```bash
+flow server reload
+```
+
+Output:
+
+```json
+{
+    "data": {
+        "job_default_config": {
+            "auto_retries": 0,
+            "auto_retry_delay": 1,
+            "default_component_provider_path": "component_plugins/fate/python/federatedml",
+            "end_status_job_scheduling_time_limit": 300000,
+            "end_status_job_scheduling_updates": 1,
+            "federated_command_trys": 3,
+            "federated_status_collect_type": "PUSH",
+            "job_timeout": 259200,
+            "max_cores_percent_per_job": 1,
+            "output_data_summary_count_limit": 100,
+            "remote_request_timeout": 30000,
+            "task_cores": 4,
+            "task_memory": 0,
+            "task_parallelism": 1,
+            "total_cores_overweight_percent": 1,
+            "total_memory_overweight_percent": 1,
+            "upload_max_bytes": 4194304000
+        },
+        "service_registry": null
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```

+ 111 - 0
FATE-Flow/doc/cli/server.zh.md

@@ -0,0 +1,111 @@
+## Server
+
+### versions
+
+List all relevant system version numbers
+
+```bash
+flow server versions
+```
+
+**Options**
+
+None
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | dict   | return data |
+| jobId   | string | job id |
+
+**Sample**
+
+```bash
+flow server versions
+```
+
+Output:
+
+```json
+{
+    "data": {
+        "API": "v1",
+        "CENTOS": "7.2",
+        "EGGROLL": "2.4.0",
+        "FATE": "1.7.0",
+        "FATEBoard": "1.7.0",
+        "FATEFlow": "1.7.0",
+        "JDK": "8",
+        "MAVEN": "3.6.3",
+        "PYTHON": "3.6.5",
+        "SPARK": "2.4.1",
+        "UBUNTU": "16.04"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### reload
+
+The following configuration items take effect again after a `reload`:
+
+  - all configurations after `# engine services` in $FATE_PROJECT_BASE/conf/service_conf.yaml
+  - all configurations in $FATE_FLOW_BASE/python/fate_flow/job_default_config.yaml
+
+```bash
+flow server reload
+```
+
+**Options**
+
+None
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | dict   | return data |
+| jobId   | string | job id |
+
+**Sample**
+
+```bash
+flow server reload
+```
+
+Output:
+
+```json
+{
+    "data": {
+        "job_default_config": {
+            "auto_retries": 0,
+            "auto_retry_delay": 1,
+            "default_component_provider_path": "component_plugins/fate/python/federatedml",
+            "end_status_job_scheduling_time_limit": 300000,
+            "end_status_job_scheduling_updates": 1,
+            "federated_command_trys": 3,
+            "federated_status_collect_type": "PUSH",
+            "job_timeout": 259200,
+            "max_cores_percent_per_job": 1,
+            "output_data_summary_count_limit": 100,
+            "remote_request_timeout": 30000,
+            "task_cores": 4,
+            "task_memory": 0,
+            "task_parallelism": 1,
+            "total_cores_overweight_percent": 1,
+            "total_memory_overweight_percent": 1,
+            "upload_max_bytes": 4194304000
+        },
+        "service_registry": null
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```

+ 320 - 0
FATE-Flow/doc/cli/table.md

@@ -0,0 +1,320 @@
+## Table
+
+### info
+
+Query information about a FATE table (real storage address, count, schema, etc.)
+
+```bash
+flow table info [options]
+```
+
+**Options**
+
+| parameters    | short-format  | long-format | required  | type   | description    |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| table_name | `-t`   |`--table-name`   |yes   | string | fate table name     |
+| namespace | `-n`   |`--namespace`   | yes |string   | fate table namespace |
+
+**Returns**
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return information |
+| data | object | return data |
+
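+For instance, to inspect the table from the sample output below (an illustrative invocation; the dataset is assumed to have been uploaded already):
+
+```bash
+flow table info -t breast_hetero_guest -n experiment
+```
+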
+Sample
+
+```json
+{
+    "data": {
+        "address": {
+            "home": null,
+            "name": "breast_hetero_guest",
+            "namespace": "experiment"
+        },
+        "count": 569,
+        "exists": 1,
+        "namespace": "experiment",
+        "partition": 4,
+        "schema": {
+            "header": "y,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9",
+            "sid": "id"
+        },
+        "table_name": "breast_hetero_guest"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### delete
+
+You can delete table data with table delete
+
+```bash
+flow table delete [options]
+```
+
+**Options** 
+
+| parameters    | short-format  | long-format | required  | type   | description    |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| table_name | `-t`   |`--table-name`   |yes   | string | fate table name     |
+| namespace | `-n`   |`--namespace`   | yes |string   | fate table namespace |
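+
+For instance (an illustrative invocation; the table name and namespace are from the info sample above):
+
+```bash
+flow table delete -t breast_hetero_guest -n experiment
+```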
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | object | return data |
+
+Sample
+
+```json
+{
+    "data": {
+        "namespace": "xxx",
+        "table_name": "xxx"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### bind
+
+Real storage addresses can be mapped to fate storage tables via table bind
+
+```bash
+flow table bind [options]
+```
+
+**Options**
+
+| parameters | short format | long format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| conf_path | `-c` | `--conf-path` | yes | string | configuration-path |
+
+Note: conf_path is the path to a configuration file with the following parameters
+
+| parameter name | required | type | description |
+| :------------- | :--- | :----- | ------------------------------------- |
+| name           | yes  | string | fate table name |
+| namespace      | yes  | string | fate table namespace |
+| engine         | yes  | string | storage engine, supports "HDFS", "MYSQL", "PATH" |
+| address        | yes  | object | real storage address |
+| drop           | no   | int    | overwrite previous information |
+| head           | no   | int    | whether the data has a table header |
+| id_delimiter   | no   | string | data delimiter |
+| id_column      | no   | string | id field |
+| feature_column | no   | array  | feature fields |
+
+**meta information**
+
+| parameter name | required | type | description |
+|:---------------------|:----|:-------|-------------------------------------------|
+| input_format | no | string | format of the data ("dense", "svmlight", "tag:value"), used to determine how the data is parsed |
+| delimiter | no | string | data delimiter, default "," |
+| tag_with_value | no | bool | effective for the tag data format; whether to carry a value |
+| tag_value_delimiter | no | string | tag:value data delimiter, default ":" |
+| with_match_id | no | bool | whether to carry a match id |
+| id_list | no | object | names of the id columns, effective when extend_sid is enabled, e.g. ["email", "phone"] |
+| id_range | no | object | for tag/svmlight format data, which columns are ids |
+| exclusive_data_type | no | string | format of special-type data columns |
+| data_type | no | string | column data type, default "float64" |
+| with_label | no | bool | whether there is a label, default False |
+| label_name | no | string | label name, default "y" |
+| label_type | no | string | label type, default "int" |
+
+**In version 1.9.0 and later, if the meta parameter is passed in during the table bind phase, anonymous feature information is not generated directly.
+The anonymous feature information of the original data is updated after the data passes through the reader component once**
+
+**Sample** 
+
+- hdfs
+
+```json
+{
+    "namespace": "experiment",
+    "name": "breast_hetero_guest",
+    "engine": "HDFS",
+    "address": {
+        "name_node": "hdfs://fate-cluster",
+        "path": "/data/breast_hetero_guest.csv"
+    },
+    "id_delimiter": ",",
+    "head": 1,
+    "partitions": 10
+}
+```
+
+- mysql
+
+```json
+{
+  "engine": "MYSQL",
+  "address": {
+    "user": "fate",
+    "passwd": "fate",
+    "host": "127.0.0.1",
+    "port": 3306,
+    "db": "experiment",
+    "name": "breast_hetero_guest"
+  },
+  "namespace": "experiment",
+  "name": "breast_hetero_guest",
+  "head": 1,
+  "id_delimiter": ",",
+  "partitions": 10,
+  "id_column": "id",
+  "feature_column": "y,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9"
+}
+```
+
+- PATH
+
+```json
+{
+    "namespace": "xxx",
+    "name": "xxx",
+    "engine": "PATH",
+    "address": {
+        "path": "xxx"
+    }
+}
+```
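+
+With one of the configurations above saved to a file, bind is invoked as follows (the file path is illustrative):
+
+```bash
+flow table bind -c examples/table_bind_hdfs.json
+```
+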
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return information |
+| data | object | return data |
+
+Sample
+
+```json
+{
+    "data": {
+        "namespace": "xxx",
+        "table_name": "xxx"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+
+### disable
+
+Tables can be made unavailable by table disable
+
+```bash
+flow table disable [options]
+```
+
+**Options** 
+
+| parameters | short-format | long-format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| table_name | `-t` | `--table-name` | yes | string | fate table name |
+| namespace | `-n` |`--namespace` | yes |string | fate table namespace |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return information |
+| data | object | return data |
+
+Sample
+
+```json
+{
+    "data": {
+        "namespace": "xxx",
+        "table_name": "xxx"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### enable
+
+Tables can be made available with table enable
+
+```bash
+flow table enable [options]
+```
+
+**Options** 
+
+| parameters | short-format | long-format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| table_name | `-t` | `--table-name` | yes | string | fate table name |
+| namespace | `-n` |`--namespace` | yes |string | fate table namespace |
+
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return information |
+| data | object | return data |
+
+Sample
+
+```json
+{
+    "data": [{
+        "namespace": "xxx",
+        "table_name": "xxx"
+    }],
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### disable-delete
+
+Tables that are currently unavailable can be deleted with disable-delete
+
+```bash
+flow table disable-delete 
+```
+
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return information |
+| data | object | return data |
+
+Sample
+
+```json
+{
+  "data": [
+    {
+      "namespace": "xxx",
+      "table_name": "xxx"
+    },
+    {
+      "namespace": "xxx",
+      "table_name": "xxx"
+    }
+  ],
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+
+ 318 - 0
FATE-Flow/doc/cli/table.zh.md

@@ -0,0 +1,318 @@
+## Table
+
+### info
+
+Query information about a FATE table (real storage address, count, schema, etc.)
+
+```bash
+flow table info [options]
+```
+
+**Options**
+
+| parameter | short format | long format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| table_name | `-t` | `--table-name` | yes | string | fate table name |
+| namespace  | `-n` | `--namespace`  | yes | string | fate table namespace |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | object | return data |
+
+Sample
+
+```json
+{
+    "data": {
+        "address": {
+            "home": null,
+            "name": "breast_hetero_guest",
+            "namespace": "experiment"
+        },
+        "count": 569,
+        "exist": 1,
+        "namespace": "experiment",
+        "partition": 4,
+        "schema": {
+            "header": "y,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9",
+            "sid": "id"
+        },
+        "table_name": "breast_hetero_guest"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### delete
+
+You can delete table data with `table delete`
+
+```bash
+flow table delete [options]
+```
+
+**Options**
+
+| parameter | short format | long format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| table_name | `-t` | `--table-name` | yes | string | fate table name |
+| namespace  | `-n` | `--namespace`  | yes | string | fate table namespace |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | object | return data |
+
+Sample
+
+```json
+{
+    "data": {
+        "namespace": "xxx",
+        "table_name": "xxx"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### bind
+
+You can map a real storage address to a FATE storage table with `table bind`
+
+```bash
+flow table bind [options]
+```
+
+**Options**
+
+| parameter | short format | long format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| conf_path | `-c` | `--conf-path` | yes | string | configuration path |
+
+Note: conf_path is the path to a configuration file with the following parameters
+
+| parameter name | required | type | description |
+| :------------- | :--- | :----- | ------------------------------------- |
+| name           | yes  | string | fate table name |
+| namespace      | yes  | string | fate table namespace |
+| engine         | yes  | string | storage engine, supports "HDFS", "MYSQL", "PATH" |
+| address        | yes  | object | real storage address |
+| drop           | no   | int    | overwrite previous information |
+| head           | no   | int    | whether the data has a table header |
+| id_delimiter   | no   | string | data delimiter |
+| id_column      | no   | string | id field |
+| feature_column | no   | array  | feature fields |
+
+**meta information**
+
+| parameter name | required | type | description |
+|:---------------------|:----|:-------|-------------------------------------------|
+| input_format | no | string | format of the data ("dense", "svmlight", "tag:value"), used to determine how the data is parsed |
+| delimiter | no | string | data delimiter, default "," |
+| tag_with_value | no | bool | effective for the tag data format; whether to carry a value |
+| tag_value_delimiter | no | string | tag:value data delimiter, default ":" |
+| with_match_id | no | bool | whether to carry a match id |
+| id_list | no | object | names of the id columns, effective when extend_sid is enabled, e.g. ["imei", "phone"] |
+| id_range | no | object | for tag/svmlight format data, which columns are ids |
+| exclusive_data_type | no | string | format of special-type data columns |
+| data_type | no | string | column data type, default "float64" |
+| with_label | no | bool | whether there is a label, default False |
+| label_name | no | string | label name, default "y" |
+| label_type | no | string | label type, default "int" |
+
+**Note: in version 1.9.0 and later, if the meta parameter is passed in during the table bind phase, anonymous feature information is not generated directly.
+The anonymous feature information of the original data is updated after the data passes through the reader component once**
+
+**Sample**
+
+- hdfs
+
+```json
+{
+    "namespace": "experiment",
+    "name": "breast_hetero_guest",
+    "engine": "HDFS",
+    "address": {
+        "name_node": "hdfs://fate-cluster",
+        "path": "/data/breast_hetero_guest.csv"
+    },
+    "id_delimiter": ",",
+    "head": 1,
+    "partitions": 10
+}
+```
+
+- mysql
+
+```json
+{
+  "engine": "MYSQL",
+  "address": {
+    "user": "fate",
+    "passwd": "fate",
+    "host": "127.0.0.1",
+    "port": 3306,
+    "db": "experiment",
+    "name": "breast_hetero_guest"
+  },
+  "namespace": "experiment",
+  "name": "breast_hetero_guest",
+  "head": 1,
+  "id_delimiter": ",",
+  "partitions": 10,
+  "id_column": "id",
+  "feature_column": "y,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9"
+}
+```
+
+- PATH
+
+```json
+{
+    "namespace": "xxx",
+    "name": "xxx",
+    "engine": "PATH",
+    "address": {
+        "path": "xxx"
+    }
+}
+```
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | object | return data |
+
+Sample
+
+```json
+{
+    "data": {
+        "namespace": "xxx",
+        "table_name": "xxx"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+
+### disable
+
+You can make a table unavailable with `table disable`
+
+```bash
+flow table disable [options]
+```
+
+**Options**
+
+| parameter | short format | long format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| table_name | `-t` | `--table-name` | yes | string | fate table name |
+| namespace  | `-n` | `--namespace`  | yes | string | fate table namespace |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | object | return data |
+
+Sample
+
+```json
+{
+    "data": {
+        "namespace": "xxx",
+        "table_name": "xxx"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### enable
+
+You can make a table available with `table enable`
+
+```bash
+flow table enable [options]
+```
+
+**Options**
+
+| parameter | short format | long format | required | type | description |
+| :-------- | :--- | :--- | :--- | :----- | -------------- |
+| table_name | `-t` | `--table-name` | yes | string | fate table name |
+| namespace  | `-n` | `--namespace`  | yes | string | fate table namespace |
+
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | object | return data |
+
+Sample
+
+```json
+{
+    "data": [{
+        "namespace": "xxx",
+        "table_name": "xxx"
+    }],
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### disable-delete
+
+You can delete tables that are currently unavailable with `disable-delete`
+
+```bash
+flow table disable-delete 
+```
+
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int    | return code |
+| retmsg  | string | return message |
+| data    | object | return data |
+
+Sample
+
+```json
+{
+  "data": [
+    {
+      "namespace": "xxx",
+      "table_name": "xxx"
+    },
+    {
+      "namespace": "xxx",
+      "table_name": "xxx"
+    }
+  ],
+  "retcode": 0,
+  "retmsg": "success"
+}
+```

+ 89 - 0
FATE-Flow/doc/cli/tag.md

@@ -0,0 +1,89 @@
+## Tag
+
+### create
+
+Create a tag.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | ------------ | ------ | ------------ | -------- | -------- |
+| 1 | tag_name | `-t` | `--tag-name` | yes | tag name |
+| 2 | tag_desc | `-d` | `--tag-desc` | no | tag description |
+
+**Example**
+
+``` bash
+flow tag create -t tag1 -d "This is the parameter description of tag1."
+flow tag create -t tag2
+```
+
+### update
+
+Update the tag information.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | ------------ | ------ | ---------------- | -------- | ---------- |
+| 1 | tag_name | `-t` | `--tag-name` | yes | tag name |
+| 2 | new_tag_name | | `--new-tag-name` | no | new tag name |
+| 3 | new_tag_desc | | `--new-tag-desc` | no | new tag description |
+
+**Example**
+
+``` bash
+flow tag update -t tag1 --new-tag-name tag2
+flow tag update -t tag1 --new-tag-desc "This is the introduction of the new parameter."
+```
+
+### list
+
+Show the list of tags.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | ----- | ------ | --------- | -------- | ---------------------------- |
+| 1 | limit | `-l` | `--limit` | no | limit on the number of returned results (default: 10) |
+
+**Example**
+
+``` bash
+flow tag list
+flow tag list -l 3
+```
+
+### query
+
+Retrieve tags.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | ---------- | ------ | -------------- | -------- | -------------------------------------- |
+| 1 | tag_name | `-t` | `--tag-name` | yes | tag name |
+| 2 | with_model | | `--with-model` | no | if specified, information about models with this tag will be displayed |
+
+**Example**
+
+``` bash
+flow tag query -t $TAG_NAME
+flow tag query -t $TAG_NAME --with-model
+```
+
+### delete
+
+Delete the tag.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | -------- | ------ | ------------ | -------- | -------- |
+| 1 | tag_name | `-t` | `--tag-name` | yes | tag name |
+
+**Example**
+
+``` bash
+flow tag delete -t tag1
+```

+ 89 - 0
FATE-Flow/doc/cli/tag.zh.md

@@ -0,0 +1,89 @@
+## Tag
+
+### create
+
+Create a tag.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | ------------ | ------ | ------------ | -------- | -------- |
+| 1    | tag_name | `-t`   | `--tag-name` | yes      | tag name |
+| 2    | tag_desc | `-d`   | `--tag-desc` | no       | tag description |
+
+**Sample**
+
+``` bash
+flow tag create -t tag1 -d "This is the description of tag1."
+flow tag create -t tag2
+```
+
+### update
+
+Update tag information.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | ------------ | ------ | ---------------- | -------- | ---------- |
+| 1    | tag_name     | `-t`   | `--tag-name`     | yes      | tag name |
+| 2    | new_tag_name |        | `--new-tag-name` | no       | new tag name |
+| 3    | new_tag_desc |        | `--new-tag-desc` | no       | new tag description |
+
+**Sample**
+
+``` bash
+flow tag update -t tag1 --new-tag-name tag2
+flow tag update -t tag1 --new-tag-desc "This is the new description."
+```
+
+### list
+
+Show the list of tags.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | ----- | ------ | --------- | -------- | ---------------------------- |
+| 1    | limit | `-l`   | `--limit` | no       | limit on the number of returned results (default: 10) |
+
+**Sample**
+
+``` bash
+flow tag list
+flow tag list -l 3
+```
+
+### query
+
+Retrieve tags.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | ---------- | ------ | -------------- | -------- | -------------------------------------- |
+| 1    | tag_name   | `-t`   | `--tag-name`   | yes      | tag name |
+| 2    | with_model |        | `--with-model` | no       | if specified, information about models with this tag will be displayed |
+
+**Sample**
+
+``` bash
+flow tag query -t $TAG_NAME
+flow tag query -t $TAG_NAME --with-model
+```
+
+### delete
+
+Delete a tag.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | -------- | ------ | ------------ | -------- | -------- |
+| 1    | tag_name | `-t`   | `--tag-name` | yes      | tag name |
+
+**Sample**
+
+``` bash
+flow tag delete -t tag1
+```

+ 38 - 0
FATE-Flow/doc/cli/task.md

@@ -0,0 +1,38 @@
+## Task
+
+### query
+
+Retrieve Task information
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | -------------- | ------ | ------------------ | -------- | -------- |
+| 1 | job_id | `-j` | `--job_id` | no | Job ID |
+| 2 | role | `-r` | `--role` | no | role |
+| 3 | party_id | `-p` | `--party_id` | no | Party ID |
+| 4 | component_name | `-cpn` | `--component_name` | no | component name |
+| 5 | status | `-s` | `--status` | no | task status |
+
+**Example**
+
+``` bash
+flow task query -j $JOB_ID -p 9999 -r guest
+flow task query -cpn hetero_feature_binning_0 -s complete
+```
+
+### list
+
+Show the list of Tasks.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | ----- | ------ | --------- | -------- | ---------------------------- |
+| 1 | limit | `-l` | `--limit` | no | limit on the number of returned results (default: 10) |
+
+**Example**
+
+``` bash
+flow task list
+flow task list -l 25
+```

+ 38 - 0
FATE-Flow/doc/cli/task.zh.md

@@ -0,0 +1,38 @@
+## Task
+
+### query
+
+Retrieve Task information
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | -------------- | ------ | ------------------ | -------- | -------- |
+| 1    | job_id         | `-j`   | `--job_id`         | no       | Job ID   |
+| 2    | role           | `-r`   | `--role`           | no       | role     |
+| 3    | party_id       | `-p`   | `--party_id`       | no       | Party ID |
+| 4    | component_name | `-cpn` | `--component_name` | no       | component name |
+| 5    | status         | `-s`   | `--status`         | no       | task status |
+
+**Sample**
+
+``` bash
+flow task query -j $JOB_ID -p 9999 -r guest
+flow task query -cpn hetero_feature_binning_0 -s complete
+```
+
+### list
+
+Show the list of Tasks.
+
+**Options**
+
+| number | parameter | short format | long format | required | description |
+| ---- | ----- | ------ | --------- | -------- | ---------------------------- |
+| 1    | limit | `-l`   | `--limit` | no       | limit on the number of returned results (default: 10) |
+
+**Sample**
+
+``` bash
+flow task list
+flow task list -l 25
+```

+ 604 - 0
FATE-Flow/doc/cli/tracking.md

@@ -0,0 +1,604 @@
+## Tracking
+
+### metrics
+
+Get a list of all metric names generated by a component task
+
+```bash
+flow tracking metrics [options]
+```
+
+**Options**
+
+| parameter name | required | type | description |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id | yes | string | job id |
+| -r, --role | yes | string | participant role |
+| -p, --partyid | yes | string | participant id |
+| -cpn, --component-name | yes | string | component name, consistent with that in the job dsl |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | dict | return data |
+
+**Example**
+
+```bash
+flow tracking metrics -j 202111081618357358520 -r guest -p 9999 -cpn evaluation_0
+```
+
+Output:
+
+```json
+{
+    "data": {
+        "train": [
+            "hetero_lr_0",
+            "hetero_lr_0_ks_fpr",
+            "hetero_lr_0_ks_tpr",
+            "hetero_lr_0_lift",
+            "hetero_lr_0_gain",
+            "hetero_lr_0_accuracy",
+            "hetero_lr_0_precision",
+            "hetero_lr_0_recall",
+            "hetero_lr_0_roc",
+            "hetero_lr_0_confusion_mat",
+            "hetero_lr_0_f1_score",
+            "hetero_lr_0_quantile_pr"
+        ]
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### metric-all
+
+Get all the output metrics for a component task
+
+```bash
+flow tracking metric-all [options]
+```
+
+**Options**
+
+| parameter name | required | type | description |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id | yes | string | job id |
+| -r, --role | yes | string | participant role |
+| -p, --partyid | yes | string | participant id |
+| -cpn, --component-name | yes | string | component name, consistent with that in the job dsl |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | dict | return data |
+| jobId | string | job id |
+
+**Example**
+
+```bash
+flow tracking metric-all -j 202111081618357358520 -r guest -p 9999 -cpn evaluation_0
+```
+
+Output (for brevity, only part of the metric data is shown, and some values in the middle of array-type data are omitted):
+
+```json
+{
+    "data": {
+        "train": {
+            "hetero_lr_0": {
+                "data": [
+                    [
+                        "auc",
+                        0.293893
+                    ],
+                    [
+                        "ks",
+                        0.0
+                    ]
+                ],
+                "meta": {
+                    "metric_type": "EVALUATION_SUMMARY",
+                    "name": "hetero_lr_0"
+                }
+            },
+            "hetero_lr_0_accuracy": {
+                "data": [
+                    [
+                        0.0,
+                        0.372583
+                    ],
+                    [
+                        0.99,
+                        0.616872
+                    ]
+                ],
+                "meta": {
+                    "curve_name": "hetero_lr_0",
+                    "metric_type": "ACCURACY_EVALUATION",
+                    "name": "hetero_lr_0_accuracy",
+                    "thresholds": [
+                        0.999471,
+                        0.002577
+                    ]
+                }
+            },
+            "hetero_lr_0_confusion_mat": {
+                "data": [],
+                "meta": {
+                    "fn": [
+                        357,
+                        0
+                    ],
+                    "fp": [
+                        0,
+                        212
+                    ],
+                    "metric_type": "CONFUSION_MAT",
+                    "name": "hetero_lr_0_confusion_mat",
+                    "thresholds": [
+                        0.999471,
+                        0.0
+                    ],
+                    "tn": [
+                        212,
+                        0
+                    ],
+                    "tp": [
+                        0,
+                        357
+                    ]
+                }
+            }
+        }
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### parameters
+
+After a job is submitted, the system resolves the actual component task parameters from the `component_parameters` field of the job conf combined with the system default component parameters
+
+```bash
+flow tracking parameters [options]
+```
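+
+For reference, the user-supplied overrides come from the `component_parameters` section of the job conf. A minimal sketch under the DSL v2 conf layout (the component name and values are illustrative and mirror the `_user_feeded_params` in the output below):
+
+```json
+{
+  "component_parameters": {
+    "common": {
+      "hetero_lr_0": {
+        "penalty": "L2",
+        "optimizer": "rmsprop",
+        "alpha": 0.01,
+        "max_iter": 3,
+        "batch_size": 320,
+        "learning_rate": 0.15,
+        "init_param": {
+          "init_method": "random_uniform"
+        }
+      }
+    }
+  }
+}
+```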
+
+**Options**
+
+| parameter name | required | type | description |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id | yes | string | job id |
+| -r, --role | yes | string | participant role |
+| -p, --partyid | yes | string | participant id |
+| -cpn, --component-name | yes | string | component name, consistent with that in the job dsl |
+
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | dict | return data |
+| jobId | string | job id |
+
+**Example**
+
+```bash
+flow tracking parameters -j 202111081618357358520 -r guest -p 9999 -cpn hetero_lr_0
+```
+
+Output:
+
+```json
+{
+    "data": {
+        "ComponentParam": {
+            "_feeded_deprecated_params": [],
+            "_is_raw_conf": false,
+            "_name": "HeteroLR#hetero_lr_0",
+            "_user_feeded_params": [
+                "batch_size",
+                "penalty",
+                "max_iter",
+                "learning_rate",
+                "init_param",
+                "optimizer",
+                "init_param.init_method",
+                "alpha"
+            ],
+            "alpha": 0.01,
+            "batch_size": 320,
+            "callback_param": {
+                "callbacks": [],
+                "early_stopping_rounds": null,
+                "metrics": [],
+                "save_freq": 1,
+                "use_first_metric_only": false,
+                "validation_freqs": null
+            },
+            "cv_param": {
+                "history_value_type": "score",
+                "mode": "hetero",
+                "n_splits": 5,
+                "need_cv": false,
+                "output_fold_history": true,
+                "random_seed": 1,
+                "role": "guest",
+                "shuffle": true
+            },
+            "decay": 1,
+            "decay_sqrt": true,
+            "early_stop": "diff",
+            "early_stopping_rounds": null,
+            "encrypt_param": {
+                "key_length": 1024,
+                "method": "Paillier"
+            },
+            "encrypted_mode_calculator_param": {
+                "mode": "strict",
+                "re_encrypted_rate": 1
+            },
+            "floating_point_precision": 23,
+            "init_param": {
+                "fit_intercept": true,
+                "init_const": 1,
+                "init_method": "random_uniform",
+                "random_seed": null
+            },
+            "learning_rate": 0.15,
+            "max_iter": 3,
+            "metrics": [
+                "auc",
+                "ks"
+            ],
+            "multi_class": "ovr",
+            "optimizer": "rmsprop",
+            "penalty": "L2",
+            "predict_param": {
+                "threshold": 0.5
+            },
+            "sqn_param": {
+                "memory_M": 5,
+                "random_seed": null,
+                "sample_size": 5000,
+                "update_interval_L": 3
+            },
+            "stepwise_param": {
+                "direction": "both",
+                "max_step": 10,
+                "mode": "hetero",
+                "need_stepwise": false,
+                "nvmax": null,
+                "nvmin": 2,
+                "role": "guest",
+                "score_name": "AIC"
+            },
+            "tol": 0.0001,
+            "use_first_metric_only": false,
+            "validation_freqs": null
+        },
+        "module": "HeteroLR"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
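+
+Note that only the fields listed in `_user_feeded_params` above were supplied by the user; all other values come from the system defaults. A hypothetical job conf fragment that would produce this result (dsl v2 style; for illustration only) is:
+
+```json
+{
+    "component_parameters": {
+        "common": {
+            "hetero_lr_0": {
+                "penalty": "L2",
+                "optimizer": "rmsprop",
+                "alpha": 0.01,
+                "max_iter": 3,
+                "batch_size": 320,
+                "learning_rate": 0.15,
+                "init_param": {
+                    "init_method": "random_uniform"
+                }
+            }
+        }
+    }
+}
+```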
+
+### output-data
+
+Get the output data of a component task
+
+```bash
+flow tracking output-data [options]
+```
+
+**Options**
+
+| parameter-name | required | type | description |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id | yes | string | job id |
+| -r, --role | yes | string | participant role |
+| -p, --partyid | yes | string | participant id |
+| -cpn, --component-name | yes | string | Component name, consistent with that in job dsl |
+| -o, --output-path | yes | string | Path to output data |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | Return code |
+| retmsg | string | return message |
+| data | dict | return data |
+| jobId | string | job id |
+
+**Example**
+
+```bash
+flow tracking output-data -j 202111081618357358520 -r guest -p 9999 -cpn hetero_lr_0 -o ./
+```
+
+Output:
+
+```json
+{
+    "retcode": 0,
+    "directory": "$FATE_PROJECT_BASE/job_202111081618357358520_hetero_lr_0_guest_9999_output_data",
+    "retmsg": "Download successfully, please check $FATE_PROJECT_BASE/job_202111081618357358520_hetero_lr_0_guest_9999_output_data directory "
+}
+```
+
+### output-data-table
+
+Get the output data table name of the component
+
+```bash
+flow tracking output-data-table [options]
+```
+
+**Options**
+
+| parameter-name | required | type | description |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id | yes | string | job id |
+| -r, --role | yes | string | participant role |
+| -p, --partyid | yes | string | participant id |
+| -cpn, --component-name | yes | string | Component name, consistent with that in job dsl |
+
+**Returns**
+
+| parameter-name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | dict | return data |
+| jobId | string | job id |
+
+**Example**
+
+```bash
+flow tracking output-data-table -j 202111081618357358520 -r guest -p 9999 -cpn hetero_lr_0
+```
+
+Output:
+
+```json
+{
+    "data": [
+        {
+            "data_name": "train",
+            "table_name": "9688fa00406c11ecbd0bacde48001122",
+            "table_namespace": "output_data_202111081618357358520_hetero_lr_0_0"
+        }
+    ],
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### output-model
+
+Get the output model of a component task
+
+```bash
+flow tracking output-model [options]
+```
+
+**Options**
+
+| parameter-name | required | type | description |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id | yes | string | job id |
+| -r, --role | yes | string | participant role |
+| -p, --partyid | yes | string | participant id |
+| -cpn, --component-name | yes | string | Component name, consistent with that in job dsl |
+
+**Returns**
+
+| parameter-name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | dict | return data |
+| jobId | string | job id |
+
+**Example**
+
+```bash
+flow tracking output-model -j 202111081618357358520 -r guest -p 9999 -cpn hetero_lr_0
+```
+
+Output:
+
+```json
+{
+    "data": {
+        "bestIteration": -1,
+        "encryptedWeight": {},
+        "header": [
+            "x0",
+            "x1",
+            "x2",
+            "x3",
+            "x4",
+            "x5",
+            "x6",
+            "x7",
+            "x8",
+            "x9"
+        ],
+        "intercept": 0.24451607054764884,
+        "isConverged": false,
+        "iters": 3,
+        "lossHistory": [],
+        "needOneVsRest": false,
+        "weight": {
+            "x0": 0.04639947589856569,
+            "x1": 0.19899685467216902,
+            "x2": -0.18133550931649306,
+            "x3": 0.44928868756862206,
+            "x4": 0.05285905125502288,
+            "x5": 0.319187932844076,
+            "x6": 0.42578983446194013,
+            "x7": -0.025765956309895477,
+            "x8": -0.3699194462271593,
+            "x9": -0.1212094750908295
+        }
+    },
+    "meta": {
+        "meta_data": {
+            "alpha": 0.01,
+            "batchSize": "320",
+            "earlyStop": "diff",
+            "fitIntercept": true,
+            "learningRate": 0.15,
+            "maxIter": "3",
+            "needOneVsRest": false,
+            "optimizer": "rmsprop",
+            "partyWeight": 0.0,
+            "penalty": "L2",
+            "reEncryptBatches": "0",
+            "revealStrategy": "",
+            "tol": 0.0001
+        },
+        "module_name": "HeteroLR"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### get-summary
+
+Each component can set some summary information for easy observation and analysis
+
+```bash
+flow tracking get-summary [options]
+```
+
+**Options**
+
+| parameter-name | required | type | description |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id | yes | string | job id |
+| -r, --role | yes | string | participant role |
+| -p, --partyid | yes | string | participant id |
+| -cpn, --component-name | yes | string | Component name, consistent with that in job dsl |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | dict | return data |
+| jobId | string | job id |
+
+**Example**
+
+```bash
+flow tracking get-summary -j 202111081618357358520 -r guest -p 9999 -cpn hetero_lr_0
+```
+
+Output:
+
+```json
+{
+    "data": {
+        "best_iteration": -1,
+        "coef": {
+            "x0": 0.04639947589856569,
+            "x1": 0.19899685467216902,
+            "x2": -0.18133550931649306,
+            "x3": 0.44928868756862206,
+            "x4": 0.05285905125502288,
+            "x5": 0.319187932844076,
+            "x6": 0.42578983446194013,
+            "x7": -0.025765956309895477,
+            "x8": -0.3699194462271593,
+            "x9": -0.1212094750908295
+        },
+        "intercept": 0.24451607054764884,
+        "is_converged": false,
+        "one_vs_rest": false
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### tracking-source
+
+For querying the parent and source tables of a table
+
+```bash
+flow table tracking-source [options]
+```
+
+**Options**
+
+| parameter-name | required | type | description |
+| :-------- | :--- | :----- | -------------- |
+| name | yes | string | fate table name |
+| namespace | yes | string | fate table namespace |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | object | return data |
+
+**Example**
+
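+A hypothetical invocation (this doc lists the options as `name` and `namespace`; check `flow table tracking-source --help` for the exact flag spelling):
+
+```bash
+flow table tracking-source --name <fate-table-name> --namespace <fate-table-namespace>
+```
+
+Output:
+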
+```json
+{
+    "data": [{"parent_table_name": "61210fa23c8d11ec849a5254004fdc71", "parent_table_namespace": "output_data_202111031759294631020_hetero _lr_0_0", "source_table_name": "breast_hetero_guest", "source_table_namespace": "experiment"}],
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### tracking-job
+
+For querying the usage of a particular table
+
+```bash
+flow table tracking-job [options]
+```
+
+**Options**
+
+| parameter name | required | type | description |
+| :-------- | :--- | :----- | -------------- |
+| name | yes | string | fate table name |
+| namespace | yes | string | fate table namespace |
+
+**Returns**
+
+| parameter name | type | description |
+| :------ | :----- | -------- |
+| retcode | int | return code |
+| retmsg | string | return message |
+| data | object | return data |
+
+**Example**
+
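+A hypothetical invocation (this doc lists the options as `name` and `namespace`; check `flow table tracking-job --help` for the exact flag spelling):
+
+```bash
+flow table tracking-job --name <fate-table-name> --namespace <fate-table-namespace>
+```
+
+Output:
+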
+```json
+{
+    "data": {"count":2, "jobs":["202111052115375327830", "202111031816501123160"]},
+    "retcode": 0,
+    "retmsg": "success"
+}
+```

+ 604 - 0
FATE-Flow/doc/cli/tracking.zh.md

@@ -0,0 +1,604 @@
+## Tracking
+
+### metrics
+
+获取某个组件任务产生的所有指标名称列表
+
+```bash
+flow tracking metrics [options]
+```
+
+**选项**
+
+| 参数名                 | 必选 | 类型   | 说明                          |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id           | 是   | string | 作业id                        |
+| -r, --role             | 是   | string | 参与角色                      |
+| -p, --partyid          | 是   | string | 参与方id                      |
+| -cpn, --component-name | 是   | string | 组件名,与job dsl中的保持一致 |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | dict   | 返回数据 |
+
+**样例** 
+
+```bash
+flow tracking metrics -j 202111081618357358520 -r guest -p 9999 -cpn evaluation_0
+```
+
+输出:
+
+```json
+{
+    "data": {
+        "train": [
+            "hetero_lr_0",
+            "hetero_lr_0_ks_fpr",
+            "hetero_lr_0_ks_tpr",
+            "hetero_lr_0_lift",
+            "hetero_lr_0_gain",
+            "hetero_lr_0_accuracy",
+            "hetero_lr_0_precision",
+            "hetero_lr_0_recall",
+            "hetero_lr_0_roc",
+            "hetero_lr_0_confusion_mat",
+            "hetero_lr_0_f1_score",
+            "hetero_lr_0_quantile_pr"
+        ]
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### metric-all
+
+获取组件任务的所有输出指标
+
+```bash
+flow tracking metric-all [options]
+```
+
+**选项**
+
+| 参数名                 | 必选 | 类型   | 说明                          |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id           | 是   | string | 作业id                        |
+| -r, --role             | 是   | string | 参与角色                      |
+| -p, --partyid          | 是   | string | 参与方id                      |
+| -cpn, --component-name | 是   | string | 组件名,与job dsl中的保持一致 |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | dict   | 返回数据 |
+| jobId   | string | 作业id   |
+
+**样例** 
+
+```bash
+flow tracking metric-all -j 202111081618357358520 -r guest -p 9999 -cpn evaluation_0
+```
+
+输出(篇幅有限,仅显示部分指标的数据且数组型数据中间省略了一些值):
+
+```json
+{
+    "data": {
+        "train": {
+            "hetero_lr_0": {
+                "data": [
+                    [
+                        "auc",
+                        0.293893
+                    ],
+                    [
+                        "ks",
+                        0.0
+                    ]
+                ],
+                "meta": {
+                    "metric_type": "EVALUATION_SUMMARY",
+                    "name": "hetero_lr_0"
+                }
+            },
+            "hetero_lr_0_accuracy": {
+                "data": [
+                    [
+                        0.0,
+                        0.372583
+                    ],
+                    [
+                        0.99,
+                        0.616872
+                    ]
+                ],
+                "meta": {
+                    "curve_name": "hetero_lr_0",
+                    "metric_type": "ACCURACY_EVALUATION",
+                    "name": "hetero_lr_0_accuracy",
+                    "thresholds": [
+                        0.999471,
+                        0.002577
+                    ]
+                }
+            },
+            "hetero_lr_0_confusion_mat": {
+                "data": [],
+                "meta": {
+                    "fn": [
+                        357,
+                        0
+                    ],
+                    "fp": [
+                        0,
+                        212
+                    ],
+                    "metric_type": "CONFUSION_MAT",
+                    "name": "hetero_lr_0_confusion_mat",
+                    "thresholds": [
+                        0.999471,
+                        0.0
+                    ],
+                    "tn": [
+                        212,
+                        0
+                    ],
+                    "tp": [
+                        0,
+                        357
+                    ]
+                }
+            }
+        }
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### parameters
+
+提交作业后,系统依据job conf中的component_parameters结合系统默认组件参数,最终解析得到的实际组件任务运行参数
+
+```bash
+flow tracking parameters [options]
+```
+
+**选项**
+
+| 参数名                 | 必选 | 类型   | 说明                          |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id           | 是   | string | 作业id                        |
+| -r, --role             | 是   | string | 参与角色                      |
+| -p, --partyid          | 是   | string | 参与方id                      |
+| -cpn, --component-name | 是   | string | 组件名,与job dsl中的保持一致 |
+
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | dict   | 返回数据 |
+| jobId   | string | 作业id   |
+
+**样例** 
+
+```bash
+flow tracking parameters  -j 202111081618357358520 -r guest -p 9999 -cpn hetero_lr_0
+```
+
+输出:
+
+```json
+{
+    "data": {
+        "ComponentParam": {
+            "_feeded_deprecated_params": [],
+            "_is_raw_conf": false,
+            "_name": "HeteroLR#hetero_lr_0",
+            "_user_feeded_params": [
+                "batch_size",
+                "penalty",
+                "max_iter",
+                "learning_rate",
+                "init_param",
+                "optimizer",
+                "init_param.init_method",
+                "alpha"
+            ],
+            "alpha": 0.01,
+            "batch_size": 320,
+            "callback_param": {
+                "callbacks": [],
+                "early_stopping_rounds": null,
+                "metrics": [],
+                "save_freq": 1,
+                "use_first_metric_only": false,
+                "validation_freqs": null
+            },
+            "cv_param": {
+                "history_value_type": "score",
+                "mode": "hetero",
+                "n_splits": 5,
+                "need_cv": false,
+                "output_fold_history": true,
+                "random_seed": 1,
+                "role": "guest",
+                "shuffle": true
+            },
+            "decay": 1,
+            "decay_sqrt": true,
+            "early_stop": "diff",
+            "early_stopping_rounds": null,
+            "encrypt_param": {
+                "key_length": 1024,
+                "method": "Paillier"
+            },
+            "encrypted_mode_calculator_param": {
+                "mode": "strict",
+                "re_encrypted_rate": 1
+            },
+            "floating_point_precision": 23,
+            "init_param": {
+                "fit_intercept": true,
+                "init_const": 1,
+                "init_method": "random_uniform",
+                "random_seed": null
+            },
+            "learning_rate": 0.15,
+            "max_iter": 3,
+            "metrics": [
+                "auc",
+                "ks"
+            ],
+            "multi_class": "ovr",
+            "optimizer": "rmsprop",
+            "penalty": "L2",
+            "predict_param": {
+                "threshold": 0.5
+            },
+            "sqn_param": {
+                "memory_M": 5,
+                "random_seed": null,
+                "sample_size": 5000,
+                "update_interval_L": 3
+            },
+            "stepwise_param": {
+                "direction": "both",
+                "max_step": 10,
+                "mode": "hetero",
+                "need_stepwise": false,
+                "nvmax": null,
+                "nvmin": 2,
+                "role": "guest",
+                "score_name": "AIC"
+            },
+            "tol": 0.0001,
+            "use_first_metric_only": false,
+            "validation_freqs": null
+        },
+        "module": "HeteroLR"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### output-data
+
+获取组件输出
+
+```bash
+flow tracking output-data [options]
+```
+
+**选项**
+
+| 参数名                 | 必选 | 类型   | 说明                          |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id           | 是   | string | 作业id                        |
+| -r, --role             | 是   | string | 参与角色                      |
+| -p, --partyid          | 是   | string | 参与方id                      |
+| -cpn, --component-name | 是   | string | 组件名,与job dsl中的保持一致 |
+| -o, --output-path      | 是   | string | 输出数据的存放路径            |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | dict   | 返回数据 |
+| jobId   | string | 作业id   |
+
+**样例** 
+
+```bash
+flow tracking output-data  -j 202111081618357358520 -r guest -p 9999 -cpn hetero_lr_0 -o ./
+```
+
+输出:
+
+```json
+{
+    "retcode": 0,
+    "directory": "$FATE_PROJECT_BASE/job_202111081618357358520_hetero_lr_0_guest_9999_output_data",
+    "retmsg": "Download successfully, please check $FATE_PROJECT_BASE/job_202111081618357358520_hetero_lr_0_guest_9999_output_data directory"
+}
+```
+
+### output-data-table
+
+获取组件的输出数据表名
+
+```bash
+flow tracking output-data-table [options]
+```
+
+**选项**
+
+| 参数名                 | 必选 | 类型   | 说明                          |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id           | 是   | string | 作业id                        |
+| -r, --role             | 是   | string | 参与角色                      |
+| -p, --partyid          | 是   | string | 参与方id                      |
+| -cpn, --component-name | 是   | string | 组件名,与job dsl中的保持一致 |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | dict   | 返回数据 |
+| jobId   | string | 作业id   |
+
+**样例** 
+
+```bash
+flow tracking output-data-table  -j 202111081618357358520 -r guest -p 9999 -cpn hetero_lr_0
+```
+
+输出:
+
+```json
+{
+    "data": [
+        {
+            "data_name": "train",
+            "table_name": "9688fa00406c11ecbd0bacde48001122",
+            "table_namespace": "output_data_202111081618357358520_hetero_lr_0_0"
+        }
+    ],
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### output-model
+
+获取某个组件任务的输出模型
+
+```bash
+flow tracking output-model [options]
+```
+
+**选项**
+
+| 参数名                 | 必选 | 类型   | 说明                          |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id           | 是   | string | 作业id                        |
+| -r, --role             | 是   | string | 参与角色                      |
+| -p, --partyid          | 是   | string | 参与方id                      |
+| -cpn, --component-name | 是   | string | 组件名,与job dsl中的保持一致 |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | dict   | 返回数据 |
+| jobId   | string | 作业id   |
+
+**样例** 
+
+```bash
+flow tracking output-model  -j 202111081618357358520 -r guest -p 9999 -cpn hetero_lr_0
+```
+
+输出:
+
+```json
+{
+    "data": {
+        "bestIteration": -1,
+        "encryptedWeight": {},
+        "header": [
+            "x0",
+            "x1",
+            "x2",
+            "x3",
+            "x4",
+            "x5",
+            "x6",
+            "x7",
+            "x8",
+            "x9"
+        ],
+        "intercept": 0.24451607054764884,
+        "isConverged": false,
+        "iters": 3,
+        "lossHistory": [],
+        "needOneVsRest": false,
+        "weight": {
+            "x0": 0.04639947589856569,
+            "x1": 0.19899685467216902,
+            "x2": -0.18133550931649306,
+            "x3": 0.44928868756862206,
+            "x4": 0.05285905125502288,
+            "x5": 0.319187932844076,
+            "x6": 0.42578983446194013,
+            "x7": -0.025765956309895477,
+            "x8": -0.3699194462271593,
+            "x9": -0.1212094750908295
+        }
+    },
+    "meta": {
+        "meta_data": {
+            "alpha": 0.01,
+            "batchSize": "320",
+            "earlyStop": "diff",
+            "fitIntercept": true,
+            "learningRate": 0.15,
+            "maxIter": "3",
+            "needOneVsRest": false,
+            "optimizer": "rmsprop",
+            "partyWeight": 0.0,
+            "penalty": "L2",
+            "reEncryptBatches": "0",
+            "revealStrategy": "",
+            "tol": 0.0001
+        },
+        "module_name": "HeteroLR"
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### get-summary
+
+每个组件允许设置一些摘要信息,便于观察分析
+
+```bash
+flow tracking get-summary [options]
+```
+
+**选项**
+
+| 参数名                 | 必选 | 类型   | 说明                          |
+| :--------------------- | :--- | :----- | ----------------------------- |
+| -j, --job-id           | 是   | string | 作业id                        |
+| -r, --role             | 是   | string | 参与角色                      |
+| -p, --partyid          | 是   | string | 参与方id                      |
+| -cpn, --component-name | 是   | string | 组件名,与job dsl中的保持一致 |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | dict   | 返回数据 |
+| jobId   | string | 作业id   |
+
+**样例** 
+
+```bash
+flow tracking get-summary -j 202111081618357358520 -r guest -p 9999 -cpn hetero_lr_0
+```
+
+输出:
+
+```json
+{
+    "data": {
+        "best_iteration": -1,
+        "coef": {
+            "x0": 0.04639947589856569,
+            "x1": 0.19899685467216902,
+            "x2": -0.18133550931649306,
+            "x3": 0.44928868756862206,
+            "x4": 0.05285905125502288,
+            "x5": 0.319187932844076,
+            "x6": 0.42578983446194013,
+            "x7": -0.025765956309895477,
+            "x8": -0.3699194462271593,
+            "x9": -0.1212094750908295
+        },
+        "intercept": 0.24451607054764884,
+        "is_converged": false,
+        "one_vs_rest": false
+    },
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### tracking-source
+
+用于查询某张表的父表及源表
+
+```bash
+flow table tracking-source [options]
+```
+
+**选项**
+
+| 参数名    | 必选 | 类型   | 说明           |
+| :-------- | :--- | :----- | -------------- |
+| name      | 是   | string | fate表名       |
+| namespace | 是   | string | fate表命名空间 |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | object | 返回数据 |
+
+**样例**
+
+```json
+{
+    "data": [{"parent_table_name": "61210fa23c8d11ec849a5254004fdc71", "parent_table_namespace": "output_data_202111031759294631020_hetero_lr_0_0", "source_table_name": "breast_hetero_guest", "source_table_namespace": "experiment"}],
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+### tracking-job
+
+用于查询某张表的使用情况
+
+```bash
+flow table tracking-job [options]
+```
+
+**选项**
+
+| 参数名    | 必选 | 类型   | 说明           |
+| :-------- | :--- | :----- | -------------- |
+| name      | 是   | string | fate表名       |
+| namespace | 是   | string | fate表命名空间 |
+
+**返回**
+
+| 参数名  | 类型   | 说明     |
+| :------ | :----- | -------- |
+| retcode | int    | 返回码   |
+| retmsg  | string | 返回信息 |
+| data    | object | 返回数据 |
+
+**样例**
+
+```json
+{
+    "data": {"count":2,"job":["202111052115375327830", "202111031816501123160"]},
+    "retcode": 0,
+    "retmsg": "success"
+}
+```

+ 412 - 0
FATE-Flow/doc/configuration_instruction.md

@@ -0,0 +1,412 @@
+# Configuration Instructions
+
+## 1. Description
+
+Contains the general configuration of the `FATE project` and the configuration of each subsystem
+
+## 2. Global configuration
+
+- Path: `${FATE_PROJECT_BASE}/conf/server_conf.yaml`
+- Description: Commonly used configuration, generally needed to determine when deploying
+- Note: Configuration items that are not listed below in the configuration file are internal system parameters and are not recommended to be modified
+
+```yaml
+# If FATEFlow uses the registry, FATEFlow will register the FATEFlow Server address and the published model download address to the registry for the online system FATEServing; it will also get the FATEServing address from the registry.
+use_registry: false
+# Whether to enable higher security serialization mode
+use_deserialize_safe_module: false
+# Whether to enable dependency distribution in fate on spark mode
+dependent_distribution: false
+# party id: required for site authentication
+party_id:
+# Hook module configuration
+hook_module:
+  # Client authentication hooks
+  client_authentication: fate_flow.hook.flow.client_authentication
+  # site-side authentication hooks
+  site_authentication: fate_flow.hook.flow.site_authentication
+  # Permission authentication hooks
+  permission: fate_flow.hook.flow.permission
+# In addition to flow's own hooks for authentication and authorization, authentication and authorization interfaces registered by third-party services are also supported
+# The service name registered by the third-party authentication and authorization service
+hook_server_name:
+# Authentication
+authentication:
+  # Client authentication configuration
+  client:
+    # Client authentication switch
+    switch: false
+    http_app_key:
+    http_secret_key:
+  # Site authentication configuration
+  site:
+    # Authentication switch
+    switch: false
+# Authorization
+permission:
+  # Authorization switch
+  switch: false
+  # Component authorization switch
+  component: false
+  # Dataset authorization switch
+  dataset: false
+fateflow:
+  # you must set real ip address, 127.0.0.1 and 0.0.0.0 is not supported
+  host: 127.0.0.1
+  http_port: 9380
+  grpc_port: 9360
+  # The nginx address needs to be configured for high availability
+  nginx:
+    host:
+    http_port:
+    grpc_port:
+  # use random instance_id instead of {host}:{http_port}
+  random_instance_id: false
+  # support rollsite/nginx/fateflow as a coordination proxy
+  # rollsite support fate on eggroll, use grpc protocol
+  # nginx support fate on eggroll and fate on spark, use http or grpc protocol, default is http
+  # fateflow support fate on eggroll and fate on spark, use http protocol, but not support exchange network mode
+
+  # format (proxy: rollsite) means use the rollsite configuration under fate_on_eggroll below; likewise, nginx means use the nginx configuration under fate_on_spark below
+  # you can also customize the config like this (set the fateflow of the opposite party as the proxy):
+  # proxy:
+  #   name: fateflow
+  #   host: xx
+  #   http_port: xx
+  #   grpc_port: xx
+  proxy: rollsite
+  # support default/http/grpc
+  protocol: default
+database:
+  name: fate_flow
+  user: fate
+  passwd: fate
+  host: 127.0.0.1
+  port: 3306
+  max_connections: 100
+  stale_timeout: 30
+# The registry address and its authentication parameters
+zookeeper:
+  hosts:
+    - 127.0.0.1:2181
+  use_acl: false
+  user: fate
+  password: fate
+# engine services
+default_engines:
+  computing: standalone
+  federation: standalone
+  storage: standalone
+fate_on_standalone:
+  standalone:
+    cores_per_node: 20
+    nodes: 1
+fate_on_eggroll:
+  clustermanager:
+    # CPU cores of the machine where eggroll nodemanager service is running
+    cores_per_node: 16
+    # the number of eggroll nodemanager machine
+    nodes: 1
+  rollsite:
+    host: 127.0.0.1
+    port: 9370
+fate_on_spark:
+  spark:
+    # default use SPARK_HOME environment variable
+    home:
+    cores_per_node: 20
+    nodes: 2
+  linkis_spark:
+    cores_per_node: 20
+    nodes: 2
+    host: 127.0.0.1
+    port: 9001
+    token_code: MLSS
+    python_path: /data/projects/fate/python
+  hive:
+    host: 127.0.0.1
+    port: 10000
+    auth_mechanism:
+    username:
+    password:
+  linkis_hive:
+    host: 127.0.0.1
+    port: 9001
+  hdfs:
+    name_node: hdfs://fate-cluster
+    # default /
+    path_prefix:
+  rabbitmq:
+    host: 192.168.0.4
+    mng_port: 12345
+    port: 5672
+    user: fate
+    password: fate
+    # default conf/rabbitmq_route_table.yaml
+    route_table:
+  pulsar:
+    host: 192.168.0.5
+    port: 6650
+    mng_port: 8080
+    cluster: standalone
+    # all parties should use a same tenant
+    tenant: fl-tenant
+    # message ttl in minutes
+    topic_ttl: 5
+    # default conf/pulsar_route_table.yaml
+    route_table:
+  nginx:
+    host: 127.0.0.1
+    http_port: 9300
+    grpc_port: 9310
+# external services
+fateboard:
+  host: 127.0.0.1
+  port: 8080
+
+# on API `/model/load` and `/model/load/do`
+# automatic upload models to the model store if it exists locally but does not exist in the model storage
+# or download models from the model store if it does not exist locally but exists in the model storage
+# this config will not affect API `/model/store` or `/model/restore`
+enable_model_store: false
+# default address for export model
+model_store_address:
+  # use mysql as the model store engine
+#  storage: mysql
+#  database: fate_model
+#  user: fate
+#  password: fate
+#  host: 127.0.0.1
+#  port: 3306
+  # other optional configs send to the engine
+#  max_connections: 10
+#  stale_timeout: 10
+  # use redis as the model store engine
+#  storage: redis
+#  host: 127.0.0.1
+#  port: 6379
+#  db: 0
+#  password:
+  # the expiry time of keys, in seconds. defaults None (no expiry time)
+#  ex:
+  # use tencent cos as model store engine
+  storage: tencent_cos
+  Region:
+  SecretId:
+  SecretKey:
+  Bucket:
+
+# The address of the FATE Serving Server needs to be configured if the registry is not used
+servings:
+  hosts:
+    - 127.0.0.1:8000
+fatemanager:
+  host: 127.0.0.1
+  port: 8001
+  federatedId: 0
+
+```
+
+## 3. FATE Flow Configuration
+
+### 3.1 FATE Flow Server Configuration
+
+- Path: `${FATE_FLOW_BASE}/python/fate_flow/settings.py`
+- Description: Advanced configuration, generally no changes are needed
+- Note: Configuration items that are not listed below in the configuration file are internal system parameters and are not recommended to be modified
+
+```python
+# Thread pool size of grpc server used by FATE Flow Server for multiparty FATE Flow Server communication, not set default equal to the number of CPU cores of the machine
+GRPC_SERVER_MAX_WORKERS = None
+
+# Switch
+# The upload data interface gets data from the client by default, this value can be configured at the time of the interface call using use_local_data
+UPLOAD_DATA_FROM_CLIENT = True
+# Whether to enable multi-party communication authentication, need to be used with FATE Cloud
+CHECK_NODES_IDENTITY = False
+# Whether to enable the resource authentication function, need to use with FATE Cloud
+USE_AUTHENTICATION = False
+# Resource privileges granted by default
+PRIVILEGE_COMMAND_WHITELIST = []
+```
+
+### 3.2 FATE Flow Default Job Configuration
+
+- Path: `${FATE_FLOW_BASE}/conf/job_default_config.yaml`
+- Description: Advanced configuration, generally no changes are needed
+- Note: Configuration items that are not listed below in the configuration file are internal system parameters and are not recommended to be modified
+- Take effect: use flow server reload or restart fate flow server
+
+```yaml
+# component provider, relative path to get_fate_python_directory
+default_component_provider_path: federatedml
+
+# resource
+# total_cores_overweight_percent
+total_cores_overweight_percent: 1 # 1 means no overweight
+total_memory_overweight_percent: 1 # 1 means no overweight
+# Default task parallelism per job, you can configure a custom value using job_parameters:task_parallelism when submitting the job configuration
+task_parallelism: 1
+# The default number of CPU cores per task per job, which can be configured using job_parameters:task_cores when submitting the job configuration
+task_cores: 4
+# This configuration does not take effect as memory resources are not supported for scheduling at the moment
+task_memory: 0 # mb
+# The ratio of the maximum number of CPU cores allowed for a job to the total number of resources, e.g., if the total resources are 10 and the value is 0.5, then a job is allowed to request up to 5 CPUs, i.e., task_cores * task_parallelism <= 10 * 0.5
+max_cores_percent_per_job: 1 # 1 means total
+
+# scheduling
+# Default job execution timeout, you can configure a custom value using job_parameters:timeout when submitting the job configuration
+job_timeout: 259200 # s
+# Timeout for communication when sending cross-participant scheduling commands or status
+remote_request_timeout: 30000 # ms
+# Number of retries to send cross-participant scheduling commands or status
+federated_command_trys: 3
+end_status_job_scheduling_time_limit: 300000 # ms
+end_status_job_scheduling_updates: 1
+# Default number of auto retries, you can configure a custom value using job_parameters:auto_retries when submitting the job configuration
+auto_retries: 0
+# Default retry interval
+auto_retry_delay: 1 #seconds
+# Default multiparty status collection method, supports PULL and PUSH; you can also specify the current job collection mode in the job configuration
+federated_status_collect_type: PUSH
+
+# upload
+upload_max_bytes: 104857600 # bytes
+
+#component output
+output_data_summary_count_limit: 100
+```
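+
+As noted above, edits to this file take effect after a reload or a restart. The reload can be issued from the FATE Flow CLI (the `flow server` command set; see the server command documentation for details):
+
+```bash
+flow server reload
+```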
+
+## 4. FATE Board Configuration
+
+- Path: `${FATE_BOARD_BASE}/conf/application.properties`
+- Description: Commonly used configuration, generally needed to determine when deploying
+- Note: Configuration items that are not listed below in the configuration file are internal system parameters and are not recommended to be modified
+
+```properties
+# Service listening port
+server.port=8080
+# fateflow address, referring to the http port address of fateflow
+fateflow.url=http://127.0.0.1:9380
+# db address, same as the above global configuration service_conf.yaml inside the database configuration
+fateboard.datasource.jdbc-url=jdbc:mysql://localhost:3306/fate_flow?characterEncoding=utf8&characterSetResults=utf8&autoReconnect=true&failOverReadOnly=false&serverTimezone=GMT%2B8
+# db configuration, same as the above global configuration service_conf.yaml inside the database configuration
+fateboard.datasource.username=
+# db configuration, same as the above global configuration service_conf.yaml inside the database configuration
+fateboard.datasource.password=
+server.tomcat.max-threads=1000
+server.tomcat.max-connections=20000
+spring.servlet.multipart.max-file-size=10MB
+spring.servlet.multipart.max-request-size=100MB
+# Administrator account configuration
+server.board.login.username=admin
+server.board.login.password=admin
+server.ssl.key-store=classpath:
+server.ssl.key-store-password=
+server.ssl.key-password=
+server.ssl.key-alias=
+# Required when the fateflow server enables api access authentication
+HTTP_APP_KEY=
+HTTP_SECRET_KEY=
+```
+
+## 5. EggRoll
+
+### 5.1 System configuration
+
+- Path: `${EGGROLL_HOME}/conf/eggroll.properties`
+- Description: Commonly used configuration, generally needed to determine when deploying
+- Note: Configuration items that are not listed below in the configuration file are internal system parameters and are not recommended to be modified
+
+```properties
+[eggroll]
+# core
+# MySQL connection configuration, generally required for production applications
+eggroll.resourcemanager.clustermanager.jdbc.driver.class.name=com.mysql.cj.jdbc.Driver
+# MySQL connection configuration, generally required for production applications
+eggroll.resourcemanager.clustermanager.jdbc.url=jdbc:mysql://localhost:3306/eggroll_meta?useSSL=false&serverTimezone=UTC&characterEncoding=utf8&allowPublicKeyRetrieval=true
+# Connect to MySQL account, this configuration is required for general production applications
+eggroll.resourcemanager.clustermanager.jdbc.username=
+# Connect to MySQL password, generally required for production applications
+eggroll.resourcemanager.clustermanager.jdbc.password=
+
+# Data storage directory
+eggroll.data.dir=data/
+# Log storage directory
+eggroll.logs.dir=logs/
+eggroll.resourcemanager.clustermanager.host=127.0.0.1
+eggroll.resourcemanager.clustermanager.port=4670
+eggroll.resourcemanager.nodemanager.port=4670
+
+# python path
+eggroll.resourcemanager.bootstrap.egg_pair.venv=
+# pythonpath, usually you need to specify the python directory of eggroll and the python directory of fate
+eggroll.resourcemanager.bootstrap.egg_pair.pythonpath=python
+
+# java path
+eggroll.resourcemanager.bootstrap.egg_frame.javahome=
+# java service startup options; no need to configure unless there are special requirements
+eggroll.resourcemanager.bootstrap.egg_frame.jvm.options=
+# grpc connection hold time for multi-party communication
+eggroll.core.grpc.channel.keepalive.timeout.sec=20
+
+# session
+# Number of computing processes started per nodemanager in an eggroll session; overridden by fate flow's default parameters when tasks are submitted through fate
+eggroll.session.processors.per.node=4
+
+# rollsite
+eggroll.rollsite.coordinator=webank
+eggroll.rollsite.host=127.0.0.1
+eggroll.rollsite.port=9370
+eggroll.rollsite.party.id=10001
+eggroll.rollsite.route.table.path=conf/route_table.json
+
+eggroll.rollsite.push.max.retry=3
+eggroll.rollsite.push.long.retry=2
+eggroll.rollsite.push.batches.per.stream=10
+eggroll.rollsite.adapter.sendbuf.size=100000
+```
+
+### 5.2 Routing table configuration
+
+- Path: `${EGGROLL_HOME}/conf/route_table.json`
+- Description: Commonly used configuration, generally needed to determine when deploying
+  - The routing table has two levels
+  - The first level indicates the site; if no configuration is found for the target site, the **default** entry is used
+  - The second level indicates the service; if no configuration is found for the target service, the **default** entry is used
+  - At the second level, **default** is usually set to the address of this party's **rollsite** service, and **fateflow** to the grpc address of this party's **fate flow server**
+
+```json
+{
+  "route_table":
+  {
+    "10001":
+    {
+      "default":[
+        {
+          "port": 9370,
+          "ip": "127.0.0.1"
+        }
+      ],
+      "fateflow":[
+        {
+          "port": 9360,
+          "ip": "127.0.0.1"
+        }
+      ]
+    },
+    "10002":
+    {
+      "default":[
+        {
+          "port": 9470,
+          "ip": "127.0.0.1"
+        }
+      ]
+    }
+  },
+  "permission":
+  {
+    "default_allow": true
+  }
+}
+```

+ 419 - 0
FATE-Flow/doc/configuration_instruction.zh.md

@@ -0,0 +1,419 @@
+# 配置说明
+
+## 1. 说明
+
+包含`FATE项目`总配置以及各个子系统的配置
+
+## 2. 全局配置
+
+- 路径:`${FATE_PROJECT_BASE}/conf/server_conf.yaml`
+- 说明:常用配置,一般部署时需要确定
+- 注意:配置文件中未被列举如下的配置项属于系统内部参数,不建议修改
+
+```yaml
+# FATEFlow是否使用注册中心,使用注册中心的情况下,FATEFlow会注册FATEFlow Server地址以及发布的模型下载地址到注册中心以供在线系统FATEServing使用;同时也会从注册中心获取FATEServing地址
+use_registry: false
+# 是否启用更高安全级别的序列化模式
+use_deserialize_safe_module: false
+# fate on spark模式下是否启动依赖分发
+dependent_distribution: false
+# 是否启动密码加密(数据库密码),开启后配置encrypt_module和private_key才生效
+encrypt_password: false
+# 加密包及加密函数(“#”号拼接)
+encrypt_module: fate_arch.common.encrypt_utils#pwdecrypt
+# 加密私钥
+private_key:
+# 站点id,站点鉴权时需要配置
+party_id:
+# 钩子模块配置
+hook_module:
+  # 客户端认证钩子
+  client_authentication: fate_flow.hook.flow.client_authentication
+  # 站点端认证钩子
+  site_authentication: fate_flow.hook.flow.site_authentication
+  # 权限认证钩子
+  permission: fate_flow.hook.flow.permission
+# 除了支持使用flow的钩子进行认证、鉴权,也支持使用第三方服务注册的认证和鉴权接口
+# 第三方认证、鉴权服务注册的服务名
+hook_server_name:
+# 认证
+authentication:
+  # 客户端认证配置
+  client:
+    # 客户端认证开关
+    switch: false
+    http_app_key:
+    http_secret_key:
+  # 站点认证配置
+  site:
+    # 认证开关
+    switch: false
+# 鉴权
+permission:
+  # 鉴权开关
+  switch: false
+  # 组件鉴权开关
+  component: false
+  # 数据集鉴权开关
+  dataset: false
+fateflow:
+  # 必须使用真实绑定的ip地址,避免因为多网卡/多IP引发的额外问题
+  # you must set real ip address, 127.0.0.1 and 0.0.0.0 is not supported
+  host: 127.0.0.1
+  http_port: 9380
+  grpc_port: 9360
+  # 高可用性时需要配置nginx地址
+  nginx:
+    host:
+    http_port:
+    grpc_port:
+  # 实例id默认为{host}:{http_port},可以通过random_instance_id配置生成随机id
+  random_instance_id: false
+  # 支持使用rollsite/nginx/fateflow作为多方任务协调通信代理
+  # rollsite支持fate on eggroll的场景,仅支持grpc协议,支持P2P组网及星型组网模式
+  # nginx支持所有引擎场景,支持http与grpc协议,默认为http,支持P2P组网及星型组网模式
+  # fateflow支持所有引擎场景,支持http与grpc协议,默认为http,仅支持P2P组网模式,也即只支持互相配置对端fateflow地址
+  # 格式(proxy: rollsite)表示使用rollsite并使用下方fate_one_eggroll配置大类中的rollsite配置;配置nginx表示使用下方fate_one_spark配置大类中的nginx配置
+  # 也可以直接配置对端fateflow的地址,如下所示:
+  # proxy:
+  #   name: fateflow
+  #   host: xx
+  #   http_port: xx
+  #   grpc_port: xx
+  proxy: rollsite
+  # support default/http/grpc
+  protocol: default
+database:
+  name: fate_flow
+  user: fate
+  passwd: fate
+  host: 127.0.0.1
+  port: 3306
+  max_connections: 100
+  stale_timeout: 30
+# 注册中心地址及其身份认证参数
+zookeeper:
+  hosts:
+    - 127.0.0.1:2181
+  use_acl: false
+  user: fate
+  password: fate
+# engine services
+default_engines:
+  computing: standalone
+  federation: standalone
+  storage: standalone
+fate_on_standalone:
+  standalone:
+    cores_per_node: 20
+    nodes: 1
+fate_on_eggroll:
+  clustermanager:
+    # eggroll nodemanager服务所在机器的CPU核数
+    cores_per_node: 16
+    # eggroll nodemanager服务的机器数量
+    nodes: 1
+  rollsite:
+    host: 127.0.0.1
+    port: 9370
+fate_on_spark:
+  spark:
+    # default use SPARK_HOME environment variable
+    home:
+    cores_per_node: 20
+    nodes: 2
+  linkis_spark:
+    cores_per_node: 20
+    nodes: 2
+    host: 127.0.0.1
+    port: 9001
+    token_code: MLSS
+    python_path: /data/projects/fate/python
+  hive:
+    host: 127.0.0.1
+    port: 10000
+    auth_mechanism:
+    username:
+    password:
+  linkis_hive:
+    host: 127.0.0.1
+    port: 9001
+  hdfs:
+    name_node: hdfs://fate-cluster
+    # default /
+    path_prefix:
+  rabbitmq:
+    host: 192.168.0.4
+    mng_port: 12345
+    port: 5672
+    user: fate
+    password: fate
+    # default conf/rabbitmq_route_table.yaml
+    route_table:
+  pulsar:
+    host: 192.168.0.5
+    port: 6650
+    mng_port: 8080
+    cluster: standalone
+    # all parties should use a same tenant
+    tenant: fl-tenant
+    # message ttl in minutes
+    topic_ttl: 5
+    # default conf/pulsar_route_table.yaml
+    route_table:
+  nginx:
+    host: 127.0.0.1
+    http_port: 9300
+    grpc_port: 9310
+# external services
+fateboard:
+  host: 127.0.0.1
+  port: 8080
+
+# on API `/model/load` and `/model/load/do`
+# automatic upload models to the model store if it exists locally but does not exist in the model storage
+# or download models from the model store if it does not exist locally but exists in the model storage
+# this config will not affect API `/model/store` or `/model/restore`
+enable_model_store: false
+# 模型导出(export model)操作默认的导出地址
+model_store_address:
+  # use mysql as the model store engine
+#  storage: mysql
+#  database: fate_model
+#  user: fate
+#  password: fate
+#  host: 127.0.0.1
+#  port: 3306
+  # other optional configs send to the engine
+#  max_connections: 10
+#  stale_timeout: 10
+  # use redis as the model store engine
+#  storage: redis
+#  host: 127.0.0.1
+#  port: 6379
+#  db: 0
+#  password:
+  # the expiry time of keys, in seconds. defaults None (no expiry time)
+#  ex:
+  # use tencent cos as model store engine
+  storage: tencent_cos
+  Region:
+  SecretId:
+  SecretKey:
+  Bucket:
+
+# 不使用注册中心的情况下,需要配置FATE Serving Server的地址
+servings:
+  hosts:
+    - 127.0.0.1:8000
+fatemanager:
+  host: 127.0.0.1
+  port: 8001
+  federatedId: 0
+
+```
+
+## 3. FATE Flow配置
+
+### 3.1 FATE Flow Server配置
+
+- 路径:`${FATE_FLOW_BASE}/python/fate_flow/settings.py`
+- 说明:高级配置,一般不需要做改动
+- 注意:配置文件中未被列举如下的配置项属于系统内部参数,不建议修改
+
+```python
+# FATE Flow Server用于多方FATE Flow Server通信的grpc server的线程池大小,不设置默认等于机器CPU核数
+GRPC_SERVER_MAX_WORKERS = None
+
+# Switch
+# 上传数据接口默认从客户端获取数据,该值可以在接口调用时使用use_local_data配置自定义值
+UPLOAD_DATA_FROM_CLIENT = True
+# 是否开启多方通信身份认证功能,需要配合FATE Cloud使用
+CHECK_NODES_IDENTITY = False
+# 是否开启资源鉴权功能,需要配合FATE Cloud使用
+USE_AUTHENTICATION = False
+# 默认授予的资源权限
+PRIVILEGE_COMMAND_WHITELIST = []
+```
+
+### 3.2 FATE Flow 默认作业配置
+
+- 路径:`${FATE_FLOW_BASE}/conf/job_default_config.yaml`
+- 说明:高级配置,一般不需要做改动
+- 注意:配置文件中未被列举如下的配置项属于系统内部参数,不建议修改
+- 生效:使用flow server reload或者重启fate flow server
+
+```yaml
+# component provider, relative path to get_fate_python_directory
+default_component_provider_path: federatedml
+
+# resource
+# 总资源超配百分比
+total_cores_overweight_percent: 1  # 1 means no overweight
+total_memory_overweight_percent: 1  # 1 means no overweight
+# 默认的每个作业的任务并行度,可以在提交作业配置时使用job_parameters:task_parallelism配置自定义值
+task_parallelism: 1
+# 默认的每个作业中每个任务使用的CPU核数,可以在提交作业配置时使用job_parameters:task_cores配置自定义值
+task_cores: 4
+# 暂时不支持内存资源的调度,该配置不生效
+task_memory: 0  # mb
+# 一个作业最大允许申请的CPU核数占总资源数量的比例,如总资源为10,此值为0.5,则表示一个作业最多允许申请5个CPU,也即task_cores * task_parallelism <= 10 * 0.5
+max_cores_percent_per_job: 1  # 1 means total
+
+# scheduling
+# 默认的作业执行超时时间,可以在提交作业配置时使用job_parameters:timeout配置自定义值
+job_timeout: 259200 # s
+# 发送跨参与方调度命令或者状态时,通信的超时时间
+remote_request_timeout: 30000  # ms
+# 发送跨参与方调度命令或者状态时,通信的重试次数
+federated_command_trys: 3
+end_status_job_scheduling_time_limit: 300000 # ms
+end_status_job_scheduling_updates: 1
+# 默认自动重试次数, 可以在提交作业配置时使用job_parameters:auto_retries配置自定义值
+auto_retries: 0
+# 默认重试次数间隔
+auto_retry_delay: 1  #seconds
+# 默认的多方状态收集方式,支持PULL和PUSH;也可在作业配置指定当前作业的收集模式
+federated_status_collect_type: PUSH
+
+# upload
+upload_max_bytes: 104857600 # bytes
+
+#component output
+output_data_summary_count_limit: 100
+```
+
+## 4. FATE Board配置
+
+- 路径:`${FATE_BOARD_BASE}/conf/application.properties`
+- 说明:常用配置,一般部署时需要确定
+- 注意:配置文件中未被列举如下的配置项属于系统内部参数,不建议修改
+
+```properties
+# 服务监听端口
+server.port=8080
+# fateflow地址,指fateflow的http端口地址
+fateflow.url=http://127.0.0.1:9380
+# db地址,同上述全局配置service_conf.yaml里面的database配置
+fateboard.datasource.jdbc-url=jdbc:mysql://localhost:3306/fate_flow?characterEncoding=utf8&characterSetResults=utf8&autoReconnect=true&failOverReadOnly=false&serverTimezone=GMT%2B8
+# db配置,同上述全局配置service_conf.yaml里面的database配置
+fateboard.datasource.username=
+# db配置,同上述全局配置service_conf.yaml里面的database配置
+fateboard.datasource.password=
+server.tomcat.max-threads=1000
+server.tomcat.max-connections=20000
+spring.servlet.multipart.max-file-size=10MB
+spring.servlet.multipart.max-request-size=100MB
+# 管理员账号配置
+server.board.login.username=admin
+server.board.login.password=admin
+server.ssl.key-store=classpath:
+server.ssl.key-store-password=
+server.ssl.key-password=
+server.ssl.key-alias=
+# 当fateflo server开启api访问鉴权时,需要配置
+HTTP_APP_KEY=
+HTTP_SECRET_KEY=
+```
+
+## 5. EggRoll
+
+### 5.1 系统配置
+
+- 路径:`${EGGROLL_HOME}/conf/eggroll.properties`
+- 说明:常用配置,一般部署时需要确定
+- 注意:配置文件中未被列举如下的配置项属于系统内部参数,不建议修改
+
+```properties
+[eggroll]
+# core
+# 连接MySQL配置,一般生产应用需要此配置
+eggroll.resourcemanager.clustermanager.jdbc.driver.class.name=com.mysql.cj.jdbc.Driver
+# 连接MySQL配置,一般生产应用需要此配置
+eggroll.resourcemanager.clustermanager.jdbc.url=jdbc:mysql://localhost:3306/eggroll_meta?useSSL=false&serverTimezone=UTC&characterEncoding=utf8&allowPublicKeyRetrieval=true
+# 连接MySQL账户,一般生产应用需要此配置
+eggroll.resourcemanager.clustermanager.jdbc.username=
+# 连接MySQL密码,一般生产应用需要此配置
+eggroll.resourcemanager.clustermanager.jdbc.password=
+
+# 数据存储目录
+eggroll.data.dir=data/
+# 日志存储目录
+eggroll.logs.dir=logs/
+eggroll.resourcemanager.clustermanager.host=127.0.0.1
+eggroll.resourcemanager.clustermanager.port=4670
+eggroll.resourcemanager.nodemanager.port=4670
+
+# python路径
+eggroll.resourcemanager.bootstrap.egg_pair.venv=
+# pythonpath, 一般需要指定eggroll的python目录以及fate的python目录
+eggroll.resourcemanager.bootstrap.egg_pair.pythonpath=python
+
+# java路径
+eggroll.resourcemanager.bootstrap.egg_frame.javahome=
+# java服务启动参数,无特别需要,无需配置
+eggroll.resourcemanager.bootstrap.egg_frame.jvm.options=
+# 多方通信时,grpc连接保持时间
+eggroll.core.grpc.channel.keepalive.timeout.sec=20
+
+# session
+# 一个eggroll会话中,每个nodemanager启动的计算进程数量;若使用fate进行提交任务,则会被fate flow的默认参数所代替
+eggroll.session.processors.per.node=4
+
+# rollsite
+eggroll.rollsite.coordinator=webank
+eggroll.rollsite.host=127.0.0.1
+eggroll.rollsite.port=9370
+eggroll.rollsite.party.id=10001
+eggroll.rollsite.route.table.path=conf/route_table.json
+
+eggroll.rollsite.push.max.retry=3
+eggroll.rollsite.push.long.retry=2
+eggroll.rollsite.push.batches.per.stream=10
+eggroll.rollsite.adapter.sendbuf.size=100000
+```
+
+### 5.2 路由表配置
+
+- 路径:`${EGGROLL_HOME}/conf/route_table.json`
+- 说明:常用配置,一般部署时需要确定 
+  - 路由表主要分两个层级表示
+  - 第一级表示站点,若找不到对应的目标站点配置,则使用**default**
+  - 第二级表示服务,若找不到对应的目标服务,则使用**default**
+  - 第二级,通常将**default**设为本方**rollsite**服务地址,将**fateflow**设为本方**fate flow server**服务的grpc地址
+
+```json
+{
+  "route_table":
+  {
+    "10001":
+    {
+      "default":[
+        {
+          "port": 9370,
+          "ip": "127.0.0.1"
+        }
+      ],
+      "fateflow":[
+        {
+          "port": 9360,
+          "ip": "127.0.0.1"
+        }
+      ]
+    },
+    "10002":
+    {
+      "default":[
+        {
+          "port": 9470,
+          "ip": "127.0.0.1"
+        }
+      ]
+    }
+  },
+  "permission":
+  {
+    "default_allow": true
+  }
+}
+```

+ 40 - 0
FATE-Flow/doc/document_navigation.md

@@ -0,0 +1,40 @@
+# Document Navigation
+
+## 1. General Document Variables
+
+You will see the following `document variables` in all `FATE Flow` documentation, with the following meanings.
+
+- FATE_PROJECT_BASE: denotes the `FATE project` deployment directory, containing configuration, fate algorithm packages, fate clients and subsystems: `bin`, `conf`, `examples`, `fate`, `fateflow`, `fateboard`, `eggroll`, etc.
+- FATE_BASE: The deployment directory of `FATE`, named `fate`, contains algorithm packages, clients: `federatedml`, `fate arch`, `fate client`, usually the path is `${FATE_PROJECT_BASE}/fate`
+- FATE_FLOW_BASE: The deployment directory of `FATE Flow`, named `fateflow`, containing `fate flow server`, etc., usually the path is `${FATE_PROJECT_BASE}/fateflow`
+- FATE_BOARD_BASE: the deployment directory of `FATE Board`, name `fateboard`, contains `fateboard`, usually the path is `${FATE_PROJECT_BASE}/fateboard`
+- EGGROLL_HOME: the deployment directory for `EggRoll`, named `eggroll`, containing `rollsite`, `clustermanager`, `nodemanager`, etc., usually in `${FATE_PROJECT_BASE}/eggroll`
+
+    Deploy the `FATE project` with reference to the main repository [FederatedAI/FATE](https://github.com/FederatedAI/FATE), the main directory structure is as follows
+
+    ![](./images/fate_deploy_directory.png){: style="height:566px;width:212px"}
+
+- FATE_VERSION: The version number of `FATE`, e.g. 1.7.0
+- FATE_FLOW_VERSION: the version number of `FATE Flow`, e.g. 1.7.0
+- version: Generally in the deployment documentation, it means the version number of `FATE project`, such as `1.7.0`, `1.6.0`.
+- version_tag: generally in the deployment documentation, indicates the `FATE project` version tag, such as `release`, `rc1`, `rc10`
+
+## 2. Glossary of terms
+
+`component_name`: the name of a component when the job is submitted. A job can contain multiple instances of the same component, but each must have a distinct `component_name`, much like instances of a class; the component behind each `component_name` runs as one `task`
+
+`componet_module_name`: the class name of the component
+
+`model_alias`: similar to `component_name`; it is the name of the output model, which the user can configure in the dsl
+
+Example:
+
+In the figure `dataio_0` is `component_name`, `DataIO` is `componet_module_name`, `dataio` is `model_alias`
+
+![](https://user-images.githubusercontent.com/1758850/124451776-52ee4500-ddb8-11eb-94f2-d43d5174ca4d.png)
+
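+A minimal dsl fragment illustrating the three terms (a hypothetical configuration; only the relevant fields are shown):
+
+```json
+{
+    "components": {
+        "dataio_0": {
+            "module": "DataIO",
+            "input": {
+                "data": {
+                    "data": ["reader_0.data"]
+                }
+            },
+            "output": {
+                "data": ["data"],
+                "model": ["dataio"]
+            }
+        }
+    }
+}
+```
+
+Here `dataio_0` is the `component_name`, `DataIO` is the `componet_module_name`, and `dataio` is the `model_alias`.
+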
+## 3. Reading guide
+
+1. You can first read the [overall design](./fate_flow.md)
+2. Refer to the main repository [FATE](https://github.com/FederatedAI/FATE) for deployment, either standalone (installer, Docker, source compilation) or cluster (Ansible, Docker, Kubernetes)
+3. You can follow the navigation directory in order

+ 52 - 0
FATE-Flow/doc/document_navigation.zh.md

@@ -0,0 +1,52 @@
+# 文档导航
+
+## 1. 通用文档变量
+
+您会在所有`FATE Flow`的文档看到如下`文档变量`,其含义如下:
+
+- FATE_PROJECT_BASE:表示`FATE项目`部署目录,包含配置、fate算法包、fate客户端以及子系统: `bin`, `conf`, `examples`, `fate`, `fateflow`, `fateboard`, `eggroll`等
+- FATE_BASE:表示`FATE`的部署目录,名称`fate`,包含算法包、客户端: `federatedml`, `fate arch`, `fate client`, 通常路径为`${FATE_PROJECT_BASE}/fate`
+- FATE_FLOW_BASE:表示`FATE Flow`的部署目录,名称`fateflow`,包含`fate flow server`等, 通常路径为`${FATE_PROJECT_BASE}/fateflow`
+- FATE_BOARD_BASE:表示`FATE Board`的部署目录,名称`fateboard`,包含`fateboard`, 通常路径为`${FATE_PROJECT_BASE}/fateboard`
+- EGGROLL_HOME:表示`EggRoll`的部署目录,名称`eggroll`,包含`rollsite`, `clustermanager`, `nodemanager`等, 通常路径为`${FATE_PROJECT_BASE}/eggroll`
+
+    参考`FATE项目`主仓库[FederatedAI/FATE](https://github.com/FederatedAI/FATE)部署`FATE项目`,主要目录结构如下:
+
+    ![](./images/fate_deploy_directory.png){: style="height:566px;width:212px"}
+
+- FATE_VERSION:表示`FATE`的版本号,如1.7.0
+- FATE_FLOW_VERSION:表示`FATE Flow`的版本号,如1.7.0
+- version:一般在部署文档中,表示`FATE项目`版本号,如`1.7.0`, `1.6.0`
+- version_tag:一般在部署文档中,表示`FATE项目`版本标签,如`release`, `rc1`, `rc10`
+
+## 2. 术语表
+
+`party`, 站点,一般物理上指一个FATE单机或者FATE集群
+
+`job`, 作业
+
+`task`, 任务, 一个作业由多个任务构成
+
+`component`, 组件,静态名称,提交作业时需要两个描述配置文件,分别描述该作业需要执行的组件列表、组件依赖关系、组件运行参数
+
+`dsl`, 指用来描述作业中组件关系的语言, 可以描述组件列表以及组件依赖关系
+
+`component_name`: 提交作业时组件的名称,一个作业可以有多个同样的组件的,但是 `component_name` 是不一样的,相当于类的实例, 一个`component_name`对应的组件会生成一个`task`运行
+
+`componet_module_name`: 组件的类名
+
+`model_alias`: 跟 `component_name` 类似,就是用户在 dsl 里面是可以配置输出的 model 名称的
+
+示例:
+
+图中 `dataio_0` 是 `component_name`,`DataIO` 是 `componet_module_name`,`dataio` 是 `model_alias`
+
+![](https://user-images.githubusercontent.com/1758850/124451776-52ee4500-ddb8-11eb-94f2-d43d5174ca4d.png)
+
+`party status`, 指任务中每方的执行状态,`status`是由所有方的`party status`推断出,如所有`party status`为`success`,`status`才为success
+
+## 3. 阅读指引
+
+1. 可以先阅读[整体设计](./fate_flow.zh.md)
+2. 参考主仓库[FATE](https://github.com/FederatedAI/FATE)部署, 可选单机版(安装版, Docker, 源码编译)或集群版(Ansible, Docker, Kubernetes)
+3. 可依据导航目录顺序进行参考

+ 102 - 0
FATE-Flow/doc/faq.md

@@ -0,0 +1,102 @@
+# FAQ
+
+## 1. Description
+
+## 2. Log descriptions
+
+In general, to troubleshoot a problem, the following logs are required.
+
+### v1.7+
+
+- `${FATE_PROJECT_BASE}/fateflow/logs/$job_id/fate_flow_schedule.log`, the internal scheduling log of a specific job
+
+- `${FATE_PROJECT_BASE}/fateflow/logs/$job_id/*`, all the execution logs of a specific job
+
+- `${FATE_PROJECT_BASE}/fateflow/logs/fate_flow/fate_flow_stat.log`, logs that are not related to any job
+
+- `${FATE_PROJECT_BASE}/fateflow/logs/fate_flow/fate_flow_schedule.log`, the overall scheduling log of all jobs
+
+- `${FATE_PROJECT_BASE}/fateflow/logs/fate_flow/fate_flow_detect.log`, the overall exception detection log of all jobs
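+
+For example, a quick way to inspect a job's scheduling log for failures (a generic shell sketch; paths per the list above):
+
+```bash
+grep -i "error" ${FATE_PROJECT_BASE}/fateflow/logs/$job_id/fate_flow_schedule.log
+```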
+
+### v1.7-
+
+- `${FATE_PROJECT_BASE}/logs/$job_id/fate_flow_schedule.log`, the internal scheduling log of a specific job
+
+- `${FATE_PROJECT_BASE}/logs/$job_id/*`, all execution logs of a specific job
+
+- `${FATE_PROJECT_BASE}/logs/fate_flow/fate_flow_stat.log`, logs not related to any specific job
+
+- `${FATE_PROJECT_BASE}/logs/fate_flow/fate_flow_schedule.log`, the overall scheduling log of all jobs
+
+- `${FATE_PROJECT_BASE}/logs/fate_flow/fate_flow_detect.log`, the overall exception-detection log of all jobs
+
+## 3. Offline
+
+### upload failed
+
+- Check the eggroll-related services for exceptions.
+
+### submit job is stuck
+
+- Check whether the rollsite services of both parties have been killed
+
+### submit_job returns grpc exception
+
+- The submit-job chain is: guest fate_flow -> guest rollsite -> host rollsite -> host fate_flow
+- Check that no service in this chain is hung; every node must be functioning properly.
+- Check that the routing table is configured correctly.
+
+### dataio component exception: not enough values to unpack (expected 2, got 1)
+
+- The delimiter in the data does not match the delimiter in the configuration
+
+### Exception thrown at task runtime: "Count of data_instance is 0"
+
+- The job contains an intersection component and the intersection match rate is 0; check whether the output data IDs of guest and host can actually be matched.
+
+## 4. Serving
+
+### load model retcode returns 100, what are the possible reasons?
+
+- fate-servings is not deployed
+
+- flow could not obtain the fate-servings address
+
+- flow resolves the fate-servings address in the following priority order:
+
+  1. read from ZooKeeper
+
+  2. if ZooKeeper is not enabled, read from the FATE service configuration file; the configuration path is
+
+     - 1.5+: `${FATE_PROJECT_BASE}/conf/service_conf.yaml`
+
+     - 1.5-: `${FATE_PROJECT_BASE}/arch/conf/server_conf.json`
+
+### load model retcode returns 123, what are the possible reasons?
+
+- The model information is incorrect.
+- This error code is thrown by fate-servings when it cannot find the model.
+
+### bind model operation prompted "no service id"?
+
+- Specify a custom service_id in the bind configuration
+
+### Where is the configuration of servings? How do I configure it?
+
+- v1.5+ Configuration path: `${FATE_PROJECT_BASE}/conf/service_conf.yaml`
+
+```yaml
+servings:
+  hosts:
+    - 127.0.0.1:8000
+```
+
+- v1.5- Configuration path: `${FATE_PROJECT_BASE}/arch/conf/server_conf.json`
+
+```json
+{
+    "servers": {
+        "servings": ["127.0.0.1:8000"]
+    }
+}
+```
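+
+As a quick sanity check, here is a minimal sketch (assuming PyYAML and the v1.5+ layout above) for reading the configured servings addresses:
+
+```python
+import yaml  # pip install pyyaml
+
+# Read the servings addresses from the v1.5+ configuration layout shown above.
+with open("conf/service_conf.yaml") as f:
+    conf = yaml.safe_load(f)
+hosts = (conf.get("servings") or {}).get("hosts", [])
+print(hosts)  # e.g. ['127.0.0.1:8000']
+```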

+ 102 - 0
FATE-Flow/doc/faq.zh.md

@@ -0,0 +1,102 @@
+# 常见问题
+
+## 1. 说明
+
+## 2. 日志说明
+
+一般来说,排查问题,需要如下几个日志:
+
+### v1.7+
+
+- `${FATE_PROJECT_BASE}/fateflow/logs/$job_id/fate_flow_schedule.log`,这个是某个任务的内部调度日志
+
+- `${FATE_PROJECT_BASE}/fateflow/logs/$job_id/*` 这些是某个任务的所有执行日志
+
+- `${FATE_PROJECT_BASE}/fateflow/logs/fate_flow/fate_flow_stat.log`,这个是与任务无关的一些日志
+
+- `${FATE_PROJECT_BASE}/fateflow/logs/fate_flow/fate_flow_schedule.log`,这个是所有任务的整体调度日志
+
+- `${FATE_PROJECT_BASE}/fateflow/logs/fate_flow/fate_flow_detect.log`,这个是所有任务的整体异常探测日志
+
+### v1.7-
+
+- `${FATE_PROJECT_BASE}/logs/$job_id/fate_flow_schedule.log`,这个是某个任务的内部调度日志
+
+- `${FATE_PROJECT_BASE}/logs/$job_id/*` 这些是某个任务的所有执行日志
+
+- `${FATE_PROJECT_BASE}/logs/fate_flow/fate_flow_stat.log`,这个是与任务无关的一些日志
+
+- `${FATE_PROJECT_BASE}/logs/fate_flow/fate_flow_schedule.log`,这个是所有任务的整体调度日志
+
+- `${FATE_PROJECT_BASE}/logs/fate_flow/fate_flow_detect.log`,这个是所有任务的整体异常探测日志
+
+## 3. 离线部分
+
+### upload失败
+
+- 检查eggroll相关服务是否异常;
+
+### 提交任务(submit_job)卡住
+
+- 检查双方rollsite服务是否被kill了
+
+### 提交任务(submit_job)返回grpc异常
+
+- 提交任务的链路: guest fate_flow -> guest rollsite -> host rollsite -> host fate_flow
+- 检查上面的链路中的每个服务是否挂了,必须保证每个节点都正常运行;
+- 检查路由表的配置是否正确;
+
+### dataio组件异常: not enough values to unpack (expected 2, got 1)
+
+- 数据的分隔符和配置中的分割符不一致
+
+### 任务运行时抛出异常:"Count of data_instance is 0"
+
+- 任务中有交集组件并且交集匹配率为0,需要检查guest和host的输出数据id是否能匹配上;
+
+## 4. 在线部分
+
+### 推模型(load)retcode返回100,可能的原因有哪些?
+
+- 没有部署fate-servings
+
+- flow没有获取到fate-servings的地址
+
+- flow读取fate-servings的地址的优先级排序: 
+
+  1. 从zk读取
+
+  2. 没有打开zk的话,会从fate的服务配置文件读取,配置路径在
+
+     - 1.5+: `${FATE_PROJECT_BASE}/conf/service_conf.yaml`
+
+     - 1.5-: `${FATE_PROJECT_BASE}/arch/conf/server_conf.json`
+
+### 推模型(load)retcode返回123,可能原因有哪些?
+
+- 模型信息有误;
+- 此错误码是fate-servings没有找到模型而抛出的;
+
+### 绑定模型(bind)操作时提示"no service id"?
+
+- 在bind配置中自定义service_id
+
+### servings的配置在哪?怎么配?
+
+- 1.5+ 配置路径: `${FATE_PROJECT_BASE}/conf/service_conf.yaml`
+
+```yaml
+servings:
+  hosts:
+    - 127.0.0.1:8000
+```
+
+- 1.5- 配置路径: `${FATE_PROJECT_BASE}/arch/conf/server_conf.json`
+
+```json
+{
+    "servers": {
+        "servings": ["127.0.0.1:8000"]
+    }
+}
+```

+ 110 - 0
FATE-Flow/doc/fate_flow.md

@@ -0,0 +1,110 @@
+# Overall Design
+
+## 1. Logical Architecture
+
+- DSL-defined jobs
+- Top-down vertical subtask-flow scheduling, with multi-party joint subtask coordination
+- Independent, isolated task execution worker processes
+- Support for multiple types and versions of components
+- Computational abstraction API
+- Storage abstraction API
+- Cross-party transfer abstraction API
+
+![](./images/fate_flow_logical_arch.png)
+
+## 2. Service Architecture
+
+### 2.1 FATE
+
+![](./images/fate_arch.png)
+
+### 2.2 FATE Flow
+
+![](./images/fate_flow_arch.png)
+
+## 3. [Scheduling Architecture](./fate_flow_job_scheduling.md)
+
+### 3.1 A new scheduling architecture based on shared-state
+
+- Decouples state (resources, jobs) from managers (scheduler, resource manager)
+- Resource state and job state are persisted in MySQL and shared globally, providing reliable transactional operations
+- Improves the availability and scalability of the management services
+- Jobs can be intervened on, supporting restart, rerun, parallelism control, resource isolation, etc.
+
+![](./images/fate_flow_scheduling_arch.png)
+
+### 3.2 State-Driven Scheduling
+
+- Resource coordination
+- Spawns a child-process Executor to run each component
+- The Executor reports state to its local server and to the scheduler
+- The task states of all parties determine the federated task state
+- Upstream and downstream task states determine the job state
+
+![](./images/fate_flow_resource_process.png)
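+
+A sketch of the multi-party status derivation (simplified; per the glossary rule, `status` is `success` only when every party's status is `success`):
+
+```python
+# Simplified status aggregation; the real scheduler tracks more states.
+def federated_status(party_statuses):
+    if all(s == "success" for s in party_statuses):
+        return "success"
+    if any(s == "failed" for s in party_statuses):
+        return "failed"
+    return "running"
+
+print(federated_status(["success", "success"]))  # -> success
+```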
+
+## 4. [Multiparty Resource Coordination](./fate_flow_resource_management.md)
+
+- The total resource size of each engine is set in the configuration file; system integration is planned for later
+- Within the total resource size, cores_per_node is the number of CPU cores per compute node and nodes is the number of compute nodes
+- The FATE Flow server reads the resource size configuration from the configuration file at startup and registers the update in the database
+- Resources are requested per job and take effect when the job conf is submitted, using the formula: `task_parallelism * task_cores`
+- See separate section of the documentation for details
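+
+A worked example of the formula, using the job_parameters values that appear in the API demos later in these docs:
+
+```python
+# Hypothetical job: how many cores the job holds while running.
+task_parallelism = 2
+task_cores = 4
+job_request_cores = task_parallelism * task_cores  # = 8
+```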
+
+## 5. [Data Flow Tracking](./fate_flow_tracking.md)
+
+- Definition
+    - metric type: the type of metric, e.g. auc, loss, ks
+    - metric namespace: custom metric namespace, e.g. train, predict
+    - metric name: custom metric name, e.g. auc0, hetero_lr_auc0
+    - metric data: metric data in key-value form
+    - metric meta: metric meta information in key-value form, supporting flexible plotting
+- API
+    - log_metric_data(metric_namespace, metric_name, metrics)
+    - set_metric_meta(metric_namespace, metric_name, metric_meta)
+    - get_metric_data(metric_namespace, metric_name)
+    - get_metric_meta(metric_namespace, metric_name)
+
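+The call shapes of this API, illustrated with a minimal in-memory stand-in (FATE Flow's real tracker persists these records to its database; the payload shapes here are simplified assumptions):
+
+```python
+class Tracker:
+    def __init__(self):
+        self._data, self._meta = {}, {}
+
+    def log_metric_data(self, metric_namespace, metric_name, metrics):
+        self._data.setdefault((metric_namespace, metric_name), []).extend(metrics)
+
+    def set_metric_meta(self, metric_namespace, metric_name, metric_meta):
+        self._meta[(metric_namespace, metric_name)] = metric_meta
+
+    def get_metric_data(self, metric_namespace, metric_name):
+        return self._data.get((metric_namespace, metric_name), [])
+
+    def get_metric_meta(self, metric_namespace, metric_name):
+        return self._meta.get((metric_namespace, metric_name), {})
+
+
+tracker = Tracker()
+tracker.log_metric_data("train", "auc0", [("auc", 0.87)])  # key-value metric data
+tracker.set_metric_meta("train", "auc0", {"metric_type": "EVALUATION", "unit_name": "iters"})
+print(tracker.get_metric_data("train", "auc0"))
+```
+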
+## 6. [Realtime Monitoring](./fate_flow_monitoring.md)
+
+- Job process liveness detection
+- Job timeout detection
+- Resource recovery detection
+- Base engine session timeout detection
+
+![](./images/fate_flow_detector.png)
+
+## 7. [Task Component Registry](./fate_flow_component_registry.md)
+
+![](./images/fate_flow_component_registry.png)
+
+## 8. [Multi-Party Federated Model Registry](./fate_flow_model_registry.md)
+
+- Uses Google Protocol Buffers as the model storage protocol for cross-language sharing; each algorithm model consists of two parts: ModelParam & ModelMeta
+- A pipeline produces a series of algorithm models
+- The model named Pipeline stores the pipeline modeling DSL and the online inference DSL
+- In federated learning, model consistency must be guaranteed across all participants, i.e. model binding
+- model_key is the model identifier defined by the user when submitting the job
+- Each federated party's model ID consists of its own identification information (role, party_id) plus the model_key
+- The model version of the federated parties must be unique and consistent; FATE-Flow sets it directly to the job_id
+
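+A sketch of these identity rules (the `#` separator and the helper are assumptions for illustration, not FATE-Flow's exact format):
+
+```python
+def make_model_id(role, party_id, model_key):
+    # One party's model ID: its own role and party_id plus the user-defined model_key.
+    return f"{role}#{party_id}#{model_key}"
+
+job_id = "202204251958539401540"  # example job id, as used in the API demos
+model_id = make_model_id("guest", 9999, "my_model_key")
+model_version = job_id            # FATE-Flow sets the model version to the job id
+print(model_id, model_version)
+```
+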
+![](./images/fate_flow_pipelined_model.png){: style="height:400px;width:450px"}
+
+![](./images/fate_flow_model_storage.png){: style="height:400px;width:800px"}
+
+## 9. [Data Access](./fate_flow_data_access.md)
+
+- Upload:
+    - Data in external storage is imported directly into FATE Storage, creating a new DTable
+    - When a job runs, the Reader reads directly from FATE Storage
+
+- Table Bind:
+    - The external storage address is bound to a new DTable in FATE
+    - When a job runs, the Reader reads the data from external storage via its Meta and transfers it into FATE Storage
+    - Connects to the big-data ecosystem: HDFS, Hive/MySQL
+
+![](./images/fate_flow_inputoutput.png)
+
+## 10. [Multi-Party Collaboration Authority Management](./fate_flow_authority_management.md)
+
+![](./images/fate_flow_authorization.png)

+ 110 - 0
FATE-Flow/doc/fate_flow.zh.md

@@ -0,0 +1,110 @@
+# 整体设计
+
+## 1. 逻辑架构
+
+- DSL定义作业
+- 自顶向下的纵向子任务流调度、多参与方联合子任务协调
+- 独立隔离的任务执行工作进程
+- 支持多类型多版本组件
+- 计算抽象API
+- 存储抽象API
+- 跨方传输抽象API
+
+![](./images/fate_flow_logical_arch.png)
+
+## 2. 整体架构
+
+### 2.1 FATE整体架构
+
+![](./images/fate_arch.png)
+
+### 2.2 FATE Flow整体架构
+
+![](./images/fate_flow_arch.png)
+
+## 3. [调度架构](./fate_flow_job_scheduling.zh.md)
+
+### 3.1 基于共享状态的全新调度架构
+
+- 剥离状态(资源、作业)与管理器(调度器、资源管理器)
+- 资源状态与作业状态持久化存于MySQL,全局共享,提供可靠事务性操作
+- 提高管理服务的高可用与扩展性
+- 作业可介入,支持实现如重启、重跑、并行控制、资源隔离等
+
+![](./images/fate_flow_scheduling_arch.png)
+
+### 3.2 状态驱动调度
+
+- 资源协调
+- 拉起子进程Executor运行组件
+- Executor上报状态到本方Server,并且同时上报到调度方
+- 多方任务状态计算联邦任务状态
+- 上下游任务状态计算作业状态
+
+![](./images/fate_flow_resource_process.png)
+
+## 4. [多方资源协调](./fate_flow_resource_management.zh.md)
+
+- 每个引擎总资源大小通过配置文件配置,后续实现系统对接
+- 总资源大小中的cores_per_node表示每个计算节点cpu核数,nodes表示计算节点个数
+- FATEFlow server启动时从配置文件读取资源大小配置,并注册更新到数据库
+- 以Job维度申请资源,Job Conf提交时生效,公式:task_parallelism*task_cores
+- 详细请看文档单独章节
+
+## 5. [数据流动追踪](./fate_flow_tracking.zh.md)
+
+- 定义
+    - metric type: 指标类型,如auc, loss, ks等等
+    - metric namespace: 自定义指标命名空间,如train, predict
+    - metric name: 自定义指标名称,如auc0,hetero_lr_auc0
+    - metric data: key-value形式的指标数据
+    - metric meta: key-value形式的指标元信息,支持灵活画图
+- API
+    - log_metric_data(metric_namespace, metric_name, metrics)
+    - set_metric_meta(metric_namespace, metric_name, metric_meta)
+    - get_metric_data(metric_namespace, metric_name)
+    - get_metric_meta(metric_namespace, metric_name)
+
+## 6. [作业实时监测](./fate_flow_monitoring.zh.md)
+
+- 工作进程存活性检测
+- 作业超时检测
+- 资源回收检测
+- 基础引擎会话超时检测
+
+![](./images/fate_flow_detector.png)
+
+## 7. [任务组件中心](./fate_flow_component_registry.zh.md)
+
+![](./images/fate_flow_component_registry.png)
+
+## 8. [多方联合模型注册中心](./fate_flow_model_registry.zh.md)
+
+- 使用Google Protocol Buffer作为模型存储协议,利用跨语言共享,每个算法模型由两部分组成:ModelParam & ModelMeta
+- 一个Pipeline产生一系列算法模型
+- 命名为Pipeline的模型存储Pipeline建模DSL及在线推理DSL
+- 联邦学习下,需要保证所有参与方模型一致性,即模型绑定
+- model_key为用户提交任务时定义的模型标识
+- 联邦各方的模型ID由本方标识信息role、party_id,加model_key
+- 联邦各方的模型版本必须唯一且保持一致,FATE-Flow直接设置为job_id
+
+![](./images/fate_flow_pipelined_model.png){: style="height:400px;width:450px"}
+
+![](./images/fate_flow_model_storage.png){: style="height:400px;width:800px"}
+
+## 9. [数据接入](./fate_flow_data_access.zh.md)
+
+- Upload:
+    - 外部存储直接导入到FATE Storage,创建一个新的DTable
+    - 作业运行时,Reader直接从Storage读取
+
+- Table Bind:
+    - 外部存储地址关联到FATE一个新的DTable
+    - 作业运行时,Reader通过Meta从外部存储读取数据并转存到FATE Storage
+    - 打通大数据生态:HDFS,Hive/MySQL
+
+![](./images/fate_flow_inputoutput.png)
+
+## 10. [多方合作权限管理](./fate_flow_authority_management.zh.md)
+
+![](./images/fate_flow_authorization.png)

+ 152 - 0
FATE-Flow/doc/fate_flow_authority_management.md

@@ -0,0 +1,152 @@
+# Authentication Scheme
+
+## 1. Description
+
+- Authentication includes: client authentication and site authentication
+
+- Authentication configuration: `$FATE_BASE/conf/service_conf.yaml`:
+
+  ```yaml
+  # Site authentication requires configuring the party id of this site
+  party_id:
+  # Hook module, need to configure different hooks according to different scenarios
+  hook_module:
+    client_authentication: fate_flow.hook.flow.client_authentication
+    site_authentication: fate_flow.hook.flow.site_authentication
+  # Third-party authentication service name
+  hook_server_name:
+  authentication:
+    client:
+      # Client authentication switch
+      switch: false
+      http_app_key:
+      http_secret_key:
+    site:
+      # Site authentication switch
+      switch: false
+  ```
+  
+- Authentication methods: both flow's built-in authentication module and third-party service authentication are supported. The authentication hooks can be switched via hook_module; the following hooks are currently supported:
+  - client_authentication supports "fate_flow.hook.flow.client_authentication" and "fate_flow.hook.api.client_authentication"; the former is flow's own client authentication, the latter is client authentication by a third-party service.
+  - site_authentication supports "fate_flow.hook.flow.site_authentication" and "fate_flow.hook.api.site_authentication"; the former is flow's own site authentication, the latter is site authentication by a third-party service.
+
+## 2. Client Authentication
+
+### 2.1 Flow authentication
+#### 2.1.1 Configuration
+```yaml
+hook_module:
+  client_authentication: fate_flow.hook.flow.client_authentication
+authentication:
+  client:
+    switch: true
+    http_app_key: "xxx"
+    http_secret_key: "xxx"
+```
+
+
+
+#### 2.1.2 Interface Authentication Method
+
+All client requests sent to Flow need to add the following headers:
+
+`TIMESTAMP`: Unix timestamp in milliseconds, e.g. `1634890066095` means `2021-10-22 16:07:46 GMT+0800`; note that this time may not differ from the server's current time by more than 60 seconds
+
+`NONCE`: random string, e.g. a UUID such as `782d733e-330f-11ec-8be9-a0369fa972af`
+
+`APP_KEY`: must be consistent with `http_app_key` in the Flow configuration file
+
+`SIGNATURE`: signature generated from `http_secret_key` in the Flow configuration file and the request parameters
+#### 2.1.3 Signature generation method
+
+- Combine the following elements in order:
+
+`TIMESTAMP`
+
+`NONCE`
+
+`APP_KEY`
+
+the request path plus query parameters; if there are no query parameters, omit the trailing `?`, e.g. `/v1/job/submit` or `/v1/data/upload?table_name=dvisits_hetero_guest&namespace=experiment`
+
+if `Content-Type` is `application/json`, the raw JSON, i.e. the request body; otherwise, the empty string
+
+if `Content-Type` is `application/x-www-form-urlencoded` or `multipart/form-data`, all parameters sorted alphabetically and `urlencode`d per RFC 3986 (i.e. all characters except `a-zA-Z0-9-._~` are encoded); note that files do not participate in the signature; otherwise, the empty string
+
+- Concatenate all parameters with the newline character `\n` and encode them in `ASCII`.
+
+- Use the `HMAC-SHA1` algorithm to calculate the binary digest using the `http_secret_key` key in the Flow configuration file
+
+- Encode the binary digest using base64
+
+#### 2.1.4 Example
+
+You can refer to the [FATE SDK](https://github.com/FederatedAI/FATE/blob/master/python/fate_client/flow_sdk/client/base.py#L63)
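+
+A minimal self-contained sketch of this signing scheme (JSON body only; the `"xxx"` values are placeholders, and this is an illustration rather than the official SDK code):
+
+```python
+import base64
+import hashlib
+import hmac
+import time
+import uuid
+
+APP_KEY = "xxx"     # must match http_app_key in the Flow configuration
+SECRET_KEY = "xxx"  # must match http_secret_key in the Flow configuration
+
+
+def build_auth_headers(path, json_body=""):
+    timestamp = str(int(time.time() * 1000))  # Unix timestamp in milliseconds
+    nonce = str(uuid.uuid4())                 # random string
+    # Elements joined with '\n', in order: TIMESTAMP, NONCE, APP_KEY,
+    # path + query, JSON body (or ""), urlencoded form body (or "").
+    to_sign = "\n".join([timestamp, nonce, APP_KEY, path, json_body, ""])
+    digest = hmac.new(SECRET_KEY.encode("ascii"),
+                      to_sign.encode("ascii"), hashlib.sha1).digest()
+    return {
+        "TIMESTAMP": timestamp,
+        "NONCE": nonce,
+        "APP_KEY": APP_KEY,
+        "SIGNATURE": base64.b64encode(digest).decode("ascii"),
+    }
+
+
+headers = build_auth_headers("/v1/job/query", json_body="{}")
+print(headers)
+```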
+
+### 2.2 Third-party service authentication
+#### 2.2.1 Configuration
+```yaml
+hook_module:
+  client_authentication: fate_flow.hook.api.client_authentication
+authentication:
+  client:
+    switch: true
+hook_server_name: "xxx"
+```
+
+#### 2.2.2 Interface Authentication Method
+- The third party service needs to register the client authentication interface with flow, refer to [Client Authentication Service Registration](./third_party_service_registry.md#321-client-authentication-client_authentication)
+- If the authentication fails, flow will return the authentication failure directly to the client.
+
+## 3. Site Authentication
+
+### 3.1 flow authentication
+
+#### 3.1.1 Configuration
+```yaml
+party_id: 9999
+hook_module:
+  site_authentication: fate_flow.hook.flow.site_authentication
+authentication:
+  site:
+    switch: true
+```
+
+#### 3.1.2 Authentication scheme
+- Flow generates a public/private key pair at startup; partners need to exchange public keys with each other. When sending a request, Flow signs it with its own private key using the RSA algorithm, and the requested site verifies the signature with the requester's public key.
+- Flow provides a key-management CLI, as follows
+
+#### 3.1.3 Key Management
+- Add the partner's public key
+
+{{snippet('cli/key.md', '### save')}}
+
+- Delete a partner's public key
+
+{{snippet('cli/key.md', '### delete')}}
+
+
+- Query a partner's public key
+
+{{snippet('cli/key.md', '### query')}}
+
+### 3.2 Third-party service authentication
+#### 3.2.1 Configuration
+```yaml
+hook_module:
+  site_authentication: fate_flow.hook.api.site_authentication
+authentication:
+  site:
+    switch: true
+hook_server_name: "xxx"
+```
+
+#### 3.2.2 Interface Authentication Method
+- Third party services need to register the site authentication interface with flow, refer to [site authentication service registration](./third_party_service_registry.md#3222-site_authentication)
+- If the authentication fails, flow will directly return the authentication failure to the initiator.

+ 153 - 0
FATE-Flow/doc/fate_flow_authority_management.zh.md

@@ -0,0 +1,153 @@
+# 认证方案
+
+## 1. 说明
+
+- 认证包含:客户端认证和站点认证
+
+- 认证配置: `$FATE_BASE/conf/service_conf.yaml`:
+
+  ```yaml
+  # 站点鉴权时需要配置本方站点id
+  party_id:
+  # 钩子模块,需要根据不同场景配置不同的钩子
+  hook_module:
+    client_authentication: fate_flow.hook.flow.client_authentication
+    site_authentication: fate_flow.hook.flow.site_authentication
+  # 第三方认证服务名
+  hook_server_name:
+  authentication:
+    client:
+      # 客户端认证开关
+      switch: false
+      http_app_key:
+      http_secret_key:
+    site:
+      # 站点认证开关
+      switch: false
+  ```
+  
+- 认证方式:支持flow自带的认证模块认证和第三方服务认证。可通过hook_module修改认证钩子,当前支持如下钩子:
+  - client_authentication支持"fate_flow.hook.flow.client_authentication"和"fate_flow.hook.api.client_authentication", 其中前者是flow的客户端认证方式,后者是第三方服务客户端认证方式;
+  - site_authentication支持"fate_flow.hook.flow.site_authentication"和"fate_flow.hook.api.site_authentication",其中前者是flow的站点端认证方式,后者是第三方服务站点认证方式。
+	
+
+## 2. 客户端认证
+
+### 2.1 flow认证
+#### 2.1.1 配置
+```yaml
+hook_module:
+  client_authentication: fate_flow.hook.flow.client_authentication
+authentication:
+  client:
+    switch: true
+    http_app_key: "xxx"
+    http_secret_key: "xxx"
+```
+
+
+
+#### 2.1.2 接口鉴权方式
+
+所有客户端发送到 Flow 的请求都需要增加以下 header
+
+`TIMESTAMP`:Unix timestamp,单位毫秒,如 `1634890066095` 表示 `2021-10-22 16:07:46 GMT+0800`,注意该时间与服务器当前时间的差距不能超过 60 秒
+
+`NONCE`:随机字符串,可以使用 UUID,如 `782d733e-330f-11ec-8be9-a0369fa972af`
+
+`APP_KEY`:需与 Flow 配置文件中的 `http_app_key` 一致
+
+`SIGNATURE`:基于 Flow 配置文件中的 `http_secret_key` 和请求参数生成的签名
+
+#### 2.1.3 签名生成方法
+
+- 按照顺序组合下列内容
+
+`TIMESTAMP`
+
+`NONCE`
+
+`APP_KEY`
+
+请求路径+查询参数,如没有查询参数则不需要末尾的 `?`,如 `/v1/job/submit` 或 `/v1/data/upload?table_name=dvisits_hetero_guest&namespace=experiment`
+
+如果 `Content-Type` 为 `application/json`,则为原始 JSON,即 request body;如果不是,此项使用空字符串填充
+
+如果 `Content-Type` 为 `application/x-www-form-urlencoded` 或 `multipart/form-data`,则需要把所有参数以字母顺序排序并 `urlencode`,转码方式参照 RFC 3986(即除 `a-zA-Z0-9-._~` 以外的字符都要转码),注意文件不参与签名;如果不是,此项使用空字符串填充
+
+- 把所有参数用换行符 `\n` 连接然后以 `ASCII` 编码
+
+- 使用 `HMAC-SHA1` 算法,以 Flow 配置文件中的 `http_secret_key` 为密钥,算出二进制摘要
+
+- 使用 base64 编码二进制摘要
+
+#### 2.1.4 示例
+
+可以参考 [Fate SDK](https://github.com/FederatedAI/FATE/blob/master/python/fate_client/flow_sdk/client/base.py#L63) 
+
+
+
+
+### 2.2 第三方服务认证
+#### 2.2.1 配置
+```yaml
+hook_module:
+  client_authentication: fate_flow.hook.api.client_authentication
+authentication:
+  client:
+    switch: true
+hook_server_name: "xxx"
+```
+
+#### 2.2.2 接口鉴权方式
+- 第三方服务需要向flow注册客户端认证接口,具体参考[客户端认证服务注册](./third_party_service_registry.zh.md#321-client_authentication)
+- 若认证失败,flow会直接返回认证失败给客户端。
+
+## 3. 站点认证
+
+### 3.1 flow认证
+
+#### 3.1.1 配置
+```yaml
+party_id: 9999
+hook_module:
+  site_authentication: fate_flow.hook.flow.site_authentication
+authentication:
+  site:
+    switch: true
+```
+
+#### 3.1.2 认证方案
+- flow启动时会生成一对公钥和私钥,需要和合作方交换彼此的公钥。发送请求时通过RSA算法使用本方私钥生成签名,被请求站点通过请求方公钥验证签名。
+- flow提供密钥管理cli,如下
+
+#### 3.1.3 密钥管理
+- 添加合作方公钥
+
+{{snippet('cli/key.zh.md', '### save')}}
+
+- 删除合作方公钥
+
+{{snippet('cli/key.zh.md', '### delete')}}
+
+
+- 查询合作方公钥
+
+{{snippet('cli/key.zh.md', '### query')}}
+
+### 3.2 第三方服务认证
+#### 3.2.1 配置
+```yaml
+hook_module:
+  site_authentication: fate_flow.hook.api.site_authentication
+authentication:
+  site:
+    switch: true
+hook_server_name: "xxx"
+```
+
+#### 3.2.2 接口鉴权方式
+- 第三方服务需要向flow注册站点认证接口,具体参考[站点认证服务注册](./third_party_service_registry.zh.md#3222-site_authentication)
+- 若认证失败,flow会直接返回认证失败给发起方。

+ 164 - 0
FATE-Flow/doc/fate_flow_client.md

@@ -0,0 +1,164 @@
+# FATE Flow Client
+
+## Description
+
+- Introduces how to install and use the `FATE Flow Client`, which is usually included in `FATE Client`; `FATE Client` bundles several clients of the `FATE Project`: `Pipeline`, `FATE Flow Client` and `FATE Test`.
+- Introduces the command line interface provided by the `FATE Flow Client`. All commands share a common entry point; type `flow` in the command line to list all command categories and their subcommands.
+
+```bash
+    [IN]
+    flow
+
+    [OUT]
+    Usage: flow [OPTIONS] COMMAND [ARGS]...
+
+      Fate Flow Client
+
+    Options:
+      -h, --help  Show this message and exit.
+
+    Commands:
+      component   Component Operations
+      data        Data Operations
+      init        Flow CLI Init Command
+      job         Job Operations
+      model       Model Operations
+      queue       Queue Operations
+      table       Table Operations
+      task        Task Operations
+```
+
+For more information, please consult the following documentation or use the `flow --help` command.
+
+- Usage of every command is described below
+
+## Install FATE Client
+
+### Online installation
+
+`FATE Client` is published to `pypi`; you can install the desired version directly with tools such as `pip`, e.g.
+
+```bash
+pip install fate-client
+```
+
+or
+
+```bash
+pip install fate-client==${version}
+```
+
+### Installing on a FATE cluster
+
+Please install on a machine running FATE version 1.5.1 or above.
+
+Installation command:
+
+```shell
+cd $FATE_PROJECT_BASE/
+# Enter the virtual environment of FATE PYTHON
+source bin/init_env.sh
+# Execute the installation
+cd fate/python/fate_client && python setup.py install
+```
+
+Once the installation is complete, type `flow` on the command line and press Enter. The installation is considered successful if you get the following output:
+
+```shell
+Usage: flow [OPTIONS] COMMAND [ARGS]...
+
+  Fate Flow Client
+
+Options:
+  -h, --help  Show this message and exit.
+
+Commands:
+  component  Component Operations
+  data       Data Operations
+  init       Flow CLI Init Command
+  job        Job Operations
+  model      Model Operations
+  queue      Queue Operations
+  table      Table Operations
+  tag        Tag Operations
+  task       Task Operations
+```
+
+## Initialization
+
+Before using fate-client you need to initialize it; it is recommended to use FATE's configuration file for initialization. The initialization commands are as follows:
+
+### Specify the fateflow service address
+
+```bash
+# Specify the IP address and port of the fateflow service for initialization
+flow init --ip 192.168.0.1 --port 9380
+```
+
+### via the configuration file on the FATE cluster
+
+```shell
+# Go to the FATE installation path, e.g. /data/projects/fate
+cd $FATE_PROJECT_BASE/
+flow init -c conf/service_conf.yaml
+```
+
+The initialization is considered successful if you get the following return.
+
+```json
+{
+    "retcode": 0,
+    "retmsg": "Fate Flow CLI has been initialized successfully."
+}
+```
+
+## Verify
+
+Mainly verify that the client can connect to the `FATE Flow Server`, e.g. try to query the current job status
+
+```bash
+flow job query
+```
+
+A `retcode` of `0` in the response indicates the connection works:
+
+```json
+{
+    "data": [],
+    "retcode": 0,
+    "retmsg": "no job could be found"
+}
+```
+
+If it returns something like the following, the connection is not available; please check the network
+
+```json
+{
+    "retcode": 100,
+    "retmsg": "Connection refused. Please check if the fate flow service is started"
+}
+```
+
+{{snippet('cli/data.md')}}
+
+{{snippet('cli/table.md')}}
+
+{{snippet('cli/job.md')}}
+
+{{snippet('cli/task.md')}}
+
+{{snippet('cli/tracking.md')}}
+
+{{snippet('cli/model.md')}}
+
+{{snippet('cli/checkpoint.md')}}
+
+{{snippet('cli/provider.md')}}
+
+{{snippet('cli/resource.md')}}
+
+{{snippet('cli/privilege.md')}}
+
+{{snippet('cli/tag.md')}}
+
+{{snippet('cli/server.md')}}

+ 164 - 0
FATE-Flow/doc/fate_flow_client.zh.md

@@ -0,0 +1,164 @@
+# 命令行客户端
+
+## 说明
+
+- 介绍如何安装使用`FATE Flow Client`,其通常包含在`FATE Client`中,`FATE Client`包含了`FATE项目`多个客户端:`Pipeline`, `FATE Flow Client` 和 `FATE Test`
+- 介绍`FATE Flow Client`提供的命令行,所有的命令将有一个共有调用入口,您可以在命令行中键入`flow`以获取所有的命令分类及其子命令。
+
+```bash
+    [IN]
+    flow
+
+    [OUT]
+    Usage: flow COMMAND [OPTIONS]
+
+      Fate Flow Client
+
+    Options:
+      -h, --help  Show this message and exit.
+
+    Commands:
+      component   Component Operations
+      data        Data Operations
+      init        Flow CLI Init Command
+      job         Job Operations
+      model       Model Operations
+      queue       Queue Operations
+      table       Table Operations
+      task        Task Operations
+```
+
+更多信息,请查阅如下文档或使用`flow --help`命令。
+
+- 介绍所有命令使用说明
+
+## 安装FATE Client
+
+### 在线安装
+
+FATE Client会发布到`pypi`,可直接使用`pip`等工具安装对应版本,如
+
+```bash
+pip install fate-client
+```
+
+或者
+
+```bash
+pip install fate-client==${version}
+```
+
+### 在FATE集群上安装
+
+请在装有1.5.1及其以上版本fate的机器中进行安装:
+
+安装命令:
+
+```shell
+cd $FATE_PROJECT_BASE/
+# 进入FATE PYTHON的虚拟环境
+source bin/init_env.sh
+# 执行安装
+cd fate/python/fate_client && python setup.py install
+```
+
+安装完成之后,在命令行键入`flow` 并回车,获得如下返回即视为安装成功:
+
+```shell
+Usage: flow [OPTIONS] COMMAND [ARGS]...
+
+  Fate Flow Client
+
+Options:
+  -h, --help  Show this message and exit.
+
+Commands:
+  component  Component Operations
+  data       Data Operations
+  init       Flow CLI Init Command
+  job        Job Operations
+  model      Model Operations
+  queue      Queue Operations
+  table      Table Operations
+  tag        Tag Operations
+  task       Task Operations
+```
+
+## 初始化
+
+在使用fate-client之前需要对其进行初始化,推荐使用fate的配置文件进行初始化,初始化命令如下:
+
+### 指定fateflow服务地址
+
+```bash
+# 指定fateflow的IP地址和端口进行初始化
+flow init --ip 192.168.0.1 --port 9380
+```
+
+### 通过FATE集群上的配置文件
+
+```shell
+# 进入FATE的安装路径,例如/data/projects/fate
+cd $FATE_PROJECT_BASE/
+flow init -c conf/service_conf.yaml
+```
+
+获得如下返回视为初始化成功:
+
+```json
+{
+    "retcode": 0,
+    "retmsg": "Fate Flow CLI has been initialized successfully."
+}
+```
+
+## 验证
+
+主要验证客户端是否能连接上`FATE Flow Server`,如尝试查询当前的作业情况
+
+```bash
+flow job query
+```
+
+一般返回中的`retcode`为`0`即可
+
+```json
+{
+    "data": [],
+    "retcode": 0,
+    "retmsg": "no job could be found"
+}
+```
+
+如返回类似如下,则表明连接不上,请检查网络情况
+
+```json
+{
+    "retcode": 100,
+    "retmsg": "Connection refused. Please check if the fate flow service is started"
+}
+```
+
+{{snippet('cli/data.zh.md')}}
+
+{{snippet('cli/table.zh.md')}}
+
+{{snippet('cli/job.zh.md')}}
+
+{{snippet('cli/task.zh.md')}}
+
+{{snippet('cli/tracking.zh.md')}}
+
+{{snippet('cli/model.zh.md')}}
+
+{{snippet('cli/checkpoint.zh.md')}}
+
+{{snippet('cli/provider.zh.md')}}
+
+{{snippet('cli/resource.zh.md')}}
+
+{{snippet('cli/privilege.zh.md')}}
+
+{{snippet('cli/tag.zh.md')}}
+
+{{snippet('cli/server.zh.md')}}

+ 19 - 0
FATE-Flow/doc/fate_flow_component_registry.md

@@ -0,0 +1,19 @@
+# Task Component Registry
+
+## 1. Description
+
+- Since version 1.7, `FATE Flow` supports multiple versions of component packages at the same time; for example, you can install the `fate` algorithm component packages of both versions `1.7.0` and `1.7.1`
+- We call the provider of an algorithm component package a `component provider`; its `name` and `version` uniquely identify it
+- When submitting a job, you can specify via the `job dsl` which component package this job uses; please refer to [component provider](./fate_flow_job_scheduling.md#35-Component-Providers)
+
+## 2. Default Component Provider
+
+A deployed `FATE` cluster includes a default component provider, usually located in the `${FATE_PROJECT_BASE}/python/federatedml` directory
+
+## 3. Current Component Provider
+
+{{snippet('cli/provider.md', '### list')}}
+
+## 4. New Component Provider
+
+{{snippet('cli/provider.md', '### register')}}

+ 19 - 0
FATE-Flow/doc/fate_flow_component_registry.zh.md

@@ -0,0 +1,19 @@
+# 任务组件注册中心
+
+## 1. 说明
+
+- `FATE Flow` 1.7版本后,开始支持多版本组件包同时存在,例如可以同时放入`1.7.0`和`1.7.1`版本的`fate`算法组件包
+- 我们将算法组件包的提供者称为`组件提供者`,`名称`和`版本`唯一确定`组件提供者`
+- 在提交作业时,可通过`job dsl`指定本次作业使用哪个组件包,具体请参考[组件provider](./fate_flow_job_scheduling.zh.md#35-组件provider)
+
+## 2. 默认组件提供者
+
+部署`FATE`集群将包含一个默认的组件提供者,其通常在 `${FATE_PROJECT_BASE}/python/federatedml` 目录下
+
+## 3. 当前组件提供者
+
+{{snippet('cli/provider.zh.md', '### list')}}
+
+## 4. 新组件提供者
+
+{{snippet('cli/provider.zh.md', '### register')}}

+ 136 - 0
FATE-Flow/doc/fate_flow_data_access.md

@@ -0,0 +1,136 @@
+# Data Access
+
+## 1. Description
+
+- FATE storage tables are identified by a table name and a namespace.
+
+- FATE provides an upload component for users to upload data to the storage systems supported by the FATE computing engine.
+
+- If the user's data already exists in a storage system supported by FATE, the storage information can be mapped to a FATE storage table via table bind.
+
+- If the storage type of the bound table is not consistent with the current default engine, the reader component converts the storage type automatically.
+
+## 2. Data upload
+
+{{snippet('cli/data.md', '### upload')}}
+
+## 3. Table binding
+
+{{snippet('cli/table.md', '### bind')}}
+
+
+## 4. Table information query
+
+{{snippet('cli/table.md', '### info')}}
+
+## 5. Delete table data
+
+{{snippet('cli/table.md', '### delete')}}
+
+
+
+## 6. Download data
+
+{{snippet('cli/data.md', '### download')}}
+
+## 7. Disable data
+
+{{snippet('cli/table.md', '### disable')}}
+
+## 8. Enable data
+
+{{snippet('cli/table.md', '### enable')}}
+
+## 9. Delete disabled data
+
+{{snippet('cli/table.md', '### disable-delete')}}
+
+
+## 10. Writer component
+
+{{snippet('cli/data.md', '### writer')}}
+
+
+## 11. Reader component
+
+**Brief description:** 
+
+- The reader component is a data input component of fate;
+- The reader component converts input data into data of the specified storage type;
+
+**Parameter configuration**:
+
+The input table of the reader is configured in the conf when submitting the job:
+
+```json
+{
+  "role": {
+    "guest": {
+      "0": {
+        "reader_0": {
+          "table": {
+            "name": "breast_hetero_guest",
+            "namespace": "experiment"
+          }
+        }
+      }
+    }
+  }
+}
+
+```
+
+**Component Output**
+
+The output data storage engine of the component is determined by the configuration file conf/service_conf.yaml, with the following configuration items:
+
+```yaml
+default_engines:
+  storage: eggroll
+```
+
+- The computing engine and the storage engine have certain support dependencies on each other; the dependency list is as follows:
+
+  | computing_engine | storage_engine                           |
+  | :--------------- | :--------------------------------------- |
+  | standalone       | standalone                                |
+  | eggroll          | eggroll                                   |
+  | spark            | hdfs (distributed), localfs (standalone)  |
+
+- The reader component's input data storage types include: eggroll, hdfs, localfs, mysql, path, etc.;
+- The reader component's output data type is determined by the `default_engines.storage` configuration (except for `path`)
+
+## 12. api-reader component
+
+**Brief description:** 
+
+- The api-reader component takes IDs as data input and outputs features;
+- Request parameters can be user-defined, e.g. version number, look-back month, etc.;
+- The component calls a third-party service; that service needs to implement upload, query and download interfaces and register them with FATE Flow, see [api-reader related service registration](./third_party_service_registry.md#31-apireader)
+
+**Parameter configuration**:
+
+Configure the api-reader parameter in the conf when submitting the job:
+
+```json
+{
+  "role": {
+    "guest": {
+      "0": {
+        "api_reader_0": {
+          "server_name": "xxx",
+          "parameters": {"version": "xxx"},
+          "id_delimiter": ",",
+          "head": true
+        }
+      }
+    }
+  }
+}
+```
+Parameter meanings:
+- server_name: name of the service to request
+- parameters: parameters of the feature request
+- id_delimiter: delimiter of the returned data
+- head: whether the returned data contains a header

+ 129 - 0
FATE-Flow/doc/fate_flow_data_access.zh.md

@@ -0,0 +1,129 @@
+# 数据接入
+
+## 1. 说明
+
+- fate的存储表是由table name和namespace标识。
+
+- fate提供upload组件供用户上传数据至fate计算引擎所支持的存储系统内;
+
+- 若用户的数据已经存在于fate所支持的存储系统,可通过table bind方式将存储信息映射到fate存储表;
+
+- 若table bind的表存储类型与当前默认引擎不一致,reader组件会自动转化存储类型;
+
+## 2.  数据上传
+
+{{snippet('cli/data.zh.md', '### upload')}}
+
+## 3.  表绑定
+
+{{snippet('cli/table.zh.md', '### bind')}}
+
+## 4. 表信息查询
+
+{{snippet('cli/table.zh.md', '### info')}}
+
+## 5. 删除表数据
+
+{{snippet('cli/table.zh.md', '### delete')}}
+
+## 6.  数据下载
+
+{{snippet('cli/data.zh.md', '### download')}}
+
+## 7.  将数据设置为“不可用”状态
+
+{{snippet('cli/table.zh.md', '### disable')}}
+
+## 8.  将数据设置为“可用”状态
+
+{{snippet('cli/table.zh.md', '### enable')}}
+
+## 9.  删除“不可用”数据
+
+{{snippet('cli/table.zh.md', '### disable-delete')}}
+
+## 10.  writer组件
+
+{{snippet('cli/data.zh.md', '### writer')}}
+
+## 11.  reader组件
+
+**简要描述:** 
+
+- reader组件为fate的数据输入组件;
+- reader组件可将输入数据转化为指定存储类型数据;
+
+**参数配置**:
+
+submit job时的conf中配置reader的输入表:
+
+```json
+{
+  "role": {
+    "guest": {
+      "0": {
+        "reader_0": {
+          "table": {
+            "name": "breast_hetero_guest",
+            "namespace": "experiment"
+          }
+        }
+      }
+    }
+  }
+}
+
+```
+
+**组件输出**
+
+组件的输出数据存储引擎是由配置决定,配置文件conf/service_conf.yaml,配置项为:
+
+```yaml
+default_engines:
+  storage: eggroll
+```
+
+- 计算引擎和存储引擎之间具有一定的支持依赖关系,依赖列表如下:
+
+  | computing_engine | storage_engine                |
+  | :--------------- | :---------------------------- |
+  | standalone       | standalone                    |
+  | eggroll          | eggroll                       |
+  | spark            | hdfs(分布式), localfs(单机版) |
+
+- reader组件输入数据的存储类型支持: eggroll、hdfs、localfs、mysql、path等;
+- reader组件的输出数据类型由default_engines.storage配置决定(path除外)
+
+## 12.  api-reader组件
+
+**简要描述:** 
+
+- api-reader组件的数据输入为id,数据输出为特征;
+- 请求参数可以由用户自定义,如:版本号、回溯月份等;
+- 组件会请求第三方服务,第三方服务需要实现upload、query、download接口并向fate flow注册,可参考[api-reader相关服务注册](./third_party_service_registry.zh.md#31-apireader)
+
+**参数配置**:
+
+submit job时的conf中配置api-reader参数:
+
+```json
+{
+  "role": {
+    "guest": {
+      "0": {"api_reader_0": {
+        "server_name": "xxx",
+        "parameters": {"version": "xxx"},
+        "id_delimiter": ",",
+        "head": true
+        }
+      }
+    }
+  }
+}
+```
+参数含义:
+- server_name: 需要请求的服务名
+- parameters: 需要请求的特征参数
+- id_delimiter:返回的数据分隔符
+- head: 返回的数据是否含有数据头

+ 17 - 0
FATE-Flow/doc/fate_flow_http_api.md

@@ -0,0 +1,17 @@
+# REST API
+
+## 1. Description
+
+## 2. Error codes
+
+`400 Bad Request` request body has both json and form
+
+`401 Unauthorized` Missing one or more header(s)
+
+`400 Invalid TIMESTAMP` `TIMESTAMP` could not be parsed
+
+`425 TIMESTAMP is more than 60 seconds away from the server time` The `TIMESTAMP` in the header is more than 60 seconds away from the server time
+
+`401 Unknown APP_KEY` `APP_KEY` in the header does not match `http_app_key` in the Flow configuration file
+
+`403 Forbidden` Signature verification failed
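+
+A hypothetical illustration of surfacing these error codes on the client side (the header values are placeholders and must be generated as described in the authentication scheme document):
+
+```python
+import requests
+
+headers = {"TIMESTAMP": "...", "NONCE": "...", "APP_KEY": "...", "SIGNATURE": "..."}
+resp = requests.post("http://127.0.0.1:9380/v1/job/query", json={}, headers=headers)
+if resp.status_code == 425:
+    print("TIMESTAMP is more than 60 seconds away from the server time")
+elif resp.status_code in (400, 401, 403):
+    print("request rejected:", resp.text)
+```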

+ 30 - 0
FATE-Flow/doc/fate_flow_http_api.zh.md

@@ -0,0 +1,30 @@
+# REST API
+
+## 1. 说明
+
+## 2. 设计规范
+
+### 2.1 HTTP Method
+
+- HTTP Method: 一律采用`POST`
+- Content Type: application/json
+
+### 2.2 URL规则(现有)
+
+/一级/二级/N级/最后一级
+
+- 一级:接口版本,如v1
+- 二级:主资源名称,如job
+- N级:子资源名称,如list, 允许有多个N级
+- 最后一级:操作: create/update/query/get/delete
+
+### 2.3 URL规则(建议改进)
+
+/一级/二级/三级/四级/N级/最后一级
+
+- 一级:系统名称: fate
+- 二级:接口版本,如v1
+- 三级:子系统名称: flow
+- 四级:主资源名称,如job
+- N级:子资源名称,如list, 允许有多个N级
+- 最后一级:操作: create/update/query/get/delete

+ 640 - 0
FATE-Flow/doc/fate_flow_http_api_call_demo.md

@@ -0,0 +1,640 @@
+# REST API CLIENT
+
+## 1. Description
+
+Examples of calling the FATE Flow API using Python `requests`.
+
+## 2. data upload/download
+
+```python
+import json
+import os
+
+import requests
+# MultipartEncoder comes from the requests-toolbelt package (pip install requests-toolbelt)
+from requests_toolbelt import MultipartEncoder
+
+base_url = "http://127.0.0.1:9380/v1"
+
+
+def upload():
+    uri = "/data/upload"
+    file_name = "./data/breast_hetero_guest.csv"
+    with open(file_name, 'rb') as fp:
+        data = MultipartEncoder(
+            fields={'file': (os.path.basename(file_name), fp, 'application/octet-stream')}
+        )
+        config_data = {
+            "file": file_name,
+            "id_delimiter": ",",
+            "head": 1,
+            "partition": 4,
+            "namespace": "experiment",
+            "table_name": "breast_hetero_guest"
+        }
+
+        response = requests.post(
+            url=base_url + uri,
+            data=data,
+            params=json.dumps(config_data),
+            headers={'Content-Type': data.content_type}
+        )
+        print(response.text)
+
+
+def download():
+    uri = "/data/download"
+    config_data = {
+        "output_path": "./download_breast_guest.csv",
+        "namespace": "experiment",
+        "table_name": "breast_hetero_guest"
+    }
+    response = requests.post(url=base_url + uri, json=config_data)
+    print(response.text)
+
+
+def upload_history():
+    uri = "/data/upload/history"
+    config_data = {
+        "limit": 5
+    }
+    response = requests.post(url=base_url + uri, json=config_data)
+    print(response.text)
+
+
+```
+## 3. table
+```python
+import requests
+
+base_url = "http://127.0.0.1:9380/v1"
+
+
+def table_bind():
+    uri = "/table/bind"
+    data = {
+        "head": 1,
+        "partition": 8,
+        "address": {"user": "fate", "passwd": "fate", "host": "127.0.0.1", "port": 3306, "db": "xxx", "name": "xxx"},
+        "id_name": "id",
+        "feature_column": "y,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,x10,x11,x12",
+        "engine": "MYSQL",
+        "id_delimiter": ",",
+        "namespace": "wzh",
+        "name": "wzh",
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def table_delete():
+    uri = "/table/delete"
+    data = {
+        "table_name": "breast_hetero_guest",
+        "namespace": "experiment"
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def table_info():
+    uri = "/table/table_info"
+    data = {
+        "table_name": "xxx",
+        "namespace": "xxx"
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def table_list():
+    uri = "/table/list"
+    data = {"job_id": "202204221515021092240", "role": "guest", "party_id": "20001"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def tracking_source():
+    uri = "/table/tracking/source"
+    data = {"table_name": "xxx", "namespace": "xxx"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def tracking_job():
+    uri = "/table/tracking/job"
+    data = {"table_name": "xxx", "namespace": "xxx"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+```
+
+## 4. job
+
+```python
+
+import tarfile
+
+import requests
+
+base_url = "http:/127.0.0.1:9380/v1"
+
+
+def submit():
+    uri = "/job/submit"
+    data = {
+        "dsl": {
+            "components": {
+                "reader_0": {
+                    "module": "Reader",
+                    "output": {
+                        "data": [
+                            "table"
+                        ]
+                    }
+                },
+                "dataio_0": {
+                    "module": "DataIO",
+                    "input": {
+                        "data": {
+                            "data": [
+                                "reader_0.table"
+                            ]
+                        }
+                    },
+                    "output": {
+                        "data": [
+                            "train"
+                        ],
+                        "model": [
+                            "dataio"
+                        ]
+                    },
+                    "need_deploy": True
+                },
+                "intersection_0": {
+                    "module": "Intersection",
+                    "input": {
+                        "data": {
+                            "data": [
+                                "dataio_0.train"
+                            ]
+                        }
+                    },
+                    "output": {
+                        "data": [
+                            "train"
+                        ]
+                    }
+                },
+                "hetero_feature_binning_0": {
+                    "module": "HeteroFeatureBinning",
+                    "input": {
+                        "data": {
+                            "data": [
+                                "intersection_0.train"
+                            ]
+                        }
+                    },
+                    "output": {
+                        "data": [
+                            "train"
+                        ],
+                        "model": [
+                            "hetero_feature_binning"
+                        ]
+                    }
+                },
+                "hetero_feature_selection_0": {
+                    "module": "HeteroFeatureSelection",
+                    "input": {
+                        "data": {
+                            "data": [
+                                "hetero_feature_binning_0.train"
+                            ]
+                        },
+                        "isometric_model": [
+                            "hetero_feature_binning_0.hetero_feature_binning"
+                        ]
+                    },
+                    "output": {
+                        "data": [
+                            "train"
+                        ],
+                        "model": [
+                            "selected"
+                        ]
+                    }
+                },
+                "hetero_lr_0": {
+                    "module": "HeteroLR",
+                    "input": {
+                        "data": {
+                            "train_data": [
+                                "hetero_feature_selection_0.train"
+                            ]
+                        }
+                    },
+                    "output": {
+                        "data": [
+                            "train"
+                        ],
+                        "model": [
+                            "hetero_lr"
+                        ]
+                    }
+                },
+                "evaluation_0": {
+                    "module": "Evaluation",
+                    "input": {
+                        "data": {
+                            "data": [
+                                "hetero_lr_0.train"
+                            ]
+                        }
+                    },
+                    "output": {
+                        "data": [
+                            "evaluate"
+                        ]
+                    }
+                }
+            }
+        },
+        "runtime_conf": {
+            "dsl_version": "2",
+            "initiator": {
+                "role": "guest",
+                "party_id": 20001
+            },
+            "role": {
+                "guest": [
+                    20001
+                ],
+                "host": [
+                    10001
+                ],
+                "arbiter": [
+                    10001
+                ]
+            },
+            "job_parameters": {
+                "common": {
+                    "task_parallelism": 2,
+                    "computing_partitions": 8,
+                    "task_cores": 4,
+                    "auto_retries": 1
+                }
+            },
+            "component_parameters": {
+                "common": {
+                    "intersection_0": {
+                        "intersect_method": "raw",
+                        "sync_intersect_ids": True,
+                        "only_output_key": False
+                    },
+                    "hetero_lr_0": {
+                        "penalty": "L2",
+                        "optimizer": "rmsprop",
+                        "alpha": 0.01,
+                        "max_iter": 3,
+                        "batch_size": 320,
+                        "learning_rate": 0.15,
+                        "init_param": {
+                            "init_method": "random_uniform"
+                        }
+                    }
+                },
+                "role": {
+                    "guest": {
+                        "0": {
+                            "reader_0": {
+                                "table": {
+                                    "name": "breast_hetero_guest",
+                                    "namespace": "experiment"
+                                }
+                            },
+                            "dataio_0": {
+                                "with_label": True,
+                                "label_name": "y",
+                                "label_type": "int",
+                                "output_format": "dense"
+                            }
+                        }
+                    },
+                    "host": {
+                        "0": {
+                            "reader_0": {
+                                "table": {
+                                    "name": "breast_hetero_host",
+                                    "namespace": "experiment"
+                                }
+                            },
+                            "dataio_0": {
+                                "with_label": False,
+                                "output_format": "dense"
+                            },
+                            "evaluation_0": {
+                                "need_run": False
+                            }
+                        }
+                    }
+                }
+            }
+        }
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def stop():
+    uri = "/job/stop"
+    data = {"job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def rerun():
+    uri = "/job/rerun"
+    data = {"job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def query():
+    uri = "/job/query"
+    data = {"job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def list_job():
+    uri = "/job/list/job"
+    data = {"limit": 1}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def update():
+    uri = "/job/update"
+    data = {"job_id": "202204251958539401540", "role": "guest", "party_id": 20001, "notes": "this is a test"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def parameter_update():
+    uri = "/job/parameter/update"
+    data = {"component_parameters": {"common": {"hetero_lr_0": {"max_iter": 10}}},
+            "job_parameters": {"common": {"auto_retries": 2}}, "job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def config():
+    uri = "/job/config"
+    data = {"job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def log_download():
+    uri = "/job/log/download"
+    data = {"job_id": "202204251958539401540a"}
+    download_tar_file_name = "./test.tar.gz"
+    res = requests.post(base_url + uri, json=data)
+    with open(download_tar_file_name, "wb") as fw:
+        for chunk in res.iter_content(1024):
+            if chunk:
+                fw.write(chunk)
+    tar = tarfile.open(download_tar_file_name, "r:gz")
+    file_names = tar.getnames()
+    for file_name in file_names:
+        tar.extract(file_name)
+    tar.close()
+
+
+def log_path():
+    uri = "/job/log/path"
+    data = {"job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def task_query():
+    uri = "/job/task/query"
+    data = {"job_id": "202204251958539401540", "role": "guest", "party_id": 20001}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def list_task():
+    uri = "/job/list/task"
+    data = {"job_id": "202204251958539401540", "role": "guest", "party_id": 20001}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+def job_clean():
+    uri = "/job/clean"
+    data = {"job_id": "202204251958539401540", "role": "guest", "party_id": 20001}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+def clean_queue():
+    uri = "/job/clean/queue"
+    res = requests.post(base_url + uri)
+    print(res.text)
+
+
+```
+
+## 5. tracking
+```python
+import tarfile
+
+import requests
+
+base_url = "http://127.0.0.1:9380/v1"
+
+
+def job_data_view():
+    uri = "/tracking/job/data_view"
+    data = {"job_id": "202203311009181495690", "role": "guest", "party_id": 20001}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_metric_all():
+    uri = "/tracking/component/metric/all"
+    data = {"job_id": "202203311009181495690", "role": "guest", "party_id": 20001, "component_name": "HeteroSecureBoost_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+# {"data":{"train":{"loss":{"data":[[0,0.6076415445876732],[1,0.5374539452565573],[2,0.4778598986135903],[3,0.42733599866560723],[4,0.38433409799127843]],"meta":{"Best":0.38433409799127843,"curve_name":"loss","metric_type":"LOSS","name":"train","unit_name":"iters"}}}},"retcode":0,"retmsg":"success"}
+
+
+def component_metric():
+    uri = "/tracking/component/metrics"
+    data = {"job_id": "202203311009181495690", "role": "guest", "party_id": 20001, "component_name": "Intersection_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+def component_metric_data():
+    uri = "/tracking/component/metric_data"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0",
+            "metric_name": "intersection",
+            "metric_namespace": "train"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_parameters():
+    uri = "/tracking/component/parameters"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_output_model():
+    uri = "/tracking/component/output/model"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_output_data():
+    uri = "/tracking/component/output/data"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_output_data_download():
+    uri = "/tracking/component/output/data/download"
+    download_tar_file_name = "data.tar.gz"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0"}
+    res = requests.get(base_url + uri, json=data)
+    # the response body is a binary tar archive, so save it instead of printing it
+    with open(download_tar_file_name, "wb") as fw:
+        for chunk in res.iter_content(1024):
+            if chunk:
+                fw.write(chunk)
+    tar = tarfile.open(download_tar_file_name, "r:gz")
+    file_names = tar.getnames()
+    for file_name in file_names:
+        tar.extract(file_name)
+    tar.close()
+
+
+def component_output_data_table():
+    uri = "/tracking/component/output/data/table"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0a"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_component_summary_download():
+    uri = "/tracking/component/summary/download"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_list():
+    uri = "/tracking/component/list"
+    data = {"job_id": "202203311009181495690"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+component_list()
+```
+
+## 6. resource
+```python
+import requests
+
+base_url = "http://127.0.0.1:9380/v1"
+
+
+def resource_query():
+    uri = "/resource/query"
+    data = {"engine_name": "EGGROLL"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+
+def resource_return():
+    uri = "/resource/return"
+    data = {"job_id": "202204261616175720130"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+resource_return()
+```
+
+## 7. permission
+```python
+import requests
+
+base_url = "http://127.0.0.1:9380/v1"
+
+
+def grant_privilege():
+    uri = "/permission/grant/privilege"
+    data = {
+        "src_role": "guest",
+        "src_party_id": "9999",
+        "privilege_role": "all",
+        "privilege_component": "all",
+        "privilege_command": "all"
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+# grant_privilege()
+
+def delete_privilege():
+    uri = "/permission/delete/privilege"
+    data = {
+        "src_role": "guest",
+        "src_party_id": "9999",
+        "privilege_role": "guest",
+        "privilege_component": "dataio",
+        "privilege_command": "create"
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+# delete_privilege()
+
+
+def query_privilege():
+    uri = "/permission/query/privilege"
+    data = {
+        "src_role": "guest",
+        "src_party_id": "9999"
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+query_privilege()
+
+```
+
+

+ 640 - 0
FATE-Flow/doc/fate_flow_http_api_call_demo.zh.md

@@ -0,0 +1,640 @@
+# REST API 调用
+
+## 1. 说明
+
+使用 Python `requests` 请求 fate flow 接口的示例。
+
+## 2. 数据上传/下载
+
+```python
+import json
+import os
+
+import requests
+# MultipartEncoder 来自 requests-toolbelt 包(pip install requests-toolbelt)
+from requests_toolbelt import MultipartEncoder
+
+base_url = "http://127.0.0.1:9380/v1"
+
+
+def upload():
+    uri = "/data/upload"
+    file_name = "./data/breast_hetero_guest.csv"
+    with open(file_name, 'rb') as fp:
+        data = MultipartEncoder(
+            fields={'file': (os.path.basename(file_name), fp, 'application/octet-stream')}
+        )
+        config_data = {
+            "file": file_name,
+            "id_delimiter": ",",
+            "head": 1,
+            "partition": 4,
+            "namespace": "experiment",
+            "table_name": "breast_hetero_guest"
+        }
+
+        response = requests.post(
+            url=base_url + uri,
+            data=data,
+            params=json.dumps(config_data),
+            headers={'Content-Type': data.content_type}
+        )
+        print(response.text)
+
+
+def download():
+    uri = "/data/download"
+    config_data = {
+        "output_path": "./download_breast_guest.csv",
+        "namespace": "experiment",
+        "table_name": "breast_hetero_guest"
+    }
+    response = requests.post(url=base_url + uri, json=config_data)
+    print(response.text)
+
+
+def upload_history():
+    uri = "/data/upload/history"
+    config_data = {
+        "limit": 5
+    }
+    response = requests.post(url=base_url + uri, json=config_data)
+    print(response.text)
+
+
+```
+## 3. Data table operations
+```python
+import requests
+
+base_url = "http://127.0.0.1:9380/v1"
+
+
+def table_bind():
+    uri = "/table/bind"
+    data = {
+        "head": 1,
+        "partition": 8,
+        "address": {"user": "fate", "passwd": "fate", "host": "127.0.0.1", "port": 3306, "db": "xxx", "name": "xxx"},
+        "id_name": "id",
+        "feature_column": "y,x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,x10,x11,x12",
+        "engine": "MYSQL",
+        "id_delimiter": ",",
+        "namespace": "wzh",
+        "name": "wzh",
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def table_delete():
+    uri = "/table/delete"
+    data = {
+        "table_name": "breast_hetero_guest",
+        "namespace": "experiment"
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def table_info():
+    uri = "/table/table_info"
+    data = {
+        "table_name": "xxx",
+        "namespace": "xxx"
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def table_list():
+    uri = "/table/list"
+    data = {"job_id": "202204221515021092240", "role": "guest", "party_id": "20001"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def tracking_source():
+    uri = "/table/tracking/source"
+    data = {"table_name": "xxx", "namespace": "xxx"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def tracking_job():
+    uri = "/table/tracking/job"
+    data = {"table_name": "xxx", "namespace": "xxx"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+```
+
+## 4. Jobs
+
+```python
+
+import tarfile
+
+import requests
+
+base_url = "http:/127.0.0.1:9380/v1"
+
+
+def submit():
+    uri = "/job/submit"
+    data = {
+        "dsl": {
+            "components": {
+                "reader_0": {
+                    "module": "Reader",
+                    "output": {
+                        "data": [
+                            "table"
+                        ]
+                    }
+                },
+                "dataio_0": {
+                    "module": "DataIO",
+                    "input": {
+                        "data": {
+                            "data": [
+                                "reader_0.table"
+                            ]
+                        }
+                    },
+                    "output": {
+                        "data": [
+                            "train"
+                        ],
+                        "model": [
+                            "dataio"
+                        ]
+                    },
+                    "need_deploy": True
+                },
+                "intersection_0": {
+                    "module": "Intersection",
+                    "input": {
+                        "data": {
+                            "data": [
+                                "dataio_0.train"
+                            ]
+                        }
+                    },
+                    "output": {
+                        "data": [
+                            "train"
+                        ]
+                    }
+                },
+                "hetero_feature_binning_0": {
+                    "module": "HeteroFeatureBinning",
+                    "input": {
+                        "data": {
+                            "data": [
+                                "intersection_0.train"
+                            ]
+                        }
+                    },
+                    "output": {
+                        "data": [
+                            "train"
+                        ],
+                        "model": [
+                            "hetero_feature_binning"
+                        ]
+                    }
+                },
+                "hetero_feature_selection_0": {
+                    "module": "HeteroFeatureSelection",
+                    "input": {
+                        "data": {
+                            "data": [
+                                "hetero_feature_binning_0.train"
+                            ]
+                        },
+                        "isometric_model": [
+                            "hetero_feature_binning_0.hetero_feature_binning"
+                        ]
+                    },
+                    "output": {
+                        "data": [
+                            "train"
+                        ],
+                        "model": [
+                            "selected"
+                        ]
+                    }
+                },
+                "hetero_lr_0": {
+                    "module": "HeteroLR",
+                    "input": {
+                        "data": {
+                            "train_data": [
+                                "hetero_feature_selection_0.train"
+                            ]
+                        }
+                    },
+                    "output": {
+                        "data": [
+                            "train"
+                        ],
+                        "model": [
+                            "hetero_lr"
+                        ]
+                    }
+                },
+                "evaluation_0": {
+                    "module": "Evaluation",
+                    "input": {
+                        "data": {
+                            "data": [
+                                "hetero_lr_0.train"
+                            ]
+                        }
+                    },
+                    "output": {
+                        "data": [
+                            "evaluate"
+                        ]
+                    }
+                }
+            }
+        },
+        "runtime_conf": {
+            "dsl_version": "2",
+            "initiator": {
+                "role": "guest",
+                "party_id": 20001
+            },
+            "role": {
+                "guest": [
+                    20001
+                ],
+                "host": [
+                    10001
+                ],
+                "arbiter": [
+                    10001
+                ]
+            },
+            "job_parameters": {
+                "common": {
+                    "task_parallelism": 2,
+                    "computing_partitions": 8,
+                    "task_cores": 4,
+                    "auto_retries": 1
+                }
+            },
+            "component_parameters": {
+                "common": {
+                    "intersection_0": {
+                        "intersect_method": "raw",
+                        "sync_intersect_ids": True,
+                        "only_output_key": False
+                    },
+                    "hetero_lr_0": {
+                        "penalty": "L2",
+                        "optimizer": "rmsprop",
+                        "alpha": 0.01,
+                        "max_iter": 3,
+                        "batch_size": 320,
+                        "learning_rate": 0.15,
+                        "init_param": {
+                            "init_method": "random_uniform"
+                        }
+                    }
+                },
+                "role": {
+                    "guest": {
+                        "0": {
+                            "reader_0": {
+                                "table": {
+                                    "name": "breast_hetero_guest",
+                                    "namespace": "experiment"
+                                }
+                            },
+                            "dataio_0": {
+                                "with_label": True,
+                                "label_name": "y",
+                                "label_type": "int",
+                                "output_format": "dense"
+                            }
+                        }
+                    },
+                    "host": {
+                        "0": {
+                            "reader_0": {
+                                "table": {
+                                    "name": "breast_hetero_host",
+                                    "namespace": "experiment"
+                                }
+                            },
+                            "dataio_0": {
+                                "with_label": False,
+                                "output_format": "dense"
+                            },
+                            "evaluation_0": {
+                                "need_run": False
+                            }
+                        }
+                    }
+                }
+            }
+        }
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def stop():
+    uri = "/job/stop"
+    data = {"job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def rerun():
+    uri = "/job/rerun"
+    data = {"job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def query():
+    uri = "/job/query"
+    data = {"job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def list_job():
+    uri = "/job/list/job"
+    data = {"limit": 1}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def update():
+    uri = "/job/update"
+    data = {"job_id": "202204251958539401540", "role": "guest", "party_id": 20001, "notes": "this is a test"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def parameter_update():
+    uri = "/job/parameter/update"
+    data = {"component_parameters": {"common": {"hetero_lr_0": {"max_iter": 10}}},
+            "job_parameters": {"common": {"auto_retries": 2}}, "job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def config():
+    uri = "/job/config"
+    data = {"job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def log_download():
+    uri = "/job/log/download"
+    data = {"job_id": "202204251958539401540a"}
+    download_tar_file_name = "./test.tar.gz"
+    res = requests.post(base_url + uri, json=data)
+    with open(download_tar_file_name, "wb") as fw:
+        for chunk in res.iter_content(1024):
+            if chunk:
+                fw.write(chunk)
+    tar = tarfile.open(download_tar_file_name, "r:gz")
+    file_names = tar.getnames()
+    for file_name in file_names:
+        tar.extract(file_name)
+    tar.close()
+
+
+def log_path():
+    uri = "/job/log/path"
+    data = {"job_id": "202204251958539401540"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def task_query():
+    uri = "/job/task/query"
+    data = {"job_id": "202204251958539401540", "role": "guest", "party_id": 20001}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def list_task():
+    uri = "/job/list/task"
+    data = {"job_id": "202204251958539401540", "role": "guest", "party_id": 20001}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+def job_clean():
+    uri = "/job/clean"
+    data = {"job_id": "202204251958539401540", "role": "guest", "party_id": 20001}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+def clean_queue():
+    uri = "/job/clean/queue"
+    res = requests.post(base_url + uri)
+    print(res.text)
+
+
+```
+
+## 5. Metrics
+```python
+import tarfile
+
+import requests
+
+base_url = "http://127.0.0.1:9380/v1"
+
+
+def job_data_view():
+    uri = "/tracking/job/data_view"
+    data = {"job_id": "202203311009181495690", "role": "guest", "party_id": 20001}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_metric_all():
+    uri = "/tracking/component/metric/all"
+    data = {"job_id": "202203311009181495690", "role": "guest", "party_id": 20001, "component_name": "HeteroSecureBoost_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+# {"data":{"train":{"loss":{"data":[[0,0.6076415445876732],[1,0.5374539452565573],[2,0.4778598986135903],[3,0.42733599866560723],[4,0.38433409799127843]],"meta":{"Best":0.38433409799127843,"curve_name":"loss","metric_type":"LOSS","name":"train","unit_name":"iters"}}}},"retcode":0,"retmsg":"success"}
+
+
+def component_metric():
+    uri = "/tracking/component/metrics"
+    data = {"job_id": "202203311009181495690", "role": "guest", "party_id": 20001, "component_name": "Intersection_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+def component_metric_data():
+    uri = "/tracking/component/metric_data"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0",
+            "metric_name": "intersection",
+            "metric_namespace": "train"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_parameters():
+    uri = "/tracking/component/parameters"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_output_model():
+    uri = "/tracking/component/output/model"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_output_data():
+    uri = "/tracking/component/output/data"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_output_data_download():
+    uri = "/tracking/component/output/data/download"
+    download_tar_file_name = "data.tar.gz"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0"}
+    # the response body is a gzipped tar archive, so write it to disk instead of printing it
+    res = requests.get(base_url + uri, json=data)
+    with open(download_tar_file_name, "wb") as fw:
+        for chunk in res.iter_content(1024):
+            if chunk:
+                fw.write(chunk)
+    tar = tarfile.open(download_tar_file_name, "r:gz")
+    file_names = tar.getnames()
+    for file_name in file_names:
+        tar.extract(file_name)
+    tar.close()
+
+
+def component_output_data_table():
+    uri = "/tracking/component/output/data/table"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0a"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_summary_download():
+    uri = "/tracking/component/summary/download"
+    data = {"job_id": "202203311009181495690",
+            "role": "guest",
+            "party_id": 20001,
+            "component_name": "Intersection_0"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+def component_list():
+    uri = "/tracking/component/list"
+    data = {"job_id": "202203311009181495690"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+component_list()
+```
+
+## 6. Resources
+```python
+import requests
+
+base_url = "http://127.0.0.1:9380/v1"
+
+
+def resource_query():
+    uri = "/resource/query"
+    data = {"engine_name": "EGGROLL"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+
+
+def resource_return():
+    uri = "/resource/return"
+    data = {"job_id": "202204261616175720130"}
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+resource_return()
+```
+
+## 7. Permissions
+```python
+import requests
+
+base_url = "http://127.0.0.1:9380/v1"
+
+
+def grant_privilege():
+    uri = "/permission/grant/privilege"
+    data = {
+        "src_role": "guest",
+        "src_party_id": "9999",
+        "privilege_role": "all",
+        "privilege_component": "all",
+        "privilege_command": "all"
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+# grant_privilege()
+
+def delete_privilege():
+    uri = "/permission/delete/privilege"
+    data = {
+        "src_role": "guest",
+        "src_party_id": "9999",
+        "privilege_role": "guest",
+        "privilege_component": "dataio",
+        "privilege_command": "create"
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+# delete_privilege()
+
+
+def query_privilege():
+    uri = "/permission/query/privilege"
+    data = {
+        "src_role": "guest",
+        "src_party_id": "9999"
+    }
+    res = requests.post(base_url + uri, json=data)
+    print(res.text)
+
+query_privilege()
+
+```
+
+

+ 702 - 0
FATE-Flow/doc/fate_flow_job_scheduling.md

@@ -0,0 +1,702 @@
+# Multi-Party Job&Task Scheduling
+
+## 1. Description
+
+Mainly describes how to submit a federated learning job using `FATE Flow` and monitor its execution
+
+## 2. Job submission
+
+- Build a federated learning job and submit it to the scheduling system for execution
+- Two configuration files are required: job dsl and job conf
+- job dsl configures the components to run: the component list and their input/output relationships
+- job conf configures component execution parameters and system operation parameters
+
+{{snippet('cli/job.md', '### submit')}}
+
+## 3. Job DSL configuration description
+
+The DSL configuration file uses JSON: the whole file is a single JSON object (dict).
+
+### 3.1 Component List
+
+**Description** The first level of this dict is `components`, which indicates the modules that will be used by this job.
+**Example**
+
+```json
+{
+  "components" : {
+          ...
+      }
+}
+```
+
+Each individual module is defined under "components", e.g.
+
+```json
+"data_transform_0": {
+      "module": "DataTransform",
+      "input": {
+          "data": {
+              "data": [
+                  "reader_0.train_data"
+              ]
+          }
+      },
+      "output": {
+          "data": ["train"],
+          "model": ["model"]
+      }
+  }
+```
+
+All data must be fetched from data storage via the **Reader** module; note that this module only has an `output` section
+
+```json
+"reader_0": {
+      "module": "Reader",
+      "output": {
+          "data": ["train"]
+      }
+}
+```
+
+### 3.2 Modules
+
+**Description** Specifies the component to use; see the FATE documentation for the full list of available module names.
+**Example**
+
+```json
+"hetero_feature_binning_1": {
+    "module": "HeteroFeatureBinning",
+     ...
+}
+```
+
+### 3.3 Inputs
+
+**Description** Upstream input, divided into two types: data and model.
+
+#### data input
+
+**Description** Upstream data input, divided into four input types:
+
+    > 1. data: generally used by the data-transform, feature-engineering or
+    >    evaluation modules
+    > 2. train_data: generally used by the homo_lr, hetero_lr and secure_boost
+    >    modules; if the train_data field is present, the task is recognized as a fit task
+    > 3. validate_data: optional, and only meaningful when train_data is present;
+    >    if supplied, the data it points to is used as the validation set
+    > 4. test_data: used as prediction data; if provided, a model input must be provided as well
+
+#### model_input
+
+**Description** Upstream model input, divided into two input types:
+
+1. model: used for model input from a component of the same type. For example, hetero_binning_0 fits its model, then hetero_binning_1 uses the output of hetero_binning_0 for predict or transform. Code example:
+
+```json
+        "hetero_feature_binning_1": {
+            "module": "HeteroFeatureBinning",
+            "input": {
+                "data": {
+                    "data": [
+                        "data_transform_1.validate_data"
+                    ]
+                },
+                "model": [
+                    "hetero_feature_binning_0.fit_model"
+                ]
+            },
+            "output": {
+                "data": ["validate_data" ],
+              "model": ["eval_model"]
+            }
+        }
+```
+2. isometric_model: used to specify model input inherited from an upstream component. For example, the upstream component of feature selection is feature binning, and feature selection uses the feature binning output as feature importance. Code example:
+```json
+        "hetero_feature_selection_0": {
+            "module": "HeteroFeatureSelection",
+            "input": {
+                "data": {
+                    "data": [
+                        "hetero_feature_binning_0.train"
+                    ]
+                },
+                "isometric_model": [
+                    "hetero_feature_binning_0.output_model"
+                ]
+            },
+            "output": {
+                "data": [ "train" ],
+                "model": ["output_model"]
+            }
+        }
+```
+
+### 3.4 Output
+
+**Description** Output, like input, is divided into data and model output
+
+#### data output
+
+**Description** Data output, divided into four output types:
+
+1. data: regular module data output
+2. train_data: only used by Data Split
+3. validate_data: only used by Data Split
+4. test_data: only used by Data Split
+
+#### Model Output
+
+**Description** Model output; only the `model` type is used
+
+### 3.5 Component Providers
+
+Since FATE-Flow 1.7.0, a single FATE-Flow deployment can load multiple component providers, in multiple versions; each provider supplies a set of components, and the provider of each component can be selected when submitting a job
+Since FATE-Flow 1.9.0, the provider is configured in the job conf, as follows
+
+**Description** Specifies the provider; both global and per-component specification are supported; if not specified, the default provider `fate@$FATE_VERSION` is used
+
+**Format** `provider_name@$provider_version`
+
+**Advanced** You can register a new provider through the component registration CLI, currently supported providers: fate and fate_sql, please refer to [FATE Flow Component Center](./fate_flow_component_registry.md)
+
+**Example**
+
+```json
+{
+  "dsl_version": "2",
+  "initiator": {},
+  "role": {},
+  "job_parameters": {},
+  "component_parameters": {},
+  "provider": {
+    "common": {
+      "hetero_feature_binning_0": "fate@1.8.0"
+    },
+    "role": {
+      "guest": {
+        "0": {
+          "data_transform_0": "fate@1.9.0"
+        }
+      },
+      "host": {
+        "0": {
+          "data_transform_0": "fate@1.9.0"
+        }
+      }
+    }
+  }
+}
+```
+
+## 4. Job Conf Configuration Description
+
+Job Conf is used to set the information of each participant, the parameters of the job and the parameters of each component. The contents include the following.
+
+### 4.1 DSL Version
+
+**Description** Configures the DSL version; defaults to 1 if not set, 2 is recommended
+**Example**
+```json
+"dsl_version": "2"
+```
+
+### 4.2 Job participants
+
+#### Initiator
+
+**Description** The role and party_id of the job initiator.
+**Example**
+```json
+"initiator": {
+    "role": "guest",
+    "party_id": 9999
+}
+```
+
+#### All participants
+
+**Description** Information about each participant.
+**Note** In the role field, each element represents a role and the party_id(s) assuming that role. The party_id of each role is a list, since a job may involve multiple parties in the same role.
+**Example**
+
+```json
+"role": {
+    "guest": [9999],
+    "host": [10000],
+    "arbiter": [10000]
+}
+```
+
+### 4.3 System operation parameters
+
+**Description**
+    Configure the main system parameters for job runtime
+
+#### Parameter application scope policy setting
+
+- Applies to all participants: use the common scope identifier
+- Applies to only one participant: use the role scope identifier, and use (role:)party_index to locate the specified participant; directly specified parameters take priority over common parameters
+
+```json
+"common": {
+}
+
+"role": {
+  "guest": {
+    "0": {
+    }
+  }
+}
+```
+
+The parameters under common apply to all participants, and the parameters under role-guest-0 apply to the guest participant at index 0.
+Note that per-participant system operation parameters have not been strictly tested in the current version; prefer common.
+
+#### Supported system parameters
+
+| Configuration | Default | Supported values | Description |
+| ----------------------------- | --------------------- | ------------------------------- | ------------------------------------------------------------------------------------------------- |
+| job_type | train | train, predict | job type |
+| task_cores | 4 | positive integer | total CPU cores requested by the job |
+| task_parallelism | 1 | positive integer | task parallelism |
+| computing_partitions | number of CPU cores allocated to the task | positive integer | number of partitions of the data table during computation |
+| eggroll_run | none | processors_per_node, etc. | eggroll computing engine parameters; usually not needed, derived automatically from task_cores; if set, task_cores does not take effect |
+| spark_run | none | num-executors, executor-cores, etc. | spark computing engine parameters; usually not needed, derived automatically from task_cores; if set, task_cores does not take effect |
+| rabbitmq_run | none | queue, exchange, etc. | parameters for rabbitmq queue/exchange creation; usually not needed, system defaults apply |
+| pulsar_run | none | producer, consumer, etc. | parameters for pulsar producer/consumer creation; usually not needed |
+| federated_status_collect_type | PUSH | PUSH, PULL | multi-party status collection mode: PUSH means each participant actively reports to the initiator, PULL means the initiator periodically pulls from each participant |
+| timeout | 259200 (3 days) | positive integer | task timeout, in seconds |
+| auto_retries | 3 | positive integer | maximum number of automatic retries per failed task |
+| model_id | \- | \- | model id, required for prediction jobs |
+| model_version | \- | \- | model version, required for prediction jobs |
+
+1. There are support dependencies between the computing engine and the storage engine
+2. Developers can implement their own adapted engines and configure them in the runtime conf
+
+#### Reference configuration
+
+1. Configuration when no particular computing engine is required and the system's default CPU allocation policy is used
+
+```json
+"job_parameters": {
+  "common": {
+    "job_type": "train",
+    "task_cores": 6,
+    "task_parallelism": 2,
+    "computing_partitions": 8,
+    "timeout": 36000
+  }
+}
+```
+
+2. Configuration when using eggroll as the computing engine and specifying CPU and other parameters directly
+
+```json
+"job_parameters": {
+  "common": {
+    "job_type": "train",
+    "eggroll_run": {
+      "eggroll.session.processors.per.node": 2
+    },
+    "task_parallelism": 2,
+    "computing_partitions": 8,
+    "timeout": 36000,
+  }
+}
+```
+
+3. Configuration when using spark as the computing engine and rabbitmq as the federation engine, specifying CPU and other parameters directly
+
+```json
+"job_parameters": {
+  "common": {
+    "job_type": "train",
+    "spark_run": {
+      "num-executors": 1,
+      "executor-cores": 2
+    },
+    "task_parallelism": 2,
+    "computing_partitions": 8,
+    "timeout": 36000,
+    "rabbitmq_run": {
+      "queue": {
+        "durable": true
+      },
+      "connection": {
+        "heartbeat": 10000
+      }
+    }
+  }
+}
+```
+
+4. Configuration when using spark as the computing engine and pulsar as the federation engine
+
+```json
+"job_parameters": {
+  "common": {
+    "spark_run": {
+      "num-executors": 1,
+      "executor-cores": 2
+    }
+  }
+}
+```
+For more advanced resource-related configuration, please refer to [Resource Management](#4-resource-management)
+
+### 4.4 Component operation parameters
+
+#### Parameter application scope policy setting
+
+- Apply to all participants, use common scope identifier
+- Apply to only one participant, use the role scope identifier, use (role:)party_index to locate the specified participant, directly specified parameters have higher priority than common parameters
+
+```json
+"commom": {
+}
+
+"role": {
+  "guest": {
+    "0": {
+    }
+  }
+}
+```
+
+The parameters under common apply to all participants; the parameters under role-guest-0 apply to the guest participant at index 0.
+Note that component operation parameters fully support both scope policies in the current version.
+
+#### Reference Configuration
+
+- The runtime parameters of the `intersection_0` and `hetero_lr_0` components are placed under the common scope and apply to all participants
+- The runtime parameters of the `reader_0` and `data_transform_0` components are configured per participant, because input parameters usually differ between participants
+- The above component names are defined in the DSL configuration file
+
+```json
+"component_parameters": {
+  "common": {
+    "intersection_0": {
+      "intersect_method": "raw",
+      "sync_intersect_ids": true,
+      "only_output_key": false
+    },
+    "hetero_lr_0": {
+      "penalty": "L2",
+      "optimizer": "rmsprop",
+      "alpha": 0.01,
+      "max_iter": 3,
+      "batch_size": 320,
+      "learning_rate": 0.15,
+      "init_param": {
+        "init_method": "random_uniform"
+      }
+    }
+  },
+  "role": {
+    "guest": {
+      "0": {
+        "reader_0": {
+          "table": {"name": "breast_hetero_guest", "namespace": "experiment"}
+        },
+        "data_transform_0":{
+          "with_label": true,
+          "label_name": "y",
+          "label_type": "int",
+          "output_format": "dense"
+        }
+      }
+    },
+    "host": {
+      "0": {
+        "reader_0": {
+          "table": {"name": "breast_hetero_host", "namespace": "experiment"}
+        },
+        "data_transform_0":{
+          "with_label": false,
+          "output_format": "dense"
+        }
+      }
+    }
+  }
+}
+```
+
+## 5. Multi-Host Configuration
+
+A multi-host job should list all host information under the role field
+
+**Example**:
+
+```json
+"role": {
+   "guest": [
+     10000
+   ],
+   "host": [
+     10000, 10001, 10002
+   ],
+   "arbiter": [
+     10000
+   ]
+}
+```
+
+The different configurations for each host should be listed separately under the corresponding host index
+
+**Example**:
+
+```json
+"component_parameters": {
+   "role": {
+      "host": {
+         "0": {
+            "reader_0": {
+               "table":
+                {
+                  "name": "hetero_breast_host_0",
+                  "namespace": "hetero_breast_host"
+                }
+            }
+         },
+         "1": {
+            "reader_0": {
+               "table":
+               {
+                  "name": "hetero_breast_host_1",
+                  "namespace": "hetero_breast_host"
+               }
+            }
+         },
+         "2": {
+            "reader_0": {
+               "table":
+               {
+                  "name": "hetero_breast_host_2",
+                  "namespace": "hetero_breast_host"
+               }
+            }
+         }
+      }
+   }
+}
+```
+
+## 6. Predictive Task Configuration
+
+### 6.1 Description
+
+DSL V2 does not automatically generate a prediction dsl for a training job. Users must first use `Flow Client` to deploy the required modules from the trained model.
+For detailed command description, please refer to [fate_flow_client](./fate_flow_client.md)
+
+```bash
+flow model deploy --model-id $model_id --model-version $model_version --cpn-list ...
+```
+
+Optionally, the user can add new modules to the prediction dsl, such as `Evaluation`
+
+### 6.2 Sample
+
+Training dsl:
+
+```json
+"components": {
+    "reader_0": {
+        "module": "Reader",
+        "output": {
+            "data": [
+                "data"
+            ]
+        }
+    },
+    "data_transform_0": {
+        "module": "DataTransform",
+        "input": {
+            "data": {
+                "data": [
+                    "reader_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data": [
+                "data"
+            ],
+            "model": [
+                "model"
+            ]
+        }
+    },
+    "intersection_0": {
+        "module": "Intersection",
+        "input": {
+            "data": {
+                "data": [
+                    "data_transform_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data":[
+                "data"
+            ]
+        }
+    },
+    "hetero_nn_0": {
+        "module": "HeteroNN",
+        "input": {
+            "data": {
+                "train_data": [
+                    "intersection_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data": [
+                "data"
+            ],
+            "model": [
+                "model"
+            ]
+        }
+    }
+}
+```
+
+Prediction dsl:
+
+```json
+"components": {
+    "reader_0": {
+        "module": "Reader",
+        "output": {
+            "data": [
+                "data"
+            ]
+        }
+    },
+    "data_transform_0": {
+        "module": "DataTransform",
+        "input": {
+            "data": {
+                "data": [
+                    "reader_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data": [
+                "data"
+            ],
+            "model": [
+                "model"
+            ]
+        }
+    },
+    "intersection_0": {
+        "module": "Intersection",
+        "input": {
+            "data": {
+                "data": [
+                    "data_transform_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data":[
+                "data"
+            ]
+        }
+    },
+    "hetero_nn_0": {
+        "module": "HeteroNN",
+        "input": {
+            "data": {
+                "train_data": [
+                    "intersection_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data": [
+                "data"
+            ],
+            "model": [
+                "model"
+            ]
+        }
+    },
+    "evaluation_0": {
+        "module": "Evaluation",
+        "input": {
+            "data": {
+                "data": [
+                    "hetero_nn_0.data"
+                ]
+            }
+         },
+         "output": {
+             "data": [
+                 "data"
+             ]
+          }
+    }
+}
+```
+
+## 7. Job reruns
+
+Version `1.5.0` introduced support for rerunning a job, but only failed jobs.
+Since version `1.7.0`, successful jobs can also be rerun, and you can specify the component to rerun from: the specified component and its downstream components are rerun, while other components are not.
+
+{{snippet('cli/job.md', '### rerun')}}
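+
+The HTTP API demo earlier in this document calls the same `/job/rerun` endpoint with only a `job_id`. Below is a minimal sketch of a component-level rerun; the `component_name` field is an assumption made for illustration, so check the API reference of your FATE Flow version for the exact field name:
+
+```python
+import requests
+
+base_url = "http://127.0.0.1:9380/v1"
+
+
+def rerun_from_component(job_id, component_name):
+    # Rerun a finished job starting from the given component; that component
+    # and everything downstream of it are re-executed.
+    data = {"job_id": job_id, "component_name": component_name}
+    res = requests.post(base_url + "/job/rerun", json=data)
+    print(res.text)
+
+
+rerun_from_component("202204251958539401540", "hetero_lr_0")
+```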
+
+## 8. Job parameter update
+
+In real production modeling it is often necessary to repeatedly tune component parameters and rerun, but usually only some components need adjusting. Since version `1.7.0`, a single component's parameters can be updated, combined with the `rerun` command for on-demand reruns.
+
+{{snippet('cli/job.md', '### parameter-update')}}
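+
+A typical debug loop updates one component's parameters and then reruns the job. Below is a minimal sketch using the HTTP endpoints from the API demo earlier in this document (the update payload mirrors the `parameter_update()` example there):
+
+```python
+import requests
+
+base_url = "http://127.0.0.1:9380/v1"
+job_id = "202204251958539401540"
+
+# Step 1: update a single component parameter (max_iter of hetero_lr_0).
+update = {"job_id": job_id,
+          "component_parameters": {"common": {"hetero_lr_0": {"max_iter": 10}}}}
+print(requests.post(base_url + "/job/parameter/update", json=update).text)
+
+# Step 2: rerun the job so the updated parameters take effect.
+print(requests.post(base_url + "/job/rerun", json={"job_id": job_id}).text)
+```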
+
+## 9. Job scheduling policy
+
+- Jobs are queued by submission time
+- Currently only a FIFO policy is supported: on each pass the scheduler only examines the first job in the queue; if its resource request succeeds the job starts and is dequeued, otherwise it waits for the next scheduling round
+
+## 10. dependency distribution
+
+**Brief description:** 
+
+- Supports distributing fate and python dependencies from the client node;
+- Worker nodes do not need to deploy fate;
+- In the current version, only FATE on Spark supports distribution mode;
+
+**Related parameters configuration**:
+
+conf/service_conf.yaml:
+
+```yaml
+dependent_distribution: true
+```
+
+fate_flow/settings.py
+
+```python
+FATE_FLOW_UPDATE_CHECK = False
+```
+
+**Description:**
+
+- dependent_distribution: dependency distribution switch, off by default; when off, fate must be deployed on every worker node, and PYSPARK_DRIVER_PYTHON and PYSPARK_PYTHON must be configured in Spark's spark-env.sh.
+
+- FATE_FLOW_UPDATE_CHECK: dependency check switch, off by default; when on, every job submission checks whether the fate code has changed, and if so the fate code dependency is re-uploaded.
+
+## 11. More commands
+
+Please refer to [Job CLI](./cli/job.md) and [Task CLI](./cli/task.md)

+ 702 - 0
FATE-Flow/doc/fate_flow_job_scheduling.zh.md

@@ -0,0 +1,702 @@
+# Multi-Party Job & Task Scheduling
+
+## 1. Description
+
+Mainly describes how to submit a federated learning job using `FATE Flow` and monitor its execution
+
+## 2. Job submission
+
+- Build a federated learning job and submit it to the scheduling system for execution
+- Two configuration files are required: job dsl and job conf
+- job dsl configures the components to run: the component list and their input/output relationships
+- job conf configures component execution parameters and system operation parameters
+
+{{snippet('cli/job.zh.md', '### submit')}}
+
+## 3. Job DSL configuration description
+
+The DSL configuration file uses JSON: the whole file is a single JSON object (dict).
+
+### 3.1 Component list
+
+**Description** The first level of this dict is `components`, which indicates the modules the job will use.
+**Example**
+
+```json
+{
+  "components" : {
+          ...
+      }
+}
+```
+
+Each individual module is defined under "components", e.g.:
+
+```json
+"data_transform_0": {
+      "module": "DataTransform",
+      "input": {
+          "data": {
+              "data": [
+                  "reader_0.train_data"
+              ]
+          }
+      },
+      "output": {
+          "data": ["train"],
+          "model": ["model"]
+      }
+  }
+```
+
+All data must be fetched from data storage via the **Reader** module; note that this module only has an `output` section
+
+```json
+"reader_0": {
+      "module": "Reader",
+      "output": {
+          "data": ["train"]
+      }
+}
+```
+
+### 3.2 Modules
+
+**Description** Specifies the component to use; see the FATE documentation for the full list of available module names.
+**Example**
+
+```json
+"hetero_feature_binning_1": {
+    "module": "HeteroFeatureBinning",
+     ...
+}
+```
+
+### 3.3 Inputs
+
+**Description** Upstream input, divided into two types: data and model.
+
+#### Data input
+
+**Description** Upstream data input, divided into four input types:
+
+    > 1. data: generally used by the data-transform, feature-engineering or
+    >    evaluation modules
+    > 2. train_data: generally used by the homo_lr, hetero_lr and secure_boost
+    >    modules; if the train_data field is present, the task is recognized as a fit task
+    > 3. validate_data: optional, and only meaningful when train_data is present;
+    >    if supplied, the data it points to is used as the validation set
+    > 4. test_data: used as prediction data; if provided, a model input must be provided as well
+
+#### Model input
+
+**Description** Upstream model input, divided into two input types:
+
+1. model: used for model input from a component of the same type. For example, hetero_binning_0 fits its model, then hetero_binning_1 uses the output of hetero_binning_0 for predict or transform. Code example:
+
+```json
+        "hetero_feature_binning_1": {
+            "module": "HeteroFeatureBinning",
+            "input": {
+                "data": {
+                    "data": [
+                        "data_transform_1.validate_data"
+                    ]
+                },
+                "model": [
+                    "hetero_feature_binning_0.fit_model"
+                ]
+            },
+            "output": {
+                "data": ["validate_data"],
+              "model": ["eval_model"]
+            }
+        }
+```
+2. isometric_model: used to specify model input inherited from an upstream component. For example, the upstream component of feature selection is feature binning, and feature selection uses the feature binning output as feature importance. Code example:
+```json
+        "hetero_feature_selection_0": {
+            "module": "HeteroFeatureSelection",
+            "input": {
+                "data": {
+                    "data": [
+                        "hetero_feature_binning_0.train"
+                    ]
+                },
+                "isometric_model": [
+                    "hetero_feature_binning_0.output_model"
+                ]
+            },
+            "output": {
+                "data": ["train"],
+                "model": ["output_model"]
+            }
+        }
+```
+
+### 3.4 Output
+
+**Description** Output, like input, is divided into data and model output
+
+#### Data output
+
+**Description** Data output, divided into four output types:
+
+1. data: regular module data output
+2. train_data: only used by Data Split
+3. validate_data: only used by Data Split
+4. test_data: only used by Data Split
+
+#### Model output
+
+**Description** Model output; only the `model` type is used
+
+### 3.5 Component providers
+
+Since FATE-Flow 1.7.0, a single FATE-Flow deployment can load multiple component providers, in multiple versions; each provider supplies a set of components, and the provider of each component can be selected when submitting a job
+Since FATE-Flow 1.9.0, the provider is configured in the job conf, as follows
+
+**Description** Specifies the provider; both global and per-component specification are supported; if not specified, the default provider `fate@$FATE_VERSION` is used
+
+**Format** `provider_name@$provider_version`
+
+**Advanced** New providers can be registered through the component registration CLI; currently supported providers are fate and fate_sql, see [FATE Flow Component Registry](./fate_flow_component_registry.zh.md)
+
+**Example**
+
+```json
+{
+  "dsl_version": "2",
+  "initiator": {},
+  "role": {},
+  "job_parameters": {},
+  "component_parameters": {},
+  "provider": {
+    "common": {
+      "hetero_feature_binning_0": "fate@1.8.0"
+    },
+    "role": {
+      "guest": {
+        "0": {
+          "data_transform_0": "fate@1.9.0"
+        }
+      },
+      "host": {
+        "0": {
+          "data_transform_0": "fate@1.9.0"
+        }
+      }
+    }
+  }
+}
+```
+
+## 4. Job Conf configuration description
+
+Job Conf is used to set the information of each participant, the job parameters, and the parameters of each component. It includes the following:
+
+### 4.1 DSL version
+
+**Description** Configures the DSL version; defaults to 1 if not set, 2 is recommended
+**Example**
+```json
+"dsl_version": "2"
+```
+
+### 4.2 Job participants
+
+#### Initiator
+
+**Description** The role and party_id of the job initiator.
+**Example**
+```json
+"initiator": {
+    "role": "guest",
+    "party_id": 9999
+}
+```
+
+#### All participants
+
+**Description** Information about each participant.
+**Note** In the role field, each element represents a role and the party_id(s) assuming that role. The party_id of each role is a list, since a job may involve multiple parties in the same role.
+**Example**
+
+```json
+"role": {
+    "guest": [9999],
+    "host": [10000],
+    "arbiter": [10000]
+}
+```
+
+### 4.3 System runtime parameters
+
+**Description**
+    Configures the main system parameters for job runtime
+
+#### Parameter application scope policy setting
+
+- Applies to all participants: use the common scope identifier
+- Applies to only one participant: use the role scope identifier, and use (role:)party_index to locate the specified participant; directly specified parameters take priority over common parameters
+
+```json
+"common": {
+}
+
+"role": {
+  "guest": {
+    "0": {
+    }
+  }
+}
+```
+
+The parameters under common apply to all participants, and the parameters under role-guest-0 apply to the guest participant at index 0.
+Note that per-participant system runtime parameters have not been strictly tested in the current version; prefer common.
+
+#### Supported system parameters
+
+| Configuration | Default | Supported values | Description |
+| ----------------------------- | --------------------- | ------------------------------- | ------------------------------------------------------------------------------------------------- |
+| job_type | train | train, predict | job type |
+| task_cores | 4 | positive integer | total CPU cores requested by the job |
+| task_parallelism | 1 | positive integer | task parallelism |
+| computing_partitions | number of CPU cores allocated to the task | positive integer | number of partitions of the data table during computation |
+| eggroll_run | none | processors_per_node, etc. | eggroll computing engine parameters; usually not needed, derived automatically from task_cores; if set, task_cores does not take effect |
+| spark_run | none | num-executors, executor-cores, etc. | spark computing engine parameters; usually not needed, derived automatically from task_cores; if set, task_cores does not take effect |
+| rabbitmq_run | none | queue, exchange, etc. | parameters for rabbitmq queue/exchange creation; usually not needed, system defaults apply |
+| pulsar_run | none | producer, consumer, etc. | parameters for pulsar producer/consumer creation; usually not needed |
+| federated_status_collect_type | PUSH | PUSH, PULL | multi-party status collection mode: PUSH means each participant actively reports to the initiator, PULL means the initiator periodically pulls from each participant |
+| timeout | 259200 (3 days) | positive integer | task timeout, in seconds |
+| auto_retries | 3 | positive integer | maximum number of automatic retries per failed task |
+| model_id | \- | \- | model id, required for prediction jobs |
+| model_version | \- | \- | model version, required for prediction jobs |
+
+1. There are support dependencies between the computing engine and the storage engine
+2. Developers can implement their own adapted engines and configure them in the runtime conf
+
+#### Reference configuration
+
+1. Configuration when no particular computing engine is required and the system's default CPU allocation policy is used
+
+```json
+"job_parameters": {
+  "common": {
+    "job_type": "train",
+    "task_cores": 6,
+    "task_parallelism": 2,
+    "computing_partitions": 8,
+    "timeout": 36000
+  }
+}
+```
+
+2. Configuration when using eggroll as the computing engine and specifying CPU and other parameters directly
+
+```json
+"job_parameters": {
+  "common": {
+    "job_type": "train",
+    "eggroll_run": {
+      "eggroll.session.processors.per.node": 2
+    },
+    "task_parallelism": 2,
+    "computing_partitions": 8,
+    "timeout": 36000,
+  }
+}
+```
+
+3. Configuration when using spark as the computing engine and rabbitmq as the federation engine, specifying CPU and other parameters directly
+
+```json
+"job_parameters": {
+  "common": {
+    "job_type": "train",
+    "spark_run": {
+      "num-executors": 1,
+      "executor-cores": 2
+    },
+    "task_parallelism": 2,
+    "computing_partitions": 8,
+    "timeout": 36000,
+    "rabbitmq_run": {
+      "queue": {
+        "durable": true
+      },
+      "connection": {
+        "heartbeat": 10000
+      }
+    }
+  }
+}
+```
+
+4. Configuration when using spark as the computing engine and pulsar as the federation engine
+
+```json
+"job_parameters": {
+  "common": {
+    "spark_run": {
+      "num-executors": 1,
+      "executor-cores": 2
+    }
+  }
+}
+```
+For more advanced resource-related configuration, please refer to [Resource Management](#4-resource-management)
+
+### 4.4 Component runtime parameters
+
+#### Parameter application scope policy setting
+
+- Applies to all participants: use the common scope identifier
+- Applies to only one participant: use the role scope identifier, and use (role:)party_index to locate the specified participant; directly specified parameters take priority over common parameters
+
+```json
+"commom": {
+}
+
+"role": {
+  "guest": {
+    "0": {
+    }
+  }
+}
+```
+
+The parameters under common apply to all participants; the parameters under role-guest-0 apply to the guest participant at index 0.
+Note that component runtime parameters fully support both scope policies in the current version.
+
+#### Reference configuration
+
+- The runtime parameters of the `intersection_0` and `hetero_lr_0` components are placed under the common scope and apply to all participants
+- The runtime parameters of the `reader_0` and `data_transform_0` components are configured per participant, because input parameters usually differ between participants
+- The above component names are defined in the DSL configuration file
+
+```json
+"component_parameters": {
+  "common": {
+    "intersection_0": {
+      "intersect_method": "raw",
+      "sync_intersect_ids": true,
+      "only_output_key": false
+    },
+    "hetero_lr_0": {
+      "penalty": "L2",
+      "optimizer": "rmsprop",
+      "alpha": 0.01,
+      "max_iter": 3,
+      "batch_size": 320,
+      "learning_rate": 0.15,
+      "init_param": {
+        "init_method": "random_uniform"
+      }
+    }
+  },
+  "role": {
+    "guest": {
+      "0": {
+        "reader_0": {
+          "table": {"name": "breast_hetero_guest", "namespace": "experiment"}
+        },
+        "data_transform_0":{
+          "with_label": true,
+          "label_name": "y",
+          "label_type": "int",
+          "output_format": "dense"
+        }
+      }
+    },
+    "host": {
+      "0": {
+        "reader_0": {
+          "table": {"name": "breast_hetero_host", "namespace": "experiment"}
+        },
+        "data_transform_0":{
+          "with_label": false,
+          "output_format": "dense"
+        }
+      }
+    }
+  }
+}
+```
+
+## 5. Multi-host configuration
+
+A multi-host job should list all host information under the role field
+
+**Example**:
+
+```json
+"role": {
+   "guest": [
+     10000
+   ],
+   "host": [
+     10000, 10001, 10002
+   ],
+   "arbiter": [
+     10000
+   ]
+}
+```
+
+The different configurations for each host should be listed separately under the corresponding host index
+
+**Example**:
+
+```json
+"component_parameters": {
+   "role": {
+      "host": {
+         "0": {
+            "reader_0": {
+               "table":
+                {
+                  "name": "hetero_breast_host_0",
+                  "namespace": "hetero_breast_host"
+                }
+            }
+         },
+         "1": {
+            "reader_0": {
+               "table":
+               {
+                  "name": "hetero_breast_host_1",
+                  "namespace": "hetero_breast_host"
+               }
+            }
+         },
+         "2": {
+            "reader_0": {
+               "table":
+               {
+                  "name": "hetero_breast_host_2",
+                  "namespace": "hetero_breast_host"
+               }
+            }
+         }
+      }
+   }
+}
+```
+
+## 6. Prediction task configuration
+
+### 6.1 Description
+
+DSL V2 does not automatically generate a prediction dsl for a training job. Users must first use `Flow Client` to deploy the required modules from the trained model.
+For a detailed command description, please refer to [fate_flow_client](./fate_flow_client.zh.md)
+
+```bash
+flow model deploy --model-id $model_id --model-version $model_version --cpn-list ...
+```
+
+Optionally, users can add new modules to the prediction dsl, such as `Evaluation`
+
+### 6.2 Example
+
+Training dsl:
+
+```json
+"components": {
+    "reader_0": {
+        "module": "Reader",
+        "output": {
+            "data": [
+                "data"
+            ]
+        }
+    },
+    "data_transform_0": {
+        "module": "DataTransform",
+        "input": {
+            "data": {
+                "data": [
+                    "reader_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data": [
+                "data"
+            ],
+            "model": [
+                "model"
+            ]
+        }
+    },
+    "intersection_0": {
+        "module": "Intersection",
+        "input": {
+            "data": {
+                "data": [
+                    "data_transform_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data":[
+                "data"
+            ]
+        }
+    },
+    "hetero_nn_0": {
+        "module": "HeteroNN",
+        "input": {
+            "data": {
+                "train_data": [
+                    "intersection_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data": [
+                "data"
+            ],
+            "model": [
+                "model"
+            ]
+        }
+    }
+}
+```
+
+Prediction dsl:
+
+```json
+"components": {
+    "reader_0": {
+        "module": "Reader",
+        "output": {
+            "data": [
+                "data"
+            ]
+        }
+    },
+    "data_transform_0": {
+        "module": "DataTransform",
+        "input": {
+            "data": {
+                "data": [
+                    "reader_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data": [
+                "data"
+            ],
+            "model": [
+                "model"
+            ]
+        }
+    },
+    "intersection_0": {
+        "module": "Intersection",
+        "input": {
+            "data": {
+                "data": [
+                    "data_transform_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data":[
+                "data"
+            ]
+        }
+    },
+    "hetero_nn_0": {
+        "module": "HeteroNN",
+        "input": {
+            "data": {
+                "train_data": [
+                    "intersection_0.data"
+                ]
+            }
+        },
+        "output": {
+            "data": [
+                "data"
+            ],
+            "model": [
+                "model"
+            ]
+        }
+    },
+    "evaluation_0": {
+        "module": "Evaluation",
+        "input": {
+            "data": {
+                "data": [
+                    "hetero_nn_0.data"
+                ]
+            }
+         },
+         "output": {
+             "data": [
+                 "data"
+             ]
+          }
+    }
+}
+```
+
+## 7. Job rerun
+
+Version `1.5.0` introduced support for rerunning a job, but only failed jobs
+Since version `1.7.0`, successful jobs can also be rerun, and you can specify the component to rerun from: the specified component and its downstream components are rerun, while other components are not
+
+{{snippet('cli/job.zh.md', '### rerun')}}
+
+## 8. Job parameter update
+
+In real production modeling it is often necessary to repeatedly tune component parameters and rerun, but usually only some components need adjusting. Since version `1.7.0`, a single component's parameters can be updated, combined with the `rerun` command for on-demand reruns
+
+{{snippet('cli/job.zh.md', '### parameter-update')}}
+
+## 9. Job scheduling policy
+
+- Jobs are queued by submission time
+- Currently only a FIFO policy is supported: on each pass the scheduler only examines the first job in the queue; if its resource request succeeds the job starts and is dequeued, otherwise it waits for the next scheduling round
+
+## 10. Dependency distribution
+
+**Brief description:**
+
+- Supports distributing fate and python dependencies from the client node;
+- Worker nodes do not need to deploy fate;
+- In the current version, only FATE on Spark supports distribution mode;
+
+**Related parameter configuration**:
+
+conf/service_conf.yaml:
+
+```yaml
+dependent_distribution: true
+```
+
+fate_flow/settings.py
+
+```python
+FATE_FLOW_UPDATE_CHECK = False
+```
+
+**Description:**
+
+- dependent_distribution: dependency distribution switch, off by default; when off, fate must be deployed on every worker node, and PYSPARK_DRIVER_PYTHON and PYSPARK_PYTHON must be configured in Spark's spark-env.sh;
+
+- FATE_FLOW_UPDATE_CHECK: dependency check switch, off by default; when on, every job submission checks whether the fate code has changed, and if so the fate code dependency is re-uploaded;
+
+## 11. More commands
+
+Please refer to [Job CLI](./cli/job.zh.md) and [Task CLI](./cli/task.zh.md)

+ 213 - 0
FATE-Flow/doc/fate_flow_model_migration.md

@@ -0,0 +1,213 @@
+# Inter-cluster Model Migration
+
+The model migration function makes it possible to copy the model file to a cluster with a different `party_id` and still have it available.
+
+1. The cluster of any participant that generated the model is redeployed, and the cluster's `party_id` changes after redeployment, e.g. the source participant `arbiter-10000#guest-9999#host-10000` becomes `arbiter-10000#guest-99#host-10000`
+2. Any one or more participants copy the model file from the source cluster to a target cluster, and the model needs to be usable in the target cluster
+
+Basics:
+1. In both scenarios above, the participant `party_id` of the model changes, such as `arbiter-10000#guest-9999#host-10000` -> `arbiter-10000#guest-99#host-10000`, or `arbiter-10000#guest-9999#host-10000` -> `arbiter-100#guest-99#host-100`
+2. Because the participant `party_id` changes, the `model_id` and the parts of the model file that involve `party_id` must be changed accordingly
+3. The overall process has three steps: copy and transfer the original model file, run the model migration task on the original model file, and import the new model generated by the migration task
+4. Running the migration task does not modify the original model file: it works on a temporary copy, modifying the `model_id` and the `party_id`-related contents according to the configuration so that the model fits the new participants' `party_id`s
+5. All of the above steps must be performed on every new participant, even if the `party_id` of one of the target participants has not changed
+6. The cluster version of every new participant must be `1.5.1` or above
+
+The migration process is as follows.
+
+## Transfer the model file
+
+Package the model files (including the directory named after the model id) on the machine running the source participant's fate flow service, and transfer them to the machine running the target participant's fate flow. The model files must be placed in the following fixed directory:
+
+```bash
+$FATE_PROJECT_BASE/model_local_cache
+```
+
+Instructions:
+1. Transferring the folder is enough; if you transfer it as a compressed package, please extract the model files into the model directory after the transfer.
+2. Please transfer the model files in one-to-one correspondence between source and target participants, as sketched below.
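+
+A minimal sketch of the packing and transfer, assuming a model cache directory named after the model id (directory name, user and target host below are illustrative; adjust to your deployment):
+
+```bash
+# on the source party's fate flow machine: pack one model directory (name is illustrative)
+cd $FATE_PROJECT_BASE/model_local_cache
+tar -czf model_9999.tar.gz "guest#9999#arbiter-10000#guest-9999#host-10000#model"
+
+# copy it to the target party's fate flow machine, then unpack in the same directory
+scp model_9999.tar.gz app@target-flow-host:/data/projects/fate/model_local_cache/
+ssh app@target-flow-host "cd /data/projects/fate/model_local_cache && tar -xzf model_9999.tar.gz"
+```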
+
+## Preparation work before migration
+
+### Instructions
+
+1. Refer to [fate flow client](./fate_flow_client.md) to install fate-client, which supports model migration; only FATE 1.5.1 and above is supported.
+
+## Execute the migration task
+
+### Description
+1. The migration task modifies the source model files according to the migration task configuration file, replacing the `model_id`, `model_version` and the contents of the model that involve `role` and `party_id`
+
+2. The cluster submitting the task must complete the above migration preparation
+
+### 1. Modify the configuration file
+
+Modify the configuration file of the migration task on the new participant (machine) according to the actual situation; below is the example migration task configuration file [migrate_model.json](https://github.com/FederatedAI/FATE-Flow/blob/main/examples/model/migrate_model.json)
+
+```json
+{
+  "job_parameters": {
+    "federated_mode": "SINGLE"
+  },
+  "role": {
+    "guest": [9999],
+    "arbiter": [10000],
+    "host": [10000]
+  },
+  "migrate_initiator": {
+    "role": "guest",
+    "party_id": 99
+  },
+  "migrate_role": {
+    "guest": [99],
+    "arbiter": [100],
+    "host": [100]
+  },
+  "execute_party": {
+    "guest": [9999],
+    "arbiter": [10000],
+    "host": [10000]
+  },
+  "model_id": "arbiter-10000#guest-9999#host-10000#model",
+  "model_version": "202006171904247702041",
+  "unify_model_version": "202901_0001"
+}
+```
+
+Please save the above configuration to a location on the server and modify it there.
+
+The following are explanatory notes for the parameters in this configuration.
+
+1. **`job_parameters`**: The `federated_mode` in this parameter has two options: `MULTIPLE` and `SINGLE`. If set to `SINGLE`, the migration job is executed only by the party that submits it, so the job has to be submitted separately on every new participant; if set to `MULTIPLE`, the job is distributed to the participants specified in `execute_party` for execution, and only needs to be submitted on the new participant acting as the `migrate_initiator`.
+2. **`role`**: This parameter fills in the `role` of the participant that generated the original model and its corresponding `party_id` information.
+3. **`migrate_initiator`**: This parameter is used to specify the task initiator information of the migrated model, and the initiator's `role` and `party_id` should be specified respectively.
+4. **`migrate_role`**: This parameter is used to specify the `role` and `party_id` information of the migrated model.
+5. **`execute_party`**: This parameter is used to specify the `role` and `party_id` that need to execute the migration; these are the source cluster's `party_id`s.
+6. **`model_id`**: This parameter is used to specify the `model_id` of the original model to be migrated.
+7. **`model_version`**: This parameter is used to specify the `model_version` of the original model that needs to be migrated.
+8. **`unify_model_version`**: This parameter is not required, it is used to specify the `model_version` of the new model. If this parameter is not provided, the new model will take the `job_id` of the migrated job as its new `model_version`.
+
+The above example configuration reads as follows:
+1. The source model's participants are `guest: 9999, host: 10000, arbiter: 10000`; the model is migrated so that its participants become `guest: 99, host: 100, arbiter: 100`, with `guest: 99` as the new initiator
+2. `federated_mode: SINGLE` means each migration task is executed only in the cluster where it is submitted, so the task needs to be submitted in 99 and 100 separately
+3. For example, if the task is executed at 99, then `execute_party` is configured as `"guest": [9999]`
+4. For example, if executed at 100, then `execute_party` is configured as `"arbiter": [10000], "host": [10000]`
+
+
+## 2. Submit migration tasks (separate operations in all target clusters)
+
+
+Migration tasks need to be submitted using fate-client. A sample command is as follows:
+
+```bash
+flow model migrate -c $FATE_FLOW_BASE/examples/model/migrate_model.json
+```
+
+## 3. Task execution results
+
+The following is the content of the configuration file for the actual migration task.
+
+```json
+{
+  "job_parameters": {
+    "federated_mode": "SINGLE"
+  },
+  "role": {
+    "guest": [9999],
+    "host": [10000]
+  },
+  "migrate_initiator": {
+    "role": "guest",
+    "party_id": 99
+  },
+  "migrate_role": {
+    "guest": [99],
+    "host": [100]
+  },
+  "execute_party": {
+    "guest": [9999],
+    "host": [10000]
+  },
+  "model_id": "guest-9999#host-10000#model",
+  "model_version": "202010291539339602784",
+  "unify_model_version": "fate_migration"
+}
+```
+
+This task migrates the model whose `model_id` is `guest-9999#host-10000#model` and `model_version` is `202010291539339602784`, generated by the clusters with `party_id` 9999 (guest) and 10000 (host), into a new model adapted to the clusters with `party_id` 99 (guest) and 100 (host)
+
+The following is the result of a successful migration.
+
+```json
+{
+    "data": {
+        "detail": {
+            "guest": {
+                "9999": {
+                    "retcode": 0,
+                    "retmsg": "Migrating model successfully. The configuration of model has been modified automatically. New model id is: guest-99#host-100#model, model version is: fate_migration. Model files can be found at '/data/projects/fate/temp/fate_flow/guest#99#guest-99#host-100#model_fate_migration.zip'."
+                }
+            },
+            "host": {
+                "10000": {
+                    "retcode": 0,
+                    "retmsg": "Migrating model successfully. The configuration of model has been modified automatically. New model id is: guest-99#host-100#model, model version is: fate_migration. Model files can be found at '/data/projects/fate/temp/fate_flow/host#100#guest-99#host-100#model_fate_migration.zip'."
+                }
+            }
+        },
+        "guest": {
+            "9999": 0
+        },
+        "host": {
+            "10000": 0
+        }
+    },
+    "jobId": "202010292152299793981",
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+After the task executes successfully, a copy of the migrated model zip file is generated on each executing party's machine, and the path of this file can be obtained from the returned result. As above, the post-migration model file path for 9999 (guest) is `/data/projects/fate/temp/fate_flow/guest#99#guest-99#host-100#model_fate_migration.zip`, and for 10000 (host) it is `/data/projects/fate/temp/fate_flow/host#100#guest-99#host-100#model_fate_migration.zip`. The new `model_id` and `model_version` can likewise be obtained from the response.
+
+## 4. Transferring files and importing (separate operation in all target clusters)
+
+After the migration task succeeds, please manually transfer the newly generated model zip file to the fate flow machine of the target cluster (e.g. with `scp`, as sketched below). For example, the new model zip file generated by 9999 (guest) in section 3 needs to be transferred to the 99 (guest) machine. The zip file can be placed anywhere on the corresponding machine. Next, you need to configure the model import task; see the example configuration file [import_model.json](https://github.com/FederatedAI/FATE/blob/master/python/fate_flow/examples/import_model.json) (this configuration file is included in the zip file; please modify it according to the actual situation, **do not use it directly**).
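+
+The transfer itself can be done with `scp`; a sketch with an illustrative user and hostname (adjust to your deployment):
+
+```bash
+# on the 9999 (guest) machine: send the migrated model zip to the 99 (guest) fate flow machine
+scp "/data/projects/fate/temp/fate_flow/guest#99#guest-99#host-100#model_fate_migration.zip" \
+    app@guest99-flow-host:/data/projects/fate/
+```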
+
+The following is an example of the configuration file for importing the migrated model in guest (99).
+
+```json
+{
+  "role": "guest",
+  "party_id": 99,
+  "model_id": "guest-99#host-100#model",
+  "model_version": "202010292152299793981",
+  "file": "/data/projects/fate/python/temp/guest#99#guest-99#host-100#202010292152299793981.zip"
+}
+```
+
+Please fill in the role `role`, the current party `party_id`, the new `model_id` and `model_version` of the migrated model, and the path to the zip file of the migrated model according to the actual situation.
+
+The following is a sample command to submit an imported model using fate-client.
+
+```bash
+flow model import -c $FATE_FLOW_BASE/examples/model/import_model.json
+```
+
+The import is considered successful when the following is returned:
+
+```json
+{
+  "data": {
+    "job_id": "202208261102212849780",
+    "model_id": "arbiter-10000#guest-9999#host-10000#model",
+    "model_version": "foobar",
+    "party_id": "9999",
+    "role": "guest"
+  },
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+The migration task is now complete and the user can submit the task with the new `model_id` and `model_version` to perform prediction tasks with the migrated model.

+ 213 - 0
FATE-Flow/doc/fate_flow_model_migration.zh.md

@@ -0,0 +1,213 @@
+# 集群间模型迁移
+
+模型迁移功能使得模型文件拷贝到不同`party_id`的集群后依然可用,以下两种场景需要做模型迁移:
+
+1. 模型生成参与方任何一方的集群, 重新部署且部署后集群的`party_id`变更, 例如源参与方为`arbiter-10000#guest-9999#host-10000`, 改为`arbiter-10000#guest-99#host-10000`
+2. 其中任意一个或多个参与方将模型文件从源集群复制到目标集群,需要在目标集群使用
+
+基本原理:
+1. 上述两种场景下,模型的参与方`party_id`会发生改变,如`arbiter-10000#guest-9999#host-10000` -> `arbiter-10000#guest-99#host-10000`,或者`arbiter-10000#guest-9999#host-10000` -> `arbiter-100#guest-99#host-100`
+2. 模型的参与方`party_id`发生改变,因此`model_id`以及模型文件里面涉及`party_id`需要改变
+3. 整体流程下来,有三个步骤:复制转移原有模型文件、对原有模型文件执行模型迁移任务、导入模型迁移任务生成的新模型
+4. 其中*原有模型文件执行模型迁移任务*其实就是在执行处临时复制一份原模型文件,然后按照配置,修改`model_id`及模型文件里面涉及`party_id`的内容,以适配新的参与方`party_id`
+5. 上述步骤都需要在所有新的参与方执行,即使其中某个目标参与方的`party_id`没有改变,也需要执行
+6. 新的参与方集群版本需大于等于`1.5.1`
+
+迁移流程如下:
+
+## 转移模型文件
+
+请将源参与方fate flow服务所在机器生成的模型文件(包括以model id为命名的目录)进行打包并转移到目标参与方fate flow所在机器中,请将模型文件转移至固定目录中:
+
+```bash
+$FATE_PROJECT_BASE/model_local_cache
+```
+
+说明:
+1. 文件夹转移即可,如果是通过压缩打包进行的转移,请在转移后将模型文件解压到模型所在目录中。
+2. 模型文件请按源、目标参与方一一对应转移,打包与转移示意见下文
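+
+如下为打包与转移的一个示意(模型目录名、用户名与目标主机名均为假设值,请按实际部署修改):
+
+```bash
+# 在源参与方 fate flow 机器上打包某个模型目录(名称为示例)
+cd $FATE_PROJECT_BASE/model_local_cache
+tar -czf model_9999.tar.gz "guest#9999#arbiter-10000#guest-9999#host-10000#model"
+
+# 拷贝到目标参与方 fate flow 机器,并在相同目录下解压
+scp model_9999.tar.gz app@target-flow-host:/data/projects/fate/model_local_cache/
+ssh app@target-flow-host "cd /data/projects/fate/model_local_cache && tar -xzf model_9999.tar.gz"
+```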
+
+## 迁移前的准备工作
+
+### 说明
+
+1. 参考[fate flow client](./fate_flow_client.zh.md)安装支持模型迁移的客户端fate-client,只有fate 1.5.1及其以上版本支持
+
+## 执行迁移任务
+
+### 说明
+1. 执行迁移任务是将源模型文件根据迁移任务配置文件修改`model_id`、`model_version`以及模型内涉及`role`和`party_id`的内容进行替换
+
+2. 提交任务的集群必须完成上述迁移准备
+
+### 1. 修改配置文件
+
+在新参与方(机器)中根据实际情况对迁移任务的配置文件进行修改,如下为迁移任务示例配置文件 [migrate_model.json](https://github.com/FederatedAI/FATE-Flow/blob/main/examples/model/migrate_model.json)
+
+```json
+{
+  "job_parameters": {
+    "federated_mode": "SINGLE"
+  },
+  "role": {
+    "guest": [9999],
+    "arbiter": [10000],
+    "host": [10000]
+  },
+  "migrate_initiator": {
+    "role": "guest",
+    "party_id": 99
+  },
+  "migrate_role": {
+    "guest": [99],
+    "arbiter": [100],
+    "host": [100]
+  },
+  "execute_party": {
+    "guest": [9999],
+    "arbiter": [10000],
+    "host": [10000]
+  },
+  "model_id": "arbiter-10000#guest-9999#host-10000#model",
+  "model_version": "202006171904247702041",
+  "unify_model_version": "20200901_0001"
+}
+```
+
+请将上述配置内容保存到服务器中的某一位置进行修改。
+
+以下为对该配置中的参数的解释说明:
+
+1. **`job_parameters`**:该参数中的`federated_mode`有两个可选参数,分别为`MULTIPLE` 及`SINGLE`。如果设置为`SINGLE`,则该迁移任务只会在提交迁移任务的本方执行,那么需要分别在所有新参与方提交任务;如果设置为`MULTIPLE`,则将任务分发到`execute_party`中指定的参与方执行任务,只需要在作为`migrate_initiator`的新参与方提交。
+2. **`role`**:该参数填写生成原始模型的参与方`role`及其对应的`party_id`信息。
+3. **`migrate_initiator`**:该参数用于指定迁移后的模型的任务发起方信息,分别需指定发起方的`role`与`party_id`。
+4. **`migrate_role`**:该参数用于指定迁移后的模型的参与方`role`及`party_id`信息。
+5. **`execute_party`**:该参数用于指定需要执行迁移的`role`及`party_id`信息, 该`party_id`为源集群`party_id`。
+6. **`model_id`**:该参数用于指定需要被迁移的原始模型的`model_id`。
+7. **`model_version`**:该参数用于指定需要被迁移的原始模型的`model_version`。
+8. **`unify_model_version`**:此参数为非必填参数,该参数用于指定新模型的`model_version`。若未提供该参数,新模型将以迁移任务的`job_id`作为其新`model_version`。
+
+上述配置文件举例说明:
+1. 源模型的参与方为`guest: 9999, host: 10000, arbiter: 10000,` 将模型迁移成参与方为`guest: 99, host: 100, arbiter: 100`, 且新发起方为`guest: 99`
+2. `federated_mode: SINGLE` 表示每个迁移任务只在提交任务的集群执行任务,那么需要在99、100分别提交任务
+3. 例如在99执行,则`execute_party`配置为`"guest": [9999]`
+4. 例如在100执行,则`execute_party`配置为`"arbiter": [10000], "host": [10000]`
+
+
+## 2. 提交迁移任务(在所有目标集群分别操作)
+
+
+迁移任务需使用fate-client进行提交,示例执行命令如下:
+
+```bash
+flow model migrate -c $FATE_FLOW_BASE/examples/model/migrate_model.json
+```
+
+## 3. 任务执行结果
+
+如下为实际迁移任务的配置文件内容:
+
+```json
+{
+  "job_parameters": {
+    "federated_mode": "SINGLE"
+  },
+  "role": {
+    "guest": [9999],
+    "host": [10000]
+  },
+  "migrate_initiator": {
+    "role": "guest",
+    "party_id": 99
+  },
+  "migrate_role": {
+    "guest": [99],
+    "host": [100]
+  },
+  "execute_party": {
+    "guest": [9999],
+    "host": [10000]
+  },
+  "model_id": "guest-9999#host-10000#model",
+  "model_version": "202010291539339602784",
+  "unify_model_version": "fate_migration"
+}
+```
+
+该任务实现的是,将`party_id`为9999 (guest),10000 (host)的集群生成的`model_id`为`guest-9999#host-10000#model`,`model_version`为`202010291539339602784`的模型修改迁移生成适配`party_id`为99 (guest),100 (host)集群的新模型
+
+如下为迁移成功的后得到的返回结果:
+
+```json
+{
+    "data": {
+        "detail": {
+            "guest": {
+                "9999": {
+                    "retcode": 0,
+                    "retmsg": "Migrating model successfully. The configuration of model has been modified automatically. New model id is: guest-99#host-100#model, model version is: fate_migration. Model files can be found at '/data/projects/fate/temp/fate_flow/guest#99#guest-99#host-100#model_fate_migration.zip'."
+                }
+            },
+            "host": {
+                "10000": {
+                    "retcode": 0,
+                    "retmsg": "Migrating model successfully. The configuration of model has been modified automatically. New model id is: guest-99#host-100#model, model version is: fate_migration. Model files can be found at '/data/projects/fate/temp/fate_flow/host#100#guest-99#host-100#model_fate_migration.zip'."
+                }
+            }
+        },
+        "guest": {
+            "9999": 0
+        },
+        "host": {
+            "10000": 0
+        }
+    },
+    "jobId": "202010292152299793981",
+    "retcode": 0,
+    "retmsg": "success"
+}
+```
+
+任务成功执行后,执行方的机器中都会生成一份迁移后模型压缩文件,该文件路径可以在返回结果中得到。如上,9999 (guest)的迁移后模型文件路径为:`/data/projects/fate/temp/fate_flow/guest#99#guest-99#host-100#model_fate_migration.zip`,10000 (host)的迁移后模型文件路径为:`/data/projects/fate/temp/fate_flow/host#100#guest-99#host-100#model_fate_migration.zip`。新的`model_id`与`model_version`同样可以从返回中获得。
+
+## 4. 转移文件并导入(在所有目标集群分别操作)
+
+迁移任务成功之后,请手动将新生成的模型压缩文件转移到目标集群的fate flow机器上。例如:第三点中9999 (guest)生成的新模型压缩文件需要被转移到99 (guest) 机器上。压缩文件可以放在对应机器上的任意位置,接下来需要配置模型的导入任务,配置文件请见[import_model.json](https://github.com/FederatedAI/FATE/blob/master/python/fate_flow/examples/import_model.json)(压缩文件内包含此配置文件,请根据实际情况修改,**切勿直接使用**)。
+
+下面举例介绍在guest (99)中导入迁移后模型的配置文件:
+
+```json
+{
+  "role": "guest",
+  "party_id": 99,
+  "model_id": "guest-99#host-100#model",
+  "model_version": "202010292152299793981",
+  "file": "/data/projects/fate/python/temp/guest#99#guest-99#host-100#202010292152299793981.zip"
+}
+```
+
+请根据实际情况对应填写角色`role`,当前本方`party_id`,迁移模型的新`model_id`及`model_version`,以及迁移模型的压缩文件所在路径。
+
+如下为使用fate-client提交导入模型的示例命令:
+
+```bash
+flow model import -c $FATE_FLOW_BASE/examples/model/import_model.json
+```
+
+得到如下返回视为导入成功:
+
+```json
+{
+  "data": {
+    "job_id": "202208261102212849780",
+    "model_id": "arbiter-10000#guest-9999#host-10000#model",
+    "model_version": "foobar",
+    "party_id": "9999",
+    "role": "guest"
+  },
+  "retcode": 0,
+  "retmsg": "success"
+}
+```
+
+迁移任务至此完成,用户可使用新的`model_id`及`model_version`进行任务提交,以利用迁移后的模型执行预测任务。

+ 78 - 0
FATE-Flow/doc/fate_flow_model_registry.md

@@ -0,0 +1,78 @@
+# Federated Model Registry
+
+## 1. Description
+
+Models trained by FATE are automatically saved locally and recorded in the FATE-Flow database. The model saved after each component finishes running is called a Pipeline model, while the models saved at regular intervals during a component's run are called Checkpoint models. Checkpoint models can also be used as "breakpoints" to resume when a component is retried after an unexpected interruption.
+
+Checkpoint model support has been added since 1.7.0 and is not saved by default. To enable it, add the callback `ModelCheckpoint` to the DSL.
+
+### Local disk storage
+
+- Pipeline models are stored in `model_local_cache/<party_model_id>/<model_version>/variables/data/<component_name>/<model_alias>`.
+
+- Checkpoint models are stored in `model_local_cache/<party_model_id>/<model_version>/checkpoint/<component_name>/<step_index>#<step_name>`.
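+
+For example, with a `party_model_id` of `guest#9999#arbiter-10000#guest-9999#host-10000#model`, a `model_version` of `202112181502241234200`, and a component `hetero_linr_0` whose `model_alias` is `model` (all values illustrative), the two templates resolve to:
+
+```bash
+# Pipeline model of component hetero_linr_0 (model_alias "model")
+ls "model_local_cache/guest#9999#arbiter-10000#guest-9999#host-10000#model/202112181502241234200/variables/data/hetero_linr_0/model"
+# Checkpoint model saved at iteration 2 of the same component
+ls "model_local_cache/guest#9999#arbiter-10000#guest-9999#host-10000#model/202112181502241234200/checkpoint/hetero_linr_0/2#step_name"
+```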
+
+### Remote storage engine
+
+Local disks are not reliable, so there is a risk of losing models. FATE-Flow supports exporting models to a specified storage engine, importing them from a specified storage engine, and automatically pushing models to the storage engine when publishing them.
+
+The supported storage engines are Tencent Cloud Object Storage, MySQL and Redis; please refer to [Storage engine configuration](#4-storage-engine-configuration)
+
+## 2. Model
+
+{{snippet('cli/model.md', '## Model')}}
+
+## 3. Checkpoint
+
+{{snippet('cli/checkpoint.md', '## Checkpoint')}}
+
+## 4. Storage engine configuration
+
+### `enable_model_store`
+
+This option affects API `/model/load`.
+
+Models are automatically uploaded to the model store if they exist locally but not in the store, and automatically downloaded from the model store if they exist in the store but not locally.
+
+This option does not affect API `/model/store` or `/model/restore`.
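+
+For reference, `/model/load` is normally triggered through fate-client when publishing a model to FATE-Serving; a minimal sketch, assuming fate-client is installed and the example configuration shipped with FATE-Flow is used (adjust the path to your installation):
+
+```bash
+# triggers /model/load; with enable_model_store on, model files missing locally
+# are first downloaded from the configured model store
+flow model load -c $FATE_FLOW_BASE/examples/model/publish_load_model.json
+```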
+
+### `model_store_address`
+
+This config defines which storage engine to use.
+
+#### Tencent Cloud Object Storage
+
+```yaml
+storage: tencent_cos
+# get these configs from Tencent Cloud console
+Region:
+SecretId:
+SecretKey:
+Bucket:
+```
+
+#### MySQL
+
+```yaml
+storage: mysql
+database: fate_model
+user: fate
+password: fate
+host: 127.0.0.1
+port: 3306
+# other optional configs send to the engine
+max_connections: 10
+stale_timeout: 10
+```
+
+#### Redis
+
+```yaml
+storage: redis
+host: 127.0.0.1
+port: 6379
+db: 0
+password:
+# the expiry time of keys, in seconds. defaults None (no expiry time)
+ex:
+```

+ 203 - 0
FATE-Flow/doc/fate_flow_model_registry.zh.md

@@ -0,0 +1,203 @@
+# 联合模型注册中心
+
+## 1. 说明
+
+由 FATE 训练的模型会自动保存到本地并记录在 FATE-Flow 的数据库中,每个组件运行完成后保存的模型称为 Pipeline 模型,在组件运行时定时保存的模型称为 Checkpoint 模型。Checkpoint 模型也可以用于组件运行意外中断后,重试时的“断点续传”。
+
+Checkpoint 模型的支持自 1.7.0 加入,默认是不保存的,如需启用,则要向 DSL 中加入 callback `ModelCheckpoint`。
+
+### 本地磁盘存储
+
+- Pipeline 模型存储于 `model_local_cache/<party_model_id>/<model_version>/variables/data/<component_name>/<model_alias>`。
+
+- Checkpoint 模型存储于 `model_local_cache/<party_model_id>/<model_version>/checkpoint/<component_name>/<step_index>#<step_name>`。
+
+#### 目录结构
+
+```
+tree model_local_cache/guest#9999#arbiter-10000#guest-9999#host-10000#model/202112181502241234200
+
+model_local_cache/guest#9999#arbiter-10000#guest-9999#host-10000#model/202112181502241234200
+├── checkpoint
+│   ├── data_transform_0
+│   ├── evaluation_0
+│   ├── hetero_linr_0
+│   │   ├── 0#step_name
+│   │   │   ├── HeteroLinearRegressionMeta.json
+│   │   │   ├── HeteroLinearRegressionMeta.pb
+│   │   │   ├── HeteroLinearRegressionParam.json
+│   │   │   ├── HeteroLinearRegressionParam.pb
+│   │   │   └── database.yaml
+│   │   ├── 1#step_name
+│   │   │   ├── HeteroLinearRegressionMeta.json
+│   │   │   ├── HeteroLinearRegressionMeta.pb
+│   │   │   ├── HeteroLinearRegressionParam.json
+│   │   │   ├── HeteroLinearRegressionParam.pb
+│   │   │   └── database.yaml
+│   │   ├── 2#step_name
+│   │   │   ├── HeteroLinearRegressionMeta.json
+│   │   │   ├── HeteroLinearRegressionMeta.pb
+│   │   │   ├── HeteroLinearRegressionParam.json
+│   │   │   ├── HeteroLinearRegressionParam.pb
+│   │   │   └── database.yaml
+│   │   ├── 3#step_name
+│   │   │   ├── HeteroLinearRegressionMeta.json
+│   │   │   ├── HeteroLinearRegressionMeta.pb
+│   │   │   ├── HeteroLinearRegressionParam.json
+│   │   │   ├── HeteroLinearRegressionParam.pb
+│   │   │   └── database.yaml
+│   │   └── 4#step_name
+│   │       ├── HeteroLinearRegressionMeta.json
+│   │       ├── HeteroLinearRegressionMeta.pb
+│   │       ├── HeteroLinearRegressionParam.json
+│   │       ├── HeteroLinearRegressionParam.pb
+│   │       └── database.yaml
+│   ├── hetero_linr_1
+│   ├── intersection_0
+│   └── reader_0
+├── define
+│   ├── define_meta.yaml
+│   ├── proto
+│   │   └── pipeline.proto
+│   └── proto_generated_python
+│       ├── __pycache__
+│       │   └── pipeline_pb2.cpython-36.pyc
+│       └── pipeline_pb2.py
+├── run_parameters
+│   ├── data_transform_0
+│   │   └── run_parameters.json
+│   ├── hetero_linr_0
+│   │   └── run_parameters.json
+│   ├── hetero_linr_1
+│   │   └── run_parameters.json
+│   └── pipeline
+│       └── run_parameters.json
+└── variables
+    ├── data
+    │   ├── data_transform_0
+    │   │   └── model
+    │   │       ├── DataTransformMeta
+    │   │       ├── DataTransformMeta.json
+    │   │       ├── DataTransformParam
+    │   │       └── DataTransformParam.json
+    │   ├── hetero_linr_0
+    │   │   └── model
+    │   │       ├── HeteroLinearRegressionMeta
+    │   │       ├── HeteroLinearRegressionMeta.json
+    │   │       ├── HeteroLinearRegressionParam
+    │   │       └── HeteroLinearRegressionParam.json
+    │   ├── hetero_linr_1
+    │   │   └── model
+    │   │       ├── HeteroLinearRegressionMeta
+    │   │       ├── HeteroLinearRegressionMeta.json
+    │   │       ├── HeteroLinearRegressionParam
+    │   │       └── HeteroLinearRegressionParam.json
+    │   └── pipeline
+    │       └── pipeline
+    │           ├── Pipeline
+    │           └── Pipeline.json
+    └── index
+
+32 directories, 47 files
+```
+
+**`checkpoint`**
+
+此目录存储组件运行过程中,每轮迭代产生的模型,不是所有组件都支持 checkpoint。
+
+以 `checkpoint/hetero_linr_0/2#step_name` 为例:
+
+`hetero_linr_0` 是 `component_name`;`2` 是 `step_index`,即迭代次数;`step_name` 目前只做占位符,没有使用。
+
+`HeteroLinearRegressionMeta.json`, `HeteroLinearRegressionMeta.pb`, `HeteroLinearRegressionParam.json`, `HeteroLinearRegressionParam.pb` 都是训练产生的数据,可以理解为模型文件。`database.yaml` 主要记录上述文件的 hash 以作校验,还存储有 `step_index`, `step_name`, `create_time`。
+
+**`define`**
+
+该目录储存作业的基本信息,在作业初始化时创建。`pipeline` 不是一个组件,而是代表整个作业。
+
+`define/proto/pipeline.proto` 和 `define/proto/pipeline_pb2.py` 目前没有使用。
+
+`define/define_meta.yaml` 记录组件列表,包括 `component_name`, `component_module_name`, `model_alias`。
+
+**`run_parameters`**
+
+此目录存储组件的配置信息,也称为 DSL。
+
+`run_parameters/pipeline/run_parameters.json` 为一个空的 object `{}`。
+
+**`variables`**
+
+此目录存储组件运行结束后产生的模型,与最后一轮迭代产生的模型一致。
+
+以 `variables/data/hetero_linr_0/model` 为例:
+
+`hetero_linr_0` 是 `component_name`;`model` 是 `model_alias`。
+
+`HeteroLinearRegressionMeta`, `HeteroLinearRegressionMeta.json`, `HeteroLinearRegressionParam`, `HeteroLinearRegressionParam.json` 与 `checkpoint` 目录下的文件格式完全一致,除了 `.pb` 文件去掉了扩展名。
+
+`variables/data/pipeline/`存储作业的详细信息。
+
+`variables/index/` 目前没有使用。
+
+### 远端存储引擎
+
+本地磁盘并不可靠,因此模型有丢失的风险,FATE-Flow 支持导出模型到指定存储引擎、从指定存储引擎导入以及自动发布模型时推送模型到引擎存储。
+
+存储引擎支持腾讯云对象存储、MySQL 和 Redis, 具体请参考[存储引擎配置](#4-存储引擎配置)
+
+## 2. Model
+
+{{snippet('cli/model.zh.md', '## Model')}}
+
+## 3. Checkpoint
+
+{{snippet('cli/checkpoint.zh.md', '## Checkpoint')}}
+
+## 4. 存储引擎配置
+
+### `enable_model_store`
+
+开启后,在调用 `/model/load` 时:如果模型文件在本地磁盘存在、但不在存储引擎中,则自动把模型文件上传至存储引擎;如果模型文件在存储引擎存在、但不在本地磁盘中,则自动把模型文件下载到本地磁盘。
+
+此配置不影响 `/model/store` 和 `/model/restore`。
+
+### `model_store_address`
+
+此配置定义使用的存储引擎。
+
+#### 腾讯云对象存储
+
+```yaml
+storage: tencent_cos
+# 请从腾讯云控制台获取下列配置
+Region:
+SecretId:
+SecretKey:
+Bucket:
+```
+
+#### MySQL
+
+```yaml
+storage: mysql
+database: fate_model
+user: fate
+password: fate
+host: 127.0.0.1
+port: 3306
+# 可选的数据库连接参数
+max_connections: 10
+stale_timeout: 10
+```
+
+#### Redis
+
+```yaml
+storage: redis
+host: 127.0.0.1
+port: 6379
+db: 0
+password:
+# key 的超时时间,单位秒。默认 None,没有超时时间。
+ex:
+```

+ 5 - 0
FATE-Flow/doc/fate_flow_monitoring.md

@@ -0,0 +1,5 @@
+# Real-Time Monitoring
+
+## 1. Description
+
+Mainly introduces how `FATE Flow` monitors job running status, Worker execution status, etc. in real time to ensure eventual consistency

+ 6 - 0
FATE-Flow/doc/fate_flow_monitoring.zh.md

@@ -0,0 +1,6 @@
+# 作业实时监测
+
+## 1. 说明
+
+主要介绍`FATE Flow`对作业运行状态、Worker执行状态等,进行实时监测,以保证最终一致性
+

+ 48 - 0
FATE-Flow/doc/fate_flow_permission_management.md

@@ -0,0 +1,48 @@
+# Multi-Party Cooperation Permission Management
+
+## 1. Description
+
+- fateflow permission authentication supports both flow's own authentication and third-party authentication
+
+
+- Authentication configuration: `$FATE_BASE/conf/service_conf.yaml`:
+
+  ```yaml
+  hook_module:
+    permission: fate_flow.hook.flow.permission
+  hook_server_name:
+  permission:
+    switch: false
+    component: false
+    dataset: false
+  ```
+  The permission hooks support both "fate_flow.hook.flow.permission" and "fate_flow.hook.api.permission".
+
+## 2. Permission authentication
+### 2.1 flow permission authentication
+#### 2.1.1 Authentication scheme
+- The flow permission authentication scheme uses the casbin permission control framework and supports both component and dataset permissions.
+- The configuration is as follows.
+```yaml
+  hook_module:
+    permission: fate_flow.hook.flow.permission
+  permission:
+    switch: true
+    component: true
+    dataset: true
+```
+#### 2.1.2 Authorization
+
+{{snippet('cli/privilege.md', '### grant')}}
+
+#### 2.1.3 Revoke privileges
+
+{{snippet('cli/privilege.md', '### delete')}}
+
+#### 2.1.4 Permission query
+
+{{snippet('cli/privilege.md', '### query')}}
+
+### 2.2 Third-party interface privilege authentication
+- Third-party services need to register their permission authentication interface with flow; refer to [permission authentication service registration](./third_party_service_registry.md#33-permission)
+- If the authentication fails, flow will directly return the authentication failure to the partner.

+ 48 - 0
FATE-Flow/doc/fate_flow_permission_management.zh.md

@@ -0,0 +1,48 @@
+# 多方合作权限管理
+
+## 1. 说明
+
+- fateflow权限认证支持flow自身鉴权和第三方鉴权两种方式
+
+
+- 鉴权配置: `$FATE_BASE/conf/service_conf.yaml`:
+
+  ```yaml
+  hook_module:
+    permission: fate_flow.hook.flow.permission
+  hook_server_name:
+  permission:
+    switch: false
+    component: false
+    dataset: false
+  ```
+  其中,权限钩子支持"fate_flow.hook.flow.permission"和"fate_flow.hook.api.permission"两种
+
+## 2. 权限认证
+### 2.1 flow权限认证
+#### 2.1.1 认证方案
+- flow权限认证方案使用casbin权限控制框架,支持组件和数据集两种权限。
+- 配置如下:
+```yaml
+  hook_module:
+    permission: fate_flow.hook.flow.permission
+  permission:
+    switch: true
+    component: true
+    dataset: true
+```
+#### 2.1.2 授权
+
+{{snippet('cli/privilege.zh.md', '### grant')}}
+
+#### 2.1.3 吊销权限
+
+{{snippet('cli/privilege.zh.md', '### delete')}}
+
+#### 2.1.4 权限查询
+
+{{snippet('cli/privilege.zh.md', '### query')}}
+
+### 2.2 第三方接口权限认证
+- 第三方服务需要向flow注册权限认证接口,具体参考[权限认证服务注册](./third_party_service_registry.zh.md#33-permission)
+- 若认证失败,flow会直接返回认证失败给合作方。

+ 102 - 0
FATE-Flow/doc/fate_flow_resource_management.md

@@ -0,0 +1,102 @@
+# Multi-Party Resource Coordination
+
+## 1. Description
+
+Resources refer to basic engine resources, mainly the CPU and memory resources of the computing engine and the CPU and network resources of the transport engine; currently only management of the computing engine's CPU resources is supported
+
+## 2. Total resource allocation
+
+- The current version does not automatically obtain the resource size of the base engines, so you configure it through the configuration file `$FATE_PROJECT_BASE/conf/service_conf.yaml`, i.e. the resource size the current engines allocate to the FATE cluster
+- `FATE Flow Server` reads all base engine information from the configuration file on startup and registers it in the database table `t_engine_registry`
+- If `FATE Flow Server` is already started, the resource configuration can be changed by restarting `FATE Flow Server`, or by reloading the configuration with the command `flow server reload`
+- `total_cores` = `nodes` * `cores_per_node`
+
+**Example**
+
+fate_on_standalone: a standalone engine running on the same machine as `FATE Flow Server`, generally used for quick experiments; `nodes` is usually set to 1, and `cores_per_node` is usually the machine's CPU core count, which may also be moderately over-provisioned
+
+```yaml
+fate_on_standalone:
+  standalone:
+    cores_per_node: 20
+    nodes: 1
+```
+
+fate_on_eggroll: configured based on the actual deployment of `EggRoll` cluster, `nodes` denotes the number of `node manager` machines, `cores_per_node` denotes the average number of CPU cores per `node manager` machine
+
+```yaml
+fate_on_eggroll:
+  clustermanager:
+    cores_per_node: 16
+    nodes: 1
+  rollsite:
+    host: 127.0.0.1
+    port: 9370
+```
+
+fate_on_spark: configured based on the resources allocated to the `FATE` cluster in the `Spark` cluster, `nodes` indicates the number of `Spark` nodes, `cores_per_node` indicates the average number of CPU cores per node allocated to the `FATE` cluster
+
+```yaml
+fate_on_spark:
+  spark:
+    # default use SPARK_HOME environment variable
+    home:
+    cores_per_node: 20
+    nodes: 2
+```
+
+Note: Please make sure the `Spark` cluster allocates the corresponding amount of resources to the `FATE` cluster. If the `Spark` cluster allocates fewer resources than configured for `FATE` here, it will still be possible to submit `FATE` jobs, but when `FATE Flow` submits tasks to the `Spark` cluster, the tasks will not actually execute because the `Spark` cluster has insufficient resources
+
+## 3. Job request resource configuration
+
+We generally use `task_cores` and `task_parallelism` to configure the resources a job requests, such as:
+
+```json
+{
+"job_parameters": {
+  "common": {
+    "job_type": "train",
+    "task_cores": 6,
+    "task_parallelism": 2,
+    "computing_partitions": 8,
+    "timeout": 36000
+    }
+  }
+}
+```
+
+The total resources requested by the job are `task_cores` * `task_parallelism`. When creating a job, `FATE Flow` distributes the job to each `party` and calculates the actual parameters based on the above configuration, the running role, and the engine used by the party (set via `$FATE_PROJECT_BASE/conf/service_conf.yaml#default_engines`), as follows
+
+## 4. The process of calculating the actual parameter adaptation for resource requests
+
+- Calculate `request_task_cores`:
+  - guest, host:
+    - `request_task_cores` = `task_cores`
+  - arbiter, considering that it actually consumes very few resources:
+    - `request_task_cores` = 1
+
+- Further calculate `task_cores_per_node`:
+  - `task_cores_per_node` = max(1, `request_task_cores` / `task_nodes`)
+
+  - If `eggroll_run` or `spark_run` is used to configure resources in the above `job_parameters`, the `task_cores` setting is ignored, and `task_cores_per_node` is calculated as:
+    - `task_cores_per_node` = eggroll_run["eggroll.session.processors.per.node"]
+    - `task_cores_per_node` = spark_run["executor-cores"]
+
+- Convert to engine-adapted parameters (presented to the computing engine for recognition when the task runs):
+  - fate_on_standalone/fate_on_eggroll:
+    - eggroll_run["eggroll.session.processors.per.node"] = `task_cores_per_node`
+  - fate_on_spark:
+    - spark_run["num-executors"] = `task_nodes`
+    - spark_run["executor-cores"] = `task_cores_per_node`
+
+- The final calculation can be seen in the job's `job_runtime_conf_on_party.json`, typically at `$FATE_PROJECT_BASE/jobs/$job_id/$role/$party_id/job_runtime_on_party_conf.json`; a sketch of the computation follows
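+
+A minimal bash sketch of this adaptation for `fate_on_spark`, reusing the request configuration from section 3 with an assumed `task_nodes=2` (all values are illustrative):
+
+```bash
+task_cores=6; task_parallelism=2; task_nodes=2
+request_task_cores=$task_cores                        # guest/host; arbiter would use 1
+task_cores_per_node=$(( request_task_cores / task_nodes ))
+(( task_cores_per_node < 1 )) && task_cores_per_node=1   # i.e. max(1, ...)
+echo "spark_run: num-executors=$task_nodes executor-cores=$task_cores_per_node"
+echo "apply_cores=$(( task_nodes * task_cores_per_node * task_parallelism ))"
+```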
+
+## 5. Resource Scheduling Policy
+- For `total_cores`, see [Total resource allocation](#2-total-resource-allocation) above
+- For `apply_cores`, see [Job request resource configuration](#3-job-request-resource-configuration) above; `apply_cores` = `task_nodes` * `task_cores_per_node` * `task_parallelism`
+- If all participants apply for resources successfully (i.e. `total_cores` - `apply_cores` > 0 for each), the job's resource application succeeds
+- If not all participants apply successfully, a resource rollback command is sent to the participants that did, and the job's resource application fails
+
+## 6. Related commands
+
+{{snippet('cli/resource.md', header=False)}}

+ 103 - 0
FATE-Flow/doc/fate_flow_resource_management.zh.md

@@ -0,0 +1,103 @@
+# 多方资源协调
+
+## 1. 说明
+
+资源指基础引擎资源,主要指计算引擎的CPU资源和内存资源,传输引擎的CPU资源和网络资源,目前仅支持计算引擎CPU资源的管理
+
+## 2. 总资源配置
+
+- 当前版本未实现自动获取基础引擎的资源大小,因此需通过配置文件`$FATE_PROJECT_BASE/conf/service_conf.yaml`进行配置,也即当前引擎分配给FATE集群的资源大小
+- `FATE Flow Server`启动时从配置文件获取所有基础引擎信息并注册到数据库表`t_engine_registry`
+- `FATE Flow Server`已经启动,修改资源配置,可重启`FATE Flow Server`,也可使用命令:`flow server reload`,重新加载配置
+- `total_cores` = `nodes` * `cores_per_node`
+
+**样例**
+
+fate_on_standalone:是为执行在`FATE Flow Server`同台机器的单机引擎,一般用于快速实验,`nodes`一般设置为1,`cores_per_node`一般为机器CPU核数,也可适量超配
+
+```yaml
+fate_on_standalone:
+  standalone:
+    cores_per_node: 20
+    nodes: 1
+```
+
+fate_on_eggroll:依据`EggRoll`集群实际部署情况进行配置,`nodes`表示`node manager`的机器数量,`cores_per_node`表示平均每台`node manager`机器CPU核数
+
+```yaml
+fate_on_eggroll:
+  clustermanager:
+    cores_per_node: 16
+    nodes: 1
+  rollsite:
+    host: 127.0.0.1
+    port: 9370
+```
+
+fate_on_spark:依据在`Spark`集群中配置给`FATE`集群的资源进行配置,`nodes`表示`Spark`节点数量,`cores_per_node`表示平均每个节点分配给`FATE`集群的CPU核数
+
+```yaml
+fate_on_spark:
+  spark:
+    # default use SPARK_HOME environment variable
+    home:
+    cores_per_node: 20
+    nodes: 2
+```
+
+注意:请务必确保在`Spark`集群分配了对应数量的资源于`FATE`集群,若`Spark`集群分配资源少于此处`FATE`所配置的资源,那么会出现可以提交`FATE`作业,但是`FATE Flow`将任务提交至`Spark`集群时,由于`Spark`集群资源不足,任务实际不执行
+
+## 3. 作业申请资源配置
+
+我们一般使用`task_cores`和`task_parallelism`进行配置作业申请资源,如:
+
+```json
+{
+"job_parameters": {
+  "common": {
+    "job_type": "train",
+    "task_cores": 6,
+    "task_parallelism": 2,
+    "computing_partitions": 8,
+    "timeout": 36000
+    }
+  }
+}
+```
+
+作业申请的总资源为`task_cores` * `task_parallelism`,创建作业时,`FATE Flow`分发作业到各`party`时会依据上述配置、运行角色、本方使用引擎(通过`$FATE_PROJECT_BASE/conf/service_conf.yaml#default_engines`),适配计算出实际参数,如下
+
+## 4. 资源申请实际参数适配计算过程
+
+- 计算`request_task_cores`:
+  - guest、host:
+    - `request_task_cores` = `task_cores`
+  - arbiter,考虑实际运行耗费极少资源:
+    - `request_task_cores` = 1
+
+- 进一步计算`task_cores_per_node`:
+  - `task_cores_per_node` = max(1, `request_task_cores` / `task_nodes`)
+
+  - 若在上述`job_parameters`使用了`eggroll_run`或`spark_run`配置资源时,则`task_cores`配置无效;计算`task_cores_per_node`:
+    - `task_cores_per_node` = eggroll_run["eggroll.session.processors.per.node"]
+    - `task_cores_per_node` = spark_run["executor-cores"]
+
+- 转换为适配引擎的参数(该参数会在运行任务时,提交到计算引擎识别):
+  - fate_on_standalone/fate_on_eggroll:
+    - eggroll_run["eggroll.session.processors.per.node"] = `task_cores_per_node`
+  - fate_on_spark:
+    - spark_run["num-executors"] = `task_nodes`
+    - spark_run["executor-cores"] = `task_cores_per_node`
+
+- 最终计算结果可以查看job的`job_runtime_conf_on_party.json`,一般在`$FATE_PROJECT_BASE/jobs/$job_id/$role/$party_id/job_runtime_on_party_conf.json`
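+
+如下为`fate_on_spark`下该适配计算的一个bash示意(沿用第3节的申请配置,`task_nodes=2`等均为假设值):
+
+```bash
+task_cores=6; task_parallelism=2; task_nodes=2
+request_task_cores=$task_cores                        # guest/host; arbiter取1
+task_cores_per_node=$(( request_task_cores / task_nodes ))
+(( task_cores_per_node < 1 )) && task_cores_per_node=1   # 即 max(1, ...)
+echo "spark_run: num-executors=$task_nodes executor-cores=$task_cores_per_node"
+echo "apply_cores=$(( task_nodes * task_cores_per_node * task_parallelism ))"
+```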
+
+## 5. 资源调度策略
+
+- `total_cores`见上述[总资源配置](#2-总资源配置)
+- `apply_cores`见上述[作业申请资源配置](#3-作业申请资源配置),`apply_cores` = `task_nodes` * `task_cores_per_node` * `task_parallelism`
+- 若所有参与方均申请资源成功(即各方的total_cores - apply_cores > 0),则该作业申请资源成功
+- 若非所有参与方均申请资源成功,则发送资源回滚指令到已申请成功的参与方,该作业申请资源失败
+
+## 6. 相关命令
+
+{{snippet('cli/resource.zh.md', header=False)}}

+ 13 - 0
FATE-Flow/doc/fate_flow_server_operation.md

@@ -0,0 +1,13 @@
+# Server Operation
+
+## 1. Description
+
+Starting from version `1.7.0`, we provide some maintenance functions for `FATE Flow Server`, which will be further enhanced in future versions.
+
+## 2. View version information
+
+{{snippet('cli/server.md', '### versions')}}
+
+## 3. Reload the configuration file
+
+{{snippet('cli/server.md', '### reload')}}

+ 13 - 0
FATE-Flow/doc/fate_flow_server_operation.zh.md

@@ -0,0 +1,13 @@
+# 服务端操作
+
+## 1. 说明
+
+从`1.7.0`版本开始, 提供`FATE Flow Server`的一些更新维护功能, 后续版本会进一步增强
+
+## 2. 查看版本信息
+
+{{snippet('cli/server.zh.md', '### versions')}}
+
+## 3. 重新加载配置文件
+
+{{snippet('cli/server.zh.md', '### reload')}}

+ 32 - 0
FATE-Flow/doc/fate_flow_service_registry.md

@@ -0,0 +1,32 @@
+# Service Registry
+
+## 1. Description
+
+### 1.1 Model Registry
+
+FATE-Flow interacts with FATE-Serving through Apache ZooKeeper. If `use_registry` is enabled in the configuration, Flow registers model download URLs with ZooKeeper when it starts, and Serving can get the models through these URLs.
+
+Likewise, Serving registers its own address with ZooKeeper, which Flow fetches in order to communicate with it. If `use_registry` is not enabled, Flow instead tries to communicate with the `servings` addresses set in the configuration file.
+
+### 1.2 High Availability
+
+FATE-Flow implements automatic discovery of multiple nodes in the same party by registering its own IP and port with Apache ZooKeeper.
+
+## 2. Configuring the ZooKeeper service
+
+```yaml
+zookeeper:
+  hosts:
+    - 127.0.0.1:2181
+  use_acl: false
+  user: fate
+  password: fate
+```
+
+## 3. ZNode
+
+- FATE-Flow Model Registry: `/FATE-SERVICES/flow/online/transfer/providers`
+
+- FATE-Flow High Availability: `/FATE-COMPONENTS/fate-flow`
+
+- FATE-Serving: `/FATE-SERVICES/serving/online/publishLoad/providers`
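+
+A quick way to inspect these registrations is the CLI shipped with ZooKeeper (the server address below is illustrative):
+
+```bash
+# list the model download URLs registered by FATE-Flow
+zkCli.sh -server 127.0.0.1:2181 ls /FATE-SERVICES/flow/online/transfer/providers
+# list the FATE-Flow nodes registered for high availability
+zkCli.sh -server 127.0.0.1:2181 ls /FATE-COMPONENTS/fate-flow
+```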

+ 32 - 0
FATE-Flow/doc/fate_flow_service_registry.zh.md

@@ -0,0 +1,32 @@
+# 服务注册中心
+
+## 1. 说明
+
+### 1.1 模型注册
+
+FATE-Flow 通过 Apache ZooKeeper 与 FATE-Serving 交互,如果在配置中启用了 `use_registry`,则 Flow 在启动时会向 ZooKeeper 注册模型的下载 URL,Serving 可以通过这些 URL 获取模型。
+
+同样,Serving 也会向 ZooKeeper 注册其自身的地址,Flow 会获取该地址以与之通信。 如果没有启用 `use_registry`,Flow 则会尝试与配置文件中的设置 `servings` 地址通信。
+
+### 1.2 高可用
+
+FATE-Flow 通过向 Apache ZooKeeper 注册自身的 IP 和端口实现同一 party 内多节点的自动发现。
+
+## 2. 配置 ZooKeeper 服务
+
+```yaml
+zookeeper:
+  hosts:
+    - 127.0.0.1:2181
+  use_acl: false
+  user: fate
+  password: fate
+```
+
+## 3. ZNode
+
+- FATE-Flow 模型注册: `/FATE-SERVICES/flow/online/transfer/providers`
+
+- FATE-Flow 高可用: `/FATE-COMPONENTS/fate-flow`
+
+- FATE-Serving: `/FATE-SERVICES/serving/online/publishLoad/providers`
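+
+可使用 ZooKeeper 自带的命令行工具查看上述注册信息(服务地址为示例):
+
+```bash
+# 查看 FATE-Flow 注册的模型下载 URL
+zkCli.sh -server 127.0.0.1:2181 ls /FATE-SERVICES/flow/online/transfer/providers
+# 查看高可用注册的 FATE-Flow 节点
+zkCli.sh -server 127.0.0.1:2181 ls /FATE-COMPONENTS/fate-flow
+```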

+ 49 - 0
FATE-Flow/doc/fate_flow_tracking.md

@@ -0,0 +1,49 @@
+# Data Flow Tracking
+
+## 1. Description
+
+## 2. Task output metrics
+
+### 2.1 List of metrics
+
+{{snippet('cli/tracking.md', '### metrics')}}
+
+### 2.2 All metrics
+
+{{snippet('cli/tracking.md', '### metric-all')}}
+
+## 3. Task run parameters
+
+{{snippet('cli/tracking.md', '### parameters')}}
+
+## 4. Task output data
+
+### 4.1 Download output data
+
+{{snippet('cli/tracking.md', '### output-data')}}
+
+### 4.2 Get the name of the data table where the output data is stored
+
+{{snippet('cli/tracking.md', '### output-data-table')}}
+
+## 5. Task output model
+
+{{snippet('cli/tracking.md', '### output-model')}}
+
+## 6. Task output summary
+
+{{snippet('cli/tracking.md', '### get-summary')}}
+
+## 7. Dataset usage tracking
+
+Tracing source datasets and their derived datasets, such as component task output datasets
+
+### 7.1 Source table query
+
+{{snippet('cli/tracking.md', '### tracking-source')}}
+
+### 7.2 Querying with table tasks
+
+{{snippet('cli/tracking.md', '### tracking-job')}}
+
+## 8. Developing the API

+ 49 - 0
FATE-Flow/doc/fate_flow_tracking.zh.md

@@ -0,0 +1,49 @@
+# 数据流动追踪
+
+## 1. 说明
+
+## 2. 任务输出指标
+
+### 2.1 指标列表
+
+{{snippet('cli/tracking.zh.md', '### metrics')}}
+
+### 2.2 所有指标
+
+{{snippet('cli/tracking.zh.md', '### metric-all')}}
+
+## 3. 任务运行参数
+
+{{snippet('cli/tracking.zh.md', '### parameters')}}
+
+## 4. 任务输出数据
+
+### 4.1 下载输出数据
+
+{{snippet('cli/tracking.zh.md', '### output-data')}}
+
+### 4.2 获取输出数据存放数据表名称
+
+{{snippet('cli/tracking.zh.md', '### output-data-table')}}
+
+## 5. 任务输出模型
+
+{{snippet('cli/tracking.zh.md', '### output-model')}}
+
+## 6. 任务输出摘要
+
+{{snippet('cli/tracking.zh.md', '### get-summary')}}
+
+## 7. 数据集使用追踪
+
+追踪源数据集及其衍生数据集,如组件任务输出数据集
+
+### 7.1 源表查询
+
+{{snippet('cli/tracking.zh.md', '### tracking-source')}}
+
+### 7.2 用表任务查询
+
+{{snippet('cli/tracking.zh.md', '### tracking-job')}}
+
+## 8. 开发API

BIN
FATE-Flow/doc/images/fate_arch.png


BIN
FATE-Flow/doc/images/fate_deploy_directory.png


BIN
FATE-Flow/doc/images/fate_flow_arch.png


BIN
FATE-Flow/doc/images/fate_flow_authorization.png


BIN
FATE-Flow/doc/images/fate_flow_component_dsl.png


BIN
FATE-Flow/doc/images/fate_flow_component_registry.png


BIN
FATE-Flow/doc/images/fate_flow_dag.png


BIN
FATE-Flow/doc/images/fate_flow_detector.png


BIN
FATE-Flow/doc/images/fate_flow_dsl.png


BIN
FATE-Flow/doc/images/fate_flow_inputoutput.png


BIN
FATE-Flow/doc/images/fate_flow_logical_arch.png


BIN
FATE-Flow/doc/images/fate_flow_major_feature.png


BIN
FATE-Flow/doc/images/fate_flow_model_storage.png


BIN
FATE-Flow/doc/images/fate_flow_pipelined_model.png


BIN
FATE-Flow/doc/images/fate_flow_resource_process.png


BIN
FATE-Flow/doc/images/fate_flow_scheduling_arch.png


BIN
FATE-Flow/doc/images/federated_learning_pipeline.png


+ 4 - 0
FATE-Flow/doc/index.md

@@ -0,0 +1,4 @@
+---
+template: overrides/home.html
+title: Secure, Privacy-preserving Machine Learning Multi-Party Scheduling System
+---

+ 4 - 0
FATE-Flow/doc/index.zh.md

@@ -0,0 +1,4 @@
+---
+template: overrides/home.zh.html
+title: 安全,隐私保护的机器学习多方调度系统
+---

+ 79 - 0
FATE-Flow/doc/mkdocs/README.md

@@ -0,0 +1,79 @@
+# Build
+
+## use docker
+
+At repo root, execute
+
+```sh
+docker run --rm -it -p 8000:8000 -v ${PWD}:/docs sagewei0/mkdocs  
+```
+
+to serve the docs at http://localhost:8000
+
+or
+
+```sh
+docker run --rm -it -p 8000:8000 -v ${PWD}:/docs sagewei0/mkdocs build
+```
+
+to build the docs into the `site` folder.
+
+## manually
+
+[`mkdocs-material`](https://pypi.org/project/mkdocs-material/) and several plugins are needed to build these docs.
+
+First, create a Python virtual environment
+
+```sh
+python3 -m venv "fatedocs"
+source fatedocs/bin/activate
+pip install -U pip
+```
+Then install the requirements
+
+```sh
+pip install -r doc/mkdocs/requirements.txt
+```
+
+Now, use
+
+```sh
+mkdocs serve
+```
+
+at the repo root to serve the docs, or
+
+use 
+
+```sh
+mkdocs build
+```
+
+at the repo root to build the docs into the folder `site`
+
+
+# Develop guide
+
+We use [mkdocs-material](https://squidfunk.github.io/mkdocs-material/) to build our docs.
+Several markdown extensions are really useful for writing pretty documents, such as
+[admonitions](https://squidfunk.github.io/mkdocs-material/reference/admonitions/) and
+[content-tabs](https://squidfunk.github.io/mkdocs-material/reference/content-tabs/).
+
+Several plugins are introduced to make mkdocs-material even more powerful:
+
+
+- [mkdocstrings](https://mkdocstrings.github.io/usage/) 
+    automatic documentation from source code. We mostly use this to automatically
+    generate the `params api` for `federatedml`.
+
+- [awesome-pages](https://github.com/lukasgeiter/mkdocs-awesome-pages-plugin)
+    for powerful nav rules
+
+- [i18n](https://ultrabug.github.io/mkdocs-static-i18n/)
+    for multi-language support
+
+- [mkdocs-jupyter](https://github.com/danielfrg/mkdocs-jupyter)
+    for jupyter format support
+
+- [mkdocs-simple-hooks](https://github.com/aklajnert/mkdocs-simple-hooks)
+    for simple hook plug-ins

Some files were not shown because too many files changed in this diff