FATE storage tables are identified by a table name and a namespace.

FATE provides an upload component so that users can upload data to a storage system supported by the FATE computing engine.

If the user's data already resides in a storage system supported by FATE, the storage information can be mapped to a FATE storage table with table bind.

If the storage type recorded by table bind is not consistent with the current default engine, the reader component will automatically convert the storage type.
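As a rough sketch, the upload component is driven by a small JSON conf. The field names below (`file`, `head`, `partition`, `table_name`, `namespace`) mirror common FATE upload examples and should be treated as assumptions; verify them against the upload snippet below for your FATE version.

```python
import json

# Hypothetical upload conf; the field names are assumptions taken from
# typical FATE upload examples -- check your version's docs before use.
upload_conf = {
    "file": "examples/data/breast_hetero_guest.csv",  # local path to the data file
    "head": 1,                            # 1: first line is a header row
    "partition": 4,                       # number of storage partitions
    "table_name": "breast_hetero_guest",  # the storage table is identified by
    "namespace": "experiment",            # table name plus namespace
}

print(json.dumps(upload_conf, indent=2))
```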
{{snippet('cli/data.md', '### upload')}}
{{snippet('cli/table.md', '### bind')}}
{{snippet('cli/table.md', '### info')}}
{{snippet('cli/table.md', '### delete')}}
{{snippet('cli/data.md', '### download')}}
{{snippet('cli/table.md', '### disable')}}
{{snippet('cli/table.md', '### enable')}}
{{snippet('cli/table.md', '### disable-delete')}}
{{snippet('cli/data.md', '### writer')}}
**Brief description:** the reader component loads a FATE storage table as job input and, if necessary, converts it to the storage type required by the current computing engine.

**Parameter configuration:**

The reader's input table is configured in the job conf when submitting the job:
```json
{
  "role": {
    "guest": {
      "0": {
        "reader_0": {
          "table": {
            "name": "breast_hetero_guest",
            "namespace": "experiment"
          }
        }
      }
    }
  }
}
```
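The same reader section can be assembled programmatically before submission. A minimal sketch (the surrounding job-conf fields are omitted, and the helper name is hypothetical):

```python
import json

def reader_input(component: str, name: str, namespace: str) -> dict:
    """Build the role/guest/0 reader entry pointing at a storage table."""
    return {
        "role": {
            "guest": {
                "0": {
                    component: {
                        "table": {"name": name, "namespace": namespace}
                    }
                }
            }
        }
    }

conf = reader_input("reader_0", "breast_hetero_guest", "experiment")
print(json.dumps(conf, indent=2))
```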
**Component Output**

The storage engine for a component's output data is determined by the configuration file conf/service_conf.yaml, with the following configuration item:

```yaml
default_engines:
  storage: eggroll
```
| computing_engine | storage_engine                         |
| :--------------- | :------------------------------------- |
| standalone       | standalone                             |
| eggroll          | eggroll                                |
| spark            | hdfs(distributed), localfs(standalone) |
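The engine compatibility above can be expressed as a simple lookup, e.g. to pre-check a deployment's conf. This is a sketch only; the authoritative check lives inside FATE Flow itself:

```python
# Supported storage engines per computing engine, transcribed from the table above.
SUPPORTED_STORAGE = {
    "standalone": ["standalone"],
    "eggroll": ["eggroll"],
    "spark": ["hdfs", "localfs"],  # hdfs for distributed, localfs for standalone
}

def storage_ok(computing_engine: str, storage_engine: str) -> bool:
    """Check whether a storage engine is valid for the given computing engine."""
    return storage_engine in SUPPORTED_STORAGE.get(computing_engine, [])
```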
**Brief description:**

**Parameter configuration:**

Configure the api-reader parameters in the job conf when submitting the job:
```json
{
  "role": {
    "guest": {
      "0": {
        "api_reader_0": {
          "server_name": "xxx",
          "parameters": {"version": "xxx"},
          "id_delimiter": ",",
          "head": true
        }
      }
    }
  }
}
```
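A light sanity check over the api-reader section before submission might look like the sketch below. The required-key list mirrors the example above and is an assumption, not the authoritative schema:

```python
def check_api_reader(section: dict) -> list:
    """Return the keys missing from an api_reader_0 entry.

    The required keys are taken from the example conf above and are an
    assumption, not FATE's official schema.
    """
    required = ("server_name", "parameters", "id_delimiter", "head")
    return [k for k in required if k not in section]

example = {
    "server_name": "xxx",
    "parameters": {"version": "xxx"},
    "id_delimiter": ",",
    "head": True,
}
missing = check_api_reader(example)
```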
**Parameter meaning:**