DBFDataWriter writes data to dBase file(s). It supports the Character, Number, Logical, and Date dBase data types.
The component can write a single file or a partitioned collection of files.
|Component||Data output||Input ports||Output ports||Transformation||Transf. required||Java||CTL||Auto-propagated metadata|
|Input port 0||Incoming data records||Fixed length|
DBFDataWriter does not propagate metadata.
DBFDataWriter has no metadata template.
Input metadata has to be fixed-length as you are writing binary data.
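The fixed-length requirement follows from the format itself: every dBase record occupies the same number of bytes. A minimal sketch of that layout in plain Java (illustrative only, not CloverDX code; the field name, widths, and date are arbitrary):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

// Illustrative, simplified dBase III writer: one Character field, fixed width.
public class DbfSketch {
    public static byte[] writeDbf(String[] values, int fieldLen) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int headerSize = 32 + 32 + 1;        // file header + 1 field descriptor + 0x0D terminator
        int recordSize = 1 + fieldLen;       // deletion flag + fixed-width field

        ByteBuffer header = ByteBuffer.allocate(headerSize).order(ByteOrder.LITTLE_ENDIAN);
        header.put((byte) 0x03);             // first byte: the "DBF type" (dBase III, no memo)
        header.put(new byte[] {24, 1, 1});   // last-update date YY MM DD (arbitrary here)
        header.putInt(values.length);        // number of records
        header.putShort((short) headerSize); // header length in bytes
        header.putShort((short) recordSize); // record length -- the same for every record
        header.position(32);                 // field descriptor: 11-byte name, type, ..., width
        header.put("NAME".getBytes(StandardCharsets.US_ASCII)); // name: single-byte chars only
        header.position(32 + 11);
        header.put((byte) 'C');              // Character field type
        header.position(32 + 16);
        header.put((byte) fieldLen);         // field width in bytes
        header.position(headerSize - 1);
        header.put((byte) 0x0D);             // end of field descriptors
        out.write(header.array(), 0, headerSize);

        for (String v : values) {
            out.write(' ');                  // deletion flag: ' ' = valid record
            byte[] b = v.getBytes(StandardCharsets.US_ASCII);
            out.write(b, 0, Math.min(b.length, fieldLen));
            for (int i = b.length; i < fieldLen; i++) {
                out.write(' ');              // pad so every record has the same length
            }
        }
        out.write(0x1A);                     // end-of-file marker
        return out.toByteArray();
    }
}
```

Because each record is padded to the declared record length, variable-length (delimited) input metadata has nothing to map onto; the writer must know the exact byte width of every field up front.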
|File URL||Specifies where the data will be written (a path to the output file).|
|Charset||Character encoding of records written to the output, see Details. The default encoding depends on DEFAULT_CHARSET_DECODER in defaultProperties.|
|Append||If records are printed into a non-empty file, they replace the previous content by default (Append is false). If set to true, new records are appended to the existing content.||false (default) | true|
|DBF type||Type of the created DBF file (determined by the first byte of the file header). If you are unsure which type to choose, leave the attribute at its default value.|
|Records per file||Maximum number of records written to each output file. If specified, the dollar sign(s) $ (the 'number of digits' placeholder) must be part of the file name mask, see Supported File URL Formats for Writers.||1 - N|
|Number of skipped records||Number of records/rows to be skipped before writing the first record to the output file, see Selecting Output Records.||0 (default) - N|
|Max number of records||Aggregate number of records/rows to be written to all output files, see Selecting Output Records.||0-N|
|Exclude fields||Sequence of field names that will not be written to the output (separated by a semicolon). Can be used when the same fields serve as a part of the Partition key.|
|Partition key||[ 2]||Sequence of field names defining record distribution among multiple output files; records with the same Partition key are written to the same output file. Use a semicolon ';' as the field name separator. Depending on the selected Partition file tag, use the appropriate placeholder ($ or #) in the file name mask, see Partitioning Output into Different Output Files.|
|Partition lookup table||[ 3]||ID of a lookup table serving for selecting records that should be written to output file(s). See Partitioning Output into Different Output Files for more information.|
|Partition file tag||[ 2]||By default, partitioned output files are numbered (Number file tag). If this attribute is set to Key file tag, output file names contain the values of Partition key instead. See Partitioning Output into Different Output Files for more information.||Number file tag (default) | Key file tag|
|Partition output fields||[ 3]||Fields of Partition lookup table whose values are used as output file(s) names. See Partitioning Output into Different Output Files for more information.|
|Partition unassigned file name||Name of the file into which unassigned records are written (if there are any). Unless specified, data records whose key values are not contained in the Partition lookup table are discarded. See Partitioning Output into Different Output Files for more information.|
|Sorted input||If partitioning into multiple output files is enabled, all output files are open at once. This can lead to an undesirable memory footprint when there are many output files (thousands). Moreover, Unix-based operating systems usually impose a strict limit on the number of simultaneously open files per process (typically 1024). If you run into one of these limitations, consider sorting the data by the partition key using one of the standard sorting components and setting this attribute to true. The partitioning algorithm then does not need to keep all output files open; only the last one is open at a time. See Partitioning Output into Different Output Files for more information.||false (default) | true|
|Create empty files||If set to false, the component does not create empty output files when there are no input records.||true (default) | false|
[ 2] Either both or neither of these two attributes must be specified.
[ 3] Either both or neither of these two attributes must be specified.
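The effect of Sorted input can be modeled as follows (an illustrative sketch, not the component's actual implementation; the file names assume Key file tag naming with a hypothetical out_#.dbf mask, and the partition key is assumed to be the first field):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: with input sorted by the partition key, only one output file needs to
// be open at a time -- when the key changes, close the current file and open the next.
public class SortedPartitionSketch {
    public static List<String> writePartitioned(List<String[]> sortedRecords) {
        List<String> filesWritten = new ArrayList<>();
        String currentKey = null;
        for (String[] record : sortedRecords) {
            String key = record[0];                  // partition key = first field (assumption)
            if (!key.equals(currentKey)) {           // key changed: previous file can be closed
                currentKey = key;
                filesWritten.add("out_" + key + ".dbf"); // '#' placeholder -> key value
            }
            // ... append the record to the single currently open file ...
        }
        return filesWritten;
    }
}
```

With unsorted input, the same key can reappear at any point, so every file must stay open until the end of the run; that is what drives the memory footprint and file-descriptor limits described above.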
DBFDataWriter can be used to write UTF-8 encoded dBase files.
In general, DBFDataWriter can write data in any encoding, but be careful: every character in a column name (stored in the file header) must be represented by a single byte. For example, with the UTF-8 encoding it is possible to write Japanese characters into the dBase file, but column names must not contain them. Since column names can contain single-byte characters only, some charsets (for example UTF-16) cannot be used at all.
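A quick way to check whether a column name is safe for a chosen charset is to verify that it encodes to exactly one byte per character (a hedged sketch; DBFDataWriter's own validation may differ):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Sketch: a column name is usable only if every character maps to a single byte
// in the chosen charset and survives the encode/decode round trip.
public class ColumnNameCheck {
    public static boolean isSingleByteName(String name, Charset charset) {
        byte[] encoded = name.getBytes(charset);
        return encoded.length == name.length()             // one byte per character
                && new String(encoded, charset).equals(name); // no replacement characters
    }
}
```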
Output data can be stored locally only. Uploading via a remote transfer protocol and writing ZIP and TAR archives are not supported.
The structure of a .dbf file is not suitable for reading and writing lists or maps. DBFDataWriter converts lists and maps to strings before writing, but there is no easy way to read them back as lists or maps.
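The loss of structure can be illustrated with a minimal sketch (the comma delimiter here is an arbitrary choice for illustration, not the component's actual conversion format):

```java
import java.util.List;

// Sketch: a list field is flattened into one string before writing; a delimiter
// occurring inside an element makes the flattening impossible to reverse reliably.
public class ListFlattenSketch {
    public static String flatten(List<String> values) {
        return String.join(",", values);
    }
}
```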
We recommend explicitly specifying the Charset attribute.