par_cgns | Input/Output | alias for parallel CGNS

\section properties Properties

## General Properties

 Property | Value | Description
 ----------|:--------:|------------
 LOGGING | on/[off] | enable/disable logging of field input/output
 LOWER_CASE_VARIABLE_NAMES | [on]/off | Convert all variable names read from the input database to lowercase; replace ' ' with '_'
 USE_GENERIC_CANONICAL_NAMES | on/[off] | Use `block_{id}` as the canonical name of an element block instead of the name (if any) stored on the database. The database name will be an alias.
 IGNORE_DATABASE_NAMES | on/[off] | Do not read any element block, nodeset, ... names if they exist on the database. Use only the canonical generated names (entitytype + _ + id)
 IGNORE_ATTRIBUTE_NAMES | on/[off] | Do not read the attribute names that may exist on an input database. Instead, for an element block with N attributes, the fields will be named `attribute_1` ... `attribute_N`
 MINIMIZE_OPEN_FILES | on/[off] | If on, close the file after each timestep and reopen it on the next output
 SERIALIZE_IO | integer | The number of files that will be read/written simultaneously in a parallel file-per-rank run.
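
Properties are typically supplied when a database handle is created. The following is a minimal sketch, assuming the standard `Ioss::PropertyManager` / `Ioss::IOFactory::create` API and `Ioss::ParallelUtils::comm_world()` for the communicator; the database type "exodus", the file name `mesh.g`, and the region name are illustrative choices:

```cpp
#include <Ioss_DBUsage.h>
#include <Ioss_DatabaseIO.h>
#include <Ioss_IOFactory.h>
#include <Ioss_ParallelUtils.h>
#include <Ioss_Property.h>
#include <Ioss_PropertyManager.h>
#include <Ioss_Region.h>

void open_with_properties()
{
  // Collect the properties that should affect this database.
  Ioss::PropertyManager properties;
  properties.add(Ioss::Property("LOWER_CASE_VARIABLE_NAMES", "off"));
  properties.add(Ioss::Property("MINIMIZE_OPEN_FILES", "on"));

  // Create the database with those properties applied.
  Ioss::DatabaseIO *db = Ioss::IOFactory::create(
      "exodus", "mesh.g", Ioss::READ_MODEL,
      Ioss::ParallelUtils::comm_world(), properties);

  // The region takes ownership of the database pointer.
  Ioss::Region region(db, "example_region");
}
```

Integer-valued properties (for example `CYCLE_COUNT`) can be added with an integer value instead of a string.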

## Auto-Decomposition-Related Properties

Property | Value | Description
----------|:--------:|------------
MODEL_DECOMPOSITION_METHOD | {method} | Decompose a DB with type `MODEL` using `method`
RESTART_DECOMPOSITION_METHOD | {method} | Decompose a DB with type `RESTART_IN` using `method`
DECOMPOSITION_METHOD | {method} | Decompose all input DBs using `method`
PARALLEL_CONSISTENCY | [on]/off | Set to on if the client will call Ioss functions consistently on all processors. If off, then auto-decomposition and auto-join cannot be used.
RETAIN_FREE_NODES | [on]/off | During auto-decomposition, should nodes not connected to any elements be retained?
LOAD_BALANCE_THRESHOLD | {real} [1.4] | CGNS structured only -- permitted load imbalance, expressed as (load on processor) / (average load)
DECOMPOSITION_EXTRA | {name},{multiplier} | Specify the name of the element map or variable used if the decomposition method is `map` or `variable`. If the value contains a comma, the portion following the comma is used to scale (divide) the values in the map/variable. If that portion is 'auto', then all values are scaled by `max_value/processorCount`

### Valid values for Decomposition Method

Method | Description
:---------:|-------------------
rcb | recursive coordinate bisection
rib | recursive inertial bisection
hsfc | hilbert space-filling curve
metis_sfc | metis space-filling curve
kway | metis kway graph-based method
kway_geom | metis kway graph-based method with geometry speedup
linear | elements in order: first n/p elements to processor 0, next n/p to processor 1, and so on
cyclic | elements handed out to id % proc_count
random | elements assigned randomly to processors in a way that preserves balance (do not use for a real run)
map | the specified element map contains the mapping of elements to processors. Uses the 'processor_id' map by default; otherwise specify the name with the `DECOMPOSITION_EXTRA` property
variable | the specified element variable contains the mapping of elements to processors. Uses the 'processor_id' variable by default; otherwise specify the name with the `DECOMPOSITION_EXTRA` property
external | files are decomposed externally into a file-per-processor prior to a parallel run

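As a sketch of selecting a decomposition method through the same property mechanism (the map name `proc_assignment` and the scale factor 10 are hypothetical, purely for illustration):

```cpp
#include <Ioss_Property.h>
#include <Ioss_PropertyManager.h>

// Request automatic decomposition of all input databases using
// recursive inertial bisection.
void request_rib_decomposition(Ioss::PropertyManager &properties)
{
  properties.add(Ioss::Property("DECOMPOSITION_METHOD", "rib"));
}

// Decompose using an element map stored on the database; the values in the
// hypothetical map "proc_assignment" are divided by 10 to obtain the target
// processor for each element.
void request_map_decomposition(Ioss::PropertyManager &properties)
{
  properties.add(Ioss::Property("DECOMPOSITION_METHOD", "map"));
  properties.add(Ioss::Property("DECOMPOSITION_EXTRA", "proc_assignment,10"));
}
```
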
## Output File Composition -- Single File output from parallel run instead of file-per-processor

 Property | Value
-----------------|:------:
COMPOSE_RESTART | on/[off]
COMPOSE_RESULTS | on/[off]
PARALLEL_IO_MODE | netcdf4, hdf5, pnetcdf, (mpiio and mpiposix are deprecated)

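For example, to write a single composed file from a parallel run rather than a file per rank (a sketch under the same assumptions as the earlier examples; the choice of `pnetcdf` is illustrative):

```cpp
#include <Ioss_Property.h>
#include <Ioss_PropertyManager.h>

// Ask IOSS to compose one results/restart file instead of one file per rank.
void request_single_output_file(Ioss::PropertyManager &properties)
{
  properties.add(Ioss::Property("COMPOSE_RESULTS", "on"));
  properties.add(Ioss::Property("COMPOSE_RESTART", "on"));
  properties.add(Ioss::Property("PARALLEL_IO_MODE", "pnetcdf"));
}
```
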
## Properties Related to byte size of reals and integers

 Property | Value | Description
 ----------|:--------:|------------
 INTEGER_SIZE_DB | 4 / 8 | byte size of integers stored on the database
 INTEGER_SIZE_API | 4 / 8 | byte size of integers used through the API
 REAL_SIZE_DB | 4 / 8 | byte size of floating-point values stored on the database
 REAL_SIZE_API | 4 / 8 | byte size of floating-point values used through the API

## Properties Related to Field and Attribute Interpretation

 Property | Value | Description
 ----------|:--------:|------------
 ENABLE_FIELD_RECOGNITION | [on]/off | Should the IOSS library combine scalar fields into higher-order fields (vector, tensor) based on suffix interpretation?
 IGNORE_REALN_FIELDS | [off]/on | Do not recognize var_1, var_2, ..., var_n as an n-component field; keep them as n scalar fields. Currently ignored for composite fields.
 FIELD_SUFFIX_SEPARATOR | char / '_' | The character used to separate the base field name from the suffix. Default is underscore.
 FIELD_STRIP_TRAILING_UNDERSCORE | on / [off] | If `FIELD_SUFFIX_SEPARATOR` is empty and there are fields that end with an underscore, strip the underscore (`a_x`, `a_y`, `a_z` becomes the vector field `a`).
 IGNORE_ATTRIBUTE_NAMES | on/[off] | Do not read the attribute names that may exist on an input database. Instead, for an element block with N attributes, the fields will be named `attribute_1` ... `attribute_N`
 SURFACE_SPLIT_TYPE | {type} | Specify how to split sidesets into homogeneous sideblocks. Either an integer or string: 1 or `TOPOLOGY`, 2 or `BLOCK`, 3 or `NO_SPLIT`. Default is `TOPOLOGY` if not specified.
 DUPLICATE_FIELD_NAME_BEHAVIOR | {behavior} | Determine how to handle duplicate incompatible fields on a database. Valid values are `IGNORE`, `WARNING`, or `ERROR` (default). Incompatible fields are two or more fields with the same name but different sizes, roles, or types.
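
As an illustration of how suffix-based field recognition plays out (a sketch; the nodal variable names `disp_x`, `disp_y`, `disp_z` are hypothetical):

```cpp
#include <Ioss_Property.h>
#include <Ioss_PropertyManager.h>

// With ENABLE_FIELD_RECOGNITION=on (the default) and the default '_'
// suffix separator, the scalar variables "disp_x", "disp_y", "disp_z" on
// the database are presented by IOSS as a single 3-component vector field
// named "disp".  Turning recognition off keeps them as three scalar fields.
void configure_field_recognition(Ioss::PropertyManager &properties, bool combine)
{
  properties.add(Ioss::Property("ENABLE_FIELD_RECOGNITION", combine ? "on" : "off"));
  properties.add(Ioss::Property("FIELD_SUFFIX_SEPARATOR", "_"));
}
```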

## Output Database-Related Properties

 Property | Value | Description
 ----------|:--------:|------------
 OMIT_QA_RECORDS | on/[off] | Do not output any QA records to the output database.
 OMIT_INFO_RECORDS | on/[off] | Do not output any INFO records to the output database.
 RETAIN_EMPTY_BLOCKS | on/[off] | If an element block is completely empty (on all ranks), should it be written to the output database?
 VARIABLE_NAME_CASE | upper/lower | Should all output field names be converted to uppercase or lowercase? Default is to leave them as-is.
 FILE_TYPE | [netcdf], netcdf4, netcdf-4, hdf5 | Underlying file type (bits-on-disk format)
 COMPRESSION_METHOD | [zlib], szip | The compression method to use. `szip` is only available if HDF5 was built with szip support.
 COMPRESSION_LEVEL | [0]-9 | If zlib: in the range [0..9]. A value of 0 indicates no compression. Automatically sets `file_type=netcdf4`; values <= 4 are recommended.
 COMPRESSION_LEVEL | 4-32 | If szip: an even number in the range 4-32. Automatically sets `file_type=netcdf4`.
 MAXIMUM_NAME_LENGTH | [32] | Maximum length of names that will be returned/passed via API calls.
 APPEND_OUTPUT | on/[off] | Append output to the end of an existing output database
 APPEND_OUTPUT_AFTER_STEP | {step} | Maximum step to read from an input db or a db being appended to (typically used with APPEND_OUTPUT)
 APPEND_OUTPUT_AFTER_TIME | {time} | Maximum time to read from an input db or a db being appended to (typically used with APPEND_OUTPUT)
 FILE_PER_STATE | on/[off] | Put the data for each output timestep into a separate file.
 CYCLE_COUNT | {cycle} | If using FILE_PER_STATE, use {cycle} different files and then overwrite. Otherwise, there will be a maximum of {cycle} time steps in the file. See below.
 OVERLAY_COUNT | {overlay} | If using FILE_PER_STATE, put {overlay} timesteps worth of data into each file before going to the next file. Otherwise, each output step in the file is overwritten {overlay} times. See below.
 ENABLE_DATAWARP | on/[off] | If the system supports Cray DataWarp (burst buffer), should it be used for buffering output files?
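
A sketch of requesting compressed output using these properties (same assumptions as the earlier examples; level 4 is simply the recommendation noted in the table):

```cpp
#include <Ioss_Property.h>
#include <Ioss_PropertyManager.h>

// Request zlib-compressed netcdf4 output for the results database.
void request_compressed_output(Ioss::PropertyManager &properties)
{
  properties.add(Ioss::Property("FILE_TYPE", "netcdf4"));
  properties.add(Ioss::Property("COMPRESSION_METHOD", "zlib"));
  properties.add(Ioss::Property("COMPRESSION_LEVEL", 4));
}
```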

### Cycle and Overlay Behavior
(Properties `CYCLE_COUNT`, `OVERLAY_COUNT`, and `FILE_PER_STATE`)

The overlay count specifies the number of output steps that will be
overlaid on top of the currently written step before advancing to the
next step on the database.

For example, if output is written every 0.1 seconds and the overlay count is
specified as 2, then IOSS will write time 0.1 to step 1 of the
database. It will then also write times 0.2 and 0.3 to step 1. It will
then increment the database step and write 0.4 to step 2 and overlay
0.5 and 0.6 on step 2. At the end of the analysis (assuming it runs
to completion), the database would have times 0.3, 0.6, 0.9,
... However, if there were a problem during the analysis, the last
step on the database would contain an intermediate step.

The cycle count specifies the number of restart steps that will be
written to the restart database before previously written steps are
overwritten. For example, if the cycle count is 5 and output is
written every 0.1 seconds, IOSS will write data at times 0.1, 0.2,
0.3, 0.4, 0.5 to the database. It will then overwrite the first step
with data from time 0.6 and the second step with time 0.7. At time 0.8, the
database would contain data at times 0.6, 0.7, 0.8, 0.4, 0.5. Note
that time will not necessarily be monotonically increasing on a
database that specifies the cycle count.

The cycle count and overlay count can both be used at the same time.
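
The step placement implied by the description above can be sketched as follows (an illustration derived from this description, not the library's actual implementation; output steps are numbered from 1):

```cpp
#include <cstdio>

// Each database step absorbs (overlay + 1) consecutive output steps, and
// the database wraps around after `cycle` steps.
int database_step(int output_step, int overlay, int cycle)
{
  int advancing_step = (output_step - 1) / (overlay + 1); // 0-based, before cycling
  return (advancing_step % cycle) + 1;
}

int main()
{
  // Overlay count 2 (large cycle): output steps 1-3 land in database step 1,
  // output step 4 starts database step 2.
  std::printf("%d %d %d\n", database_step(1, 2, 1000),
              database_step(3, 2, 1000), database_step(4, 2, 1000)); // 1 1 2

  // Cycle count 5, no overlay: output step 6 (time 0.6) overwrites database step 1.
  std::printf("%d %d\n", database_step(5, 0, 5), database_step(6, 0, 5)); // 5 1
  return 0;
}
```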

## Properties for the Heartbeat Output

 Property | Value | Description
 ----------|:--------:|------------
 FILE_FORMAT | [default], spyhis, csv, ts_csv, text, ts_text | Predefined formats for heartbeat output. The formats starting with `ts_` output timestamps.
 FLUSH_INTERVAL | int | Minimum time interval between flushing heartbeat data to disk. Default is 10 seconds. Set to 0 to flush every step (can hurt performance).
 HEARTBEAT_FLUSH_INTERVAL | int | Minimum time interval between flushing heartbeat data to disk. Default is 10 seconds. (Same as FLUSH_INTERVAL, but does not affect other database types.)
 TIME_STAMP_FORMAT | [%H:%M:%S] | Format used for the time stamp. See the strftime man page.
 SHOW_TIME_STAMP | on/off | Should each output line be preceded by the time stamp?
 FIELD_SEPARATOR | [, ] | Separator used between output fields.
 FULL_PRECISION | on/[off] | Output will contain as many digits as needed to fully represent each double's value. FIELD_WIDTH is ignored for doubles if this is specified.
 PRECISION | -1..16 [5] | Precision used for floating-point output. If set to `-1`, the output will contain as many digits as needed to fully represent each double's value. FIELD_WIDTH is ignored for doubles if precision is set to -1.
 FIELD_WIDTH | 0.. | Width of an output field. If 0, use the natural width.
 SHOW_LABELS | on/[off] | Should each field be preceded by its name (e.g., ke=1.3e9, ie=2.0e9)?
 SHOW_LEGEND | [on]/off | Should a legend be printed at the beginning of the output showing the field names for each column of data?
 SHOW_TIME_FIELD | on/[off] | Should the current analysis time be output as the first field?
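
A sketch of creating a heartbeat (history) output with some of these properties, assuming the "heartbeat" database type and `Ioss::WRITE_HEARTBEAT` usage are available through `Ioss::IOFactory` (the file name `history.csv` and the region name are illustrative):

```cpp
#include <Ioss_DBUsage.h>
#include <Ioss_DatabaseIO.h>
#include <Ioss_IOFactory.h>
#include <Ioss_ParallelUtils.h>
#include <Ioss_Property.h>
#include <Ioss_PropertyManager.h>
#include <Ioss_Region.h>

void create_heartbeat_output()
{
  Ioss::PropertyManager properties;
  properties.add(Ioss::Property("FILE_FORMAT", "csv"));
  properties.add(Ioss::Property("PRECISION", 10));
  properties.add(Ioss::Property("SHOW_TIME_FIELD", "on"));

  // Create the heartbeat database; the region takes ownership of the pointer.
  Ioss::DatabaseIO *db = Ioss::IOFactory::create(
      "heartbeat", "history.csv", Ioss::WRITE_HEARTBEAT,
      Ioss::ParallelUtils::comm_world(), properties);
  Ioss::Region region(db, "history_region");
}
```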

## Experimental / Special-Purpose Properties

Property | Value | Description
----------|:--------:|------------
MEMORY_READ | on/[off] | Experimental. Read the file into memory at open time and operate on it without disk accesses.
MEMORY_WRITE | on/[off] | Experimental. Open and read a file into memory, or create it in memory and optionally write it back out to disk when nc_close() is called.
ENABLE_FILE_GROUPS | on/[off] | Experimental. Open the database in netCDF-4 non-classic mode, which is required to support groups at the netCDF level.
MINIMAL_NEMESIS_INFO | on/[off] | Special case: omit all nemesis data except for the nodal communication map.
OMIT_EXODUS_NUM_MAPS | on/[off] | Special case: do not output the node and element numbering maps.
EXODUS_CALL_GET_ALL_TIMES | [on]/off | Special case: should the `ex_get_all_times()` function be called? See below.

* `EXODUS_CALL_GET_ALL_TIMES`: Typically only used in `isSerialParallel`
mode, where the client is responsible for making sure that the step times
are handled correctly. All databases will know the number of
timesteps, but if the `ex_get_all_times()` function call is skipped, the
times on that database will all be zero. The use case is that in `isSerialParallel`
mode, the `ex_get_all_times()` call is performed sequentially for each
file, so with hundreds to thousands of files the time for the calls is
additive; and since timesteps are record variables in netCDF, accessing
the data for all timesteps involves lseeks throughout the file.
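
In that situation, a client that manages step times itself could skip the query with a property like the following (illustrative sketch):

```cpp
#include <Ioss_Property.h>
#include <Ioss_PropertyManager.h>

// Skip the per-file ex_get_all_times() call; the client is then
// responsible for providing correct step times.
void skip_time_query(Ioss::PropertyManager &properties)
{
  properties.add(Ioss::Property("EXODUS_CALL_GET_ALL_TIMES", "off"));
}
```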

## Debugging / Profiling

 Property | Value | Description
 ----------|:--------:|------------
 LOGGING | on/[off] | enable/disable logging of field input/output
 ENABLE_TRACING | on/[off] | show memory and elapsed time during some IOSS calls (mainly decomp).
 DECOMP_SHOW_PROGRESS | on/[off] | use `ENABLE_TRACING`.
 DECOMP_SHOW_HWM | on/[off] | show high-water memory during autodecomp
 IOSS_TIME_FILE_OPEN_CLOSE | on/[off] | show elapsed time during parallel-io file open/close/create