Change the representation of numeric objects to conserve memory and to limit the number of times LabVIEW coerces data.
By default, the representation of a numeric constant automatically adapts to the value of the constant you enter. For example, the default representation of a numeric constant with a value of 1 is a 32-bit integer. If you change the value of the constant to 1.1, the representation of the constant changes to a double-precision, floating-point number.
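The following sketch is not LabVIEW code; it is a rough NumPy analogy for how a representation is inferred from an entered value, and how fixing a smaller representation uses less memory per element. The dtype names and the specific inferred integer type are assumptions of the analogy, not part of LabVIEW itself.

```python
import numpy as np

# Analogy for "adapt to entered data": NumPy infers a dtype from the
# value supplied, much as a LabVIEW numeric constant adapts its
# representation to the value you type.
whole = np.asarray(1)         # inferred as an integer dtype (e.g. int64)
fractional = np.asarray(1.1)  # inferred as float64 (analogous to DBL)
print(whole.dtype, fractional.dtype)

# Fixing the representation explicitly (like choosing Representation
# from the shortcut menu) keeps it from changing with the value, and a
# smaller type uses less memory per element.
fixed = np.asarray([1, 2, 3], dtype=np.int32)    # 4 bytes per element
dbl   = np.asarray([1, 2, 3], dtype=np.float64)  # 8 bytes per element
print(fixed.itemsize, dbl.itemsize)              # -> 4 8
```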
Complete the following steps to change the representation of a numeric object.
1. Right-click the numeric object.
2. Select Representation from the shortcut menu.
3. Select the data type you want the object to use.
Note: Slide and rotary controls and indicators cannot represent complex numbers.
If you change the representation of an object by using the shortcut menu, the object retains the representation you specified regardless of the value you enter.
Some functions, such as Divide, Sine, and Cosine, always produce floating-point output. If you wire integers to the inputs of these functions, the functions convert the integers to double-precision, floating-point numbers before they perform the calculation.
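As an illustrative analogy only (again in NumPy, not LabVIEW), the following sketch shows the same pattern: true division and trigonometric functions promote integer inputs to double-precision floating point before computing.

```python
import numpy as np

# Integer inputs are promoted to float64 before the calculation,
# analogous to LabVIEW converting wired integers to double-precision
# floating-point numbers for Divide, Sine, and Cosine.
a = np.int32(10)
b = np.int32(4)
quotient = np.divide(a, b)       # true division -> float64, value 2.5
angle = np.sin(np.int32(1))      # trig on an integer -> float64
print(quotient, quotient.dtype)  # 2.5 float64
print(angle, angle.dtype)        # 0.8414709848078965 float64
```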
To reset a constant so that its representation again adapts to the value you enter, right-click the constant and select Adapt To Entered Data from the shortcut menu.