<xsd:simpleType name="ResultEnumeration">
<xsd:annotation>
<xsd:documentation>Defines the acceptable result values for the evaluation of an OVAL Definition or an OVAL Test.</xsd:documentation>
</xsd:annotation>
<xsd:restriction base="xsd:string">
<xsd:enumeration value="true">
<xsd:annotation>
<xsd:documentation>When evaluating a definition or test, a result value of 'true' means that the characteristics being evaluated match the information represented in the system characteristic file.</xsd:documentation>
</xsd:annotation>
</xsd:enumeration>
<xsd:enumeration value="false">
<xsd:annotation>
<xsd:documentation>When evaluating a definition or test, a result value of 'false' means that the characteristics being evaluated do not match the information represented in the system characteristic file.</xsd:documentation>
</xsd:annotation>
</xsd:enumeration>
<xsd:enumeration value="unknown">
<xsd:annotation>
<xsd:documentation>When evaluating a definition or test, a result value of 'unknown' means that the characteristics being evaluated cannot be found in the system characteristic file (or that they can be found, but the collected object's flag is 'not collected'). For example, assume a definition that tests a file, but the system characteristic file contains no data pertaining to that file. The lack of an object (in the collected_object section) for this file in the SC file means that no attempt was made to collect information about the file, so it is not known what the result would have been had the information been collected. Note that finding a collected_object element in the system characteristic file is not the same as finding a matching item on the system. When evaluating an OVAL Test, the lack of a matching object on a system (for example, a file not being found) does not produce an 'unknown' result, since part of an OVAL Test is about existence; in this case the result would be 'false'.</xsd:documentation>
</xsd:annotation>
</xsd:enumeration>
<xsd:enumeration value="error">
<xsd:annotation>
<xsd:documentation>When evaluating a definition or test, a result value of 'error' means that the characteristics being evaluated exist in the system characteristic file, but an error occurred either while collecting the information or while performing analysis. For example, an API might return an error when trying to determine whether an object exists on a system. Another example: xsi:nil might be set on an object entity that is then compared to a state entity with a value, thus producing an error.</xsd:documentation>
</xsd:annotation>
</xsd:enumeration>
<xsd:enumeration value="not evaluated">
<xsd:annotation>
<xsd:documentation>When evaluating a definition or test, a result value of 'not evaluated' means that a choice was made not to evaluate the given definition or test. The actual result is in essence unknown: had evaluation occurred, it could have been either true or false.</xsd:documentation>
</xsd:annotation>
</xsd:enumeration>
<xsd:enumeration value="not applicable">
<xsd:annotation>
<xsd:documentation>When evaluating a definition or test, a result value of 'not applicable' means that the definition or test being evaluated is not valid on the given platform. One example is trying to collect Linux RPM information on a Windows system; another is trying to collect RPM information on a Linux system that does not have the RPM packaging system installed.</xsd:documentation>
</xsd:annotation>
</xsd:enumeration>
</xsd:restriction>
</xsd:simpleType>