SPEC SERT Result File Fields

SVN Revision: $Revision: 1990 $
Last updated: $Date: 2015-12-11 10:38:08 -0600 (Fri, 11 Dec 2015) $

ABSTRACT
(To check for possible updates to this document, please see http://www.spec.org/sert/docs/SERT-Result_File_Fields.html)
1. SPEC SERT
SPEC SERT is the next generation SPEC tool for evaluating the power and performance of server class computers.
1.1 Test harness - Chauffeur
The test harness, called Chauffeur, handles the logistics of measuring and recording power data, and controls the software installed on the SUT and on the controller system itself.
1.1.1 SERT Director
The director reads test parameters and environment description information from the SERT configuration files and controls the execution of the test based on this information. It is the central control instance of the SERT and communicates with the other software modules described below via the TCP/IP protocol. It also collects the result data from the worklet instances and stores it in the basic result file "results.xml".
1.1.2 SERT Host
This module is the main SERT module on the System Under Test (SUT). It must be launched manually by the tester, and it starts the client modules that execute the workloads under control of the director.
1.1.3 SERT Client
For every worklet, the host starts one or more client instances, each executing its own Java Virtual Machine (JVM). Each client executes worklet code to stress the SUT and, after finishing potentially multiple test phases, reports the performance data back to the director.
1.1.4 SERT Reporter
After a run is complete, the reporter gathers the configuration, environmental, power, and performance data from the "results.xml" file and compiles it into HTML and text or CSV result files. The director starts it automatically after all workloads have finished to create the default set of report files. Alternatively, it can be started manually to generate special report files from the information in the basic result file "results.xml".
1.1.5 SERT Graphical User Interface
The test package includes a Graphical User Interface (GUI) that facilitates configuration and setup of test runs, allows real-time monitoring of test runs, and supports reviewing the results. The SERT GUI leads the user through the steps of detecting or entering the hardware and software configuration, setting up a trial run or a valid test, displaying result reports, and other functions common to the testing environment.
1.2 Workloads
The SERT workloads will take advantage of different server capabilities by using various load patterns, which are intended to stress
all major components of a server uniformly. It is highly unlikely that a single workload can be designed which achieves this goal.
Therefore, the SERT workloads will consist of several different worklets, each stressing specific capabilities of a server. This
approach furthermore supports generating individual efficiency scores for the different server components.
1.3 The Power and Temperature Daemon
The Power and Temperature Daemon (PTDaemon) is a single executable program that communicates with a power analyzer or a temperature sensor via the server's native RS-232 port, a USB port, or additionally installed interface cards, e.g. GPIB. It reports the power consumption or temperature readings to the director via a TCP/IP socket connection. It supports the RS-232, GPIB, and USB command sets of a variety of power analyzers and temperature sensors. PTDaemon is the only SERT software module that is not Java based. Although it can easily be set up and run on a server other than the controller, section 2.9 of the SERT Run and Reporting Rules requires it to be run on the controller.
1.4 Result Validation and Report Generation
At the beginning of each run, the test configuration parameters are logged so that they are available for later conformance checks. Warnings are displayed for non-compliant properties and printed in the final report; the test will nevertheless run to completion, producing a report that is not valid for publication.
1.5 References
More detailed information can be found in the documents shown in the following table.
For the latest versions, please consult SPEC's website.
In this document, all references to configurable parameters or result file fields are printed in parentheses, using the names from the configuration and result files, in different colors: e.g. red for parameters from "test-environment.xml" (<TestInformation><TestSponsor>), light purple for parameters from "config-*.xml" or "*-configurations.xml" (<suite><definitions><launch-definition><num-clients>), and green for fields from "results.xml" (<TestEnvironment><configuration><suite><client-configuration id>); a schematic fragment is shown below. The following configuration files are delivered with the test kit:
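For illustration, a minimal sketch of how the first of these parameters might appear in "test-environment.xml". Only the <TestInformation> and <TestSponsor> element names are taken from this document; the surrounding structure and the example value are assumptions:

    <!-- hypothetical fragment of "test-environment.xml" -->
    <TestInformation>
      <!-- name of the organization or individual that sponsored the test -->
      <TestSponsor>Example Corp</TestSponsor>
    </TestInformation>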
2. Main Report File
This section gives an overview of the information and result fields in the main report file "results.html/.txt".
3. Top Bar
The top bar gives general information regarding this test run.
3.1 Test Sponsor
The name of the organization or individual that sponsored the test. Generally, this is the name of the license holder <TestSponsor>.
3.2 Software Availability
The date when all the software necessary to achieve this result is generally available <Software><Availability>.
3.3 Tested By
The name of the organization or individual that ran the test and submitted the result <TestedBy>.
3.4 Hardware/Firmware Availability
The date when all the hardware and related firmware modules necessary to achieve this result are generally available <Hardware><Availability>.
3.5 SPEC License #
The SPEC license number of the organization or individual that ran the test <SpecLicense>.
3.6 System Source
Single Supplier or Parts Built <SystemUnderTest><SystemSource>.
3.7 Test Location
The name of the city, state, and country where the test took place. If there are installations in multiple geographic locations, they must also be listed in this field <Location>.
4. Summary
The summary table presents the efficiency scores for the 4 workloads and the Idle power consumption.
These 5 values are intended for use in Energy Efficiency Regulatory Programs of government agencies around the world.
4.1 Workload Efficiency Score
The efficiency score for each workload is calculated from the efficiency scores of all its worklets as "Geometric Mean (Worklet Efficiency Scores)". Efficiency scores for the different workloads can be extremely dissimilar due to configuration varieties which may be favorable for some workloads only, e.g. additional DIMMs for the Memory workload or disk drives for the Storage workload. Typically, such changes would not perceivably influence the CPU workload score.
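Written as a formula (notation ours, not from the report files), for a workload consisting of n worklets:

    \[ \mathrm{Eff}_{\mathrm{workload}} = \Bigl( \prod_{i=1}^{n} \mathrm{Eff}_{\mathrm{worklet},i} \Bigr)^{1/n} \]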
4.2 Idle Watts
The average watts measured for the Idle worklet test interval; see also Watts at Lowest Load Level. By definition there is no performance value for the Idle worklet, and therefore no efficiency score can be calculated.
5. Worklet Summary
This section describes the main results for all worklets in a table and as a graph.
5.1 Result Chart
The result chart graphically displays the power, performance, and efficiency scores for the different worklets. Each worklet is presented on a separate row beginning with the worklet name on the left. The lines are printed with distinct background colors for the different workloads:
5.2 Result Table
The result table numerically displays the power, performance, and efficiency scores for the different worklets. Each worklet is presented on a separate row.
5.2.1 Workload
This column of the result table shows the names of the workloads. A workload may include one or more worklets.
5.2.2 Worklet
This column of the result table shows the names of the worklets to which the following values belong.
5.2.3 Normalized Peak Performance
In order to obtain performance values of the same order of magnitude from all worklets, the individual performance scores of each worklet (see Score) are divided by a fixed reference score. The reference score for each worklet was determined by taking the average performance score over several SERT test runs on a well-defined reference configuration under different operating systems.
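As a formula (notation ours):

    \[ \mathrm{NormalizedPeakPerformance} = \frac{\mathrm{PeakScore}}{\mathrm{ReferenceScore}} \]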
5.2.4 Watts at Lowest Load Level
This column of the result table shows the worklet power readings at the lowest load level. Please note that this does not correspond to "Idle", which is implemented as a separate workload in SERT and not included in each worklet.
5.2.5 Watts at Highest Load Level
This column of the result table shows the worklet power readings at the highest load level, or "100%" load.
5.2.6 ∑ Normalized Performance
In order to obtain performance values of the same order of magnitude from all worklets, the individual performance scores for each measurement interval of all worklets (see Score) are divided by a fixed reference score. The reference score for each worklet was determined by taking the average performance score over several SERT test runs on a well-defined reference configuration under different operating systems. A detailed description of the normalization process is given in chapter 6.1 of the SERT Design Document.
5.2.7 ∑ Power (Watts)
The sum of the average watts for all measurement intervals; see Average Active Power (W) in the "Worklet Performance and Power Details" section.
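As formulas for these two columns (notation ours), where j runs over all measurement intervals of a worklet and P_j is the average active power of interval j:

    \[ \sum \mathrm{NormalizedPerformance} = \sum_{j} \frac{\mathrm{Score}_{j}}{\mathrm{ReferenceScore}} \qquad \sum \mathrm{Power} = \sum_{j} P_{j} \]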
5.2.8 Efficiency Score
The efficiency score for each worklet is calculated as
1000 * "∑ Normalized Performance" / "∑ Power (Watts)"
For example (values invented for illustration), a worklet with ∑ Normalized Performance = 24.0 and ∑ Power = 3000 W would score 1000 * 24.0 / 3000 = 8.0. Efficiency for the Idle worklet is marked as not applicable (n/a) because the performance part is missing by definition. Please note that Idle power is NOT included in the per-worklet efficiency score calculation. Since SERT release 1.0.1 there are two categories of scores:
6. Aggregate SUT Data
This section reports aggregated values for several system configuration parameters.
6.1 # of Nodes
The total number of all nodes used for running the test. The reported values are calculated by the test software from the information given in the configuration files and the test startup script files.
6.2 # of Processors
The number of processor chips per set and the total number of all chips used for running the test. The reported values are calculated by the test software from the information given in the configuration files and the test startup script files.
6.3 Total Physical Memory
The amount of memory (GB) per set and the total memory size for all systems used to run the test. The reported values are calculated by the test software from the information given in the configuration files and the test startup script files.
6.4 # of Cores
The total number of all cores used for running the test. The reported value is calculated by the test software from the information given in the configuration files and the test startup script files.
6.5 # of Storage Devices
The total number of all storage devices used for running the test. The reported value is calculated by the test software from the information given in the configuration files and the test startup script files.
6.6 # of Threads
The total number of all hardware threads used for running the test. The reported value is calculated by the test software from the information given in the configuration files and the test startup script files.
7. System Under Test
The following section of the report file describes the hardware and the software of the System Under Test (SUT) used to produce the reported SERT results, with the level of detail required to reproduce this result.
7.1 Shared Hardware
A table including the description of the shared hardware components. This table is printed for multi-node results only and is omitted from single-node report files.
7.1.1 Enclosure
The model name identifying the enclosure housing the tested nodes <SystemUnderTest><SharedHardware><Enclosure>.
7.1.2 Form Factor
The full SUT form factor (including all nodes and any shared hardware).
<SystemUnderTest><SharedHardware><FormFactor>.
7.1.3 Server Blade Bays (populated / available)
This field is divided into 2 parts separated by a slash: the number of bays populated with server blades for this test, and the total number of blade bays available in the enclosure.
7.1.4 Additional Hardware
Any additional shared equipment added to improve performance and required to achieve the reported scores
<SharedHardware><Other><OtherHardware>.
7.1.5 Management Firmware Version
A version number or string identifying the management firmware running on the SUT enclosure, or "None" if no management controller was installed <SharedHardware><Firmware><Management><Version>.
7.1.6 Power Supply Quantity (active / populated / bays)
This field is divided into 3 parts separated by slashes: the number of power supply units active during this test, the number of units populated, and the total number of power supply bays in the enclosure.
7.1.7 Power Supply Details
The number and watts rating of this power supply unit (PSU) plus the supplier and the part number to identify it.
7.1.8 Power Supply Operating Mode
Power supply unit (PSU) operating mode active for running this test. Must be one of the available modes as described in the field
Available Power Supply Modes
<SharedHardware><PowerSupplies><OperatingMode>.
7.1.9 Available Power Supply Modes
The available power supply unit (PSU) modes depend on the capabilities of the tested server hardware and firmware
<SharedHardware><PowerSupplies><AvailableModes><Mode>.
7.1.10 Network Switches (active / populated / bays)
This field is divided into 3 parts separated by slashes: the number of network switches active during this test, the number of switches populated, and the total number of switch bays in the enclosure.
7.1.11 Network Switch
The number and a description of the network switch used for this test, including the manufacturer and the model number to identify it.
7.2 Hardware per Node
This section describes in detail the different hardware components of the system under test which are important for achieving the reported result.
7.2.1 Hardware Vendor
The company which sells the hardware <SystemUnderTest><Node><Hardware><Vendor>.
7.2.2 Model
The model name identifying the system under test
<SystemUnderTest><Node><Hardware><Model>
7.2.3 Form Factor
The form factor for this system
<SystemUnderTest><Node><Hardware><FormFactor>.
7.2.4 CPU Name
The formal processor name, as determined by the manufacturer
<SystemUnderTest><Node><Hardware><CPU><Name>
7.2.5 CPU Frequency (MHz)
The nominal (marked) clock frequency of the CPU, expressed in megahertz.
<SystemUnderTest><Node><Hardware><CPU><FrequencyMHz>.
7.2.6 Number of CPU Sockets (available / populated)
This field is divided into 2 parts separated by a slash.
The first part gives the number of available CPU sockets and the second part the number of sockets populated with a CPU chip as used for this SERT result.
7.2.7 CPU(s) Enabled
The CPUs that were enabled and active during the test run, displayed as the number of cores <SystemUnderTest><Node><Hardware><CPU><Cores>, the number of processors <SystemUnderTest><Node><Hardware><CPU><PopulatedSockets>, and the number of cores per processor <SystemUnderTest><Node><Hardware><CPU><CoresPerChip>.
7.2.8 Number of NUMA Nodes
The number of Non-Uniform Memory Access (NUMA) nodes used for this SERT test
<SystemUnderTest><Node><Hardware><NumaNodes>.
7.2.9 Hardware Threads / Core
The total number of active hardware threads for this SERT test, with the number of hardware threads per core given in parentheses <SystemUnderTest><Node><Hardware><CPU><HardwareThreadsPerCore>.
7.2.10 Primary Cache
Description (size and organization) of the CPU's primary cache. This cache is also referred to as "L1 cache" <SystemUnderTest><Node><Hardware><CPU><Cache><Primary>.
7.2.11 Secondary Cache
Description (size and organization) of the CPU's secondary cache. This cache is also referred to as "L2 cache" <SystemUnderTest><Node><Hardware><CPU><Cache><Secondary>.
7.2.12 Tertiary Cache
Description (size and organization) of the CPU's tertiary, or "L3", cache <SystemUnderTest><Node><Hardware><CPU><Cache><Tertiary>.
7.2.13 Other Cache
Description (size and organization) of any other levels of cache memory <SystemUnderTest><Node><Hardware><CPU><Cache><Other>.
7.2.14 Additional CPU Characteristics
Additional technical characteristics to help identify the processor <SystemUnderTest><Node><Hardware><CPU><OtherCharacteristics>.
7.2.15 Total Memory Available to OS
Total memory capacity in GB available to the operating system for task processing. This number is typically slightly lower than the amount of configured physical memory. It is determined automatically by the SERT discovery tools.
7.2.16 Total Memory Amount (populated / maximum)
This field is divided into 2 parts separated by a slash. The first part describes the amount of installed physical memory in GB as used for this SERT test <SystemUnderTest><Node><Hardware><Memory><SizeMB>. The second number gives the maximum possible memory capacity in GB if all memory slots are populated with the highest-capacity DIMMs available for the SUT <SystemUnderTest><Node><Hardware><Memory><MaximumSizeMB>.
7.2.17 Total Memory Slots (populated / available)
This field is divided into 2 parts separated by a slash. The first part describes the number of memory slots populated with a memory module as used for this SERT test <SystemUnderTest><Node><Hardware><Memory><Dimms><Quantity>. The second part shows the total number of available memory slots in the SUT <SystemUnderTest><Node><Hardware><Memory><AvailableSlots>.
7.2.18 Memory DIMMs
Detailed description of the system main memory technology, sufficient for identifying the memory used in this test.
DDR4 example: 8 x 16 GB 2Rx4 PC4-2133P-R; slots 1 - 8 populated
DDR3 format: N x gg ss eRxff PChv-wwwwwm-aa, ECC CLa; slots k, ... l populated
DDR3 example: 8 x 8 GB 2Rx4 PC3L-12800R-11, ECC CL10; slots 1 - 8 populated
Reading the DDR4 example: 8 DIMMs of 16 GB each, dual-rank (2R) modules built from x4 DRAM devices, DDR4-2133 registered memory, populated in slots 1 through 8.
7.2.19 Memory Operating Mode
Description of the memory operating mode. Examples of possible values are:
7.2.20 Power Supply Quantity (active / populated / bays)
This field is divided into 3 parts separated by slashes: the number of power supply units active during this test, the number of units populated, and the total number of power supply bays in this node.
7.2.21 Power Supply Details
The number and watts rating of this power supply unit (PSU) plus the supplier name and the order number to identify it.
7.2.22 Power Supply Operating Mode
The operating mode active for running this test. Must be one of the available modes as described in the field Available Power Supply Modes <SystemUnderTest><Node><Hardware><PowerSupplies><OperatingMode>.
7.2.23 Available Power Supply Modes
The available power supply unit (PSU) modes depend on the capabilities of the tested server hardware and firmware
<SystemUnderTest><Node><Hardware><PowerSupplies><AvailableModes>.
7.2.24 Disk Drive Bays (populated / available)
This field is divided into 2 parts separated by a slash: the number of bays populated with disk drives for this test, and the total number of disk drive bays available in this node.
7.2.25 Disk Drive
This field contains four rows. In case of heterogeneous multi-disk configurations there may be several instances of this field.
7.2.26 Network Interface Cards
This field contains three rows. In case of heterogeneous configurations with different Network Interface Cards (NICs) there may be several instances of this field.
7.2.27 Management Controller or Service Processor
Specifies whether any management controller was configured in the SUT
7.2.28 Expansion Slots (populated / available)
This field is divided into 2 parts separated by a slash: the number of expansion slots populated for this test, and the total number of expansion slots available in this node.
Potentially there can be multiple lines in this field if different types of expansion slots are available, one for each slot type.
7.2.29 Optical Drives
Specifies whether any optical drives were configured in the SUT <SystemUnderTest><Node><Hardware><OpticalDrives>.
7.2.30 Keyboard
The type of keyboard (USB, PS2, KVM, or None) used <SystemUnderTest><Node><Hardware><Keyboard>.
7.2.31 Mouse
The type of mouse (USB, PS2, KVM, or None) used <SystemUnderTest><Node><Hardware><Mouse>.
7.2.32 Monitor
Specifies if a monitor was used for the test and how it was connected (directly or via KVM) <SystemUnderTest><Node><Hardware><Monitor>.
7.2.33 Additional Hardware
Number and description of any additional equipment added to improve performance and required to achieve the reported scores
7.3 Software per Node
This section describes in detail the various software components installed on the system under test which are critical for achieving the reported result, together with their configuration parameters.
7.3.1 Power Management
This field shows whether power management features of the SUT were enabled or disabled
7.3.2 Operating System (OS)
Operating system vendor and name
7.3.3 OS Version
The operating system version. For Unix-based operating systems, the detailed kernel number must be given here.
If there are patches applied that affect performance and / or power, they must be disclosed in the
System Under Test Notes
<SystemUnderTest><Node><Software><OperatingSystem><Version>.
7.3.4 File System
The type of the filesystem containing the operating system files and directories and the test files for the storage worklets <SystemUnderTest><Node><Software><OperatingSystem><FileSystem>.
7.3.5 Additional Software
Any performance- and/or power-relevant software used and required to reproduce the reported scores, including third-party libraries, accelerators, etc. <SystemUnderTest><Node><Software><Other><OtherSoftware>.
7.3.6 Boot Firmware Version
A version number or string identifying the boot firmware installed on the SUT <SystemUnderTest><Node><Firmware><Boot><Version>.
7.3.7 Management Firmware Version
A version number or string identifying the management firmware running on the SUT, or "None" if no management controller was installed <SystemUnderTest><Node><Firmware><Management><Version>.
7.3.8 JVM Vendor
The company that makes the JVM software <SystemUnderTest><Node><JVM><Vendor>.
7.3.9 JVM Version
Name and version of the JVM software product, as displayed by the "java -version" or "java -fullversion" commands,
<SystemUnderTest><Node><JVM><Version>.
7.3.10 Client Configuration ID (formerly SERT Client Configuration)
Beginning with SERT V1.0.1 this field shows the label of the client configuration element from the "client-configurations-NNN.xml"
file specifying the predefined set of JVM options and number of clients to be used for running the tests.
8. System Under Test Notes
A free-text description of the tuning applied to the SUT to achieve these results. Additional hardware information not covered by the fields above can also be given here <SystemUnderTest><Node><Notes>.
9. Aggregate Electrical and Environmental Data
The following section displays more details of the electrical and environmental data collected during the different target loads, including data not used to calculate the test result. For further explanation of the measured values, see the "SPECpower Methodology" document (SPECpower-Power_and_Performance_Methodology.pdf).
9.1 Line Standard
Description of the line standard for the mains AC power as provided by the local utility company and used to power the SUT. The standard voltage and frequency are printed in this field, followed by the number of phases and wires used to connect the SUT to the AC power line.
9.2 Elevation (m)
Elevation of the location where the test was run. This information is provided by the tester <SystemUnderTest><TestInformation><ElevationMeters>.
9.3 Minimum Temperature (°C)
The minimum temperature measured by the temperature sensor across all target load levels.
10. Details Report File
The details report file "results-details.html/.txt" is created together with the standard report file at the end of each successful SERT run. In addition to the information in the standard report file described above, it includes more detailed performance and power result values for each worklet separately.
11. Measurement Devices
This report section is available in the Details Report File
"results-details.html/.txt" only.
It shows the details of the different measurement devices used for this test run.
11.1 Power Analyzer "Name"The following table includes information about the power analyzer indentified by "Name" and used to measure the electrical data. 11.1.1 Hardware VendorCompany which manufactures and/or sells the power analyzer <SystemUnderTest><MeasurementDevices><PowerAnalyzer><HardwareVendor>. 11.1.2 ModelThe model name of the power analyzer type used for this test run <SystemUnderTest><MeasurementDevices><PowerAnalyzer><Model>. 11.1.3 Serial NumberThe serial number uniquely identifying the power analyzer used for this test run <SystemUnderTest><MeasurementDevices><PowerAnalyzer><SerialNumber>. 11.1.4 ConnectivityWhich interface was used to connect the power analyzer to the PTDaemon host system and to read the power data, e.g. RS-232 (serial port), USB, GPIB etc. <SystemUnderTest><MeasurementDevices><PowerAnalyzer><Connectivity>. 11.1.5 Input ConnectionInput connection used to connect the load, if several options are available, or "Default" if not <SystemUnderTest><MeasurementDevices><PowerAnalyzer><InputConnection>. 11.1.6 Metrology Institute
Name of the national metrology institute, which specifies the calibration standards for power analyzers, appropriate for the
Test Location reported in the result files.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><CalibrationInstitute>.
A list of national metrology institutes for many countries is maintained by NIST at http://gsi.nist.gov/global/index.cfm.
11.1.7 Calibration Laboratory
Name of the organization that performed the power analyzer calibration according to the standards defined by the national metrology institute. This could be the analyzer manufacturer, a third-party company, or an organization within your own company <SystemUnderTest><MeasurementDevices><PowerAnalyzer><AccreditedBy>.
11.1.8 Calibration Label
A number or character string which uniquely identifies this meter calibration event. It may appear on the calibration certificate or on a sticker applied to the power analyzer. The format of this number is specified by the organization performing the calibration <SystemUnderTest><MeasurementDevices><PowerAnalyzer><CalibrationLabel>.
11.1.9 Date of Calibration
The date (yyyy-mm-dd) the calibration certificate was issued, from the calibration label or the calibration certificate <SystemUnderTest><MeasurementDevices><PowerAnalyzer><DateOfCalibration>.
11.1.10 PTDaemon Version
The version of the power daemon program reading the analyzer data, including CRC information to verify that the released version was running unchanged. This information is provided automatically by the test software.
11.1.11 Setup Description
A free-format textual description of the device or devices measured by this power analyzer and the accompanying PTDaemon instance, e.g. "SUT Power Supplies 1 and 2" <SystemUnderTest><MeasurementDevices><PowerAnalyzer><SetupDescription>.
11.2 Temperature Sensor
The following table includes information about the temperature sensor used to measure the ambient temperature of the test environment.
11.2.1 Hardware Vendor
The company which manufactures and/or sells the temperature sensor <SystemUnderTest><MeasurementDevices><TemperatureSensor><HardwareVendor>.
11.2.2 Model
The manufacturer and model name of the temperature sensor used for this test run <SystemUnderTest><MeasurementDevices><TemperatureSensor><Model>.
11.2.3 Driver Version
The version number of the operating system driver used to control and read the temperature sensor <SystemUnderTest><MeasurementDevices><TemperatureSensor><DriverVersion>.
11.2.4 Connectivity
The interface used to read the temperature data from the sensor, e.g. RS-232 (serial port), USB, etc. <SystemUnderTest><MeasurementDevices><TemperatureSensor><Connectivity>.
11.2.5 PTDaemon Version
The version of the power daemon program reading the sensor data, including CRC information to verify that the released version was running unchanged. This information is provided automatically by the test software.
11.2.6 Sensor Placement
A free-format textual description of the device or devices measured and the approximate location of this temperature sensor, e.g. "50 mm in front of SUT main airflow intake" <SystemUnderTest><MeasurementDevices><TemperatureSensor><SetupDescription>.
12. Worklet Performance and Power Details
This report section is available in the Details Report File "results-details.html/.txt" only. It is divided into separate segments for all worklets, each starting with a title bar showing the workload and worklet names <workload name>: <worklet name>. Each segment includes Performance and Power Data tables together with some details about the client JVMs for the corresponding worklet.
12.1 Total Clients
Total number of client JVMs started on the System Under Test for this worklet.
12.2 CPU Threads per Client
The number of hardware threads each instance of the client JVM is affinitized to.
12.3 Sample Client Command-line
The complete command line for one of the client JVMs used to run this worklet, including the affinity specification, the Java classpath, the JVM tuning flags, and additional SERT parameters. The affinity mask, the "Client N of M" string, and the "-jvmid N" parameter printed here are valid for one specific instance of the client JVM only. The other client JVMs use their own associated affinity masks, strings, and parameters, but share the rest of the command line; see the schematic sketch below.
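For illustration only, a schematic client invocation on a Linux host; this is a sketch, not the literal SERT command line. Only the "Client N of M" string and the "-jvmid N" parameter are taken from this document; the affinity tool, heap size, classpath, and main class are placeholders that will differ in a real run:

    taskset -c 0-3 java -Xmx2048m -cp <chauffeur classpath> <client main class> "Client 1 of 4" -jvmid 1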
12.4 Performance Data
This table displays detailed performance information for a worklet. The information is presented on separate rows per Phase, Interval, and Transaction where applicable.
12.4.1 Phase
This column of the performance data table shows the phase names for the performance values presented in the following
columns of the rows belonging to this phase.
12.4.2 Interval
This column of the performance data table shows the interval names for the performance values presented in the following
columns of the rows belonging to this interval.
12.4.3 Actual Load
The "Actual Load" is calculated by dividing the interval "Score" by the "Calibration Result". This value is shown for the measurement intervals only. It can be compared against the target load level as defined by the "Interval" name.
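As a formula (notation ours):

    \[ \mathrm{ActualLoad} = \frac{\mathrm{Score}_{\mathrm{interval}}}{\mathrm{CalibrationResult}} \]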
12.4.4 Score
The fields in this column show the worklet-specific score for each interval, which is calculated by dividing the sum of all "Transaction Count" values for this interval by the "Elapsed Measurement Time (s)".
12.4.5 Host CV
This field was introduced in SERT V1.1.0. It reports the coefficient of variation (cv) of the scores of the individual hosts for each interval, i.e. the ratio of the standard deviation σ to the mean μ of the per-host scores:
cv = σ ⁄ μ
12.4.6 Client CV
This field was introduced in SERT V1.1.0. It reports the coefficient of variation (cv) of the scores of the individual client JVMs for each interval, i.e. the ratio of the standard deviation σ to the mean μ of the per-client scores:
cv = σ ⁄ μ
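For both CV fields, assuming the usual population definitions of mean and standard deviation over the N per-host or per-client scores x_i:

    \[ \mu = \frac{1}{N}\sum_{i=1}^{N} x_i \qquad \sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2} \qquad cv = \frac{\sigma}{\mu} \]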
12.4.7 Elapsed Measurement Time (s)
The time spent during this interval executing transactions. This is the time used for calculating the "Score" for this interval.
12.4.8 Transaction
The name of the transaction(s) related to the following "Transaction Count" and "Transaction Time" values.
12.4.9 Transaction Count
The number of successfully completed transactions defined in column "Transaction" during the interval given in column "Interval".
12.4.10 Transaction Time (s)
The total elapsed (wall clock) time spent executing this transaction during this interval.
It only includes the actual execution time, and not input generation time. Since multiple transactions execute concurrently in different threads,
this time may be longer than the length of the interval.
12.5 Power Data
This table displays detailed power information for a worklet.
The information is presented on separate rows per Phase and Interval.
12.5.1 Phase
This column of the power data table gives the phase names for the power values presented in the following
fields of the rows belonging to this phase.
12.5.2 Interval
This column of the power data table gives the interval names for the power values presented in the following
fields of the rows belonging to this interval.
12.5.3 Analyzer
The name identifying the power analyzer whose power readings are displayed in this table. More details regarding this power analyzer are given in the Power Analyzer table(s) in the "Measurement Devices" section above.
12.5.4 Average Voltage (V)
The average voltage in V for each interval as reported by the PTDaemon instance connected to this power analyzer.
12.5.5 Average Current (A)
The average current in A for each interval as reported by the PTDaemon instance connected to this power analyzer.
12.5.6 Current Range Setting
The current range for each test phase as configured in the power analyzer.
Typically range settings are read by PTDaemon directly from the power analyzer.
12.5.7 Average Power Factor
The average power factor for each interval as reported by the PTDaemon instance connected to this power analyzer.
12.5.8 Average Active Power (W)
The average active power in watts for each interval as reported by the PTDaemon instance connected to this power analyzer.
12.5.9 Power Measurement Uncertainty (%)
The average uncertainty of the reported power readings for each test phase as calculated by PTDaemon based on the range settings.
The value must be within the 1% limit defined in section "1.20.1 Power Analyzer Requirements" of the
SERT-Run_and_Reporting_Rules.pdf document.
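PTDaemon derives this value from the analyzer's accuracy specification together with the active voltage and current range settings. As an illustrative sketch only (the coefficients and the exact formula are analyzer-specific and are assumptions here), an accuracy specification of the form ±(reading term + range term) yields, for a measured power P:

    \[ u\,[\%] = 100 \cdot \frac{c_{\mathrm{rdg}} \cdot P + c_{\mathrm{rng}} \cdot V_{\mathrm{range}} \cdot I_{\mathrm{range}}}{P} \]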
12.5.10 Minimum Temperature (°C)
The minimum ambient temperature for each interval as measured by the temperature sensor. All values are measured in ten-second intervals, evaluated by the PTDaemon, and reported to the test harness at the end of each interval.

Copyright © 2006 - 2016 Standard Performance Evaluation Corporation