mirror of https://github.com/ClusterCockpit/cc-metric-collector.git

Add likwid collector
31  collectors/likwid/groups/haswell/BRANCH.txt  Normal file
@@ -0,0 +1,31 @@
SHORT Branch prediction miss rate/ratio

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  BR_INST_RETIRED_ALL_BRANCHES
PMC1  BR_MISP_RETIRED_ALL_BRANCHES

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Branch rate  PMC0/FIXC0
Branch misprediction rate  PMC1/FIXC0
Branch misprediction ratio  PMC1/PMC0
Instructions per branch  FIXC0/PMC0

LONG
Formulas:
Branch rate = BR_INST_RETIRED_ALL_BRANCHES/INSTR_RETIRED_ANY
Branch misprediction rate = BR_MISP_RETIRED_ALL_BRANCHES/INSTR_RETIRED_ANY
Branch misprediction ratio = BR_MISP_RETIRED_ALL_BRANCHES/BR_INST_RETIRED_ALL_BRANCHES
Instructions per branch = INSTR_RETIRED_ANY/BR_INST_RETIRED_ALL_BRANCHES
-
The rates state how often, on average, a branch or a mispredicted branch occurred
per retired instruction. The branch misprediction ratio directly relates the
mispredicted branches to all branch instructions, i.e. which fraction of all
branch instructions was mispredicted. Instructions per branch is 1/branch rate.
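As a quick illustration of how the derived BRANCH metrics follow from the raw event counts, here is a minimal Go sketch (Go being the language of this collector repository). The counter values are made up for the example, not measured data, and this is not how the collector itself evaluates the group file.

package main

import "fmt"

func main() {
	// Hypothetical raw counts for one measurement interval (illustrative only).
	instrRetired := 1.0e9 // FIXC0: INSTR_RETIRED_ANY
	branches := 2.0e8     // PMC0: BR_INST_RETIRED_ALL_BRANCHES
	mispredicted := 4.0e6 // PMC1: BR_MISP_RETIRED_ALL_BRANCHES

	fmt.Printf("Branch rate:                %.4f\n", branches/instrRetired)
	fmt.Printf("Branch misprediction rate:  %.5f\n", mispredicted/instrRetired)
	fmt.Printf("Branch misprediction ratio: %.4f\n", mispredicted/branches)
	fmt.Printf("Instructions per branch:    %.1f\n", instrRetired/branches)
}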
71  collectors/likwid/groups/haswell/CACHES.txt  Normal file
@@ -0,0 +1,71 @@
SHORT Cache bandwidth in MBytes/s

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  L1D_REPLACEMENT
PMC1  L1D_M_EVICT
PMC2  L2_LINES_IN_ALL
PMC3  L2_TRANS_L2_WB
CBOX0C0 CACHE_LOOKUP_READ_MESI
CBOX1C0 CACHE_LOOKUP_READ_MESI
CBOX2C0 CACHE_LOOKUP_READ_MESI
CBOX3C0 CACHE_LOOKUP_READ_MESI
CBOX0C1 CACHE_LOOKUP_WRITE_MESI
CBOX1C1 CACHE_LOOKUP_WRITE_MESI
CBOX2C1 CACHE_LOOKUP_WRITE_MESI
CBOX3C1 CACHE_LOOKUP_WRITE_MESI

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
L2 to L1 load bandwidth [MBytes/s]  1.0E-06*PMC0*64.0/time
L2 to L1 load data volume [GBytes]  1.0E-09*PMC0*64.0
L1 to L2 evict bandwidth [MBytes/s]  1.0E-06*PMC1*64.0/time
L1 to L2 evict data volume [GBytes]  1.0E-09*PMC1*64.0
L1 to/from L2 bandwidth [MBytes/s]  1.0E-06*(PMC0+PMC1)*64.0/time
L1 to/from L2 data volume [GBytes]  1.0E-09*(PMC0+PMC1)*64.0
L3 to L2 load bandwidth [MBytes/s]  1.0E-06*PMC2*64.0/time
L3 to L2 load data volume [GBytes]  1.0E-09*PMC2*64.0
L2 to L3 evict bandwidth [MBytes/s]  1.0E-06*PMC3*64.0/time
L2 to L3 evict data volume [GBytes]  1.0E-09*PMC3*64.0
L2 to/from L3 bandwidth [MBytes/s]  1.0E-06*(PMC2+PMC3)*64.0/time
L2 to/from L3 data volume [GBytes]  1.0E-09*(PMC2+PMC3)*64.0
System to L3 bandwidth [MBytes/s]  1.0E-06*(CBOX0C0+CBOX1C0+CBOX2C0+CBOX3C0)*64.0/time
System to L3 data volume [GBytes]  1.0E-09*(CBOX0C0+CBOX1C0+CBOX2C0+CBOX3C0)*64.0
L3 to system bandwidth [MBytes/s]  1.0E-06*(CBOX0C1+CBOX1C1+CBOX2C1+CBOX3C1)*64/time
L3 to system data volume [GBytes]  1.0E-09*(CBOX0C1+CBOX1C1+CBOX2C1+CBOX3C1)*64
L3 to/from system bandwidth [MBytes/s]  1.0E-06*(CBOX0C0+CBOX1C0+CBOX2C0+CBOX3C0+CBOX0C1+CBOX1C1+CBOX2C1+CBOX3C1)*64.0/time
L3 to/from system data volume [GBytes]  1.0E-09*(CBOX0C0+CBOX1C0+CBOX2C0+CBOX3C0+CBOX0C1+CBOX1C1+CBOX2C1+CBOX3C1)*64.0

LONG
Formulas:
L2 to L1 load bandwidth [MBytes/s] = 1.0E-06*L1D_REPLACEMENT*64/time
L2 to L1 load data volume [GBytes] = 1.0E-09*L1D_REPLACEMENT*64
L1 to L2 evict bandwidth [MBytes/s] = 1.0E-06*L1D_M_EVICT*64/time
L1 to L2 evict data volume [GBytes] = 1.0E-09*L1D_M_EVICT*64
L1 to/from L2 bandwidth [MBytes/s] = 1.0E-06*(L1D_REPLACEMENT+L1D_M_EVICT)*64/time
L1 to/from L2 data volume [GBytes] = 1.0E-09*(L1D_REPLACEMENT+L1D_M_EVICT)*64
L3 to L2 load bandwidth [MBytes/s] = 1.0E-06*L2_LINES_IN_ALL*64/time
L3 to L2 load data volume [GBytes] = 1.0E-09*L2_LINES_IN_ALL*64
L2 to L3 evict bandwidth [MBytes/s] = 1.0E-06*L2_TRANS_L2_WB*64/time
L2 to L3 evict data volume [GBytes] = 1.0E-09*L2_TRANS_L2_WB*64
L2 to/from L3 bandwidth [MBytes/s] = 1.0E-06*(L2_LINES_IN_ALL+L2_TRANS_L2_WB)*64/time
L2 to/from L3 data volume [GBytes] = 1.0E-09*(L2_LINES_IN_ALL+L2_TRANS_L2_WB)*64
System to L3 bandwidth [MBytes/s] = 1.0E-06*(SUM(CACHE_LOOKUP_READ_MESI))*64/time
System to L3 data volume [GBytes] = 1.0E-09*(SUM(CACHE_LOOKUP_READ_MESI))*64
L3 to system bandwidth [MBytes/s] = 1.0E-06*(SUM(CACHE_LOOKUP_WRITE_MESI))*64/time
L3 to system data volume [GBytes] = 1.0E-09*(SUM(CACHE_LOOKUP_WRITE_MESI))*64
L3 to/from system bandwidth [MBytes/s] = 1.0E-06*(SUM(CACHE_LOOKUP_READ_MESI)+SUM(CACHE_LOOKUP_WRITE_MESI))*64/time
L3 to/from system data volume [GBytes] = 1.0E-09*(SUM(CACHE_LOOKUP_READ_MESI)+SUM(CACHE_LOOKUP_WRITE_MESI))*64
-
Group to measure cache transfers between L1 and memory. Please note that the
L3 to/from system metrics contain any traffic to the system (memory,
Intel QPI, etc.), but they do not seem to capture all of it, because the memory
read bandwidth and the L3 to L2 bandwidth are commonly higher than the measured
system to L3 bandwidth.
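All bandwidth metrics in this group follow the same pattern: each counted event corresponds to one 64-byte cache line, so bandwidth is count * 64 bytes divided by the measurement time, scaled to MBytes/s (1.0E-06) or GBytes (1.0E-09). A minimal Go sketch with made-up, illustrative counts (not measured data):

package main

import "fmt"

func main() {
	// Illustrative values: one event = one 64-byte cache line transferred.
	l1dReplacement := 5.0e8 // PMC0: lines loaded from L2 into L1
	l1dEvict := 2.0e8       // PMC1: modified lines written back from L1 to L2
	time := 2.0             // measurement interval in seconds

	loadBW := 1.0e-06 * l1dReplacement * 64.0 / time       // MBytes/s
	evictBW := 1.0e-06 * l1dEvict * 64.0 / time            // MBytes/s
	volume := 1.0e-09 * (l1dReplacement + l1dEvict) * 64.0 // GBytes

	fmt.Printf("L2 to L1 load bandwidth:   %.1f MBytes/s\n", loadBW)
	fmt.Printf("L1 to L2 evict bandwidth:  %.1f MBytes/s\n", evictBW)
	fmt.Printf("L1 to/from L2 data volume: %.2f GBytes\n", volume)
}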
26  collectors/likwid/groups/haswell/CLOCK.txt  Normal file
@@ -0,0 +1,26 @@
SHORT Power and Energy consumption

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PWR0  PWR_PKG_ENERGY
UBOXFIX UNCORE_CLOCK

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
Uncore Clock [MHz]  1.E-06*UBOXFIX/time
CPI  FIXC1/FIXC0
Energy [J]  PWR0
Power [W]  PWR0/time

LONG
Formulas:
Power = PWR_PKG_ENERGY / time
Uncore Clock [MHz] = 1.E-06 * UNCORE_CLOCK / time
-
Haswell implements the new RAPL interface, which enables monitoring of the
consumed energy on the package (socket) level.
38  collectors/likwid/groups/haswell/CYCLE_ACTIVITY.txt  Normal file
@@ -0,0 +1,38 @@
SHORT Cycle Activities

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  CYCLE_ACTIVITY_CYCLES_L2_PENDING
PMC1  CYCLE_ACTIVITY_CYCLES_LDM_PENDING
PMC2  CYCLE_ACTIVITY_CYCLES_L1D_PENDING
PMC3  CYCLE_ACTIVITY_CYCLES_NO_EXECUTE

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Cycles without execution [%]  (PMC3/FIXC1)*100
Cycles without execution due to L1D [%]  (PMC2/FIXC1)*100
Cycles without execution due to L2 [%]  (PMC0/FIXC1)*100
Cycles without execution due to memory loads [%]  (PMC1/FIXC1)*100

LONG
Formulas:
Cycles without execution [%] = CYCLE_ACTIVITY_CYCLES_NO_EXECUTE/CPU_CLK_UNHALTED_CORE*100
Cycles with stalls due to L1D [%] = CYCLE_ACTIVITY_CYCLES_L1D_PENDING/CPU_CLK_UNHALTED_CORE*100
Cycles with stalls due to L2 [%] = CYCLE_ACTIVITY_CYCLES_L2_PENDING/CPU_CLK_UNHALTED_CORE*100
Cycles without execution due to memory loads [%] = CYCLE_ACTIVITY_CYCLES_LDM_PENDING/CPU_CLK_UNHALTED_CORE*100
--
This performance group measures the cycles while waiting for data from the cache
and memory hierarchy.
CYCLE_ACTIVITY_CYCLES_NO_EXECUTE: Counts number of cycles nothing is executed on
any execution port.
CYCLE_ACTIVITY_CYCLES_L1D_PENDING: Cycles while L1 cache miss demand load is
outstanding.
CYCLE_ACTIVITY_CYCLES_L2_PENDING: Cycles while L2 cache miss demand load is
outstanding.
CYCLE_ACTIVITY_CYCLES_LDM_PENDING: Cycles while memory subsystem has an
outstanding load.
45  collectors/likwid/groups/haswell/CYCLE_STALLS.txt  Normal file
@@ -0,0 +1,45 @@
SHORT Cycle Activities (Stalls)

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  CYCLE_ACTIVITY_STALLS_L2_PENDING
PMC1  CYCLE_ACTIVITY_STALLS_LDM_PENDING
PMC2  CYCLE_ACTIVITY_STALLS_L1D_PENDING
PMC3  CYCLE_ACTIVITY_STALLS_TOTAL

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Total execution stalls  PMC3
Stalls caused by L1D misses [%]  (PMC2/PMC3)*100
Stalls caused by L2 misses [%]  (PMC0/PMC3)*100
Stalls caused by memory loads [%]  (PMC1/PMC3)*100
Execution stall rate [%]  (PMC3/FIXC1)*100
Stalls caused by L1D misses rate [%]  (PMC2/FIXC1)*100
Stalls caused by L2 misses rate [%]  (PMC0/FIXC1)*100
Stalls caused by memory loads rate [%]  (PMC1/FIXC1)*100

LONG
Formulas:
Total execution stalls = CYCLE_ACTIVITY_STALLS_TOTAL
Stalls caused by L1D misses [%] = (CYCLE_ACTIVITY_STALLS_L1D_PENDING/CYCLE_ACTIVITY_STALLS_TOTAL)*100
Stalls caused by L2 misses [%] = (CYCLE_ACTIVITY_STALLS_L2_PENDING/CYCLE_ACTIVITY_STALLS_TOTAL)*100
Stalls caused by memory loads [%] = (CYCLE_ACTIVITY_STALLS_LDM_PENDING/CYCLE_ACTIVITY_STALLS_TOTAL)*100
Execution stall rate [%] = (CYCLE_ACTIVITY_STALLS_TOTAL/CPU_CLK_UNHALTED_CORE)*100
Stalls caused by L1D misses rate [%] = (CYCLE_ACTIVITY_STALLS_L1D_PENDING/CPU_CLK_UNHALTED_CORE)*100
Stalls caused by L2 misses rate [%] = (CYCLE_ACTIVITY_STALLS_L2_PENDING/CPU_CLK_UNHALTED_CORE)*100
Stalls caused by memory loads rate [%] = (CYCLE_ACTIVITY_STALLS_LDM_PENDING/CPU_CLK_UNHALTED_CORE)*100
--
This performance group measures the stalls caused by data traffic in the cache
hierarchy.
CYCLE_ACTIVITY_STALLS_TOTAL: Total execution stalls.
CYCLE_ACTIVITY_STALLS_L1D_PENDING: Execution stalls while L1 cache miss demand
load is outstanding.
CYCLE_ACTIVITY_STALLS_L2_PENDING: Execution stalls while L2 cache miss demand
load is outstanding.
CYCLE_ACTIVITY_STALLS_LDM_PENDING: Execution stalls while memory subsystem has
an outstanding load.
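The stall metrics are simple percentages: either relative to the total stall cycles (the "caused by" metrics) or relative to all core cycles (the "rate" metrics). A minimal Go sketch with made-up counter values, purely for illustration:

package main

import "fmt"

func main() {
	// Illustrative counter values for one interval (not measured data).
	coreCycles := 4.0e9  // FIXC1: CPU_CLK_UNHALTED_CORE
	stallsTotal := 1.2e9 // PMC3: CYCLE_ACTIVITY_STALLS_TOTAL
	stallsL1D := 3.0e8   // PMC2: CYCLE_ACTIVITY_STALLS_L1D_PENDING
	stallsL2 := 2.0e8    // PMC0: CYCLE_ACTIVITY_STALLS_L2_PENDING
	stallsMem := 1.5e8   // PMC1: CYCLE_ACTIVITY_STALLS_LDM_PENDING

	fmt.Printf("Execution stall rate:          %.1f %%\n", stallsTotal/coreCycles*100)
	fmt.Printf("Stalls caused by L1D misses:   %.1f %%\n", stallsL1D/stallsTotal*100)
	fmt.Printf("Stalls caused by L2 misses:    %.1f %%\n", stallsL2/stallsTotal*100)
	fmt.Printf("Stalls caused by memory loads: %.1f %%\n", stallsMem/stallsTotal*100)
}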
27  collectors/likwid/groups/haswell/DATA.txt  Normal file
@@ -0,0 +1,27 @@
SHORT Load to store ratio

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  MEM_UOPS_RETIRED_LOADS
PMC1  MEM_UOPS_RETIRED_STORES
PMC2  UOPS_RETIRED_ALL

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Load to store ratio  PMC0/PMC1
Load ratio  PMC0/PMC2
Store ratio  PMC1/PMC2

LONG
Formulas:
Load to store ratio = MEM_UOPS_RETIRED_LOADS/MEM_UOPS_RETIRED_STORES
Load ratio = MEM_UOPS_RETIRED_LOADS/UOPS_RETIRED_ALL
Store ratio = MEM_UOPS_RETIRED_STORES/UOPS_RETIRED_ALL
-
This is a metric to determine your load to store ratio.
24  collectors/likwid/groups/haswell/DIVIDE.txt  Normal file
@@ -0,0 +1,24 @@
SHORT Divide unit information

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  ARITH_DIVIDER_UOPS
PMC1  ARITH_DIVIDER_CYCLES

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Number of divide ops  PMC0
Avg. divide unit usage duration  PMC1/PMC0

LONG
Formulas:
Number of divide ops = ARITH_DIVIDER_UOPS
Avg. divide unit usage duration = ARITH_DIVIDER_CYCLES/ARITH_DIVIDER_UOPS
-
This performance group measures the average latency of divide operations.
39  collectors/likwid/groups/haswell/ENERGY.txt  Normal file
@@ -0,0 +1,39 @@
SHORT Power and Energy consumption

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
TMP0  TEMP_CORE
PWR0  PWR_PKG_ENERGY
PWR1  PWR_PP0_ENERGY
PWR2  PWR_PP1_ENERGY
PWR3  PWR_DRAM_ENERGY

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Temperature [C]  TMP0
Energy [J]  PWR0
Power [W]  PWR0/time
Energy PP0 [J]  PWR1
Power PP0 [W]  PWR1/time
Energy PP1 [J]  PWR2
Power PP1 [W]  PWR2/time
Energy DRAM [J]  PWR3
Power DRAM [W]  PWR3/time

LONG
Formulas:
Power = PWR_PKG_ENERGY / time
Power PP0 = PWR_PP0_ENERGY / time
Power PP1 = PWR_PP1_ENERGY / time
Power DRAM = PWR_DRAM_ENERGY / time
-
Haswell implements the new RAPL interface, which enables monitoring of the
consumed energy on the package (socket) and DRAM level.
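The power metrics are simply the RAPL energy counters divided by the measurement time. A minimal Go sketch with hypothetical readings (the Joule values are invented for illustration, not taken from any measurement):

package main

import "fmt"

func main() {
	// Illustrative RAPL energy readings accumulated over one interval (Joules).
	pkgEnergy := 90.0  // PWR0: PWR_PKG_ENERGY
	dramEnergy := 12.5 // PWR3: PWR_DRAM_ENERGY
	time := 2.0        // measurement interval in seconds

	fmt.Printf("Power [W]:      %.1f\n", pkgEnergy/time)
	fmt.Printf("Power DRAM [W]: %.2f\n", dramEnergy/time)
}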
27  collectors/likwid/groups/haswell/FALSE_SHARE.txt  Normal file
@@ -0,0 +1,27 @@
SHORT False sharing

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  MEM_LOAD_UOPS_L3_HIT_RETIRED_XSNP_HITM
PMC2  MEM_LOAD_UOPS_RETIRED_ALL_ALL

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Local LLC hit with false sharing [MByte]  1.E-06*PMC0*64
Local LLC hit with false sharing rate  PMC0/PMC2

LONG
Formulas:
Local LLC false sharing [MByte] = 1.E-06*MEM_LOAD_UOPS_L3_HIT_RETIRED_XSNP_HITM*64
Local LLC false sharing rate = MEM_LOAD_UOPS_L3_HIT_RETIRED_XSNP_HITM/MEM_LOAD_UOPS_RETIRED_ALL_ALL
-
False-sharing of cache lines can dramatically reduce the performance of an
application. This performance group measures the L3 traffic induced by false-sharing.
The false-sharing rate uses all memory loads as reference.
Please keep in mind that the MEM_LOAD_UOPS_L3_HIT_RETIRED_XSNP_HITM event may
undercount by as much as 40% (Errata HSD25).
28  collectors/likwid/groups/haswell/FLOPS_AVX.txt  Normal file
@@ -0,0 +1,28 @@
SHORT Packed AVX MFLOP/s

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  AVX_INSTS_CALC

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Packed SP [MFLOP/s]  1.0E-06*(PMC0*8.0)/time
Packed DP [MFLOP/s]  1.0E-06*(PMC0*4.0)/time

LONG
Formulas:
Packed SP [MFLOP/s] = 1.0E-06*(AVX_INSTS_CALC*8)/runtime
Packed DP [MFLOP/s] = 1.0E-06*(AVX_INSTS_CALC*4)/runtime
-
Packed 32b AVX FLOP/s rates. Approximate counts of AVX & AVX2 256-bit instructions.
May count non-AVX instructions that employ 256-bit operations, including (but
not necessarily limited to) rep string instructions that use 256-bit loads and
stores for optimized performance, XSAVE* and XRSTOR*, and operations that
transition the x87 FPU data registers between x87 and MMX.
Caution: The event AVX_INSTS_CALC counts the insertf128 instruction often used
by the Intel C compilers for (unaligned) vector loads.
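The two MFLOP/s estimates come from the same instruction count: a 256-bit register holds 8 single-precision or 4 double-precision elements, so the group cannot distinguish SP from DP work and reports both upper bounds. A minimal Go sketch with an invented instruction count, purely illustrative:

package main

import "fmt"

func main() {
	// Illustrative count of 256-bit AVX instructions (AVX_INSTS_CALC).
	avxInsts := 1.0e9
	time := 2.0 // measurement interval in seconds

	// 8 SP elements or 4 DP elements per 256-bit instruction.
	packedSP := 1.0e-06 * (avxInsts * 8.0) / time
	packedDP := 1.0e-06 * (avxInsts * 4.0) / time

	fmt.Printf("Packed SP [MFLOP/s]: %.0f\n", packedSP)
	fmt.Printf("Packed DP [MFLOP/s]: %.0f\n", packedDP)
}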
33  collectors/likwid/groups/haswell/ICACHE.txt  Normal file
@@ -0,0 +1,33 @@
SHORT Instruction cache miss rate/ratio

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  ICACHE_ACCESSES
PMC1  ICACHE_MISSES
PMC2  ICACHE_IFETCH_STALL
PMC3  ILD_STALL_IQ_FULL

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
L1I request rate  PMC0/FIXC0
L1I miss rate  PMC1/FIXC0
L1I miss ratio  PMC1/PMC0
L1I stalls  PMC2
L1I stall rate  PMC2/FIXC0
L1I queue full stalls  PMC3
L1I queue full stall rate  PMC3/FIXC0

LONG
Formulas:
L1I request rate = ICACHE_ACCESSES / INSTR_RETIRED_ANY
L1I miss rate = ICACHE_MISSES / INSTR_RETIRED_ANY
L1I miss ratio = ICACHE_MISSES / ICACHE_ACCESSES
L1I stalls = ICACHE_IFETCH_STALL
L1I stall rate = ICACHE_IFETCH_STALL / INSTR_RETIRED_ANY
-
This group measures some L1 instruction cache metrics.
37  collectors/likwid/groups/haswell/L2.txt  Normal file
@@ -0,0 +1,37 @@
SHORT L2 cache bandwidth in MBytes/s

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  L1D_REPLACEMENT
PMC1  L2_TRANS_L1D_WB
PMC2  ICACHE_MISSES

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
L2D load bandwidth [MBytes/s]  1.0E-06*PMC0*64.0/time
L2D load data volume [GBytes]  1.0E-09*PMC0*64.0
L2D evict bandwidth [MBytes/s]  1.0E-06*PMC1*64.0/time
L2D evict data volume [GBytes]  1.0E-09*PMC1*64.0
L2 bandwidth [MBytes/s]  1.0E-06*(PMC0+PMC1+PMC2)*64.0/time
L2 data volume [GBytes]  1.0E-09*(PMC0+PMC1+PMC2)*64.0

LONG
Formulas:
L2D load bandwidth [MBytes/s] = 1.0E-06*L1D_REPLACEMENT*64.0/time
L2D load data volume [GBytes] = 1.0E-09*L1D_REPLACEMENT*64.0
L2D evict bandwidth [MBytes/s] = 1.0E-06*L2_TRANS_L1D_WB*64.0/time
L2D evict data volume [GBytes] = 1.0E-09*L2_TRANS_L1D_WB*64.0
L2 bandwidth [MBytes/s] = 1.0E-06*(L1D_REPLACEMENT+L2_TRANS_L1D_WB+ICACHE_MISSES)*64.0/time
L2 data volume [GBytes] = 1.0E-09*(L1D_REPLACEMENT+L2_TRANS_L1D_WB+ICACHE_MISSES)*64.0
-
Profiling group to measure L2 cache bandwidth. The bandwidth is computed from the
number of cache lines loaded from the L2 into the L1 data cache and the writebacks
from the L1 data cache to the L2 cache. The group also outputs the total data
volume transferred between L2 and L1. Note that this bandwidth also includes data
transfers due to a write-allocate load on a store miss in L1 and cache lines
transferred into the L1 instruction cache.
34  collectors/likwid/groups/haswell/L2CACHE.txt  Normal file
@@ -0,0 +1,34 @@
SHORT L2 cache miss rate/ratio

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  L2_TRANS_ALL_REQUESTS
PMC1  L2_RQSTS_MISS

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
L2 request rate  PMC0/FIXC0
L2 miss rate  PMC1/FIXC0
L2 miss ratio  PMC1/PMC0

LONG
Formulas:
L2 request rate = L2_TRANS_ALL_REQUESTS/INSTR_RETIRED_ANY
L2 miss rate = L2_RQSTS_MISS/INSTR_RETIRED_ANY
L2 miss ratio = L2_RQSTS_MISS/L2_TRANS_ALL_REQUESTS
-
This group measures the locality of your data accesses with regard to the
L2 cache. The L2 request rate tells you how data intensive your code is,
i.e. how many data accesses you have on average per instruction.
The L2 miss rate gives a measure of how often it was necessary to get
cache lines from a higher cache level or memory. Finally, the L2 miss ratio
tells you how many of your memory references required a cache line to be
loaded from a higher level. While the data cache miss rate might be given by
your algorithm, you should try to get the data cache miss ratio as low as
possible by increasing your cache reuse.
36  collectors/likwid/groups/haswell/L3.txt  Normal file
@@ -0,0 +1,36 @@
SHORT L3 cache bandwidth in MBytes/s

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  L2_LINES_IN_ALL
PMC1  L2_TRANS_L2_WB

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
L3 load bandwidth [MBytes/s]  1.0E-06*PMC0*64.0/time
L3 load data volume [GBytes]  1.0E-09*PMC0*64.0
L3 evict bandwidth [MBytes/s]  1.0E-06*PMC1*64.0/time
L3 evict data volume [GBytes]  1.0E-09*PMC1*64.0
L3 bandwidth [MBytes/s]  1.0E-06*(PMC0+PMC1)*64.0/time
L3 data volume [GBytes]  1.0E-09*(PMC0+PMC1)*64.0

LONG
Formulas:
L3 load bandwidth [MBytes/s] = 1.0E-06*L2_LINES_IN_ALL*64.0/time
L3 load data volume [GBytes] = 1.0E-09*L2_LINES_IN_ALL*64.0
L3 evict bandwidth [MBytes/s] = 1.0E-06*L2_TRANS_L2_WB*64.0/time
L3 evict data volume [GBytes] = 1.0E-09*L2_TRANS_L2_WB*64.0
L3 bandwidth [MBytes/s] = 1.0E-06*(L2_LINES_IN_ALL+L2_TRANS_L2_WB)*64/time
L3 data volume [GBytes] = 1.0E-09*(L2_LINES_IN_ALL+L2_TRANS_L2_WB)*64
-
Profiling group to measure L3 cache bandwidth. The bandwidth is computed from the
number of cache lines allocated in the L2 and the number of modified cache lines
evicted from the L2. This group also outputs the data volume transferred between
the L3 and the measured cores' L2 caches. Note that this bandwidth also includes
data transfers due to a write-allocate load on a store miss in L2.
35  collectors/likwid/groups/haswell/L3CACHE.txt  Normal file
@@ -0,0 +1,35 @@
SHORT L3 cache miss rate/ratio

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  MEM_LOAD_UOPS_RETIRED_L3_ALL
PMC1  MEM_LOAD_UOPS_RETIRED_L3_MISS
PMC2  UOPS_RETIRED_ALL

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
L3 request rate  PMC0/PMC2
L3 miss rate  PMC1/PMC2
L3 miss ratio  PMC1/PMC0

LONG
Formulas:
L3 request rate = MEM_LOAD_UOPS_RETIRED_L3_ALL/UOPS_RETIRED_ALL
L3 miss rate = MEM_LOAD_UOPS_RETIRED_L3_MISS/UOPS_RETIRED_ALL
L3 miss ratio = MEM_LOAD_UOPS_RETIRED_L3_MISS/MEM_LOAD_UOPS_RETIRED_L3_ALL
-
This group measures the locality of your data accesses with regard to the
L3 cache. The L3 request rate tells you how data intensive your code is,
i.e. how many data accesses you have on average per instruction.
The L3 miss rate gives a measure of how often it was necessary to get
cache lines from memory. Finally, the L3 miss ratio tells you how many of your
memory references required a cache line to be loaded from a higher level.
While the data cache miss rate might be given by your algorithm, you should
try to get the data cache miss ratio as low as possible by increasing your
cache reuse.
36  collectors/likwid/groups/haswell/MEM.txt  Normal file
@@ -0,0 +1,36 @@
SHORT Main memory bandwidth in MBytes/s

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
MBOX0C1 DRAM_READS
MBOX0C2 DRAM_WRITES

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Memory load bandwidth [MBytes/s]  1.0E-06*MBOX0C1*64.0/time
Memory load data volume [GBytes]  1.0E-09*MBOX0C1*64.0
Memory evict bandwidth [MBytes/s]  1.0E-06*MBOX0C2*64.0/time
Memory evict data volume [GBytes]  1.0E-09*MBOX0C2*64.0
Memory bandwidth [MBytes/s]  1.0E-06*(MBOX0C1+MBOX0C2)*64.0/time
Memory data volume [GBytes]  1.0E-09*(MBOX0C1+MBOX0C2)*64.0

LONG
Formulas:
Memory load bandwidth [MBytes/s] = 1.0E-06*DRAM_READS*64.0/time
Memory load data volume [GBytes] = 1.0E-09*DRAM_READS*64.0
Memory evict bandwidth [MBytes/s] = 1.0E-06*DRAM_WRITES*64.0/time
Memory evict data volume [GBytes] = 1.0E-09*DRAM_WRITES*64.0
Memory bandwidth [MBytes/s] = 1.0E-06*(DRAM_READS+DRAM_WRITES)*64.0/time
Memory data volume [GBytes] = 1.0E-09*(DRAM_READS+DRAM_WRITES)*64.0
-
Profiling group to measure main memory bandwidth. The bandwidth is computed from
the number of cache lines read from and written back to DRAM by the integrated
memory controller (DRAM_READS and DRAM_WRITES); each count corresponds to one
64-byte cache line. The group also outputs the total data volume transferred to
and from main memory.
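The memory bandwidth derivation mirrors the other bandwidth groups, just driven by the memory controller counters. A minimal Go sketch with invented counts, not measured data:

package main

import "fmt"

func main() {
	// Illustrative memory controller counts: one event per 64-byte cache line.
	dramReads := 8.0e8  // MBOX0C1: DRAM_READS
	dramWrites := 3.0e8 // MBOX0C2: DRAM_WRITES
	time := 2.0         // measurement interval in seconds

	loadBW := 1.0e-06 * dramReads * 64.0 / time
	evictBW := 1.0e-06 * dramWrites * 64.0 / time
	totalVolume := 1.0e-09 * (dramReads + dramWrites) * 64.0

	fmt.Printf("Memory load bandwidth:  %.1f MBytes/s\n", loadBW)
	fmt.Printf("Memory evict bandwidth: %.1f MBytes/s\n", evictBW)
	fmt.Printf("Memory data volume:     %.2f GBytes\n", totalVolume)
}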
46  collectors/likwid/groups/haswell/PORT_USAGE.txt  Normal file
@@ -0,0 +1,46 @@
SHORT Execution port utilization

REQUIRE_NOHT

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  UOPS_EXECUTED_PORT_PORT_0
PMC1  UOPS_EXECUTED_PORT_PORT_1
PMC2  UOPS_EXECUTED_PORT_PORT_2
PMC3  UOPS_EXECUTED_PORT_PORT_3
PMC4  UOPS_EXECUTED_PORT_PORT_4
PMC5  UOPS_EXECUTED_PORT_PORT_5
PMC6  UOPS_EXECUTED_PORT_PORT_6
PMC7  UOPS_EXECUTED_PORT_PORT_7

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Port0 usage ratio  PMC0/(PMC0+PMC1+PMC2+PMC3+PMC4+PMC5+PMC6+PMC7)
Port1 usage ratio  PMC1/(PMC0+PMC1+PMC2+PMC3+PMC4+PMC5+PMC6+PMC7)
Port2 usage ratio  PMC2/(PMC0+PMC1+PMC2+PMC3+PMC4+PMC5+PMC6+PMC7)
Port3 usage ratio  PMC3/(PMC0+PMC1+PMC2+PMC3+PMC4+PMC5+PMC6+PMC7)
Port4 usage ratio  PMC4/(PMC0+PMC1+PMC2+PMC3+PMC4+PMC5+PMC6+PMC7)
Port5 usage ratio  PMC5/(PMC0+PMC1+PMC2+PMC3+PMC4+PMC5+PMC6+PMC7)
Port6 usage ratio  PMC6/(PMC0+PMC1+PMC2+PMC3+PMC4+PMC5+PMC6+PMC7)
Port7 usage ratio  PMC7/(PMC0+PMC1+PMC2+PMC3+PMC4+PMC5+PMC6+PMC7)

LONG
Formulas:
Port0 usage ratio = UOPS_EXECUTED_PORT_PORT_0/SUM(UOPS_EXECUTED_PORT_PORT_*)
Port1 usage ratio = UOPS_EXECUTED_PORT_PORT_1/SUM(UOPS_EXECUTED_PORT_PORT_*)
Port2 usage ratio = UOPS_EXECUTED_PORT_PORT_2/SUM(UOPS_EXECUTED_PORT_PORT_*)
Port3 usage ratio = UOPS_EXECUTED_PORT_PORT_3/SUM(UOPS_EXECUTED_PORT_PORT_*)
Port4 usage ratio = UOPS_EXECUTED_PORT_PORT_4/SUM(UOPS_EXECUTED_PORT_PORT_*)
Port5 usage ratio = UOPS_EXECUTED_PORT_PORT_5/SUM(UOPS_EXECUTED_PORT_PORT_*)
Port6 usage ratio = UOPS_EXECUTED_PORT_PORT_6/SUM(UOPS_EXECUTED_PORT_PORT_*)
Port7 usage ratio = UOPS_EXECUTED_PORT_PORT_7/SUM(UOPS_EXECUTED_PORT_PORT_*)
-
This group measures the execution port utilization in a CPU core. The group can
only be measured when HyperThreading is disabled because only then each CPU core
can program eight counters.
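Each port ratio is that port's uOP count divided by the sum over all eight ports, so the ratios add up to 1. A minimal Go sketch with invented per-port counts, purely illustrative:

package main

import "fmt"

func main() {
	// Illustrative per-port uOP counts (UOPS_EXECUTED_PORT_PORT_0 .. _7).
	ports := []float64{3.0e8, 2.8e8, 2.0e8, 2.1e8, 1.5e8, 1.9e8, 2.5e8, 0.4e8}

	var sum float64
	for _, p := range ports {
		sum += p
	}
	for i, p := range ports {
		fmt.Printf("Port%d usage ratio: %.3f\n", i, p/sum)
	}
}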
22  collectors/likwid/groups/haswell/RECOVERY.txt  Normal file
@@ -0,0 +1,22 @@
SHORT Recovery duration

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  INT_MISC_RECOVERY_CYCLES
PMC1  INT_MISC_RECOVERY_COUNT

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Avg. recovery duration  PMC0/PMC1

LONG
Formulas:
Avg. recovery duration = INT_MISC_RECOVERY_CYCLES/INT_MISC_RECOVERY_COUNT
-
This group measures the duration of recoveries after SSE exception, memory
disambiguation, etc...
35  collectors/likwid/groups/haswell/TLB_DATA.txt  Normal file
@@ -0,0 +1,35 @@
SHORT L2 data TLB miss rate/ratio

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  DTLB_LOAD_MISSES_CAUSES_A_WALK
PMC1  DTLB_STORE_MISSES_CAUSES_A_WALK
PMC2  DTLB_LOAD_MISSES_WALK_DURATION
PMC3  DTLB_STORE_MISSES_WALK_DURATION

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
L1 DTLB load misses  PMC0
L1 DTLB load miss rate  PMC0/FIXC0
L1 DTLB load miss duration [Cyc]  PMC2/PMC0
L1 DTLB store misses  PMC1
L1 DTLB store miss rate  PMC1/FIXC0
L1 DTLB store miss duration [Cyc]  PMC3/PMC1

LONG
Formulas:
L1 DTLB load misses = DTLB_LOAD_MISSES_CAUSES_A_WALK
L1 DTLB load miss rate = DTLB_LOAD_MISSES_CAUSES_A_WALK / INSTR_RETIRED_ANY
L1 DTLB load miss duration [Cyc] = DTLB_LOAD_MISSES_WALK_DURATION / DTLB_LOAD_MISSES_CAUSES_A_WALK
L1 DTLB store misses = DTLB_STORE_MISSES_CAUSES_A_WALK
L1 DTLB store miss rate = DTLB_STORE_MISSES_CAUSES_A_WALK / INSTR_RETIRED_ANY
L1 DTLB store miss duration [Cyc] = DTLB_STORE_MISSES_WALK_DURATION / DTLB_STORE_MISSES_CAUSES_A_WALK
-
The DTLB load and store miss rates give a measure of how often a TLB miss occurred
per instruction. The durations measure how many cycles a page table walk took on
average.
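The miss rate normalizes walk counts by retired instructions, and the average walk duration divides the accumulated walk cycles by the number of walks. A minimal Go sketch with invented counter values, purely illustrative:

package main

import "fmt"

func main() {
	// Illustrative counter values for one interval (not measured data).
	instrRetired := 1.0e9   // FIXC0: INSTR_RETIRED_ANY
	loadWalks := 2.0e6      // PMC0: DTLB_LOAD_MISSES_CAUSES_A_WALK
	loadWalkCycles := 6.0e7 // PMC2: DTLB_LOAD_MISSES_WALK_DURATION

	fmt.Printf("L1 DTLB load miss rate:           %.2e\n", loadWalks/instrRetired)
	fmt.Printf("L1 DTLB load miss duration [Cyc]: %.1f\n", loadWalkCycles/loadWalks)
}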
28  collectors/likwid/groups/haswell/TLB_INSTR.txt  Normal file
@@ -0,0 +1,28 @@
SHORT L1 Instruction TLB miss rate/ratio

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  ITLB_MISSES_CAUSES_A_WALK
PMC1  ITLB_MISSES_WALK_DURATION

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
L1 ITLB misses  PMC0
L1 ITLB miss rate  PMC0/FIXC0
L1 ITLB miss duration [Cyc]  PMC1/PMC0

LONG
Formulas:
L1 ITLB misses = ITLB_MISSES_CAUSES_A_WALK
L1 ITLB miss rate = ITLB_MISSES_CAUSES_A_WALK / INSTR_RETIRED_ANY
L1 ITLB miss duration [Cyc] = ITLB_MISSES_WALK_DURATION / ITLB_MISSES_CAUSES_A_WALK
-
The ITLB miss rate gives a measure of how often a TLB miss occurred per
instruction. The duration measures how many cycles a page table walk took on
average.
48  collectors/likwid/groups/haswell/TMA.txt  Normal file
@@ -0,0 +1,48 @@
SHORT Top down cycle allocation

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  UOPS_ISSUED_ANY
PMC1  UOPS_RETIRED_RETIRE_SLOTS
PMC2  IDQ_UOPS_NOT_DELIVERED_CORE
PMC3  INT_MISC_RECOVERY_CYCLES

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
IPC  FIXC0/FIXC1
Total Slots  4*FIXC1
Slots Retired  PMC1
Fetch Bubbles  PMC2
Recovery Bubbles  4*PMC3
Front End [%]  PMC2/(4*FIXC1)*100
Speculation [%]  (PMC0-PMC1+(4*PMC3))/(4*FIXC1)*100
Retiring [%]  PMC1/(4*FIXC1)*100
Back End [%]  (1-((PMC2+PMC0+(4*PMC3))/(4*FIXC1)))*100

LONG
Formulas:
Total Slots = 4*CPU_CLK_UNHALTED_CORE
Slots Retired = UOPS_RETIRED_RETIRE_SLOTS
Fetch Bubbles = IDQ_UOPS_NOT_DELIVERED_CORE
Recovery Bubbles = 4*INT_MISC_RECOVERY_CYCLES
Front End [%] = IDQ_UOPS_NOT_DELIVERED_CORE/(4*CPU_CLK_UNHALTED_CORE)*100
Speculation [%] = (UOPS_ISSUED_ANY-UOPS_RETIRED_RETIRE_SLOTS+(4*INT_MISC_RECOVERY_CYCLES))/(4*CPU_CLK_UNHALTED_CORE)*100
Retiring [%] = UOPS_RETIRED_RETIRE_SLOTS/(4*CPU_CLK_UNHALTED_CORE)*100
Back End [%] = (1-((IDQ_UOPS_NOT_DELIVERED_CORE+UOPS_ISSUED_ANY+(4*INT_MISC_RECOVERY_CYCLES))/(4*CPU_CLK_UNHALTED_CORE)))*100
--
This performance group measures cycles to determine the percentage of time spent
in the front end, back end, retiring and speculation. These metrics are published
and verified by Intel. Further information:
Webpage describing the Top-Down Method and its usage in Intel VTune:
https://software.intel.com/en-us/vtune-amplifier-help-tuning-applications-using-a-top-down-microarchitecture-analysis-method
Paper by Ahmad Yasin:
https://sites.google.com/site/analysismethods/yasin-pubs/TopDown-Yasin-ISPASS14.pdf?attredirects=0
Slides by Ahmad Yasin:
http://www.cs.technion.ac.il/~erangi/TMA_using_Linux_perf__Ahmad_Yasin.pdf
The performance group was originally published here:
http://perf.mvermeulen.com/2018/04/14/top-down-performance-counter-analysis-part-1-likwid/
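The four top-down percentages account for all issue slots (4 per core cycle) and therefore sum to 100%. A minimal Go sketch with invented counter values (purely illustrative, not measured data) that reproduces the formulas above:

package main

import "fmt"

func main() {
	// Illustrative counter values for one interval.
	coreCycles := 1.0e9     // FIXC1: CPU_CLK_UNHALTED_CORE
	uopsIssued := 3.0e9     // PMC0: UOPS_ISSUED_ANY
	uopsRetired := 2.8e9    // PMC1: UOPS_RETIRED_RETIRE_SLOTS
	fetchBubbles := 4.0e8   // PMC2: IDQ_UOPS_NOT_DELIVERED_CORE
	recoveryCycles := 2.0e7 // PMC3: INT_MISC_RECOVERY_CYCLES

	totalSlots := 4 * coreCycles
	frontEnd := fetchBubbles / totalSlots * 100
	speculation := (uopsIssued - uopsRetired + 4*recoveryCycles) / totalSlots * 100
	retiring := uopsRetired / totalSlots * 100
	backEnd := (1 - (fetchBubbles+uopsIssued+4*recoveryCycles)/totalSlots) * 100

	fmt.Printf("Front End [%%]:   %.1f\n", frontEnd)
	fmt.Printf("Speculation [%%]: %.1f\n", speculation)
	fmt.Printf("Retiring [%%]:    %.1f\n", retiring)
	fmt.Printf("Back End [%%]:    %.1f\n", backEnd)
}

With these example numbers the breakdown is 10% front end, 7% speculation, 70% retiring and 13% back end.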
35  collectors/likwid/groups/haswell/UOPS.txt  Normal file
@@ -0,0 +1,35 @@
SHORT UOPs execution info

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  UOPS_ISSUED_ANY
PMC1  UOPS_EXECUTED_THREAD
PMC2  UOPS_RETIRED_ALL
PMC3  UOPS_ISSUED_FLAGS_MERGE

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Issued UOPs  PMC0
Merged UOPs  PMC3
Executed UOPs  PMC1
Retired UOPs  PMC2

LONG
Formulas:
Issued UOPs = UOPS_ISSUED_ANY
Merged UOPs = UOPS_ISSUED_FLAGS_MERGE
Executed UOPs = UOPS_EXECUTED_THREAD
Retired UOPs = UOPS_RETIRED_ALL
-
This group returns information about the instruction pipeline. It reports the
issued, executed and retired uOPs, from which the number of uOPs that were issued
but not executed, as well as the number that were executed but never retired, can
be derived. Executed but not retired uOPs commonly come from speculatively
executed branches.
31  collectors/likwid/groups/haswell/UOPS_EXEC.txt  Normal file
@@ -0,0 +1,31 @@
SHORT UOPs execution

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  UOPS_EXECUTED_USED_CYCLES
PMC1  UOPS_EXECUTED_STALL_CYCLES
PMC2  CPU_CLOCK_UNHALTED_TOTAL_CYCLES
PMC3:EDGEDETECT  UOPS_EXECUTED_STALL_CYCLES

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Used cycles ratio [%]  100*PMC0/PMC2
Unused cycles ratio [%]  100*PMC1/PMC2
Avg stall duration [cycles]  PMC1/PMC3:EDGEDETECT

LONG
Formulas:
Used cycles ratio [%] = 100*UOPS_EXECUTED_USED_CYCLES/CPU_CLK_UNHALTED_CORE
Unused cycles ratio [%] = 100*UOPS_EXECUTED_STALL_CYCLES/CPU_CLK_UNHALTED_CORE
Avg stall duration [cycles] = UOPS_EXECUTED_STALL_CYCLES/UOPS_EXECUTED_STALL_CYCLES:EDGEDETECT
-
This performance group returns the ratios of used and unused cycles regarding
the execution stage in the pipeline. Used cycles are all cycles where uOPs are
executed while unused cycles refer to pipeline stalls. Moreover, the group
calculates the average stall duration in cycles.
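The average stall duration works because PMC3 counts the same stall-cycle event with the EDGEDETECT option, i.e. it counts how many distinct stall phases started; dividing the total stall cycles by that count gives the mean length of a stall phase. A minimal Go sketch with invented values, purely illustrative:

package main

import "fmt"

func main() {
	// Illustrative counter values for one interval (not measured data).
	usedCycles := 7.0e8   // PMC0: UOPS_EXECUTED_USED_CYCLES
	stallCycles := 3.0e8  // PMC1: UOPS_EXECUTED_STALL_CYCLES
	totalCycles := 1.0e9  // PMC2: CPU_CLOCK_UNHALTED_TOTAL_CYCLES
	stallPeriods := 5.0e6 // PMC3:EDGEDETECT, number of distinct stall phases

	fmt.Printf("Used cycles ratio:   %.1f %%\n", 100*usedCycles/totalCycles)
	fmt.Printf("Unused cycles ratio: %.1f %%\n", 100*stallCycles/totalCycles)
	fmt.Printf("Avg stall duration:  %.1f cycles\n", stallCycles/stallPeriods)
}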
31  collectors/likwid/groups/haswell/UOPS_ISSUE.txt  Normal file
@@ -0,0 +1,31 @@
SHORT UOPs issuing

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  UOPS_ISSUED_USED_CYCLES
PMC1  UOPS_ISSUED_STALL_CYCLES
PMC2  CPU_CLOCK_UNHALTED_TOTAL_CYCLES
PMC3:EDGEDETECT  UOPS_ISSUED_STALL_CYCLES

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Used cycles ratio [%]  100*PMC0/PMC2
Unused cycles ratio [%]  100*PMC1/PMC2
Avg stall duration [cycles]  PMC1/PMC3:EDGEDETECT

LONG
Formulas:
Used cycles ratio [%] = 100*UOPS_ISSUED_USED_CYCLES/CPU_CLK_UNHALTED_CORE
Unused cycles ratio [%] = 100*UOPS_ISSUED_STALL_CYCLES/CPU_CLK_UNHALTED_CORE
Avg stall duration [cycles] = UOPS_ISSUED_STALL_CYCLES/UOPS_ISSUED_STALL_CYCLES:EDGEDETECT
-
This performance group returns the ratios of used and unused cycles regarding
the issue stage in the pipeline. Used cycles are all cycles where uOPs are
issued while unused cycles refer to pipeline stalls. Moreover, the group
calculates the average stall duration in cycles.
31  collectors/likwid/groups/haswell/UOPS_RETIRE.txt  Normal file
@@ -0,0 +1,31 @@
SHORT UOPs retirement

EVENTSET
FIXC0 INSTR_RETIRED_ANY
FIXC1 CPU_CLK_UNHALTED_CORE
FIXC2 CPU_CLK_UNHALTED_REF
PMC0  UOPS_RETIRED_USED_CYCLES
PMC1  UOPS_RETIRED_STALL_CYCLES
PMC2  CPU_CLOCK_UNHALTED_TOTAL_CYCLES
PMC3:EDGEDETECT  UOPS_RETIRED_STALL_CYCLES

METRICS
Runtime (RDTSC) [s]  time
Runtime unhalted [s]  FIXC1*inverseClock
Clock [MHz]  1.E-06*(FIXC1/FIXC2)/inverseClock
CPI  FIXC1/FIXC0
Used cycles ratio [%]  100*PMC0/PMC2
Unused cycles ratio [%]  100*PMC1/PMC2
Avg stall duration [cycles]  PMC1/PMC3:EDGEDETECT

LONG
Formulas:
Used cycles ratio [%] = 100*UOPS_RETIRED_USED_CYCLES/CPU_CLK_UNHALTED_CORE
Unused cycles ratio [%] = 100*UOPS_RETIRED_STALL_CYCLES/CPU_CLK_UNHALTED_CORE
Avg stall duration [cycles] = UOPS_RETIRED_STALL_CYCLES/UOPS_RETIRED_STALL_CYCLES:EDGEDETECT
-
This performance group returns the ratios of used and unused cycles regarding
the retirement stage in the pipeline (re-order buffer). Used cycles are all
cycles where uOPs are retired while unused cycles refer to pipeline stalls.
Moreover, the group calculates the average stall duration in cycles.