Add likwid collector

Thomas Roehl
2021-03-25 14:47:10 +01:00
parent 4fddcb9741
commit a6ac0c5373
670 changed files with 24926 additions and 0 deletions


@@ -0,0 +1,26 @@
SHORT Branch prediction miss rate/ratio
EVENTSET
PMC0 RETIRED_INSTRUCTIONS
PMC1 RETIRED_BRANCH_INSTR
PMC2 RETIRED_MISPREDICTED_BRANCH_INSTR
METRICS
Runtime (RDTSC) [s] time
Branch rate PMC1/PMC0
Branch misprediction rate PMC2/PMC0
Branch misprediction ratio PMC2/PMC1
Instructions per branch PMC0/PMC1
LONG
Formulas:
Branch rate = RETIRED_BRANCH_INSTR/RETIRED_INSTRUCTIONS
Branch misprediction rate = RETIRED_MISPREDICTED_BRANCH_INSTR/RETIRED_INSTRUCTIONS
Branch misprediction ratio = RETIRED_MISPREDICTED_BRANCH_INSTR/RETIRED_BRANCH_INSTR
Instructions per branch = RETIRED_INSTRUCTIONS/RETIRED_BRANCH_INSTR
-
The rates state how often, on average, a branch or a mispredicted branch occurred
per retired instruction. The branch misprediction ratio directly expresses which
fraction of all branch instructions was mispredicted.
Instructions per branch is 1/branch rate.
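
As an aside for readers of these group files: each METRICS line is plain arithmetic over the raw counter values. A minimal Go sketch of the BRANCH metrics, not part of this commit and with an invented function name and example values, could look like this:

package main

import "fmt"

// branchMetrics computes the derived metrics of the BRANCH group from the
// raw counter values PMC0..PMC2 named in the EVENTSET above.
func branchMetrics(retiredInstr, retiredBranch, retiredMispredicted float64) {
	branchRate := retiredBranch / retiredInstr                // Branch rate = PMC1/PMC0
	mispredictionRate := retiredMispredicted / retiredInstr   // Branch misprediction rate = PMC2/PMC0
	mispredictionRatio := retiredMispredicted / retiredBranch // Branch misprediction ratio = PMC2/PMC1
	instrPerBranch := retiredInstr / retiredBranch            // Instructions per branch = PMC0/PMC1

	fmt.Println("Branch rate:", branchRate)
	fmt.Println("Branch misprediction rate:", mispredictionRate)
	fmt.Println("Branch misprediction ratio:", mispredictionRatio)
	fmt.Println("Instructions per branch:", instrPerBranch)
}

func main() {
	// hypothetical counter readings
	branchMetrics(1.0e9, 2.0e8, 5.0e6)
}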


@@ -0,0 +1,32 @@
SHORT Data cache miss rate/ratio
EVENTSET
PMC0 RETIRED_INSTRUCTIONS
PMC1 DATA_CACHE_ACCESSES
PMC2 DATA_CACHE_REFILLS_ALL
PMC3 DATA_CACHE_REFILLS_NB_ALL
METRICS
Runtime (RDTSC) [s] time
data cache misses PMC2+PMC3
data cache request rate PMC1/PMC0
data cache miss rate (PMC2+PMC3)/PMC0
data cache miss ratio (PMC2+PMC3)/PMC1
LONG
Formulas:
data cache misses = DATA_CACHE_REFILLS_ALL + DATA_CACHE_REFILLS_NB_ALL
data cache request rate = DATA_CACHE_ACCESSES / RETIRED_INSTRUCTIONS
data cache miss rate = (DATA_CACHE_REFILLS_ALL + DATA_CACHE_REFILLS_NB_ALL)/RETIRED_INSTRUCTIONS
data cache miss ratio = (DATA_CACHE_REFILLS_ALL + DATA_CACHE_REFILLS_NB_ALL)/DATA_CACHE_ACCESSES
-
This group measures the locality of your data accesses with regard to the
L1 cache. Data cache request rate tells you how data intensive your code is
or how many data accesses you have on average per instruction.
The data cache miss rate gives a measure of how often it was necessary to get
cache lines from higher levels of the memory hierarchy. And finally the
data cache miss ratio tells you how many of your memory references required
a cache line to be loaded from a higher level. While the data cache miss rate
might be given by your algorithm, you should try to get the data cache miss
ratio as low as possible by increasing your cache reuse.


@@ -0,0 +1,26 @@
SHORT Cycles per instruction
EVENTSET
PMC0 RETIRED_INSTRUCTIONS
PMC1 CPU_CLOCKS_UNHALTED
PMC2 RETIRED_UOPS
METRICS
Runtime (RDTSC) [s] time
Runtime unhalted [s] PMC1*inverseClock
CPI PMC1/PMC0
CPI (based on uops) PMC1/PMC2
IPC PMC0/PMC1
LONG
Formulas:
CPI = CPU_CLOCKS_UNHALTED/RETIRED_INSTRUCTIONS
CPI (based on uops) = CPU_CLOCKS_UNHALTED/RETIRED_UOPS
IPC = RETIRED_INSTRUCTIONS/CPU_CLOCKS_UNHALTED
-
This group measures how efficiently the processor works with
regard to instruction throughput. Also important as a standalone
metric is RETIRED_INSTRUCTIONS, as it tells you how many instructions
you need to execute for a task. An optimization might show very
low CPI values but execute many more instructions to achieve it.


@@ -0,0 +1,16 @@
SHORT Load to store ratio
EVENTSET
PMC0 LS_DISPATCH_LOADS
PMC1 LS_DISPATCH_STORES
METRICS
Runtime (RDTSC) [s] time
Load to store ratio PMC0/PMC1
LONG
Formulas:
Load to store ratio = LS_DISPATCH_LOADS/LS_DISPATCH_STORES
-
This is a simple metric to determine your load to store ratio.


@@ -0,0 +1,26 @@
SHORT Double Precision MFLOP/s
EVENTSET
PMC0 RETIRED_INSTRUCTIONS
PMC1 CPU_CLOCKS_UNHALTED
PMC2 RETIRED_UOPS
PMC3 RETIRED_FLOPS_DOUBLE_ALL
METRICS
Runtime (RDTSC) [s] time
Runtime unhalted [s] PMC1*inverseClock
DP [MFLOP/s] 1.0E-06*(PMC3)/time
CPI PMC1/PMC0
CPI (based on uops) PMC1/PMC2
IPC PMC0/PMC1
LONG
Formulas:
DP [MFLOP/s] = 1.0E-06*(RETIRED_FLOPS_DOUBLE_ALL)/time
CPI = CPU_CLOCKS_UNHALTED/RETIRED_INSTRUCTIONS
CPI (based on uops) = CPU_CLOCKS_UNHALTED/RETIRED_UOPS
IPC = RETIRED_INSTRUCTIONS/CPU_CLOCKS_UNHALTED
-
Profiling group to measure double precision FLOP rate.
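
The DP rate above is simply the scaled counter divided by the wall-clock runtime. A minimal Go sketch, not part of this commit and with hypothetical names and values:

package main

import "fmt"

// mflopsDP computes the DP [MFLOP/s] metric from the RETIRED_FLOPS_DOUBLE_ALL
// counter (PMC3) and the measured runtime in seconds.
func mflopsDP(flopsDoubleAll, runtime float64) float64 {
	return 1.0e-06 * flopsDoubleAll / runtime
}

func main() {
	// hypothetical values: 4e9 double-precision FLOPs retired in 2 seconds
	fmt.Printf("DP [MFLOP/s]: %.2f\n", mflopsDP(4.0e9, 2.0))
}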


@@ -0,0 +1,26 @@
SHORT Single Precision MFLOP/s
EVENTSET
PMC0 RETIRED_INSTRUCTIONS
PMC1 CPU_CLOCKS_UNHALTED
PMC2 RETIRED_UOPS
PMC3 RETIRED_FLOPS_SINGLE_ALL
METRICS
Runtime (RDTSC) [s] time
Runtime unhalted [s] PMC1*inverseClock
SP [MFLOP/s] 1.0E-06*(PMC3)/time
CPI PMC1/PMC0
CPI (based on uops) PMC1/PMC2
IPC PMC0/PMC1
LONG
Formulas:
SP [MFLOP/s] = 1.0E-06*(RETIRED_FLOPS_SINGLE_ALL)/time
CPI = CPU_CLOCKS_UNHALTED/RETIRED_INSTRUCTIONS
CPI (based on uops) = CPU_CLOCKS_UNHALTED/RETIRED_UOPS
IPC = RETIRED_INSTRUCTIONS/CPU_CLOCKS_UNHALTED
-
Profiling group to measure single precision FLOP rate.


@@ -0,0 +1,21 @@
SHORT Floating point exceptions
EVENTSET
PMC0 RETIRED_INSTRUCTIONS
PMC1 RETIRED_FP_INSTRUCTIONS_ALL
PMC2 FPU_EXCEPTION_ALL
METRICS
Runtime (RDTSC) [s] time
Overall FP exception rate PMC2/PMC0
FP exception rate PMC2/PMC1
LONG
Formulas:
Overall FP exception rate = FPU_EXCEPTION_ALL / RETIRED_INSTRUCTIONS
FP exception rate = FPU_EXCEPTION_ALL / RETIRED_FP_INSTRUCTIONS_ALL
-
Floating point exceptions occur, e.g., in the treatment of denormal numbers.
There might be a large penalty if there are too many floating point
exceptions.


@@ -0,0 +1,23 @@
SHORT Instruction cache miss rate/ratio
EVENTSET
PMC0 INSTRUCTION_CACHE_FETCHES
PMC1 INSTRUCTION_CACHE_L2_REFILLS
PMC2 INSTRUCTION_CACHE_SYSTEM_REFILLS
PMC3 RETIRED_INSTRUCTIONS
METRICS
Runtime (RDTSC) [s] time
L1I request rate PMC0/PMC3
L1I miss rate (PMC1+PMC2)/PMC3
L1I miss ratio (PMC1+PMC2)/PMC0
LONG
Formulas:
L1I request rate = INSTRUCTION_CACHE_FETCHES / RETIRED_INSTRUCTIONS
L1I miss rate = (INSTRUCTION_CACHE_L2_REFILLS + INSTRUCTION_CACHE_SYSTEM_REFILLS)/RETIRED_INSTRUCTIONS
L1I miss ratio = (INSTRUCTION_CACHE_L2_REFILLS + INSTRUCTION_CACHE_SYSTEM_REFILLS)/INSTRUCTION_CACHE_FETCHES
-
This group measures the locality of your instruction code with regard to the
L1 I-Cache.


@@ -0,0 +1,33 @@
SHORT L2 cache bandwidth in MBytes/s
EVENTSET
PMC0 DATA_CACHE_REFILLS_ALL
PMC1 DATA_CACHE_EVICTED_ALL
PMC2 CPU_CLOCKS_UNHALTED
METRICS
Runtime (RDTSC) [s] time
Runtime unhalted [s] PMC2*inverseClock
L2D load bandwidth [MBytes/s] 1.0E-06*PMC0*64.0/time
L2D load data volume [GBytes] 1.0E-09*PMC0*64.0
L2D evict bandwidth [MBytes/s] 1.0E-06*PMC1*64.0/time
L2D evict data volume [GBytes] 1.0E-09*PMC1*64.0
L2 bandwidth [MBytes/s] 1.0E-06*(PMC0+PMC1)*64.0/time
L2 data volume [GBytes] 1.0E-09*(PMC0+PMC1)*64.0
LONG
Formulas:
L2D load bandwidth [MBytes/s] = 1.0E-06*DATA_CACHE_REFILLS_ALL*64.0/time
L2D load data volume [GBytes] = 1.0E-09*DATA_CACHE_REFILLS_ALL*64.0
L2D evict bandwidth [MBytes/s] = 1.0E-06*DATA_CACHE_EVICTED_ALL*64.0/time
L2D evict data volume [GBytes] = 1.0E-09*DATA_CACHE_EVICTED_ALL*64.0
L2 bandwidth [MBytes/s] = 1.0E-06*(DATA_CACHE_REFILLS_ALL+DATA_CACHE_EVICTED_ALL)*64/time
L2 data volume [GBytes] = 1.0E-09*(DATA_CACHE_REFILLS_ALL+DATA_CACHE_EVICTED_ALL)*64
-
Profiling group to measure L2 cache bandwidth. The bandwidth is
computed from the number of cache lines loaded from L2 to L1 and the
number of modified cache lines evicted from L1.
Note that this bandwidth also includes data transfers due to a
write-allocate load on a store miss in L1 and copy-back transfers if they
originated from L2.
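
Since both counters count 64-byte cache lines, the bandwidth and data volume metrics follow directly from the counter sums and the runtime. A minimal Go sketch, not part of this commit and with invented names and example values:

package main

import "fmt"

const cacheLineBytes = 64.0

// l2Metrics computes the L2 bandwidth and data volume metrics from the
// DATA_CACHE_REFILLS_ALL (PMC0) and DATA_CACHE_EVICTED_ALL (PMC1) counters
// and the runtime in seconds, following the formulas above.
func l2Metrics(refills, evicts, runtime float64) {
	loadBW := 1.0e-06 * refills * cacheLineBytes / runtime
	evictBW := 1.0e-06 * evicts * cacheLineBytes / runtime
	totalBW := 1.0e-06 * (refills + evicts) * cacheLineBytes / runtime
	totalVolume := 1.0e-09 * (refills + evicts) * cacheLineBytes

	fmt.Printf("L2D load bandwidth [MBytes/s]:  %.2f\n", loadBW)
	fmt.Printf("L2D evict bandwidth [MBytes/s]: %.2f\n", evictBW)
	fmt.Printf("L2 bandwidth [MBytes/s]:        %.2f\n", totalBW)
	fmt.Printf("L2 data volume [GBytes]:        %.3f\n", totalVolume)
}

func main() {
	// hypothetical counter readings over a 1.5 s measurement
	l2Metrics(3.0e8, 1.0e8, 1.5)
}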


@@ -0,0 +1,20 @@
SHORT Main memory bandwidth in MBytes/s
EVENTSET
UPMC0 UNC_DRAM_ACCESSES_DCT0_ALL
UPMC1 UNC_DRAM_ACCESSES_DCT1_ALL
METRICS
Runtime (RDTSC) [s] time
Memory bandwidth [MBytes/s] 1.0E-06*(UPMC0+UPMC1)*64.0/time
Memory data volume [GBytes] 1.0E-09*(UPMC0+UPMC1)*64.0
LONG
Formulas:
Memory bandwidth [MBytes/s] = 1.0E-06*(UNC_DRAM_ACCESSES_DCT0_ALL+UNC_DRAM_ACCESSES_DCT1_ALL)*64/time
Memory data volume [GBytes] = 1.0E-09*(UNC_DRAM_ACCESSES_DCT0_ALL+UNC_DRAM_ACCESSES_DCT1_ALL)*64
-
Profiling group to measure the memory bandwidth drawn by all cores of a socket.
Note: As this group measures the accesses from all cores, it only makes sense
to measure with one core per socket, similar to the Intel Nehalem Uncore events.


@@ -0,0 +1,28 @@
SHORT Read/Write Events between the ccNUMA nodes
EVENTSET
UPMC0 UNC_CPU_TO_DRAM_LOCAL_TO_0
UPMC1 UNC_CPU_TO_DRAM_LOCAL_TO_1
UPMC2 UNC_CPU_TO_DRAM_LOCAL_TO_2
UPMC3 UNC_CPU_TO_DRAM_LOCAL_TO_3
METRICS
Runtime (RDTSC) [s] time
DRAM read/write local to 0 [MegaEvents/s] 1.0E-06*UPMC0/time
DRAM read/write local to 1 [MegaEvents/s] 1.0E-06*UPMC1/time
DRAM read/write local to 2 [MegaEvents/s] 1.0E-06*UPMC2/time
DRAM read/write local to 3 [MegaEvents/s] 1.0E-06*UPMC3/time
LONG
Formulas:
DRAM read/write local to 0 [MegaEvents/s] = 1.0E-06*UNC_CPU_TO_DRAM_LOCAL_TO_0/time
DRAM read/write local to 1 [MegaEvents/s] = 1.0E-06*UNC_CPU_TO_DRAM_LOCAL_TO_1/time
DRAM read/write local to 2 [MegaEvents/s] = 1.0E-06*UNC_CPU_TO_DRAM_LOCAL_TO_2/time
DRAM read/write local to 3 [MegaEvents/s] = 1.0E-06*UNC_CPU_TO_DRAM_LOCAL_TO_3/time
-
Profiling group to measure the traffic from the local CPU to the different
DRAM NUMA nodes. This group allows you to detect NUMA problems in threaded
code. You must first determine on which memory domains your code is running.
A code should only have significant traffic to its own memory domain.


@@ -0,0 +1,28 @@
SHORT Read/Write Events between the ccNUMA nodes
EVENTSET
UPMC0 UNC_CPU_TO_DRAM_LOCAL_TO_4
UPMC1 UNC_CPU_TO_DRAM_LOCAL_TO_5
UPMC2 UNC_CPU_TO_DRAM_LOCAL_TO_6
UPMC3 UNC_CPU_TO_DRAM_LOCAL_TO_7
METRICS
Runtime (RDTSC) [s] time
DRAM read/write local to 4 [MegaEvents/s] 1.0E-06*UPMC0/time
DRAM read/write local to 5 [MegaEvents/s] 1.0E-06*UPMC1/time
DRAM read/write local to 6 [MegaEvents/s] 1.0E-06*UPMC2/time
DRAM read/write local to 7 [MegaEvents/s] 1.0E-06*UPMC3/time
LONG
Formulas:
DRAM read/write local to 4 [MegaEvents/s] = 1.0E-06*UNC_CPU_TO_DRAM_LOCAL_TO_4/time
DRAM read/write local to 5 [MegaEvents/s] = 1.0E-06*UNC_CPU_TO_DRAM_LOCAL_TO_5/time
DRAM read/write local to 6 [MegaEvents/s] = 1.0E-06*UNC_CPU_TO_DRAM_LOCAL_TO_6/time
DRAM read/write local to 7 [MegaEvents/s] = 1.0E-06*UNC_CPU_TO_DRAM_LOCAL_TO_7/time
-
Profiling group to measure the traffic from the local CPU to the different
DRAM NUMA nodes. This group allows you to detect NUMA problems in threaded
code. You must first determine on which memory domains your code is running.
A code should only have significant traffic to its own memory domain.


@@ -0,0 +1,34 @@
SHORT TLB miss rate/ratio
EVENTSET
PMC0 RETIRED_INSTRUCTIONS
PMC1 DATA_CACHE_ACCESSES
PMC2 L2_DTLB_HIT_ALL
PMC3 DTLB_MISS_ALL
METRICS
Runtime (RDTSC) [s] time
L1 DTLB request rate PMC1/PMC0
L1 DTLB miss rate (PMC2+PMC3)/PMC0
L1 DTLB miss ratio (PMC2+PMC3)/PMC1
L2 DTLB request rate (PMC2+PMC3)/PMC0
L2 DTLB miss rate PMC3/PMC0
L2 DTLB miss ratio PMC3/(PMC2+PMC3)
LONG
Formulas:
L1 DTLB request rate = DATA_CACHE_ACCESSES / RETIRED_INSTRUCTIONS
L1 DTLB miss rate = (L2_DTLB_HIT_ALL+DTLB_MISS_ALL)/RETIRED_INSTRUCTIONS
L1 DTLB miss ratio = (L2_DTLB_HIT_ALL+DTLB_MISS_ALL)/DATA_CACHE_ACCESSES
L2 DTLB request rate = (L2_DTLB_HIT_ALL+DTLB_MISS_ALL)/RETIRED_INSTRUCTIONS
L2 DTLB miss rate = DTLB_MISS_ALL / RETIRED_INSTRUCTIONS
L2 DTLB miss ratio = DTLB_MISS_ALL / (L2_DTLB_HIT_ALL+DTLB_MISS_ALL)
-
L1 DTLB request rate tells you how data intensive your code is
or how many data accesses you have on average per instruction.
The DTLB miss rate gives a measure of how often a TLB miss occurred
per instruction. And finally, the L1 DTLB miss ratio tells you how many
of your memory references caused a TLB miss on average.
NOTE: The L2 metrics are only relevant if L2 DTLB request rate is
equal to the L1 DTLB miss rate!