The HP T620 is a "thin client" that is several years old, but still quite a capable little mini PC or server. However, the built-in storage is typically a 4 or 8 GB M.2 SATA (NGFF) solid state drive, which is too small for many use cases. Because M.2 SATA drives are less popular than NVMe drives, they're more expensive per gigabyte. The T620 has a mini PCIe slot inside (typically used for a Wi-Fi card), which this card adapts to an M.2 PCIe slot suitable for an NVMe SSD.
I've installed my adapter board, with a cheap SK Hynix 128 GB SSD, into my T620. Since I don't have any M1.6 screws, I've temporarily secured it with some Kapton tape.
I've installed Ubuntu on it, using the SATA SSD that came with the T620 as the /boot drive. (I'm assuming the BIOS doesn't know how to boot from NVMe drives, but I haven't tested it.)
I've used fio to benchmark the drive, following these instructions. All tests were run with 8 threads and a queue depth of 64.
Write throughput (1M writes)
Write IOPS (4k writes)
Read throughput (1M reads)
Read IOPS (4k reads)
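For anyone who wants to reproduce these numbers, a fio job file along these lines covers the four tests above. This is a sketch, not my exact invocation: the target directory, file size, and runtime are assumptions, the IOPS jobs use random I/O (as the common fio benchmarking guides do), and `stonewall` keeps the jobs from running concurrently.

```ini
[global]
ioengine=libaio
direct=1
numjobs=8
iodepth=64
size=1G
# Assumption: the NVMe drive is mounted at /mnt/nvme
directory=/mnt/nvme
runtime=60
time_based
group_reporting

[write-throughput]
rw=write
bs=1M

[write-iops]
stonewall
rw=randwrite
bs=4k

[read-throughput]
stonewall
rw=read
bs=1M

[read-iops]
stonewall
rw=randread
bs=4k
```

Save it as, say, t620-nvme.fio and run `fio t620-nvme.fio`.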
The read throughput comes close to saturating the PCIe link: the mini PCIe slot provides only a single PCIe 2.0 lane, which has a theoretical maximum speed of 500 MB/s.
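As a sanity check on that ceiling, the per-lane budget for PCIe 2.0 works out from the 5 GT/s signaling rate and the 8b/10b line encoding:

```python
# Theoretical per-lane bandwidth of a PCIe 2.0 link.
signaling_rate = 5_000_000_000  # 5 GT/s per lane
encoding_efficiency = 8 / 10    # 8b/10b: 10 bits on the wire per 8 bits of data
bits_per_byte = 8

bandwidth_bytes = signaling_rate * encoding_efficiency / bits_per_byte
print(f"{bandwidth_bytes / 1e6:.0f} MB/s")  # 500 MB/s
```

In practice, TLP packet headers and other protocol overhead eat into that 500 MB/s, so measured throughput a bit below the theoretical figure is expected.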
I noticed that with the NVMe drive in place, Ubuntu names my network interface enp2s0, whereas without the NVMe drive, the interface is called enp1s0 (presumably because the extra PCIe device shifts the PCI bus numbering from which the predictable interface names are derived). This means that if you take an existing install and add an NVMe drive to it with this adapter, your network configuration might break. I was able to fix this by editing /etc/netplan/00-installer-config.yaml and adding an entry for enp2s0 underneath the entry for enp1s0:
$ cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
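A sketch of what the edited file can look like. The dhcp4 settings here are assumptions; the key point is that enp2s0 gets the same stanza as the existing enp1s0 entry:

```yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: true
    enp2s0:
      dhcp4: true
```

After editing, `sudo netplan apply` (or a reboot) picks up the new interface name.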