
Standalone digital TDR (TDR-G2)

My second-generation TDR, which aims to be standalone (no oscilloscope) and as cheap as possible while remaining usable.

A TDR (time-domain reflectometer) which requires no oscilloscope and no expensive parts, built mostly from cheap components. Bandwidth is about 2.5 GHz, with a 20 ps time step (configurable down to about 1.5 ps). It features averaging, autocalibration, firmware which guides the user through calibration and measurement, and PC software which provides a more comfortable user interface and calibration.
Also, it is my diploma thesis.

General parameters

The reflectometer measures in 20 ps steps (or any other step; it is just a matter of software configuration). The pulse generator produces a rectangular wave with a risetime of 85 ps; the measured risetime is about 200 ps. As a standalone device, it can detect simple discontinuities, such as a short, a disconnected cable, or a split which halves the impedance. When connected to a PC, it can perform OSL calibration and show calibrated data.
If you want to know more, look into the project logs.

GUI in Octave

GUI on the device itself

gui.m

Software for PC. It expects Octave 5.1.0+.

x-objcsrc - 61.56 kB - 01/26/2020 at 22:38


TDR_5351_core.pdf

Schematic diagram of the TDR.

Adobe Portable Document Format - 264.48 kB - 01/08/2020 at 22:50


Firmware.zip

Firmware for the STM32F103 microcontroller, project for SW4STM32.

Zip Archive - 5.80 MB - 01/08/2020 at 22:48


Hardware.zip

KiCAD project.

Zip Archive - 2.39 MB - 01/08/2020 at 22:47


TDR_5351.zip

Gerber files

Zip Archive - 611.19 kB - 01/08/2020 at 22:46



  • A goal was achieved

    MS-BOSS · 02/07/2020 at 10:57

    Yesterday, I received my engineer's degree. The mark for my diploma thesis (the reflectometer) was an "A", and the mark from the degree exam was an "A" as well.

    Time to move on to the next projects!

  • Measured parameters

    MS-BOSS · 02/05/2020 at 17:46

    I have promised this log several times, and now it is finally here. Let's start with how a rising edge looks.

    Risetime

    The orange and light blue traces were measured using commercial devices, an Agilent 86100C and a LeCroy WaveRunner (the latter was already mentioned last time). As you can see, the Agilent, whose bandwidth is about 26 GHz, shows the shortest risetime, about 89 ps, which corresponds to the true waveform of the CML buffer pulse generator. Then comes the LeCroy, which cannot cope with the risetime and even reveals its limited sample rate (about 25 GSa/s). The green trace is the reflectometer without SOL calibration and noise reduction; its risetime is about 220 ps. The calibrated reflectometer output with noise reduction is slightly slower and shows pre-echo on the edge due to the restricted spectrum.

    Bandwidth and SNR


    The SNR is a semi-measured parameter, since it uses the measured noise and an estimated Gaussian pulse. The resulting SNR touches 0 dB at 2.5 GHz and goes below 0 dB near 3 GHz, so the estimated useful frequency range is about 2-2.5 GHz.


    The resulting Wiener filter stays nearly at 0 dB up to 2-2.5 GHz and then sharply cuts off the spectrum. The resulting spectrum of the reflectometer after SOL calibration and noise reduction looks like this: the calibration seems valid up to 2 GHz, then noise starts to emerge and the filter starts cutting it.

    Pulse generator parameters

    Here you can see the falling edge of the pulse generator, its fall time and its jitter. The jitter is measured between one unused output of the Si5351 and the pulse generator output, so it tells us nothing about the jitter between the two VCOs of the Si5351. I couldn't come up with any method for measuring the jitter between the two asynchronous outputs.

    Test port impedance

    The input impedance looks quite good up to 3.5 GHz. Since the working frequencies of the reflectometer are below 3 GHz, the input match can be called "better than -25 dB", which is extremely good. At higher frequencies, you can see it gets really bad, mostly because of the bad footprint of the SMA connector. Also, the connector is the cheapest Chinese SMA connector, which doesn't help the results much. However, in the usable band, the match is quite good. And, as you may have seen in the log about SOL calibration, the reflection is small enough to be completely suppressed by the calibration.

    The TDR measurement of the port follows. The impedance drops to 35 Ohms for a while; the position of the drop correlates with the SMA footprint on the PCB. The position was verified by using a sliding short and finding where the impedance drop lies (the sliding short was just the SMA torque wrench, no special piece of equipment). Another, smaller impedance drop happens at the transition from the footprint to the coplanar waveguide, and one more at the resistive splitter.

    Noise reduction

    The noise reduction consists of averaging and Wiener filtering. The result of averaging alone is on the next graph: it helps, but it is too time-consuming and doesn't get rid of all the noise.

    The Wiener filtering suppresses the noise left after averaging. The filter takes into account the amount of averaging used for the LOAD calibration, the NOISE calibration and the measurement itself. The filtering works for both calibrated and uncalibrated data.
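
    As a sketch of the two stages in Octave (the variable names and stand-in data are mine; the actual filter construction is covered in the appendix log below):

        % Stand-ins for real data: 16 repeated 4096-point records and a filter W
        x_runs = randn(16, 4096);        % replace with real measurement records
        W = ones(1, 4096);               % replace with the real Wiener filter

        x_avg = mean(x_runs, 1);         % stage 1: averaging across repeated runs
        y = real(ifft(fft(x_avg) .* W)); % stage 2: Wiener filtering in the spectrum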

    First, without noise reduction:

    And with noise reduction:

    The step at 68 ns happens because the original data end there and the rest is a prediction of future events made by the Fourier transform. But I do not know why the step happens; I would expect the data to continue at the same level, just filled with noise.

    Conclusion

    The reflectometer works and is usable from DC up to about 2 GHz (maybe slightly above that, with reduced SNR). The time resolution is 20 ps and the device can store 4096 points. It can be calibrated as a one-port VNA, which removes the side effects of the mismatched impedance of its input. An adaptive noise suppression algorithm was implemented.

    What now?

    Now, I will move on to...


  • An appendix to the noise reduction which was cut too early

    MS-BOSS · 02/03/2020 at 15:33

    When I talked about the Wiener deconvolution last time, I had it implemented, but not properly covered by analysis and graphs, so the log did not contain as much information as I would have liked. A part of this appendix repeats some statements from the last log, but in more detail and with better-looking graphs. So, let's start with a graph of the measured spectrum. This one is a "semicalibrated" spectrum, which means that the "load" calibration has already been subtracted from the measurement, but the proper SOL calibration has not been performed. The amplitude is logarithmic (and therefore in dB).

    As you can see, it is not normalized to 0 anywhere in the spectrum. The spectrum looks kinda flat up to 2 GHz, then falls off rapidly. Between 3 and 4 GHz, the signal completely sinks into something really nasty. It looks like random noise. Because it is. The reflectometer has a usable frequency range up to about 2 GHz; beyond that its response falls off and the rest of the spectrum is just random garbage unrelated to the measurement.

    So you might ask what the noise spectrum really looks like. Here it is.

    At low frequencies, it is quite subtle, but it rises fast until about the 5 GHz mark, then stays almost constant. This is used for the estimation of the SNR. The second part needed for the SNR estimation is a Gaussian pulse, or more specifically its probability density function, which is a quite good model of the derivative of the rectangular TDR pulse. Its best property is that once you integrate the pulse, you obtain exactly 1, which means you don't have to bother with properly scaling its spectrum amplitude-wise: its spectrum starts at 0 dB and then falls off. The shape of the pulse is on the next graph. Its width was found by experimentation as a compromise between the amount of noise and the effects of a restricted spectrum on the data (pre-echo on edges, smoothed-out features, longer risetime etc.).

    And its corresponding spectrum looks like this. Again, it is logarithmic and in dB.

    Forget about the spectrum above 11 GHz. That noise floor corresponds to near-zero values in the floating-point computations during the FFT; simply said, it is the noise floor of floating-point numbers. When you do something in floating point, stay aware of the fact that its precision is not infinite. There are cases where you might run into this issue and then have to resort to arbitrary-precision mathematics, which can be slow as hell.
    The Gaussian spectrum and the noise spectrum can be divided by each other (element-wise, not matrix-wise) to obtain the SNR. You can see the estimated SNR in the next graph; the Y axis is logarithmic again. See how it reaches 0 dB at about 2.5-3 GHz. This is the point from which nonsense data prevail.
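
    A minimal Octave sketch of this estimation, assuming a 4096-point record with a 20 ps step and a guessed pulse width; the SNR/(SNR+1) form is the usual simplified Wiener magnitude response, which seems to match the behaviour described here:

        N  = 4096;                       % record length, samples
        dt = 20e-12;                     % time step, 20 ps
        t  = ((0:N-1) - N/2) * dt;       % time axis with the pulse in the middle
        sigma = 150e-12;                 % assumed width parameter of the pulse

        % Gaussian PDF as the model of the ideal TDR impulse; scaling by dt
        % makes it sum to 1, so its spectrum starts at 0 dB without rescaling
        g = dt * exp(-t.^2 ./ (2*sigma^2)) / (sigma * sqrt(2*pi));

        noise = 0.01 * randn(1, N);      % stand-in for the measured noise record

        snr = abs(fft(g)).^2 ./ abs(fft(noise)).^2;  % element-wise division
        W   = snr ./ (snr + 1);          % ~1 where signal dominates, ~0 above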

    The resulting Wiener filter looks like this.

    As you can see, its response is unity (the Y axis is logarithmic) and then it sharply falls off above 2 GHz. This means that anything above this frequency is cut off. The cutoff frequency and the shape of the filter depend on the estimate of the TDR pulse.

    After the SOL calibration, the spectrum of the measurement looks like this. From about 1.5 GHz you can see a fast onset of noise.

    So, what happens if the Wiener filter is applied? The resulting spectrum is on the next graph.

    As you can see, the calibration is "somewhat valid" up to 2 GHz and then it gets cut off by the Wiener filter. What does it do to the measured data? The measured data are on the next graph.

    Nasty, isn't it? The large falling edge is the response to a short at the end of the cable. As you can see, it does not reach -1. Then you can see another, smaller reflection about 12 ns after the first one. That's a reflection caused by the reflectometer's test port having the wrong impedance (about 35 Ohms at the footprint of the SMA connector). Then it somehow slowly creeps towards the -1 mark. Sadly, I do not know the origin of this effect. Dielectric soaking? Maybe, but not expected on a high-quality cable. Heating of the driving transistor inside...


  • How Fourier transform can stab you in the back and one-port VNA calibration basics

    MS-BOSS · 01/26/2020 at 22:32

    Last time, I mentioned I was having some problems with the open-short-load calibration. I didn't know what was going on, the supervisor of my diploma thesis had no clues, and even none of my friends could help me. This time I will tell you the result of this log in the first paragraph: I fixed it.

    What was going wrong

    When I performed the calibration, it seemed to be somewhat unstable. To make this statement less euphemistic: it was totally unusable. After the calibration, I checked that applying it to the very data used for performing the calibration gave the correct result. At t=0, there was the correct reflection coefficient, and everywhere else there were zeros.

    However, after measuring the same calibration standard again (even without disconnecting and reconnecting it), the results were off. The peak reflection coefficient was slightly off from t=0, didn't reach 1 or -1, and was not a single-point peak; it was dispersed over several points.

    Expectation can be the biggest mistake

    I thought this was completely wrong. However, it turned out that a calibration cannot guarantee that the results will be perfect. It may look so in the spectral domain, but the end result in the time domain can be quite unexpected. It was mostly my fault, because I was expecting too much. And it looked really bad, because when the TDR response is shown as an impulse response instead of a step response, it looks different and not that intuitive.

    So, I started by integrating the result after calibration. A simple algorithm which makes each point the sum of itself and the preceding point, iterating from t=0 to the end of the dataset, did the job. However, the results were horrible. After re-measuring the calibration standards, everything looked quite normal.
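
    In Octave, that running sum is a one-liner (a sketch; the variable names are mine):

        h = randn(1, 4096);       % stand-in for the calibrated impulse response
        step_resp = cumsum(h);    % each point becomes the sum of all points up to it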

    But after connecting a cable between the calibration point (end of first cable) and the calibration standard, the result was covered in some periodic noise. And it was large!

    Really, that made no sense. Neither I nor my supervisor knew what was going on. I tried recreating the equations for the calibration from 4 different articles/books, only to come up with the same set of equations in the end. I even checked them with wxMaxima (a great tool; if you need something like Maple, go and get it), and all looked OK. But the calibration didn't work. I thought there could be a problem with the FFT/IFFT but couldn't find out what it was. I thought it could be the DC part of the signal, or the fact that the measured signal isn't periodic, so I considered using windowing in the FFT to mitigate this possible problem. However, when you look into the equations of the calibration, the windowing would cancel out in the end (except for points where the window equals 0). So, no luck there. And so I thought it was just damned and that I'm an idiot.

    Yesterday, I wondered whether differentiating the measured dataset could help. Simply said, each point in the dataset after differentiation equals its original value minus the value of the previous point. And... it worked.
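
    The corresponding Octave sketch; prepending the first sample to keep the length unchanged is my choice, not necessarily what the firmware does:

        x = randn(1, 4096);       % stand-in for the measured step response
        d = [x(1), diff(x)];      % each point minus the previous one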

    So, from this:

    I got this:

    It was not perfect, but substantially better. The "blip" at 16 ns caused by the reflection at the connector of the reflectometer was gone. Overshoot and ringing on the edge? Gone. Noise? A lot of it emerged from nowhere.

    So, the calibration started to work. However, there was a LOT of noise. Way too much noise.

    When you feel like there is too much noise, get yourself a wiener

    Sorry, it should be Wiener with a capital W, but I couldn't resist playing with the word. The proper term is Wiener deconvolution. In this case, it is a bit simplified: the deconvolution uses an estimate of the spectrum of the original signal and of the noise spectrum of the instrument.

    So, how does it work? I expanded the calibration by one more standard, called simply "Noise". First, you measure the "Load" standard with maximum averaging and save it; then you measure the same standard without averaging and save it as "Noise". The software then subtracts these to get...
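
    In other words, something like this hedged Octave sketch (the names are mine):

        % load_avg: LOAD standard measured with maximum averaging (low noise)
        % load_raw: the same standard measured without averaging (noisy)
        load_avg = 0.01 * randn(1, 4096);            % stand-ins for real records
        load_raw = load_avg + 0.1 * randn(1, 4096);

        noise_est = load_raw - load_avg;   % what remains is the instrument noise
        Nf = fft(noise_est);               % its spectrum feeds the Wiener filter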


  • GUI should be for users, not against them

    MS-BOSS · 01/25/2020 at 20:01

    The reflectometer has two GUIs. Each of them has a different use case, and therefore their usage slightly differs. One is part of the firmware in the reflectometer; the other is an Octave script which runs on a computer. Please remember that the reflectometer can run without a computer and the Octave GUI.

    The GUI in the firmware

    When the reflectometer is used without a computer, it is meant for detecting faults on cables hidden in walls and solving similar problems. Therefore, the GUI is minimalist. No menus, no options, no way to play with the reflectometer. It's meant to do its job and nothing more. It tells you about its current state, gives you commands during autocalibration and then allows you to run a measurement.

    When the measurement is done, you are presented with the measured data. The GUI zooms into the data so that each pixel equals one sample (the smooth zooming effect is quite nice), then scrolls from the beginning of the measured data to the first discontinuity found, waits there for a while, and then shows you the measured reflection coefficient and the position of the discontinuity in the cable. The position is given in picoseconds, because the reflectometer knows nothing about the cable, including the phase velocity in it. So it's up to you to calculate the spatial position of the discontinuity.

    As you can see, it tries to detect the type of the discontinuity. It can detect "open", "short", "doubled impedance" and "halved impedance"; for other cases it only shows "higher impedance" or "lower impedance".

    The position being given in time rather than in spatial dimensions is not a serious limitation of the firmware, since there are only two ways to measure the physical length of a cable using a one-port measurement. The simple one applies when you know the type of the cable and thus its velocity factor. The other requires measuring the total physical length of the cable and then trying to measure the total electrical length, which might be impossible in the case of a heavily damaged cable. Adding this feature would require an input method for typing numbers (the built reflectometer has only one button). Using a calculator or multiplying in one's head sounds more reasonable to me than adding a keyboard, but I expect someone could argue that I should have added the keyboard (and yes, I could have).
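
    For reference, the conversion is a one-liner once the velocity factor is known; a minimal Octave sketch (the 0.66 velocity factor and the 25 ns reading are assumed examples, not values from the project):

        c    = 299792458;           % speed of light in vacuum, m/s
        vf   = 0.66;                % velocity factor of the cable (example)
        t_rt = 25e-9;               % round-trip time reported by the TDR, s
        d    = c * vf * t_rt / 2;   % halved: the pulse travels there and back
        printf("Fault at about %.2f m\n", d);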

    And that's all it does. Simple, easy to use, self-explanatory; it guides the user from power-on to results. For more involved analysis, there is the second GUI.

    Octave GUI

    This GUI runs as an Octave script on a computer. Theoretically, it can run on anything that has USB, runs Linux, and has Octave with support for serial ports. Once you launch it, you are greeted with a window telling you to connect the reflectometer. Well, not greeted, but... well, at least it tells you so. It's not an example of a good-looking GUI, but I don't count myself as a programmer, and even less as an artist, so please excuse that.

    It can even tell you if you are running an Octave binary from Flatpak with restricted access to serial ports, and it tells you how to correct that.

    If there are other virtual serial ports available, it probes them to find out whether the reflectometer is connected to one of them. If not, it waits for you to connect the reflectometer; after a new serial port appears, it tries to connect to it and checks whether it is the reflectometer.

    After a handshake is performed, the GUI tells you the version of the firmware (its time of compilation) and then finally connects to the reflectometer.

    You may ask what happens to the reflectometer after the computer has connected. Is it paralyzed, controllable only from the computer? The answer is no. It only knows that it is remotely controlled and can still be operated using the physical button (the Octave script expects this as well and won't start to behave funny). The whole state machine of the autocalibration procedure and measurement runs inside the reflectometer, so nothing will happen even if the Octave...


  • What happens in software, stays in software

    MS-BOSS · 01/22/2020 at 15:49

    Intro

    As you might have already noticed, the hardware is quite simple. Usually this means either that you are looking at an incomplete prototype which will need additional bugfixes, or that a lot of the work is done by software. This project aims to be of the second kind, though I won't pretend it doesn't show some signs of the first.

    Synchronous sampling on STM32F103

    "An image is worth a thousand head-sratches." It's quite obvious how to use external ADC triggering on STM32F1 from the image. Or maybe not at all. Let's start by stating that you have to enable the ADC, connect it to appropriate input pins, connecting the trigger to EXTI11 and configuring the NVIC to handle ADC interrupts. More after the image. You might want to make yourself a large mug of coffee.

    First, you have to enable the clock to the port with the analog pins and to the AFIO (the alternate function configuration peripheral). Then set the analog pins as "analog inputs" and the trigger pin as "floating input", or better, connect a pull-up or pull-down. For those familiar with STM32: if you were expecting the trigger pin to be set as "alternate function", you were wrong. It doesn't work that way, and don't expect to find this piece of information in the datasheet or the reference manual. Then you have to remap the trigger of the ADCs to the external trigger on the EXTI11 input. Maybe it will seem simpler once you see the source code (there is no code to enable the clock to the ports used, because they are already on).

    At this moment, it is all connected together, but can't trigger yet nor measure anything. What's missing? The trigger has to be enabled and the trigger condition has to be selected. ADC must be enabled, configured and calibrated. Interrupts need to be configured and enabled.

    So just set up the EXTI to react to a falling edge on EXTI11 and initialize it. Enable the ADC clock and set the ADC to trigger from the external trigger. Then set the sample time and initialize the ADC. It then has to be calibrated (an internal procedure of the microcontroller), and one has to wait for the calibration to finish; an anti-optimization measure is included. Finally, enable the ADC interrupt, configure the NVIC to watch the ADC EOC interrupt flag and enable it.

    Now the microcontroller is ready to measure voltage using the ADC after being triggered using the external trigger, then run an interrupt which does some magic with the result. It could also perform a transfer using the DMA, but there are reasons why I used just an interrupt.

    Logic in the interrupt

    I won't explain the whole logic in the interrupt, because the state machine is a bit complicated. Let's concentrate on the practical part of the job.

    The measured data are 10 microseconds long and contain 500,000 values 20 ps apart. Since the Si5351 PLL VCOs start with an unknown phase offset, it is not known where the beginning of the measurement lies in the dataset. There is also one thing worth mentioning: one sample occupies two bytes, so the whole dataset occupies 1 MB of RAM. The F103 has only 20 kB of RAM, and you cannot allocate all of it for the sampled data. Therefore you cannot sample the whole dataset and then search through it; it has to be done the other way around.

    The interrupt tries to find the point where the data contain the largest derivative. Because of the large amount of noise, it uses the average of 16 first-order differences. After a bit of thought, an equivalent of this average is the difference of two values 16 points apart. The equation below should prove that.
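
    In symbols (my reconstruction, with x[n] denoting the n-th sample), the sum telescopes:

        \frac{1}{16} \sum_{k=0}^{15} \left( x[n-k] - x[n-k-1] \right) = \frac{x[n] - x[n-16]}{16}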

    Measurement plane position calibration

    Now we will leave the interrupt for a while. During the autocalibration phase, the firmware tries to roughly find the beginning of the dataset by collecting the position of the largest difference over several runs; again, this is because of the noise in the dataset. Then it asks the user to connect a cable and performs several runs of measurement. In the measured data, the firmware tries to find the rising edge corresponding...


  • What should one do with the sampled voltage and caveats of samplers

    MS-BOSS · 01/12/2020 at 23:46

    Last time, we covered the method of sampling using diodes. However, that's only the beginning of sampling: you need to sample into something and then use the stored value. Of course, it has to be a capacitor or something capacitor-like. This means you can sample into a capacitor, the gate of a field-effect transistor, the input of a unipolar op-amp etc. I already mentioned Houtman's sampling head, which uses the input of a TL072 as the sampling capacitor. However, its capacitance is about 15 pF, which would heavily load the sampling bridge and also the input of the sampling head. Therefore the sampling head uses another 1 pF capacitor in series with the op-amp, which forms a capacitive divider and also limits the loading of the sampling bridge.
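
    As a quick sanity check on those values (assuming the divider is just the two capacitances), V_amp / V_sampled = C_series / (C_series + C_in) = 1 pF / (1 pF + 15 pF) = 1/16, so only about 6 % of the sampled voltage reaches the op-amp, while the bridge sees only the series combination of roughly 0.94 pF.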

    There is a reason to use a unipolar amplifier. You have to store the sampled voltage, but that's only one part of the job: after storing the voltage, you probably want to read it. Well, unless you are Schrödinger's relative and don't actually want to know the voltage. Speaking of Schrödinger, that is the problem: reading the voltage will probably somehow alter the sampled voltage. If you use a bipolar amplifier after the sampling circuit, you will experience an effect called droop. Because a bipolar amplifier has a significant input current (remember, the sampling capacitor is only a few picofarads), the sampling capacitor gets gradually charged (or discharged). If the time between sampling the voltage and measuring it is always the same, you will obtain data which are DC-shifted, downscaled or a mix of both. If the time between those events is not constant, the data will look noisy. The noise is approximately equal to the droop rate multiplied by the jitter of the time between sampling and measurement (if the droop is approximately linear).

    Therefore, there are two things you want: the input current of the amplifier after the sampling circuit should ideally be zero, and the time between sampling and measuring the sampled voltage should be constant. And as short as possible.

    This leads us back to unipolar amplifiers. These have almost no input current, or at least a negligible one if the time between sampling and measurement is short enough. You can use MOSFETs, most JFETs and most unipolar op-amps. However, avoid HEMTs, even though they are field-effect transistors, since their input current is almost the same as in bipolar transistors. Or at least this applies to those which have a proper datasheet (many have only a few limiting values and a set of S-parameters).

    To fight droop, my reflectometer uses a BF998 MOS tetrode as the input amplifier, followed by a second sampler section with a much larger sampling capacitor (1 nF). These two samplers are precisely synchronised, and the output of the second sampler is then measured by the ADC. There is one slight error in the schematic: C231 in the source of the tetrode is not used in the final reflectometer, as it would make the amplifier unstable or make it oscillate. The same would happen if C230 were larger, R251 smaller and so on. The circuit is tuned to be as fast as possible while remaining stable. Making C230 larger would also load the sampling bridge much more and cause the input impedance of the instrument to drop at higher frequencies; since the bonding wires of the sampling bridge are inductive, the sampling bridge forms a resonator with C230.

    To explain why there are so many resistors and capacitors placed in seemingly strange places, have a look at the transfer function of the amplifier. Please note that the -6 dB corner frequency is predicted by simulation to be at about 4 GHz, while the transistor is recommended for applications under 1 GHz. Yes, it smells of some positive feedback, but one carefully crafted not to turn into instability. Sorry for the captions in the graph being in Czech; the bright trace is the S21 parameter.

    The transfer function of the whole sampler, from the test connector to the output of the BF998, should look somewhat like the next image. According to...


  • How to sample fast while keeping your wallet safe

    MS-BOSS · 01/10/2020 at 17:23

    Have you ever had the need to sample an analog signal? If so, it was probably at low frequencies, quite probably some audio-grade stuff. Everyone knows how to do that: either take a complete sample-and-hold amplifier like the old LF398, or go the DIY way by connecting together some generic amplifier, a capacitor and an electronically controlled switch like the 4066.

    But what if you aim a bit higher? Several megahertz and up? Okay, the 4066 method may still be usable; you can find quite a lot of receivers which use a 4053 or 4066 as an IQ demodulator for short-wave amateur radio. Still good, still cheap and simple.

    But the true question is what happens once you want to sample something which extends beyond several hundred megahertz, or even several gigahertz. Then you will find yourself in a very uncomfortable place. If you wish to do some voodoo with OTAs, you can reach hundreds of megahertz, but for more you have to dig very deep into your pocket. First, you will be shocked, because there are only a few parts which can do such things, like the HMC760. But seriously, are you going to pay $360 for a chip? Maybe you are, but not me.

    Okay, so searching the e-shops showed us there is no part which would do this job for us. Time to dive into old manuals, schematics and articles! Maybe some of you have already met an old TDR machine or a 60s Tektronix sampling oscilloscope. They usually have quite similar samplers. Some are very simple, some are very intricate, but the principle is always the same: using diodes as sampling bridges. When current passes through them, so does the signal; turn off the current and the signal also stops going through. Ideally, that is.

    What you can see in this part of the schematic from the S-2 sampling head is the sampler. It doesn't look obvious, does it? If you don't believe it is a sampler, read the manual, which explains the whole sampling head and each of its parts, how it works and how to make it work. It's sad you don't get a manual like this these days. Note that this sampling head was introduced in 1967 and offered 4.6 GHz of bandwidth and <75 ps of risetime.

    To summarize it: a very short current pulse of defined length goes through the diodes, turns them on briefly and then turns them off again very fast. The two key parameters here are how fast you can turn off the diodes and how precisely you can control when that happens. Let's look at how the folks at Tek did this magic.

    What you see is an avalanche pulse generator. You may have already seen one in one of my projects. Essentially, it is a transistor whose collector-emitter voltage is so high that it almost goes into avalanche breakdown, but not on its own: the avalanche is triggered by other circuitry through the transformer on the right. This makes a very fast pulse. To make it even faster, a snap-off diode is used (these are very hard to come by these days). The length of the pulse is precisely set by the two "cliplines" on the left.

    It is fast and great, but it requires several supply voltages (and quite high ones!), draws a lot of current and cannot be triggered very often, because it has to reach steady state again. What about doing it in a less complicated way?

    If you look at the article by Hubert Houtman, you can see that it is possible to achieve about 1 GHz of bandwidth using just a "quite fast" comparator and two resistors.

    The MAX961 comparators have rise/fall times of about 2.3 ns. The outputs of the Si5351 have those times under 1 ns and are implemented as current sources. Theoretically, this could get us at least 2 GHz of bandwidth using the same diodes; since we can get faster diodes today, it could be even better. The only unpleasant side effect is the need for a virtual analog ground at a 1.65 V potential (half the 3.3 V supply of the Si5351).

    To see how simple it can be, look at the lower right part of the next image. Four diodes, three resistors, connected directly to the Si5351. Nothing complicated. The result is about 200...


  • Equivalent-time sampling is the key to cheap reflectometer

    MS-BOSS · 01/10/2020 at 14:47

    When I was designing this TDR, I was trying to avoid using expensive parts; thus, ECL delay lines were out of the question. I found a few articles which used other ways to achieve equivalent-time sampling.

    For example, there was a project which used two oscillators slightly out of tune with each other. The problem was that it was highly "academic": the time step depended on the frequencies of the two free-running oscillators, and thus on temperature, supply voltage and moon phase. A good idea, but not usable in that form. See the original article on IEEE Xplore (or look it up via Sci-Hub or Library Genesis if you do not have access to scientific articles): A 16ps-resolution Random Equivalent Sampling circuit for TDR utilizing a Vernier time delay generation.

    There were also several projects which used an FPGA implementing a DDS. The DDS generated a sine wave whose phase could be set arbitrarily in very small steps; this sine wave was then turned into a square wave. This way, the researchers were able to generate both the TDR pulse and the sampling pulses in one FPGA and a handful of components. The sampling was performed by a fast latchable comparator. In my opinion, this is the best TDR architecture I have seen in scientific articles; however, I wanted to avoid FPGAs. See Miniaturized FPGA-Based High-Resolution Time-Domain Reflectometer.

    Then there was a project which used D-type registers in an FPGA to directly store a train of pulses. Very imprecise, temperature-dependent, and in need of re-evaluation each time you generate a bitstream for the FPGA (yuck). I also hate non-deterministic digital circuits, so this one was out of the question as well. See A Time Domain Reflectometer with 100 ps precision implemented in a cost-effective FPGA for the test of the KLOE-2 Inner Tracker readout anodes.

    Delay-line based reflectometers also turned up; however, these are too expensive and too limited. See Sequential sampling time domain reflectometer.

    And then there was the usual bag of sh** which you can find in scientific articles, usually from universities which have to publish even though they have nothing to show. This usually revolves around connecting a pulse generator to an oscilloscope, several pages describing the splitter used for connecting together the generator, the oscilloscope and the DUT, and then several pages covered in equations describing everything from the Maxwell equations to the effect of a solar eclipse on the price of donuts after Easter 1919.

    And so it seemed I had to come up with something (almost) new. I was playing with the Si5351 dual-VCO PLL at the time, so I wondered whether I could use it for this purpose. The answer was "probably yes, why not try it". And so I did. These PLLs allow you to set the frequencies of the two VCOs with a non-integer multiplier; the fractional part allows the two frequencies to differ by less than 1 ppm, and their relation is precisely controlled, as opposed to the "Vernier" article. So, nothing really new, just newer and more interesting parts. When set to exactly 1 ppm difference in frequency, one gets one million samples during one measurement cycle. I set the measurement cycle to 10 microseconds so it could be easily measured by the internal ADC of an STM32 microcontroller. This gives the ability to measure one million samples 10 picoseconds apart.

    The frequency of the two VCO outputs is given below; parameters a, b and c are the integer and fractional parts of the multiplier, and d is the divider.
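
    My reconstruction of that relation, following the usual Si5351 conventions (the original equation image is not reproduced here):

        f_{out} = f_{XTAL} \cdot \frac{a + b/c}{d}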

    The time step of sampling is then given by the next equation:
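
    Again a reconstruction: the equivalent-time step is simply the difference of the two output periods,

        \Delta t = \frac{1}{f_2} - \frac{1}{f_1} = \frac{f_1 - f_2}{f_1 \, f_2}, \qquad f_1 > f_2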

    From these equations, you can find out how to set the parameters to suit your needs (time step, number of points, measurement length).
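
    A quick numeric sketch of these relations in Octave; the 100 kHz repetition rate is an assumed example that reproduces the numbers quoted above, not necessarily the exact firmware setting:

        f1 = 100e3;                     % pulse generator output, Hz (example)
        f2 = f1 * (1 + 1e-6);           % sampling strobe, offset by 1 ppm
        dt = abs(1/f1 - 1/f2);          % equivalent-time sampling step, ~10 ps
        N  = round(f2 / abs(f2 - f1));  % samples in one full sweep, ~1 million
        printf("step = %.1f ps, %d samples, record = %.1f us\n", ...
               dt*1e12, N, N*dt*1e6);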
