12VDC [car battery]
a computer [Raspberry Pi Zero, + LCD + hub/keyboard, etc.] with internal DC-converters... ~1A, 12V
a USB-attached external hard drive, ~1A, 12V
each with identical five-foot heavy-gauge power cables, powered from the same point.
devices not yet connected via USB
~20-30mV difference at negative input to computer vs. negative input to powered-off drive [at the devices]
Thus ~20 mΩ [0.02 ohms] on the negative power wires...
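That resistance estimate is just Ohm's law on the rough measurements above; a quick sketch, using the ~1A and ~20mV figures:

```python
# Rough Ohm's-law estimate of the negative-wire resistance.
# Values are the approximate measurements from the text, not precise figures.
current_a = 1.0      # ~1A drawn by the running computer
delta_v = 0.020      # ~20mV difference between the two devices' negative inputs
r_negative = delta_v / current_a
print(f"negative-wire resistance ≈ {r_negative * 1000:.0f} mΩ")  # ≈ 20 mΩ
```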
Both devices powered, the mV difference between their negative inputs varies between ~+15mV and -30mV depending on what each device is doing.
No biggy, right?
Connect them via USB!
Here's a resettable fuse on the drive's negative input:
<todo, grab picture from old phone with good macro mode>
Description: got so hot it looks nothing like a fuse anymore, more like a piece of burnt charcoal, complete with burn marks all around on PCB.
Now, I'm not certain that came from this setup, or the fact that my stupid DC Barrel Jack on compy was all-round metal, allowing one to touch tip-to-shell on insertion.
BUT: continue with me regarding the previously-described setup/measurements.
~20mV difference at negative terminals
~0.02ohms on power cables' negative wire
~1A, 12V to each device
Devices NOT connected to each other via USB cable [yet]
Connect ammeter between USB-A port's shell at compy, and USB-B port's shell at drive: it reads ~70mA.
"OK, so a couple LEDs, big deal! USB ports handle 500mA!"
How does an ammeter measure current?
It puts a small resistor in series with the probes and measures the voltage across that resistor.
[a resistor whose value has to be greater than the probes', which are roughly the same gauge as, and together about the same length as, the power cords... so they're probably ~0.02 ohms; then, per the 10-to-1 rule-of-thumb, the series resistor is probably ~0.2 ohms. Which would make sense, given that the minimum voltage-setting is 200mV and the amp-setting is 10A "max" but displays up to 15A if pushed to do so]
So, it's not measuring the current that would go through the shield/VBus-, but some *fraction* of it... plausibly 1/10th!
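To see why, compare the loop current with the meter inserted against the loop current a bare shield would carry; a sketch, using the rough resistance guesses above (all values are estimates from the text, not measurements of the actual meter):

```python
# The ammeter inserts ~0.2Ω (shunt) + ~0.02Ω (probe leads) into a loop whose
# own resistance is only ~0.02Ω (the shield), so it badly under-reads.
r_shield = 0.02           # estimated shield-braid resistance
r_meter = 0.2 + 0.02      # estimated shunt + probe leads
v_ground_diff = 0.020     # ~20mV ground difference driving the loop

i_measured = v_ground_diff / (r_shield + r_meter)  # what the meter sees
i_bare = v_ground_diff / r_shield                  # direct shield connection
print(f"measured: {i_measured*1000:.0f} mA, bare shield: {i_bare*1000:.0f} mA")
print(f"meter reads ~1/{i_bare/i_measured:.0f} of the bare-shield current")
```

With these guesses the meter reads roughly 1/12th of the bare-shield current, the same ballpark as the 1/10th figure above.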
700mA might surge through that when the drive starts writing/seeking!
AND: that may go *either* way, the port may handle 700mA, but what about -700mA?
That's just the tip of the iceberg.
But, continuing that experiment: imagine the shield braid is roughly the same gauge as the power cords [when not wrapped around the USB signals], and the cable about 5ft... ~0.02 ohms, just like the power cords' negative wire. So now, if the drive is idle/off and compy's running full-tilt at 1.5A: 1A will go through compy's power-cord's negative wire [0.02 ohms], and 0.5A will go through the USB shield into the drive and out its power cord [0.04 ohms total]. Not so great. And, anyhow, 1/3rd of compy's current won't return down compy's own power cord when the drive is off/idle! And vice-versa!
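That 1A/0.5A split is just a current divider between the two parallel return paths; a sketch with the resistances assumed above:

```python
# Current divider for compy's 1.5A return current: two parallel paths back
# to the battery's negative terminal (rough resistances from the text).
r_cord = 0.02             # compy's own power-cord negative wire
r_usb_path = 0.02 + 0.02  # USB shield + the drive's power-cord negative wire
i_total = 1.5

# Current divides inversely with path resistance.
i_cord = i_total * r_usb_path / (r_cord + r_usb_path)
i_usb = i_total * r_cord / (r_cord + r_usb_path)
print(f"via compy's cord: {i_cord:.1f} A, via USB shield: {i_usb:.1f} A")
```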
Think that's not a problem? Still OK with a self-powered USB device sinking and sourcing 500mA whenever?
OK, how 'bout this:
Compy has a DC-DC converter which was designed with a far underrated input capacitor... [I figured this out later. Bear with me]...
So, imagine a DC-DC buck converter...
How does it work?
it has an SPDT switch and an inductor.
In one position the inductor current "charges" from the input-voltage [12V], and flows into your device/load.
In the other position the inductor current "discharges" into the load.
It switches *really fast* so that the inductor barely has a chance to "discharge" before getting "recharged."
Thus, in both switch positions the current through the inductor and into the device/load remains roughly constant, and so then does the voltage.
"OK, what's the problem?"
When the inductor is charging, the full device current [say 1A] flows from the source and back to it. So, half the time the battery supplies 1A at 12V [for a 1A 5V load].
The other half of the time there is no current to/from the battery; the /inductor/ acts as the battery.
"Sounds brilliant! No wonder they caught on so fast! So, what's wrong with that?"
The battery sees a load switching between 0A and 1A at, say, 100KHz... that's a LOT of ugly, but we'll focus on the present scenario:
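A toy sketch of that source-side waveform, under the simplifying assumptions above (ideal switch, ~50% duty, 100KHz; real converters vary duty with load and input voltage):

```python
# Toy model of the input current a buck converter presents to its source:
# full load current during the "charge" phase, ~0A during "discharge".
# 50% duty and ideal instantaneous switching are simplifying assumptions.
f_sw = 100_000   # 100KHz switching frequency
duty = 0.5
i_load = 1.0     # amps drawn from the source during the charge phase

def input_current(t):
    """Source current at time t [seconds]: i_load or 0, per switch phase."""
    phase = (t * f_sw) % 1.0
    return i_load if phase < duty else 0.0

# The *average* draw is duty * i_load, but the waveform is a hard square wave.
samples = [input_current(n / (f_sw * 10)) for n in range(10)]  # one period
print(samples)
```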
Remember how compy is using 1.5A... roughly half the time, 1.5A goes down compy's power-cord's positive wire. 1A of that goes through its power-cord's negative wire. Ugly-enough.
0.5A goes through the USB cable into the drive and down its power-cord's negative wire. UGLY.
But atop that, it's 0.5A /switched/ at 100KHz! Half the time it's going through the USB cable and half the time it doesn't, because the inductor "battery" is inside compy!
Now, if you don't think 0.5A at 100KHz is going to cause problems with data, then I've got even UGLIER stuff for your consideration.
The hard drive ALSO has a switching supply.
So, say both devices are running at full-tilt, 1.5A each...
And for simplicity, let's say their power supplies are synchronized, 180 degrees out of phase. So, when compy's drawing its 1.5A from the car battery, the hard drive draws its 1.5A from its own internal dedicated inductor "battery". The real battery sends 1.5A from its positive terminal, up compy's power cord, into compy; then 2/3 of it returns via compy's power cord, and the other 1/3rd goes from compy through the USB cable to the drive and back to the battery through the drive's power cord. [Negative wire. Also note: no current is flowing on the drive's positive wire, because the drive is running off the inductor].
OK, now the two switching-supplies switch, the whole scenario is reversed. INCLUDING: 1/3rd of the /drive's/ current is now flowing from the drive down the USB cable, and into Compy.
Before, we were concerned [well, I was] about 0.5A going through the USB cable; then about 0.5A and 0A alternating down it; /now/ that concern has /doubled/: +0.5A and /negative/ 0.5A alternating down that cable.
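The alternating scenario above can be sketched the same way; under the same idealized 180-degrees-out-of-phase assumption, the shield current over one switching period is a 1A peak-to-peak square wave:

```python
# USB-shield current when both devices run at 1.5A, supplies synchronized
# 180 degrees out of phase (the simplified scenario from the text):
# 1/3 of whichever device is currently drawing from the real battery
# returns via the *other* device's power cord, through the USB cable.
i_share = 0.5  # 1/3 of 1.5A takes the USB-shield path

def shield_current(phase_frac):
    """USB cable current over one period; sign indicates direction.
    First half: compy draws from the battery, 0.5A flows compy -> drive.
    Second half: the drive draws, and 0.5A flows the other way."""
    return i_share if phase_frac < 0.5 else -i_share

wave = [shield_current(n / 10) for n in range(10)]
print(wave)  # a 1A peak-to-peak square wave riding alongside the data lines
```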
BUT IT GETS WORSE!
What happens when the supplies *aren't* synchronized? Or run at different frequencies, entirely? And different loading? [All of which is pretty much guaranteed to be the case].
OK, still think that measly 20-30mV difference in "ground" is no big deal? Or how about how that 20mV somehow turned into 70mA? Or that 500mA? Or a 1A peak-to-peak square-wave alongside your data?
After much deliberation and research, I've decided to isolate each device's power supply... the normal way: use 120VAC [thus, an inverter] and switching wall-warts. I've checked; their outputs are indeed isolated.
It seems shielding, especially of USB cables/connectors/devices, is really quite an art. None of the resources I found consider a case like mine, where host and device both run off a single supply, where power *doesn't* go through the USB cable, because they draw too much power.
There was a bit of speculation from various engineers in various forums, this situation being somewhat similar to industrial control panels, and also automotive [duh] applications like CAN-bus. In such situations grounding/shielding is a serious consideration, apparently for exactly these reasons.
The basic conclusion seems to be that USB was never intended for such situations. Some of the typical practices in such situations *can't* be applied to USB. E.g. shielding might be connected at the sender, but *not* at the receiver. But USB is bidirectional.
It seems there are USB isolators for such situations; I haven't looked into them, I imagine they're not cheap [bidirectional opto-coupling? signals which are only /sort-of/ differential, sometimes?]
Before I decided to revert to AC I was *almost* convinced to do the following: shield connected at host's ground, but connected to device's ground through a parallel resistor and capacitor.
The capacitor would keep the two ends essentially "shorted" together during high-frequency events, like the ringing caused by switching power supplies, WITHOUT allowing either device to draw ground current through the shield and the other device's ground wire. Thus, when either switcher switched from its "internal battery" [inductor] to the external real battery, and a large current suddenly flowed down its ground wire to the real battery, that device's ground voltage would rise sharply [due to the wire resistance], coupling into the signals, etc. With the capacitor, both grounds would rise together, smoothing that edge. And, being that my wires were tens of milliohms, even when that capacitor settled, the difference in grounds would still only be tens of millivolts. Just not a *sharp*-edged difference.
And, thus, each device's return current goes down its own wire, NOT through the USB cable, and any sharp differences in ground get smoothed.
And the resistor? I was thinking somewhere around 3-33ohms, about 100-1000 times that of the wire-resistance. A tiny portion of the devices' return path would go through the cable, but minimal. More importantly, try to keep that shield "grounded" at both-ends, as much as feasible, by the original design... since both ends transmit. [Maybe 3ohms || capacitor at *both* ends woulda been smarter?]
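The behavior of that parallel R||C can be sketched as frequency-dependent impedance: near-R at DC [so negligible shield current, since R is ~100x the wire resistance], near-short at switching-edge frequencies. The 3.3Ω is picked from the 3-33Ω range above; the 100nF capacitor value is purely my assumption, not something from the text:

```python
# Impedance of the proposed parallel R||C shield termination.
# 3.3Ω is from the 3-33Ω range discussed; 100nF is an illustrative guess.
import cmath

R = 3.3
C = 100e-9

def z_parallel(f_hz):
    """|Z| of R in parallel with C at frequency f_hz (f_hz > 0)."""
    zc = 1 / (2j * cmath.pi * f_hz * C)
    return abs(R * zc / (R + zc))

# Near DC the resistor dominates; well above the switching frequency the
# capacitor "shorts" the shield to the device ground.
for f in (100, 100_000, 10_000_000):
    print(f"{f:>10} Hz: |Z| ≈ {z_parallel(f):.3f} Ω")
```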
It was probably hokey, all that thinking, but it sorta made sense to me, and it was not unlike many others' suggestions [several suggested *mega*-ohm resistors!]. But, again, those were shielding designs for *other* signalling methods applied to a fundamentally different one. As were many other suggestions.
Frankly, from the sounds of it, design of such shielding is *very* application-specific... there are numerous ideas from smart folk suggesting *entirely different* and even contradictory "best practices". Some focus on EMI radiation, some on EMI susceptibility; some on high frequency, some on low; few I found focus on DC [like my concerns]. But, it seems, those best-practices vary widely and depend dramatically on the needs...
AND, it seems, USB basically flies in the face of all that by essentially assuming that any device is an extension of the host... e.g. the shielding guidelines basically rely on the idea that the device is enclosed in the same faraday cage as the host, with no openings [e.g. no separate power source, etc.]. The shield on the cable, then, is essentially an offshoot of the host's cage into another such cage, making essentially *one*.
They get away with it, mostly, I think, because "self-powered" devices are usually powered by *isolated* power sources. Kinda like throwing a battery in the ol' faraday cage.
And when two devices are both earth-grounded at the shield *and* at their power source [which actually is rather rare, I think; maybe corporate printers, but they'd more likely use ethernet], I guess the idea is that no current from the device's power supply has reason to flow through the shield, nor earth, since the power source is at the device end, and otherwise isolated from the other device's power source, and at low voltages like these current likes to flow in circles right back to the source. [Thus "circuit"/circle]
Mine, again, is different because my devices' sources are *not* isolated, and are, in fact, the same [well, roughly 50% of the time, when they're not powered by the inductor "battery"]. So, current will take any and every path back to the source. Including right down that hefty shield.
Thing is, it seems, the majority of USB cables/devices don't really even follow that single-faraday-cage principle; mostly, probably, due to manufacturing costs, but also design-costs of understanding *exactly* what's going on, vs. following "best practices," and design based on trial-and-error, which, too, can switch-up concerns for other concerns [e.g. practices implemented in order to pass EMI-radiation testing don't seem at all concerned with current-flow *internal* to the system].
So, frankly, in a way it's kinda miraculous so many devices/cables actually function as well as they do. I suppose some of that has to do with error-checking and retries that're usually invisible to users. E.g. my two switching supplies, running at ~100KHz, might've imparted data-destructive impulses 400,000 times per second... but at 480,000,000 bits per second, and packet-sizes of less than 1000 bits, the odds.... wait a minute... no... it's in fact *quite* likely a [small] 512-byte [4096-bit] packet would overlap one of those impulses! Huh.
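A back-of-envelope check on that realization, using the figures above (two supplies, two edges per cycle each):

```python
# Do USB high-speed packets overlap the switching-supply impulses?
impulses_per_sec = 2 * 2 * 100_000      # 2 supplies x 2 edges x 100KHz = 400,000/s
bit_rate = 480_000_000                  # USB 2.0 high speed, bits/second
packet_bits = 4096                      # a 512-byte data payload

packet_time = packet_bits / bit_rate    # time the packet spends on the wire
impulse_spacing = 1 / impulses_per_sec  # average time between switching edges

print(f"packet duration: {packet_time*1e6:.1f} µs, "
      f"edge spacing: {impulse_spacing*1e6:.1f} µs")
# The packet (~8.5 µs) spans several edge intervals (~2.5 µs apart), so
# virtually every such packet overlaps at least one switching edge.
```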
I suppose big-ass capacitors at the DC-converters probably helped.
Anyhow, this is too much for me... I'da had to modify all my devices' shielding-to-ground interfaces, and add bigger-ass caps to all the DC-converter inputs, then consider the brownouts caused by switching those devices' converters' capacitors on, and so on... and, once having done all that, wonder about those effects if/when I have a regular ol' AC source to work with.
It's surely less efficient, which'll drain my battery faster, but I'm going back to the ol' tried-and-[allegedly]-true method of isolated power sources, which can be off-the-shelf afforded by using a friggin' inverter to friggin' bump my friggin' 12VDC source up to 120VAC just to bump it back down to 12VDC with regular ol' power-bricks/wallwarts already in my collection, came with the devices, or could even get from a thrift-store, one for each device. Weee! So much easier and cheaper.
Still, it's probably wise to /make sure/ each power-supply is isolated, and the inverter, too.
[BTW: why do even cheap 120VAC->DC-Converters get all the isolated-output-fun when regular ol' DC-DC converters with isolated outputs are comparatively rare? They still use transformers! BUT: they can get away with MUCH smaller ones than the ol' wallwarts because: they can simply use a 1:1 transformer; fewer windings and even the same gauge wire for input and output. Sure, there are methods to put a 1:1 transformer on the *output* of a DC-converter; but consider the high current those windings would handle, and the wire gauge necessary! Putting it on the 120VAC side [they're, really, isolated-/input/ converters] means the windings handle 1/10th of the current, so the gauge can be much smaller... and, thus, so cheap as to throw into even cheap switching wall-warts. And thus removing all them common-source/unisolated woes I've encountered.]