Is there a transistor type, or a similar component, that has a low beta variation as a function of current?
Take a look at the current mirror. The most basic 2 transistor mirror works with little regard to beta. To illustrate, (ignoring error sources) a 5V power supply and a 5k resistor are connected to the mirror input. The output of the mirror will draw 1mA from whatever it is connected to, even if you connect that transistor's collector to the power supply. If the input to the mirror is from a current source, just remove the 5k resistor. You can have multiple 1mA current sinks by adding more output transistors, or get a 1/2 mA sink by using 2 input transistors. Analog's article might convince you to place that order for a box full of matched transistors https://wiki.analog.com/university/courses/electronics/text/chapter-11
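To put rough numbers on how little the mirror cares about beta, here is a quick sketch (Vbe ≈ 0.65 V and identical transistors are assumptions for illustration, not values from this thread):

```python
# Approximate behavior of a basic 2-transistor BJT current mirror.
# Assumed for illustration: Vbe ~ 0.65 V, matched transistors.

def mirror_currents(vcc, r_in, beta):
    v_be = 0.65                      # assumed base-emitter drop
    i_in = (vcc - v_be) / r_in       # current through the input resistor
    # Both bases steal a base current from the input leg, so the output
    # copy is scaled by 1 / (1 + 2/beta) -- nearly independent of beta.
    i_out = i_in / (1 + 2 / beta)
    return i_in, i_out

for beta in (50, 100, 250):          # the beta spread quoted from AoE below
    i_in, i_out = mirror_currents(5.0, 5000, beta)
    print(f"beta={beta:3d}: i_in={i_in*1e3:.3f} mA, i_out={i_out*1e3:.3f} mA")
```

Over a 5x spread in beta (50 to 250) the output current changes by only about 3%, which is why the mirror is such a good fit here.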
That link at the end was exactly what I needed. I read that page 4 times, and I'm planning to go back there again. It's really nicely explained and shows exactly what kills the variation in beta.
Thanks a lot for your help.
You might consider using op-amps as your first step of the design. Simply put, they do math. A current-to-voltage op-amp circuit is linear over a fairly large range. There's also a variation of a current mirror that can do ratios.
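To illustrate the idea (the feedback resistor value here is made up, not from any specific design), an ideal transimpedance stage converts current to voltage as V_out = -I_in * R_f, linearly and with no dependence on transistor beta:

```python
# Ideal transimpedance (current-to-voltage) op-amp stage: V_out = -I_in * R_f.
# R_f = 10 kOhm is an arbitrary example value, assumed for illustration.

def transimpedance(i_in, r_f=10_000):
    # An ideal op-amp holds its inverting input at virtual ground, so all
    # input current flows through R_f and the output is linear in i_in.
    return -i_in * r_f

print(transimpedance(100e-6))   # 0.1 mA in -> about -1 V out
print(transimpedance(200e-6))   # doubling the current doubles the output
```

Real op-amps stay close to this ideal as long as the output remains inside its swing and bandwidth limits.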
I'd echo Tim, and quoting from The Art of Electronics: "Warning: beta is not a 'good' transistor parameter; for instance, its value can vary from 50 to 250 for different specimens of a given transistor type. It also depends upon collector current, collector-to-emitter voltage, and temperature. A circuit that depends on a particular value for beta is a bad circuit." (Last sentence in italics)
In electronics, one tries to avoid requiring absolute accuracy for a parameter value, and instead relies on stable ratios between values, or uses feedback or biasing.
Hi James. This is a very good point, but I'm just looking to minimize the impact of the variation in beta. I did read up on the variation, and even though I'm well aware that it is there, I'm trying to reduce its impact since I know I cannot really avoid it.
When I was working on designing the circuit I did use different values of beta in simulations to check if the circuit would work either way. And it did.
I'd just feel better if I knew I chose transistors that would have a minimum variation in beta before testing the final version on the PCB. That's the reasoning behind my question.
Have you looked at matched transistor arrays? You'll get matched betas, within a few percent, over the entire operating temperature range.
Thanks a lot. Just had a look at matched transistors but my issue is not necessarily temperature but current. Would that help me maintain a lower overall beta variation with current?
So you want a highly linear transistor? There are probably some types that are better than others, but generally this is addressed by proper circuit design, e.g. a feedback amplifier, and by choosing the right operating point.
Hi Tim. Thank you for your help. The circuit is a node in a neural network I'm working on, not an amplifier and I'm using the transistor only to apply negative weights in the nodes. I'm not experienced in circuit design and it's a relatively simple circuit. Not small, but simple. Would you mind looking at the circuit and sharing any ideas on how to minimize the impact of beta variation with collector current? https://hackaday.io/project/170591-discrete-component-object-recognition/log/176896-pcb-design-for-discrete-component-neural-network
This looks like an interesting project, indeed. I am not exactly sure I understand what the requirements for each neuron are, however. Here is what I understand:
- Each negative-weight neuron consists of a transistor with a base resistor. There is no collector/emitter resistor.
- Resistor values to set the operating point are missing in the circuit?
- The base is the input. If it exceeds a certain voltage level, it will pull the network connected to the output low. Depending on the resistor values, the transfer function for this could be highly nonlinear (exponential), since we are operating around the turn-on point of the transistor.
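The exponential nonlinearity comes from the diode law I_C ≈ I_S · exp(V_BE / V_T). A rough numeric sketch (I_S and V_T here are generic textbook values, not measured from any particular part):

```python
import math

# Shockley/Ebers-Moll approximation: I_C ~ I_S * exp(V_BE / V_T).
# I_S = 1e-14 A and V_T = 25.85 mV are assumed textbook values.
def collector_current(v_be, i_s=1e-14, v_t=0.02585):
    return i_s * math.exp(v_be / v_t)

# Around the turn-on point, a ~60 mV change in V_BE changes I_C by
# roughly a factor of 10 -- far from a linear weight.
print(collector_current(0.60) / collector_current(0.54))
```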
A good way to tackle this problem would be to describe exactly how a single neuron should behave and try to optimize that circuit in isolation. One easy approach to stabilizing the behavior may be to introduce emitter degeneration (an emitter resistor).
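A quick sketch of what emitter degeneration buys you (the input voltage, R_E, and Vbe values below are assumptions for illustration): the emitter resistor, not beta, sets the collector current.

```python
# Collector current of a common-emitter stage with emitter degeneration.
# Assumed for illustration: Vbe ~ 0.65 V, V_in = 2 V, R_E = 1 kOhm.

def ic_with_degeneration(v_in, r_e, beta, v_be=0.65):
    i_e = (v_in - v_be) / r_e        # emitter resistor fixes the emitter current
    return i_e * beta / (beta + 1)   # I_C = alpha * I_E, with alpha ~= 1

for beta in (50, 100, 250):
    ic = ic_with_degeneration(2.0, 1000, beta)
    print(f"beta={beta:3d}: I_C = {ic*1e3:.3f} mA")
```

Across a 5x spread in beta, the collector current moves by under 2%, at the cost of some voltage headroom lost across the emitter resistor.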
I have to admit that I am not sure how this would affect the entire network, though.
I realised from your latest message that the logs are not as clear as they should be. I'll write a new log explaining the nodes better, so that other people can follow along and replicate or improve upon them.
In reality, what I call a "digit line" is a complete neuron. Each neuron is receiving a bunch of inputs from each sensor (or from upstream neurons in deeper networks). They all get multiplied and added and the output is the probability of the neuron's class (0, 1, 2, cat, dog, etc...).
In the meantime, Darren B sent a message suggesting a current mirror solution. This is exactly what I was looking for, and it does solve the beta dependence issue. I also realise that both you and Darren were already pointing me in the right direction; I just didn't know enough about the subject to understand your helpful comments :o/
© 2020 Hackaday