Cute dpot hack for increased resolution...

On Thursday, July 23, 2020 at 3:14:50 AM UTC-4, Bill Sloman wrote:
On Thursday, July 23, 2020 at 4:51:13 PM UTC+10, Ricketty C wrote:
On Thursday, July 23, 2020 at 1:18:30 AM UTC-4, Bill Sloman wrote:
On Thursday, July 23, 2020 at 12:57:35 PM UTC+10, Ricketty C wrote:
On Wednesday, July 22, 2020 at 9:05:23 PM UTC-4, Phil Hobbs wrote:
On 2020-07-22 19:50, Ricketty C wrote:
On Wednesday, July 22, 2020 at 6:43:57 PM UTC-4, Phil Hobbs wrote:
So I'm doing a simplified version of the differential laser noise
canceller, which in the spherical cow universe looks like it does
very well out to about 10 MHz, thanks to the amazing properties of
BFP640s and some new photodiodes with reduced series resistance.
(At least according to Hamamatsu.)

One thing I need for this is an adjustable resistance with good
bandwidth. The fastest dpot I can find is the AD5273BRJZ1 (1k, 64
steps, ~6 MHz bandwidth at half scale).

The resolution is too coarse for my application, but as it's pretty
well set-and-forget, I don't mind some algorithmic complexity.

Turns out that if you make a sort of Darlington connection, with
one dpot connected as a rheostat in series with the wiper of the
other (which is connected to one end), you can get the approximate
resolution of a 10-11 bit dpot.

(ASCII schematic garbled in transit: the 1k pot R1 spans the two terminals, and the 5k pot R2 is wired as a rheostat from R1's wiper back to one end of R1.)

It works best if R2 is about 5 times R1, but the bandwidth may be
better if I stick with the 1k version.

Neglecting switch resistance, calculating the total resistance as
a function of the codes, sorting into a single 1-D array to get a
monotonically increasing resistance function, and taking the first
finite difference reveals a step size nearly always less than 0.1%
except near the low-resistance end, which I don't care much about.
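The enumeration described above is quick to reproduce. A rough sketch, assuming nominal 1k/5k elements with 64 taps each, zero switch resistance, and the fine rheostat shunting the segment between R1's wiper and one end (that topology is my reading of the description, not anything from a datasheet):

```python
import numpy as np

R1T, R2T, N = 1000.0, 5000.0, 64          # assumed nominal values, 64 taps each

codes = np.arange(N)
ra = R1T * codes / (N - 1)                # R1 segment above the wiper
rb = R1T - ra                             # R1 segment below the wiper
r2 = R2T * codes[:, None] / (N - 1)       # fine rheostat, broadcast over both codes

# The fine rheostat shunts the lower R1 segment, so the end-to-end value
# is ra + (rb in parallel with r2):
with np.errstate(invalid="ignore"):
    shunt = np.where(rb + r2 > 0, rb * r2 / (rb + r2), 0.0)
total = np.sort((ra + shunt).ravel())     # monotonic 1-D resistance ladder

steps = np.diff(total) / total.max()      # first finite difference, fractional
print(f"{total.size} codes, median step {np.median(steps):.1e} of full scale")
```

With these nominal values the median fractional step comes out well under 0.1%, with the coarse tail bunched at one end of the ladder, consistent with the plot.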

There\'s a plot at
https://electrooptical.net/static/oldsite/www/sed/FightingDPOTs.png



Fun.

Looks like an interesting graph, but I don't see any error bars.
Won't the limitations of accuracy of the 6 bit dpots swamp out your
monotonicity?

I'm not expecting it to be monotonic, for sure. The codes are
completely scrambled, so just finding them is an issue. I'd expect the
main effect of the DNL of the dpots will be to scramble them differently
without making huge changes in the general outline of the plot. (The
sort order will change, certainly.)

I think my point is more that a 64 step pot is only going to have an accuracy close to that, so that won't go away by adding resolution. Or is resolution all you are looking for?

Clearly.

I guess I'm not following what this gains? You get lots more steps, but you don't know any more about where they are. What am I missing?

The steps that correspond to the coarse digital pot changing are going to be gross, but tolerably well defined.

The trick is to use the fine digital pot to move the final output over a range which is larger than the biggest single change caused by a one bit change in the input to the coarse pot, so that you can be sure that you have overlapping ranges of fine adjustment.
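That overlap condition can be sanity-checked against nominal values. The model in this sketch (1k coarse pot, 5k fine rheostat shunting the wiper-to-end segment, 64 taps) is my own assumption about the circuit, not a datasheet figure:

```python
R1T, R2T, N = 1000.0, 5000.0, 64
coarse_step = R1T / (N - 1)                           # ~15.9 ohms nominal

overlaps = []
for c in range(N):
    rb = R1T * (1 - c / (N - 1))                      # segment the fine rheostat shunts
    # Full-range swing of the shunt as r2 goes from 0 to R2T:
    fine_swing = rb * R2T / (rb + R2T) if rb else 0.0
    overlaps.append(fine_swing > coarse_step)

print(sum(overlaps), "of", N, "coarse codes have overlapping fine ranges")
```

With these numbers the overlap only fails for the last couple of codes at one end, where the shunted segment is smaller than a coarse step, matching the coarse tail at one end of the range.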

What Phil seems to be talking about doing is mapping all the possible outputs for any particular example of the circuit and setting up a look-up table of a monotonic string of desired outputs, and the inputs to the coarse and fine digital pots that get them.
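The look-up-table idea might be sketched like so, using the same hypothetical nominal pot model as above; a real table would have to be built from measured resistances, since DNL scrambles the sort order:

```python
import bisect

R1T, R2T, N = 1000.0, 5000.0, 64          # assumed nominal values

# Map every (coarse, fine) code pair to its nominal resistance, then sort
# into a monotonic table of (resistance, coarse_code, fine_code).
table = []
for c in range(N):
    ra = R1T * c / (N - 1)                # segment above the wiper
    rb = R1T - ra                         # segment shunted by the fine rheostat
    for f in range(N):
        r2 = R2T * f / (N - 1)
        shunt = rb * r2 / (rb + r2) if rb + r2 else 0.0
        table.append((ra + shunt, c, f))
table.sort()

def codes_for(r_target):
    """Nearest (coarse, fine) code pair for a desired resistance."""
    i = bisect.bisect_left(table, (r_target,))
    nearby = table[max(0, i - 1):i + 1]   # closest entries below and above
    best = min(nearby, key=lambda t: abs(t[0] - r_target))
    return best[1], best[2]
```

For example, `codes_for(500.0)` returns a code pair whose nominal resistance lands within a few ohms of 500 on the 0-1k range.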

It is an open question whether the outputs are going to be stable enough to let this work.

A twiddling program that knew when it was getting close to running out of range on the fine digital pot, and started over from scratch after changing the input to the coarse pot, might be another possibility, but it does rather depend on what Phil is actually doing. It wouldn't fit neatly into a simple control loop.
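Such a twiddling program might look roughly like this. Everything here is illustrative: measure() stands for whatever signed feedback signal the adjustment drives toward zero, and the toy plant with its 16:1 coarse-to-fine ratio is a stand-in, not Phil's actual circuit:

```python
def tune(measure, n_coarse=64, n_fine=64, guard=2, max_iters=2000):
    """measure(c, f) -> signed error; hill-climb on the fine pot to drive it to zero."""
    c, f = n_coarse // 2, n_fine // 2
    best = measure(c, f)
    for _ in range(max_iters):
        for step in (1, -1):                      # try a fine step each way
            nf = f + step
            if 0 <= nf < n_fine and abs(measure(c, nf)) < abs(best):
                f, best = nf, measure(c, nf)
                break
        else:
            return c, f                           # no fine step helps: done
        if f < guard or f >= n_fine - guard:      # nearly out of fine range:
            c += 1 if best < 0 else -1            # step the coarse pot...
            c = max(0, min(n_coarse - 1, c))
            f = n_fine // 2                       # ...and restart from mid-range
            best = measure(c, f)
    return c, f

# Toy plant: one coarse step is worth 16 fine steps; aim for a value of 700.
c, f = tune(lambda c, f: (16 * c + f) - 700)
```

Note the restart-from-mid-range after each coarse bump: as discussed above, the controller never needs to know where the coarse thresholds actually are, only which direction to search next.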

For this combination to be useful, there has to be a way of making changes to the combined settings that map to a set of points with better error than what a single pot achieves.

No. All you need is piece-wise monotonicity, and a control procedure that can deal with the coarse steps between the monotonic pieces.

If by "piece-wise monotonicity" you mean you need to pick your steps, you have no way to do that. Because you don't know where the thresholds are for the coarse pot, you have no idea what to set the fine pot to.

If you think this can be done, you need to explain in detail how to do it. Are you talking about calibrating each part separately?


Unless the error of the pots at their defined values is significantly better than would be expected by the 64 step resolution, that won't happen.

Wrong.

Consider the case you describe. The coarse pot is at a setting and the fine pot reaches the end of its range. To move to a new coarse setting the user/program/magic agent must know which coarse setting to switch to and what fine setting will go with that coarse setting to continue a forward adjustment. If the coarse device has a differential non-linearity that is not much better than expected for a 64 step pot, this won't be possible.

All it needs to know is whether it needs the next coarse step to be the next one up or the next one down.

Wrong. Unlike you, I explain my thinking. The problem is not setting the coarse pot, it's knowing when to change it and what to set the fine pot to so as to match the coarse pot. If the error in the coarse pot is large there is no way to know where to set the fine pot when you change the coarse pot.

A coarse step of 1/64th of the range, nominally, can be 1/128th of the range or 3/128ths of the range, or some such values as determined by the spec sheet. The relatively large error makes it impossible to know where to set the fine pot. It's that simple.


> It won't know how far it has moved, but it should be able to work out - once it has moved - whether it needs to make fine steps up or down to get closer to where it needs to be.

You mean make the coarse step and let your controller algorithm search for the right fine setting??? That means in the meantime the system is out of adjustment, which is the point of using the fine adjustment.

So that idea is a fail.


It's no different than the problems of combining flash converters to obtain more resolution.

Not correct. All Phil needs is fine adjustment to let him eventually get close enough to where he needs to be. He doesn't care where that actually is - just that he can eventually get there.

Eventually? So this is not a real time control loop trying to maintain some parameter? This is just some setting he wishes to set once and forget?


One converter measures the coarse bits and the other measures the fine bits. But both must be accurate to the same level to be able to combine them.

Been there, looked at that.

There may be fancy techniques used in the flash converter situation, but they are not in play in Phil's pot setup.

He might be able to get a little more resolution, but I don't see how it will be much unless the pots are very much under-specified.

You are missing the distinction between resolution and granularity.

He needs to get close enough to the right place (granularity), but he doesn't need to know exactly where it is (resolution).

I think you need to check the dictionary. lol

--

Rick C.

-+ Get 1,000 miles of free Supercharging
-+ Tesla referral code - https://ts.la/richard11209
 
> I guess I'm not following what this gains? You get lots more steps, but
> you don't know any more about where they are. What am I missing?

What it gets me is a combination of settability and bandwidth that I can't get in any single part at any price. I'll dump in a bit of pseudorandom DNL and see if that causes any issues. I don't expect it to--it'll get homogenized much like the regular steps.

For an iterative adjustment (think LC filter tuning) I can get close by setting the fine pot to 0, adjusting the coarse pot, then walking the two together to find the optimum. Near full scale the possible search range is wider, but I don't think it'll take too long provided that I'm willing to settle for "good enough" rather than a global optimum.

There are two mildly-interacting adjustments, but the response surface is simple--no local minima.
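That two-stage procedure - a coarse sweep with the fine pot parked, then walking the pair together - can be sketched as below, against a toy unimodal figure of merit standing in for the real (local-minimum-free) response surface; all names here are illustrative:

```python
def coarse_then_fine(fom, n=64):
    """Minimise fom(coarse, fine) over two 64-code pots, coarse first."""
    # Stage 1: fine pot at 0, sweep the coarse pot for the best start.
    c = min(range(n), key=lambda k: fom(k, 0))
    # Stage 2: walk the pair together, accepting any neighbouring code
    # pair that improves the figure of merit ("good enough" search).
    f, improved = 0, True
    while improved:
        improved = False
        for dc, df in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nc, nf = c + dc, f + df
            if 0 <= nc < n and 0 <= nf < n and fom(nc, nf) < fom(c, f):
                c, f, improved = nc, nf, True
                break
    return c, f

# Toy unimodal response surface with its optimum at codes (40, 17).
fom = lambda c, f: (c - 40) ** 2 + (f - 17) ** 2
```

Because the surface has no local minima, the greedy walk terminates at the optimum; on a real bench the stopping criterion would be "good enough" rather than exact.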

It's for taking out the effect of extrinsic emitter resistance in the BJT diff pair that's the heart of the circuit. It works by applying a signal to one of the bases that cancels out the effective emitter degeneration and so restores the desired Ebers-Moll behaviour at higher photocurrents.

If I feel like getting fancy, once I've found the right neighbourhood I can give the turd a final polish to get a bit closer. ;)

There are non-negligible effects such as the tempco of wiper resistance that limit how good this approach can be. Sure beats a trimpot on a motor though!

Cheers

Phil Hobbs
 
On Thursday, July 23, 2020 at 6:12:02 PM UTC+10, Ricketty C wrote:
On Thursday, July 23, 2020 at 3:14:50 AM UTC-4, Bill Sloman wrote:
On Thursday, July 23, 2020 at 4:51:13 PM UTC+10, Ricketty C wrote:
On Thursday, July 23, 2020 at 1:18:30 AM UTC-4, Bill Sloman wrote:
On Thursday, July 23, 2020 at 12:57:35 PM UTC+10, Ricketty C wrote:
On Wednesday, July 22, 2020 at 9:05:23 PM UTC-4, Phil Hobbs wrote:
On 2020-07-22 19:50, Ricketty C wrote:
On Wednesday, July 22, 2020 at 6:43:57 PM UTC-4, Phil Hobbs wrote:
So I'm doing a simplified version of the differential laser noise
canceller, which in the spherical cow universe looks like it does
very well out to about 10 MHz, thanks to the amazing properties of
BFP640s and some new photodiodes with reduced series resistance.
(At least according to Hamamatsu.)

One thing I need for this is an adjustable resistance with good
bandwidth. The fastest dpot I can find is the AD5273BRJZ1 (1k, 64 steps, ~6 MHz bandwidth at half scale).

The resolution is too coarse for my application, but as it's pretty well set-and-forget, I don't mind some algorithmic complexity.

Turns out that if you make a sort of Darlington connection, with one dpot connected as a rheostat in series with the wiper of the other (which is connected to one end), you can get the approximate resolution of a 10-11 bit dpot.

(ASCII schematic garbled in transit: the 1k pot R1 spans the two terminals, and the 5k pot R2 is wired as a rheostat from R1's wiper back to one end of R1.)

It works best if R2 is about 5 times R1, but the bandwidth may be
better if I stick with the 1k version.

Neglecting switch resistance, calculating the total resistance as a function of the codes, sorting into a single 1-D array to get a monotonically increasing resistance function, and taking the first
finite difference reveals a step size nearly always less than 0.1% except near the low-resistance end, which I don't care much about.

There\'s a plot at
https://electrooptical.net/static/oldsite/www/sed/FightingDPOTs.png



Fun.

Looks like an interesting graph, but I don't see any error bars.
Won't the limitations of accuracy of the 6 bit dpots swamp out your monotonicity?

I'm not expecting it to be monotonic, for sure. The codes are
completely scrambled, so just finding them is an issue. I'd expect the main effect of the DNL of the dpots will be to scramble them differently without making huge changes in the general outline of the plot. (The sort order will change, certainly.)

I think my point is more that a 64 step pot is only going to have an accuracy close to that, so that won't go away by adding resolution. Or is resolution all you are looking for?

Clearly.

I guess I'm not following what this gains? You get lots more steps, but you don't know any more about where they are. What am I missing?

The steps that correspond to the coarse digital pot changing are going to be gross, but tolerably well defined.

The trick is to use the fine digital pot to move the final output over a range which is larger than the biggest single change caused by a one bit change in the input to the coarse pot, so that you can be sure that you have overlapping ranges of fine adjustment.

What Phil seems to be talking about doing is mapping all the possible outputs for any particular example of the circuit and setting up a look-up table of a monotonic string of desired outputs, and the inputs to the coarse and fine digital pots that get them.

It is an open question whether the outputs are going to be stable enough to let this work.

A twiddling program that knew when it was getting close to running out of range on the fine digital pot, and started over from scratch after changing the input to the coarse pot, might be another possibility, but it does rather depend on what Phil is actually doing. It wouldn't fit neatly into a simple control loop.

For this combination to be useful, there has to be a way of making changes to the combined settings that map to a set of points with better error than what a single pot achieves.

No. All you need is piece-wise monotonicity, and a control procedure that can deal with the coarse steps between the monotonic pieces.

If by "piece-wise monotonicity" you mean you need to pick your steps, you have no way to do that. Because you don't know where the thresholds are for the coarse pot, you have no idea what to set the fine pot to.

You know when you are changing the inputs to the coarse pot, and when you are changing the inputs to the fine pot.

> If you think this can be done, you need to explain in detail how to do it. Are you talking about calibrating each part separately?

Doesn't seem to be necessary. You use the coarse pot to get roughly where you want to be, and use the fine pot to get closer to where you want to be.

Unless the error of the pots at their defined values is significantly better than would be expected by the 64 step resolution, that won't happen.

Wrong.

Consider the case you describe. The coarse pot is at a setting and the fine pot reaches the end of its range. To move to a new coarse setting the user/program/magic agent must know which coarse setting to switch to.

The next one, up or down (depending on which end of the fine range you hit).

> > and what fine setting will go with that coarse setting to continue a forward adjustment.

You do have to work out in which direction you need to go with the fine pot (which will have been set back to mid-range when you changed the coarse pot) but that shouldn't be difficult.

> > If the coarse device has a differential non-linearity that is not much better than expected for a 64 step pot, this won't be possible.

Basically you are starting the fine tuning procedure over again from scratch, so it's going to be perfectly possible to improve.

All it needs to know is whether it needs the next coarse step to be the next one up or the next one down.

Wrong. Unlike you, I explain my thinking. The problem is not setting the coarse pot, it's knowing when to change it and what to set the fine pot to so as to match the coarse pot.

You don't have to set the fine pot to "match the coarse pot". You've just worked out that the fine pot can't move you as far as you need to go, so you change the coarse pot to give you a new starting point, and start over with the fine pot.

> If the error in the coarse pot is large there is no way to know where to set the fine pot when you change the coarse pot.

You aren't going to get back to where you were after you change the coarse pot - that's obvious - so you are starting the search again, hopefully from a more promising starting point.

> Coarse step of 1/64th of the range, nominally, can be 1/128th of the range or 3/128ths of the range or some such values as determined by the spec sheet. The relatively large error makes it impossible to know where to set the fine pot. It's that simple.

But perfectly irrelevant.

It won\'t know how far it has moved, but it should be able to work out - once it has moved - whether it needs to make fine steps up or down to get closer to where it needs to be.

You mean make the coarse step and let your controller algorithm search for the right fine setting??? That means in the meantime the system is out of adjustment, which is the point of using the fine adjustment.

Actually, it isn't. The system is out of adjustment until you get it set "close enough" to the desired operating point.

A coarse search - fine search - strategy isn't as quick as a proportional-integral linear trajectory, but it will get you there eventually.

So that idea is a fail.

Measured against a criterion you've just invented.

It's no different than the problems of combining flash converters to obtain more resolution.

Not correct. All Phil needs is fine adjustment to let him eventually get close enough to where he needs to be. He doesn't care where that actually is - just that he can eventually get there.

Eventually? So this is not a real time control loop trying to maintain some parameter? This is just some setting he wishes to set once and forget?

Presumably. Once it is "there" - wherever that is - it might be monitored to see if a tweak to the fine pot might get it closer to where it ought to be, or you might have an analog control loop for that. I did that once, but the user just turned it off rather than letting me tune it to get it stable.

One converter measures the coarse bits and the other measures the fine bits. But both must be accurate to the same level to be able to combine them.

Been there, looked at that.

There may be fancy techniques used in the flash converter situation, but they are not in play in Phil's pot setup.

He might be able to get a little more resolution, but I don't see how it will be much unless the pots are very much under-specified.

You are missing the distinction between resolution and granularity.

He needs to get close enough to the right place (granularity), but he doesn't need to know exactly where it is (resolution).

I think you need to check the dictionary. lol

Regular dictionaries aren't much help with the technical vocabulary in specialist areas, but here's one definition of "granularity":

technical - the scale or level of detail in a set of data.
"the bill data doesn't provide sufficient granularity to answer the questions"

--
Bill Sloman, Sydney
 
On Thursday, July 23, 2020 at 6:12:02 PM UTC+10, Ricketty C wrote:
On Thursday, July 23, 2020 at 3:14:50 AM UTC-4, Bill Sloman wrote:
On Thursday, July 23, 2020 at 4:51:13 PM UTC+10, Ricketty C wrote:
On Thursday, July 23, 2020 at 1:18:30 AM UTC-4, Bill Sloman wrote:
On Thursday, July 23, 2020 at 12:57:35 PM UTC+10, Ricketty C wrote:
On Wednesday, July 22, 2020 at 9:05:23 PM UTC-4, Phil Hobbs wrote:
On 2020-07-22 19:50, Ricketty C wrote:
On Wednesday, July 22, 2020 at 6:43:57 PM UTC-4, Phil Hobbs wrote:
So I\'m doing a simplified version of the differential laser noise
canceller, which in the spherical cow universe looks like it does
very well out to about 10 MHz, thanks to the amazing properties of
BFP640s and some new photodiodes with reduced series resistance.
(At least according to Hamamatsu.)

One thing I need for this is an adjustable resistance with good
bandwidth. The fastest dpot I can find is the AD5273BRJZ1 (1k, 64 steps, ~6 MHz bandwidth at half scale).

The resolution is too coarse for my application, but as it\'s pretty well set-and-forget, I don\'t mind some algorithmic complexity.

Turns out that if you make a sort of Darlington connection, with one dpot connected as a rheostat in series with the wiper of the other (which is connected to one end), you can get the approximate resolution of a 10-11 bit dpot.

1k 0-*----R1R1R1---------------0 | A | *-------*-----* | | | | V | 5k *------------R2R2R2--*

It works best if R2 is about 5 times R1, but the bandwidth may be
better if I stick with the 1k version.

Neglecting switch resistance, calculating the total resistance as a function of the codes, sorting into a single 1-D array to get a monotonically increasing resistance function, and taking the first
finite difference reveals a step size nearly always less than 0.1% except near the low-resistance end, which I don\'t care much about.

There\'s a plot at
https://electrooptical.net/static/oldsite/www/sed/FightingDPOTs.png



Fun.

Looks like an interesting graph, but I don\'t see any error bars.
Won\'t the limitations of accuracy of the 6 bit dpots swamp out you monotonicity?

I\'m not expecting it to be monotonic, for sure. The codes are
completely scrambled, so just finding them is an issue. I\'d expect the main effect of the DNL of the dpots will be to scramble them differently without making huge changes in the general outline of the plot. (The sort order will change, certainly.)

I think my point is more that a 64 step pot is only going to have an accuracy close to that, so that won\'t go away by adding resolution. Or is resolution all you are looking for?

Clearly.

I guess I\'m not following what this gains? You get lots more steps, but you don\'t know any more about where they are. What am I missing?

The steps that correspond to the coarse digital pot changing are going to be gross, but tolerably well defined.

The trick is to use the fine digital pot to move the final output over a range which is larger than the biggest single change caused by a one bit change in the input to the coarse pot, so that you can be sure that you have over-lapping ranges of fine adjustment.

What Phil seems to be talking about doing is mapping all the possible outputs for any particular example of the circuit and setting up a look-up table of a monotonic string of desired outputs, and the inputs to the coarse and fine digital pots that get them.

It is an open question whether the outputs are going to be stable enough to let this work.

A twiddling program that knew when it was getting close to running out of range on the fine digital pot, and started over from scratch after changing input to the coarse pot might be another possibility, but it does rather depend on what Phil is actually doing. It wouldn\'t fit neatly into a simple control loop.

For any use of this combination to be useful, there has to be a way of making changes to the combinations that map to a set of points with better error than what a single pot achieves.

No. All you need is piece-wise monotonicity, and a control procedure that can deal with the coarse steps between the monotonic pieces.

If by \"piece-wise monotonicity\" you mean you need to pick your steps, you have no way to do that. Because you don\'t know where the thresholds are for the coarse pot, you have no idea what to set the fine pot to.

You know when you are changing the inputs to the coarse pot, and when you are changing the inputs to the fine pot.

> If you think this can be done, you need to explain in detail how to do it.. Are you talking about calibrating each part separately?

Doesn\'t seem to be necessary. You use the coarse pot to get roughly where you want to be, and use the fine pot to get closer to where you want to be.

Unless the error of the pots at their defined values is significantly better than would be expected by the 64 step resolution that won\'t happen.

Wrong.

Consider the case you describe. The coarse pot is at a setting and the fine pot reaches the end of it\'s range. To move to a new coarse setting the user/program/magic agent must know which coarse setting to switch to.

The next one, up or down (depending on which end of the fine range you hit)..

> > and what fine setting will go with that coarse setting to continue a forward adjustment.

You do have to work out in which direction you need to go with the fine pot (which will have been set back to mid-range when you changed the coarse pot) but that shouldn\'t be difficult.

> > If the coarse device has a differential non-linearity that is not much better than expected for a 64 step pot, this won\'t be possible.

Basically you are starting the fine tuning procedure over again from scratch, so it\'s going to be perfectly possible to improve

All it needs to know is whether it needs the next coarse step to be the next one up or the next one down.

Wrong. Unlike you I explain my thinking. The problem is not setting the coarse pot, it\'s knowing when to change it and what to set the fine pot to so as to match the coarse pot.

You don\'t have to set the fine pot to \"match the coarse pot\". You\'ve just worked out that fine pot can\'t move you as far as you need to go, so you change the coarse pot to give you a new starting point, and start over with the fine pot.

> If the error in the coarse pot is large there is no way to know where to set the fine pot when you change the coarse pot.

You aren\'t going to get back to where you were after you change the coarse pot - that\'s obvious - so you are starting the search again, hopefully from a more promising starting point.

> Coarse step of 1/64th of the range, nominally, can be 1/128th of the range or 3/128ths of the range or some such values as determined by the spec sheet. The relatively large error makes it impossible to know where to set the fine pot. It\'s that simple.

But perfectly irrelevant.

It won\'t know how far it has moved, but it should be able to work out - once it has moved - whether it needs to make fine steps up or down to get closer to where it needs to be.

You mean make the coarse step and let your controller algorithm search for the right fine setting??? That means in the mean time the system is out of adjustment which is the point of using the fine adjustment.

Actually, it isn\'t. The system is out of adjustment until you get it set \"close enough\" to the desired operating point.

A coarse search - fine search - strategy isn\'t as quick as a proportional integral linear trajectory, but it will get you there eventually.
So that idea is a fail.

Measured against a criterion you\'ve just invented.

It\'s no different than the problems of combining flash converters to obtain more resolution.

Not correct. All Phil needs is fine adjustment to let him eventually get close enough to where he needs to be. He doesn\'t care where that actually is - just that he can eventually get there.

Eventually? So this is not a real time control loop trying to maintain some parameter? This is just some setting he wishes to set once and forget?

Presumably. Once it is \"there\" - where ever that is - it might be monitored to see if a tweak to fine pot might get it closer to where it ought to be, or you might have an analog control loop for that. I did that once, but the user just turned it off rather than letting me tune it to get it stable.

One converter measures the coarse bits and the other measures the fine bits. But both must be accurate to the same level to be able to combine them.

Been there, looked at that.

There may be fancy techniques used in the flash converter situation, but they are not in play in Phil\'s pot setup.

He might be able to get a little more resolution, but I don\'t see how it will be much unless the pots are very much under specified.

You are missing the distinction between resolution and granularity.

He needs to get close enough to the right place (granularity), but he doesn\'t need to know exactly where it is (resolution).

I think you need to check the dictionary. lol

Regular dictionaries aren\'t much help in with the technical vocabulary in specialist areas, but, here\'s one definition of \"granularity\"

technical - the scale or level of detail in a set of data.
\"the bill data doesn\'t provide sufficient granularity to answer the questions\"

--
Bill Sloman, Sydney
 
On Thursday, July 23, 2020 at 6:12:02 PM UTC+10, Ricketty C wrote:
On Thursday, July 23, 2020 at 3:14:50 AM UTC-4, Bill Sloman wrote:
On Thursday, July 23, 2020 at 4:51:13 PM UTC+10, Ricketty C wrote:
On Thursday, July 23, 2020 at 1:18:30 AM UTC-4, Bill Sloman wrote:
On Thursday, July 23, 2020 at 12:57:35 PM UTC+10, Ricketty C wrote:
On Wednesday, July 22, 2020 at 9:05:23 PM UTC-4, Phil Hobbs wrote:
On 2020-07-22 19:50, Ricketty C wrote:
On Wednesday, July 22, 2020 at 6:43:57 PM UTC-4, Phil Hobbs wrote:
So I\'m doing a simplified version of the differential laser noise
canceller, which in the spherical cow universe looks like it does
very well out to about 10 MHz, thanks to the amazing properties of
BFP640s and some new photodiodes with reduced series resistance.
(At least according to Hamamatsu.)

One thing I need for this is an adjustable resistance with good
bandwidth. The fastest dpot I can find is the AD5273BRJZ1 (1k, 64 steps, ~6 MHz bandwidth at half scale).

The resolution is too coarse for my application, but as it\'s pretty well set-and-forget, I don\'t mind some algorithmic complexity.

Turns out that if you make a sort of Darlington connection, with one dpot connected as a rheostat in series with the wiper of the other (which is connected to one end), you can get the approximate resolution of a 10-11 bit dpot.

1k 0-*----R1R1R1---------------0 | A | *-------*-----* | | | | V | 5k *------------R2R2R2--*

It works best if R2 is about 5 times R1, but the bandwidth may be
better if I stick with the 1k version.

Neglecting switch resistance, calculating the total resistance as a function of the codes, sorting into a single 1-D array to get a monotonically increasing resistance function, and taking the first
finite difference reveals a step size nearly always less than 0.1% except near the low-resistance end, which I don't care much about.

There\'s a plot at
https://electrooptical.net/static/oldsite/www/sed/FightingDPOTs.png
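
The step-size calculation can be sketched as follows (a minimal model written from the description above: ideal 64-tap pots at the 1k/5k values, switch resistance neglected; the exact wiring is my reading of the ASCII schematic, not a verified netlist):

```python
# Rough sketch of the compound-dpot step-size calculation: a 64-tap 1 k
# coarse pot (R1) with a 64-tap 5 k rheostat (R2) from its wiper to one end.
import itertools

R1_FULL, R2_FULL, STEPS = 1000.0, 5000.0, 64

def total_resistance(i, j):
    """End-to-end resistance for coarse code i and fine code j (ideal taps)."""
    ra = R1_FULL * i / (STEPS - 1)      # segment from left end to wiper
    rb = R1_FULL - ra                   # segment from wiper to right end
    r2 = R2_FULL * j / (STEPS - 1)      # rheostat in series with the wiper
    if rb + r2 == 0.0:                  # wiper at the far end, rheostat at zero
        return ra
    return ra + (rb * r2) / (rb + r2)   # rheostat shunts the right segment

def sorted_resistances():
    """All 4096 code combinations, sorted into a monotonic 1-D table."""
    return sorted(total_resistance(i, j)
                  for i, j in itertools.product(range(STEPS), repeat=2))

table = sorted_resistances()
# The first finite differences of this table are what get plotted;
# inspect them to see where the steps bunch up and where they are coarse.
steps = [b - a for a, b in zip(table, table[1:])]
```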



Fun.

Looks like an interesting graph, but I don't see any error bars.
Won't the limitations of accuracy of the 6-bit dpots swamp out your monotonicity?

I'm not expecting it to be monotonic, for sure. The codes are
completely scrambled, so just finding them is an issue. I'd expect the main effect of the DNL of the dpots will be to scramble them differently without making huge changes in the general outline of the plot. (The sort order will change, certainly.)

I think my point is more that a 64-step pot is only going to have an accuracy close to that, so that won't go away by adding resolution. Or is resolution all you are looking for?

Clearly.

I guess I'm not following what this gains? You get lots more steps, but you don't know any more about where they are. What am I missing?

The steps that correspond to the coarse digital pot changing are going to be gross, but tolerably well defined.

The trick is to use the fine digital pot to move the final output over a range which is larger than the biggest single change caused by a one bit change in the input to the coarse pot, so that you can be sure that you have over-lapping ranges of fine adjustment.

What Phil seems to be talking about doing is mapping all the possible outputs for any particular example of the circuit and setting up a look-up table of a monotonic string of desired outputs, and the inputs to the coarse and fine digital pots that get them.

It is an open question whether the outputs are going to be stable enough to let this work.

A twiddling program that knew when it was getting close to running out of range on the fine digital pot, and started over from scratch after changing input to the coarse pot might be another possibility, but it does rather depend on what Phil is actually doing. It wouldn't fit neatly into a simple control loop.
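
That twiddling procedure can be sketched roughly like this (my own illustration; resistance() is a hypothetical read-back of the controlled quantity, not anyone's real API, and the step logic is only one plausible way to do it):

```python
# Hedged sketch of the coarse/fine "twiddling" search: walk the fine pot
# toward the target; when the fine pot rails, take one coarse step and
# restart the fine search from mid-range.  The controller never needs to
# know the absolute size of a coarse step, only its direction.

def tune(target, resistance, tol=0.5, steps=64, max_iters=500):
    c = f = steps // 2
    for _ in range(max_iters):
        err = resistance(c, f) - target
        if abs(err) <= tol:
            break                        # close enough: done
        step = -1 if err > 0 else 1      # move in the direction that helps
        if 0 <= f + step < steps:
            f += step                    # normal case: fine adjustment
        elif 0 <= c + step < steps:
            c += step                    # fine pot railed: one coarse step,
            f = steps // 2               # then start the fine search over
        else:
            break                        # both pots railed; give up
    return c, f
```

With any resistance(c, f) that is monotonic in both codes this settles to within the tolerance without ever knowing where the coarse thresholds actually lie.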

For any use of this combination to be useful, there has to be a way of making changes to the combinations that map to a set of points with better error than what a single pot achieves.

No. All you need is piece-wise monotonicity, and a control procedure that can deal with the coarse steps between the monotonic pieces.

If by "piece-wise monotonicity" you mean you need to pick your steps, you have no way to do that. Because you don't know where the thresholds are for the coarse pot, you have no idea what to set the fine pot to.

You know when you are changing the inputs to the coarse pot, and when you are changing the inputs to the fine pot.

> If you think this can be done, you need to explain in detail how to do it. Are you talking about calibrating each part separately?

Doesn't seem to be necessary. You use the coarse pot to get roughly where you want to be, and use the fine pot to get closer to where you want to be.

Unless the error of the pots at their defined values is significantly better than would be expected from the 64-step resolution, that won't happen.

Wrong.

Consider the case you describe. The coarse pot is at a setting and the fine pot reaches the end of its range. To move to a new coarse setting the user/program/magic agent must know which coarse setting to switch to.

The next one, up or down (depending on which end of the fine range you hit).

> > and what fine setting will go with that coarse setting to continue a forward adjustment.

You do have to work out in which direction you need to go with the fine pot (which will have been set back to mid-range when you changed the coarse pot) but that shouldn't be difficult.

> > If the coarse device has a differential non-linearity that is not much better than expected for a 64 step pot, this won't be possible.

Basically you are starting the fine tuning procedure over again from scratch, so it's going to be perfectly possible to improve.

All it needs to know is whether it needs the next coarse step to be the next one up or the next one down.

Wrong. Unlike you I explain my thinking. The problem is not setting the coarse pot, it's knowing when to change it and what to set the fine pot to so as to match the coarse pot.

You don't have to set the fine pot to "match the coarse pot". You've just worked out that the fine pot can't move you as far as you need to go, so you change the coarse pot to give you a new starting point, and start over with the fine pot.

> If the error in the coarse pot is large there is no way to know where to set the fine pot when you change the coarse pot.

You aren't going to get back to where you were after you change the coarse pot - that's obvious - so you are starting the search again, hopefully from a more promising starting point.

> Coarse step of 1/64th of the range, nominally, can be 1/128th of the range or 3/128ths of the range or some such values as determined by the spec sheet. The relatively large error makes it impossible to know where to set the fine pot. It's that simple.

But perfectly irrelevant.

It won't know how far it has moved, but it should be able to work out - once it has moved - whether it needs to make fine steps up or down to get closer to where it needs to be.

You mean make the coarse step and let your controller algorithm search for the right fine setting??? That means in the meantime the system is out of adjustment, which is the point of using the fine adjustment.

Actually, it isn't. The system is out of adjustment until you get it set "close enough" to the desired operating point.

A coarse-search/fine-search strategy isn't as quick as a proportional integral linear trajectory, but it will get you there eventually.
So that idea is a fail.

Measured against a criterion you've just invented.

It's no different than the problems of combining flash converters to obtain more resolution.

Not correct. All Phil needs is fine adjustment to let him eventually get close enough to where he needs to be. He doesn't care where that actually is - just that he can eventually get there.

Eventually? So this is not a real-time control loop trying to maintain some parameter? This is just some setting he wishes to set once and forget?

Presumably. Once it is "there" - wherever that is - it might be monitored to see if a tweak to the fine pot might get it closer to where it ought to be, or you might have an analog control loop for that. I did that once, but the user just turned it off rather than letting me tune it to get it stable.

One converter measures the coarse bits and the other measures the fine bits. But both must be accurate to the same level to be able to combine them.

Been there, looked at that.

There may be fancy techniques used in the flash converter situation, but they are not in play in Phil's pot setup.

He might be able to get a little more resolution, but I don't see how it will be much unless the pots are very much under-specified.

You are missing the distinction between resolution and granularity.

He needs to get close enough to the right place (granularity), but he doesn't need to know exactly where it is (resolution).

I think you need to check the dictionary. lol

Regular dictionaries aren't much help with the technical vocabulary in specialist areas, but here's one definition of "granularity":

technical - the scale or level of detail in a set of data.
"the bill data doesn't provide sufficient granularity to answer the questions"

--
Bill Sloman, Sydney
 
On 2020-07-23 07:39, pcdhobbs@gmail.com wrote:
> I guess I'm not following what this gains? You get lots more steps, but you don't know any more about where they are. What am I missing?

What it gets me is a combination of settability and bandwidth that I can't get in any single part at any price. I'll dump in a bit of pseudorandom DNL and see if that causes any issues. I don't expect it to--it'll get homogenized much like the regular steps.

For an iterative adjustment (think LC filter tuning) I can get close by setting the fine pot to 0, adjusting the coarse pot, then walking the two together to find the optimum. Near full scale the possible search range is wider, but I don't think it'll take too long provided that I'm willing to settle for "good enough" rather than a global optimum.

There are two mildly-interacting adjustments, but the response surface is simple--no local minima.

It's for taking out the effect of extrinsic emitter resistance in the BJT diff pair that's the heart of the circuit. It works by applying a signal to one of the bases that cancels out the effective emitter degeneration and so restores the desired Ebers-Moll behaviour at higher photocurrents.

If I feel like getting fancy, once I've found the right neighbourhood I can give the turd a final polish to get a bit closer. ;)

There are non-negligible effects such as the tempco of wiper resistance that limit how good this approach can be. Sure beats a trimpot on a motor though!

I duly re-did the calculation with ±1/8 units of random DNL on every
step, and it didn't change the conclusions at all. That's a few times
worse than the AD5273 spec.
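
That re-check can be sketched like this (my own model, not Phil's actual script: uniform pseudorandom DNL of up to ±1/8 LSB on each tap of an assumed 1 k, 64-tap pot):

```python
# Sketch of the DNL perturbation: offset each ideal tap position by up to
# ±1/8 LSB of pseudorandom differential nonlinearity, clamped to the pot's
# range.  Feed the perturbed taps back into the step-size calculation to
# see whether the sorted-table conclusions survive.
import random

def taps_with_dnl(full_scale=1000.0, steps=64, dnl_lsb=0.125, seed=0):
    """Tap resistances with up to ±dnl_lsb LSB of random DNL per tap."""
    rng = random.Random(seed)                  # seeded for repeatability
    lsb = full_scale / (steps - 1)
    return [min(full_scale,
                max(0.0, i * lsb + rng.uniform(-dnl_lsb, dnl_lsb) * lsb))
            for i in range(steps)]

taps = taps_with_dnl()
# Rebuilding the sorted resistance table from these taps mainly re-scrambles
# the sort order rather than changing the overall step-size distribution.
```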

Next thing is to bootstrap the supplies and see if I can get rid of some
of the capacitance. ;) (Not really--though I'm not above such things.)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Thursday, July 23, 2020 at 7:39:36 AM UTC-4, pcdh...@gmail.com wrote:
> I guess I'm not following what this gains? You get lots more steps, but you don't know any more about where they are. What am I missing?

What it gets me is a combination of settability and bandwidth that I can't get in any single part at any price. I'll dump in a bit of pseudorandom DNL and see if that causes any issues. I don't expect it to--it'll get homogenized much like the regular steps.

For an iterative adjustment (think LC filter tuning) I can get close by setting the fine pot to 0, adjusting the coarse pot, then walking the two together to find the optimum. Near full scale the possible search range is wider, but I don't think it'll take too long provided that I'm willing to settle for "good enough" rather than a global optimum.

There are two mildly-interacting adjustments, but the response surface is simple--no local minima.

It's for taking out the effect of extrinsic emitter resistance in the BJT diff pair that's the heart of the circuit. It works by applying a signal to one of the bases that cancels out the effective emitter degeneration and so restores the desired Ebers-Moll behaviour at higher photocurrents.

If I feel like getting fancy, once I've found the right neighbourhood I can give the turd a final polish to get a bit closer. ;)

There are non-negligible effects such as the tempco of wiper resistance that limit how good this approach can be. Sure beats a trimpot on a motor though!

Cheers

Phil Hobbs

Ok, so this circuit is not a real-time control loop as much as it is like a successive approximation. You don't mind the value varying unpredictably as long as it eventually reaches a setting.

--

Rick C.

+- Get 1,000 miles of free Supercharging
+- Tesla referral code - https://ts.la/richard11209
 
On 7/23/2020 7:39 AM, pcdhobbs@gmail.com wrote:
> I guess I'm not following what this gains? You get lots more steps, but you don't know any more about where they are. What am I missing?

What it gets me is a combination of settability and bandwidth that I can't get in any single part at any price. I'll dump in a bit of pseudorandom DNL and see if that causes any issues. I don't expect it to--it'll get homogenized much like the regular steps.

For an iterative adjustment (think LC filter tuning) I can get close by setting the fine pot to 0, adjusting the coarse pot, then walking the two together to find the optimum. Near full scale the possible search range is wider, but I don't think it'll take too long provided that I'm willing to settle for "good enough" rather than a global optimum.

There are two mildly-interacting adjustments, but the response surface is simple--no local minima.

It's for taking out the effect of extrinsic emitter resistance in the BJT diff pair that's the heart of the circuit. It works by applying a signal to one of the bases that cancels out the effective emitter degeneration and so restores the desired Ebers-Moll behaviour at higher photocurrents.

If I feel like getting fancy, once I've found the right neighbourhood I can give the turd a final polish to get a bit closer. ;)

There are non-negligible effects such as the tempco of wiper resistance that limit how good this approach can be. Sure beats a trimpot on a motor though!

Cheers

Phil Hobbs

I like it, one of those "What if I..." inventions that, as it happens,
gives you what you need. And then you gotta figure out why it does.
 
Hi Tim

I did not get your meaning about this:


For example, allowing the PWM counter's divisor to vary from 2^N - 1 to
2^(N-1) improves the average error by several bits.

Is it a delta sigma around the LSB or do you have a link that explains it?
 
On Friday, July 24, 2020 at 12:24:45 AM UTC+2, klaus.k...@gmail.com wrote:
Hi Tim

I did not get your meaning about this:


For example, allowing the PWM counter's divisor to vary from 2^N - 1 to
2^(N-1) improves the average error by several bits.

Is it a delta sigma around the LSB or do you have a link that explains it?

as I understood it, changing the counter max value,

so you can do a duty cycle of, for example, n/8000 instead of n/8191
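
That divisor trick can be illustrated like this (my reading of the idea; best_duty() and N = 13 are assumptions for illustration, with 13 bits giving the 4096..8191 range that matches the n/8000-vs-n/8191 example):

```python
# Sketch of the variable-divisor idea: instead of a fixed PWM period of
# 2^N - 1 counts, search all periods from 2^(N-1) up to 2^N - 1 for the
# on-count/period ratio n/d closest to the target duty cycle.

def best_duty(target, nbits=13):
    """Return (error, n, d) for the best n/d with 2^(nbits-1) <= d <= 2^nbits - 1."""
    lo, hi = 1 << (nbits - 1), (1 << nbits) - 1
    best_err, best_n, best_d = 1.0, 0, hi
    for d in range(lo, hi + 1):
        n = round(target * d)            # nearest on-count for this period
        err = abs(n / d - target)
        if err < best_err:
            best_err, best_n, best_d = err, n, d
    return best_err, best_n, best_d
```

For a 1/3 duty cycle, a fixed divisor of 8191 leaves an error of about 4e-5, while letting the divisor vary finds an exact ratio (e.g. 1366/4098).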
 