## An easy way to remember Euclid GCD algorithm: laying tiles

I was a little embarrassed that I had never come across Euclid’s GCD algorithm until years after I graduated with Math and Engineering degrees. The descriptions I found online (especially Wikipedia) are convoluted and confusing, and the mechanical description of the algorithm does not shine light on any intuition. Then there come proofs and abstract algebra properties that, even after I followed the reasoning, I’d soon forget a few weeks after I stopped looking at them.

Just by staring at one simple description of the algorithm:

The classic algorithm for computing the GCD, known as Euclid’s algorithm, goes as follows: Let m and n be variables containing the two numbers. Divide m by n. Save the divisor in m, and save the remainder in n. If n is 0, then stop: m contains the GCD. Otherwise repeat the process, starting with the division of m by n.

K.N. King’s C Programming: A Modern Approach (Chapter 6, Exercise 2)

I reverse-engineered one line of reasoning by which one could come up with Euclid’s GCD on their own. Turns out there is one twist/angle that most traditional explanations gloss over: quotients do NOT matter, and WHY they do not matter!

It goes back to what GCD stands for: greatest common divisor. You are looking for the largest factor that simultaneously divides a and b evenly. If you rethink it in terms of multiplication, you are finding the biggest tile size that can simultaneously cover lengths a and b without gaps:

Find the largest g s.t. a = pg and b = qg, where p and q are integers.

Imagine you are a lazy and cheap contractor who has unlimited tile samples of all sizes to try before making a large order:

• [Goal] you have to cover two floors with potentially different sizes a and b gaplessly
• [Constraint: divisible] your customer won’t let you cut (fractional) tiles because it’s ugly
• [Constraint: common denominator] you can only order ONE tile size to take advantage of bulk discounts
• [Objective: greatest] more tiles mean more work to lay them, so you want to shop for the biggest tile size that does the job.

For simplicity of the argument, we also assume the divisor b is the smaller number. If b is the bigger number, the roles will reverse in the next round, because the remainder from dividing by a larger number is the dividend itself, which becomes the divisor in the next round while the old divisor (the larger number) becomes the dividend.

For relatively prime pairs, the algorithm will eventually stop with a GCD of 1, because 1 divides all integers.

Here’s the analog of Euclid’s algorithm:

• start out with 1 big tile covering the entire smaller floor b
• see if we can cover a without gaps using big tiles of size b
• if yes, we are done. b, the macro-tile, is your GCD (perfect-sized tile).
• if there are gaps (non-zero remainder), pick a tile size that covers the ugly gap (the remainder becomes the divisor), then repeat the same process above over the last tile size (the divisor becomes the dividend) until there are no gaps (remainder becomes zero).

The algorithm takes advantage of the fact that the remainder is smaller than the last tile size (divisor), so we can rewrite the last tile size as multiples of the remainder (old gap) plus the new gap. By the time the new gap divides the last tile size, all the numbers before that can be written as multiples of it (i.e. the last gap evenly divides all the numbers in the chain, all the way up to a and b).
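The tile-laying chain above is exactly Euclid’s algorithm. A minimal sketch in Python (names are mine), which makes the "quotients do not matter" point vivid: the quotient from each division is thrown away, and only the remainder (the gap) is carried forward:

```python
def gcd(a, b):
    """Euclid's GCD: repeatedly replace (a, b) with (b, a % b).
    The quotient is discarded every round -- only the remainder
    (the 'gap' in the tile analogy) matters."""
    while b != 0:
        a, b = b, a % b  # old divisor becomes dividend; gap becomes the new tile
    return a

print(gcd(50, 15))  # 5
print(gcd(50, 18))  # 2
```

Note that if the first argument is the smaller number, the first iteration simply swaps the pair, just as the role-reversal paragraph above describes.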

Take a = 50 and b = 15, where the first division leaves remainder r = 5. We focus on the last tile b and write it in terms of the old remainder r:

Since the new remainder is 0, the old remainder r divides the last tile b. Since we are rewriting everything in terms of the remainder r, we have

\begin{aligned}
b &= 15 \\
&= (5+5+5) \\
&= 3r \\
\\
a &= 50 \\
& = (15 + 15 + 15) + 5\\
& = 3b + r\\
&= (5+5+5) + (5+5+5) + (5+5+5) + 5\\
&= 3(3r)+r \\
& = 10r
\end{aligned}

So both a and b can be written as integer multiples of r right before the algorithm stops.

GCD(50, 18) takes a few more steps:

\begin{aligned}
b &= 18\\
a &= 50\\
&= (18 + 18) + 14\\
&=2b + r\\
r &= 14 \\
\\
b_1 &= r = 14\\
a_1 &= b = 18\\
&= (14) + 4\\
&= b_1 + r_1\\
r_1 &= 4\\
\\
b_2 &=r_1 = 4\\
a_2 &= b_1 = 14\\
&= (4+4+4)+2\\
&= 3b_2 + r_2\\
r_2 &= 2\\
\\
b_3 &= r_2 = 2\\
a_3 &= b_2 = 4\\
&=(2+2) + 0\\
r_3 &= 0
\end{aligned}

The algorithm stops at r_3 = 0, which means we successfully divided the last tile a_3 = 4 with r_2 = 2 (logistically stored in b_3) with no gaps left. r_2 (or b_3) is therefore the greatest common factor (divisor) shared by all the numbers involved, all the way up to a and b.

This is the beauty of Euclid’s GCD algorithm: once the gap (remainder) is sliced small enough to evenly divide the tile (divisor) right before it, every term before it can be written in terms of integer multiples of the last number that divides the last tile without a gap (no more remainder).

In our example, b_1 can be written as a multiple of r_2, namely b_1 = 7r_2, because integer multiples of numbers that are already integer multiples of r_2 can be collected into one larger integer multiple of r_2. This is why quotients do not matter in Euclid’s GCD algorithm:

\begin{aligned}
b_2 &= 2r_2\\
4 &= (2+2) \\
\\
b_1 &= 3b_2 + r_2\\
&= 3(2r_2) + r_2\\
& = 7r_2\\
14 &= [(4+4+4)+2] \\
&=  [(2+2)+(2+2)+(2+2)+2] \\
\\
b &= b_1 + r_1\\
&=7r_2 + 2r_2\\
&=9r_2\\
18 &= 14 + 4 \\
&=  [(4+4+4)+2] + 4 \\
&=  [(2+2)+(2+2)+(2+2)+2]+[(2+2)] \\
\\
a &= 2b + r\\
&=2(9r_2) + 7r_2\\
&= 25r_2\\
50 & = 18 + 18 + 14 \\
&= [(2+2)+(2+2)+(2+2)+2]+[(2+2)] \cdots \\
&+ [(2+2)+(2+2)+(2+2)+2]+[(2+2)] \cdots \\
&+ [(2+2)+(2+2)+(2+2)+2]
\end{aligned}

The spirit of the algorithm is that every time we see a gap, we fill it with a small tile that’s exactly the gap size, and attempt to replace the last tile by covering it in terms of the new smaller tile (the one used to fill the gap). We are recursively slicing the last tile with the gaps produced until there are no gaps left. Once you find the GCD tile, you can replace all the bigger tiles before it in terms of it with no gaps.

There’s an interesting perspective of using the Euclid GCD algorithm to solve Diophantine equations (an integer written as a linear combination of integers with integer coefficients): it turns out finding gcd(a, b) = d is the same as writing d = ax + by for some integers x and y (Bézout’s identity). We can flip the sign of x by substituting x → x + b and y → y − a, which leaves ax + by unchanged.
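The coefficients can be produced by carrying the bookkeeping through the same remainder chain. A sketch of the standard extended Euclid algorithm (my own illustration, not from the quoted text):

```python
def extended_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b != 0:
        q, r = divmod(a, b)
        a, b = b, r
        # carry the linear-combination coefficients through each division
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

g, x, y = extended_gcd(50, 18)
print(g, x, y)  # 2 4 -11, since 50*4 + 18*(-11) = 2
```

Shifting the solution to (x + b/g, y − a/g), here (13, −36), gives another valid pair, which is the sign-flipping trick mentioned above.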


## HP 54502A Datasheet typo about AC coupling

The cutoff frequency of 10Hz on the datasheet is a typo. Better scopes at the time claimed 90Hz; 10Hz is just too good to be true.

Found the specs from the service manual:

Don’t be fooled by the -3dB cutoff while ignoring how wide the transition band can be (it depends on the filter type and order). Turns out this model has such a primitive filter that AC-couple mode still messes up square waves below 3kHz, despite the specs saying the -3dB point is at 90Hz. You’d better have a 30+ fold guard band for old scopes!

Remember that a square-wave pulse train in the time domain is basically a sinc shape centered at every impulse of the impulse train in the frequency domain, superimposed. Unless you have a tiny duty cycle (which is not the case for uniform square waves; they are 50%), the left-hand side of the sinc function for a 1kHz fundamental still has sub-1kHz components that can be truncated by the AC coupling (high-pass filter).
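A quick sanity check on the guard-band claim: modeling the AC coupling as a first-order RC high-pass (an assumption of mine — the scope’s actual coupling filter may differ), the flat top of each square-wave half-cycle sags exponentially with the filter’s time constant, so even a 90Hz cutoff visibly mangles kilohertz square waves:

```python
import math

def droop_fraction(f_square_hz, f_cutoff_hz):
    """Flat-top droop of a square wave after a 1st-order RC high-pass.

    The flat top of each half-cycle decays with time constant
    tau = 1/(2*pi*fc); over half a period T/2 = 1/(2*f) the top
    sags by 1 - exp(-(T/2)/tau), as a fraction of full amplitude."""
    tau = 1.0 / (2 * math.pi * f_cutoff_hz)
    half_period = 1.0 / (2 * f_square_hz)
    return 1.0 - math.exp(-half_period / tau)

# a 1 kHz square wave through a 90 Hz AC-coupling high-pass
print(round(droop_fraction(1000, 90) * 100, 1))  # 24.6 (% droop!)
print(round(droop_fraction(3000, 90) * 100, 1))  # 9.0
```

So a "90Hz" coupling filter of the simplest kind still puts a ~25% sag on a 1kHz square wave, consistent with the 30+ fold guard-band advice above.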


## System Engineers’ tip to HKCEE/HKALE Math and Physics

Shortly after I graduated with Mathematics and Electrical (and Computer) Engineering degrees, I realized a few supposedly difficult topics in Hong Kong’s Mathematics and Physics (Electric Circuits) curriculum were taught in unnecessarily painful ways.

Here’s an article I’ve written to show that it is less work to teach secondary (high) school students a few easy-to-learn university math topics first than to teach them dumb and clumsy derivations/approaches just to avoid the prerequisites: HKDSE EE Tips

Here is the outline of the article:

• Complex numbers with Euler Formula
• Trigonometric identities can be derived effortlessly using complex numbers rather than tricky geometric proofs
• Inverting matrices using Gaussian elimination instead of messing with cofactors and determinants
• Proper concepts of circuit analysis and shortcuts
• Solving AC circuits in a breeze with complex numbers instead of remembering stupid rules like ELI and ICE and messy trigonometric identities.


– Sep 17, 2005


## Demystifying comparisons between eBay managed payments and the old way

Hint: it’s still the old final value fees (before cap) + payment processing fees (analogous to Paypal)

Most often when we sell on eBay, we receive Paypal payments, so we end up paying eBay’s commissions (also called the final value fee, or FVF for short) and Paypal processing fees as a percentage of the sale.

Now eBay is pushing out Managed Payments (MP), which combines the payment processing fees with the eBay commissions (FVF) because eBay now manages the payment gateway. The rest is: every time you sell something, you get a payout (sales – fees) deposited directly to your bank (it used to be collecting Paypal balances and then withdrawing them).

They have a different calculation formula, under which they claim sellers are better off in most cases, but should we take eBay’s word for it? Regardless of whether you are enrolled in managed payments, the fee percentage for each sale depends on:

• eBay store subscription level
• category

It’s impractical to do the side by side fee structure comparison to see when we are better off for each sale, plus we cannot easily switch between the two plans.

It’d be very helpful if we could put the managed-payments fee structure in the same form as the conventional eBay+Paypal fee structure, and figure out under what conditions we are better or worse off with managed payments.

Initially I was ready to do the derivations to put both plans on the same scale, but I spotted that the managed payment (combined) percentage is simply the vanilla (non-managed) eBay category final value fee + a 2.35% payment processing fee! That’s how they’ve calculated the combined managed payments percentage!

This makes life a lot easier. Since I can factor out the 2.35% that applies to the whole sum (which also includes shipping and sales tax) regardless of the fee cap, it works exactly like Paypal (which charges 2.9%), and we are getting a 0.55% discount in payment processing fees.

For managed payments, since we’ve already separated out the payment processing fees, the fee cap applies to the vanilla final value fee portion which is equivalent to the old eBay final value fee. Keep that in mind.

The part that has changed is the fee cap. The old way caps the commissions/FVF directly regardless of product category, yet the new way (managed payments) caps the sale price that is charged commissions. This means the fee cap goes up or down a little depending on the final-value-fee percentage class applying to your sale.

eBay set the (commissioned) sale-price cap to closely match the realized commission cap of the old way. For example, a non-store subscriber whose raw final value fees were capped at $750 will see the same cap for the most common 10% categories (12.35% − 2.35% = 10%), because the (commissioned) sale-price cap is $7,500, and $7,500 × 10% = $750.

Big industrial equipment also has the same cap regardless, because $15,000 × 2% = $300.

For non-store subscribers, 10% is the anchor (iso-fee-cap) category. So you are a little worse off with books (price cap +2%) and better off with musical instruments (price cap −6.5%).

For store subscribers, you get a bit more of a break over the (lower) fee cap because your final-value-fee percentage is lower than the anchor (likely they chose a breakeven point at 10% when determining the sale price at which to cap the final value fee — just easy numbers, not rocket science):

| Category | Managed | Conventional | Discount* |
|---|---|---|---|
| Common (9.15%) | $2,500 × 9.15% = $228.75 | $350 (small stores), $250 (big stores) | $121.25 (small stores), $21.25 (big stores) |
| Heavy gear (1.5%) | $15,000 × 1.5% = $225 | $250 | $25 |
| Books (12%) | $2,500 × 12% = $300 | $350 (small stores), $250 (big stores) | $50 (small stores), −$50 (big stores worse off) |
| Guitars (3.5%) | $2,500 × 3.5% = $87.50 | $350 (small stores), $250 (big stores) | $262.50 (small stores), $162.50 (big stores) |
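The "Managed" column is just the commissioned-price cap times the category rate. A quick sketch to reproduce those numbers (the rates and caps are the ones quoted above; treat them as illustrative, since eBay’s schedule changes):

```python
def managed_fee_cap(category_rate, commissioned_price_cap=2500.0):
    """Managed payments caps the *price* that gets charged commissions,
    so the realized fee cap scales with the category's FVF rate."""
    return commissioned_price_cap * category_rate

# the table rows: (rate, commissioned price cap) -> realized fee cap
print(round(managed_fee_cap(0.0915), 2))        # 228.75  common categories
print(round(managed_fee_cap(0.015, 15000), 2))  # 225.0   heavy gear
print(round(managed_fee_cap(0.12), 2))          # 300.0   books
print(round(managed_fee_cap(0.035), 2))         # 87.5    guitars
```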

I call Basic/Premium ‘small-stores’ and Anchor/Enterprise ‘big-stores’.

So here are the observations, which are all you need to know:

• Under managed payments, small stores get a better deal than big stores, because the advantage of the $100 lower fee cap for big stores is eliminated by the variable fee caps.
• The breakeven line for small stores ($350 cap) is 14% fees ($350/$2,500), and the highest category so far is 12%. This means small stores are ALWAYS better off switching to managed payments.
• The breakeven line for big stores ($250 cap) is 10% fees ($250/$2,500), and the only category above that right now is books (12%). Big stores selling BOOKS are worse off with managed payments.

So in other words,

• everybody is better off with managed payments (fee-wise) EXCEPT big stores selling books
• under managed payments, there’s no final-value-fee advantage to being a big store

* Remember you also got a 0.55% discount on the payment-processing portion of the fees, which is not shown here since we’re just talking about savings in vanilla final value fees.

As far as books (12%) are concerned, if you are a big-store owner, your raw commissions cap was raised from $250 to $300, because $300 = $2,500 × 12%. But you should also factor in the 0.55% discount. If the sale price is $x,

since the raw commissions stop at $300 ($2,500 × 12%), the additional $50 cap increase starts to get offset (and eventually turns net positive, as the processing-fee discount outweighs the commissions-cap increase) once the 0.55% payment-processing discount on the amount ABOVE $2,500 covers all of it: 0.0055 × (x − $2,500) ≥ $50.

So the range of sale price x over which the only losing setup (books for big-store sellers) actually does worse is bounded; above that point the processing-fee discount wins.

There is some ambiguity (technically incorrect documentation) on eBay’s website which implied that the vanilla final-value-fee portion is charged on sales tax. I made a sale and checked the numbers, and it’s not true. Only the 2.35% payment-processing portion is charged against the sales tax (like Paypal, for handling the extra money); the category final value fee (in my case 9.15%) is not applied to the sales tax.

They actually meant “the 2.35% payment-processing portion” when they said “managed payment final value fees”. This is also part of the reason why I wrote this article: they do not use language that conceptually separates the two portions of the combined final value fee (vanilla final value fee + payment processing fee) in managed payments, thinking that they are simplifying the math for sellers, without realizing that if the two concepts really fused into one, they’d be shortchanging sellers over sales tax.

The $0.30 fixed per-transaction fee applies to both managed payments and the conventional way (Paypal also charges a fixed +$0.30 per transaction), so nothing has changed there.


## Mathematics of calculating overheads to charge

It’s very common to have a percentage fee (called tax rate below) slapped on our earnings:

• Commissions (e.g. eBay final value fees)
• Sales Tax
• Incremental income taxable within the same bracket.

Many adults cannot directly figure out how much EXTRA they need to earn to get the target amount that is rightfully theirs (after fees).

They are missing out these handy intuitions:

• If your EXTRA earnings are taxed at 40%, not wasting an extra $100 is almost as good as earning$166.67!
• You always undercharge the fee overhead if you just multiply your target amount by the rate (intuitive but WRONG). The amount you get shortchanged by goes up like a rocket for rates past 10%!

All you need is grade school math to figure it out, but the results are highly non-linear and are not easy to remember. This means the tutorial below makes an excellent discussion material to motivate mathematics education in grade school!

If you want to copy or use the materials, just remember to reference “Rambling Nerd with a Plan” blog page or “Humgar LLC”. You don’t have to ask and you are welcome. I certainly appreciate comments about this tutorial.

Before I start, hints for working with percentages:

• a factor of 1 represents the full amount
• division by a number is multiplication by its reciprocal
• preferably work in terms of shrink/expand factors (multiplications only) on the FULL amount, rather than working out the excesses (additions/subtractions)

Assigning intuitive names is very important for interpreting algebraic expressions with familiar life concepts!

Let’s define these variables first:

• P is the target price you want to get AFTER the fees
• D is the damage (total bill) you should charge
• r is the rate (fee/tax percentage)
• E is the excess that goes to the fee/tax collector

These derived variables will be discussed in the body of the tutorial

• s is the shrinkage (multiplier), where s = 1 − r

• m is the magnifier (multiplier), m = 1/s

• c is the compensation (multiplier) for billing, c = m − 1 = r/s

If somebody pays you the total damage D before tax/fees (at rate r), you are home free with P = D − rD = D(1 − r) after subtracting the fees rD from D.

In other words, the FULL damage D gets shrunk by the factor s = 1 − r, which becomes the target amount P = sD you get to keep for yourself.

Don’t be scared by the algebra! It’s just the other way of saying: if your tax rate is 30%, you get to keep 70% of your income, because 100% − 30% = 70%, so the shrinkage is s = 0.7.

Since you have the target price P in mind and want to undo the shrinkage (multiplication) to calculate the full amount D you should charge, you divide the target price by the same factor to magnify it back: D = P/s.

It is NEVER helpful to directly multiply the fee/tax percentage with the target price like most people would intuitively do. We can think of the division above as multiplying by the reciprocal of the loss factor, m = 1/s = 1/(1 − r), which we will call the magnifier: D = mP.

An alternative view is to break the damage into the target price plus the excess billed to the customers for the fees/taxes (which goes to the collector): D = P + E.

My preferred way to calculate the excess is to first compute the damage D, by multiplying with the magnifier m, or simply dividing the target price by the shrinkage:

D = \frac{P}{s}

then subtract the target price from the damage to get the excess:

E = D - P

which you can write in terms of the target price P you already know, so you don’t have to work out the total damage first:

E = P(m - 1) = P\,\frac{r}{s}

We can define the compensation (multiplier) c as how much you should multiply the target price with:

c = m - 1

to recover your fee/tax costs E:

E = cP

e.g. if you need to charge the customer 1.43 times to breakeven after fees (given a 30% tax/fees rate), the overhead ‘item’ on the bill should be 0.43 (43%) of the target price. For a $100 item, you’ll need to bill your customer$43 more just to cover a 30% fees.
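The whole calculation fits in a few lines. A sketch in Python, using the variable names defined above (which are my labels for the quantities in this post):

```python
def bill(target, rate):
    """Return (damage, excess): the total to charge, and the overhead
    line item, so you keep `target` after a percentage fee at `rate`."""
    shrinkage = 1.0 - rate          # s = 1 - r: what you keep per dollar billed
    damage = target / shrinkage     # D = P / s  (the magnifier at work)
    excess = damage - target        # E = D - P = P * r / s
    return damage, excess

d, e = bill(100, 0.30)
print(round(d, 2), round(e, 2))  # 142.86 42.86
```

Note the division by the shrinkage, never a multiplication by the rate.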

In summary, we have

D = mP, \qquad E = cP

with c that can be figured out quickly as

c = \frac{r}{s}

where r and s are both bounded between 0 and 1 and add up to 1: r + s = 1.

Take the example of a 30% tax rate (r = 0.3): the shrinkage is 70% (s = 0.7), because the factors are complements of each other and add up to 1.

If you want to write it all in terms of shrinkage s: since r = 1 − s, we can rewrite the compensation (factor) in these complement terms,

c = \frac{1-s}{s} = \frac{1}{s} - 1

which I think is the easiest way to remember: simply the rate r divided by the shrinkage s.

You can also write it in terms of the fee/tax rate r in full:

c = \frac{r}{1-r}

e.g. if you have a 30% tax rate (r = 0.3), you can find the compensation (factor) by calculating c = 0.3/0.7 ≈ 0.43, because it’s simply 30% over 70%.

To estimate the overhead charge E that offsets the fee percentage, multiply the compensation with the target price P:

| WRONG way | CORRECT way |
|---|---|
| E ≈ rP | E = \frac{r}{1-r}P |

The gap (the amount you get shortchanged by computing the compensation the wrong way) is:

\frac{r}{1-r}P - rP = \frac{r^2}{1-r}P

The numbers look close enough (for the intuitive but wrong way) when the rate is around 10% or less, but your loss (overheads not charged) shoots up non-linearly as the fee/tax percentage goes beyond 10%!

| Fee percentage | Correct overhead % to charge | Undercharged |
|---|---|---|
| 5% | 5.26% | 0.26% |
| 10% | 11.11% | 1.11% |
| 20% | 25% | 5% |
| 30% | 42.86% | 12.86% |
| 40% | 66.67% | 26.67% |
| 50% | 100% | 50% |

Intuitively, you need to charge DOUBLE the target price if half of it gets taxed to breakeven, NOT adding half of the target price as the overhead! The FULL target price itself is the right overhead to charge for a 50% tax rate!
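The table above is just c = r/(1 − r) evaluated at each rate. A quick sketch to regenerate it and eyeball the naive shortfall:

```python
def compensation(rate):
    """Correct overhead multiplier c = r / (1 - r): the fraction of the
    target price to add on the bill to cover a fee/tax at `rate`."""
    return rate / (1.0 - rate)

# the naive (WRONG) estimate is just `rate`; the shortfall is c - r
for r in (0.05, 0.10, 0.20, 0.30, 0.40, 0.50):
    c = compensation(r)
    print(f"{r:.0%}  correct {c:7.2%}   undercharged {c - r:7.2%}")
```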

Since the numerator r is bounded (r ≤ 1 for realistic rates), the overhead needed to charge your customer can blow up as the denominator s = 1 − r gets small (i.e. high tax/fee percentages). Here’s a plot of

c = \frac{r}{1-r}

e.g. A 66.6% import tax rate (c ≈ 2) means giving two cars to the government when you import one!

To push it to the extreme, a 90% tax rate would mean a 900% overhead, which means if you buy one car, you give the government 9 cars, assuming they don’t tax themselves. Your estimate is off by 10 times if you use the wrong method to compute the excess making up for the tax.

If people thought in terms of the excess they would need to earn EXTRA to offset the tax rate, they would think twice about letting the government dip in another 2% or so. That’s why a welfare state funded through taxes can be highly counter-productive once it gets past a certain point.

Just for fun, let’s investigate the impractical case (communist tyrants) when the tax rate gets to 100% and above (r ≥ 1):

| Fee percentage | Correct overhead % to charge | Undercharged |
|---|---|---|
| 90% | 900% | 810% |
| 100% | ∞ | – |
| 200% | −200% | −400% |
| 300% | −150% | −450% |
| ∞ | −100% | – |

When charged a 100% tax rate, the government seizes all gains from your production. There’s never enough overhead you can charge your customer to break even.

The more perverted case is what happens when you go above 100%, say a 200% tax rate. When you sell $100 worth of stuff, you pay the government $200 to do so. You are paying a $100 penalty to work/produce $100 for the government. Twisted!

Then how do we explain the negative numbers in the compensation (factor) c? In reality, it means you ALWAYS make a loss no matter what. There’s no way you can bill your customer to make up for the loss.

In the unrealistic (unicorn) case, if a benevolent crazy dictator is willing and able to compensate your losses (at the sales tax rate), aka giving you money back every time you lose money,

the magnifier m = 1/(1 − r) is negative for sure when r > 1. It’s illustrated in the graph as a dead zone: the compensation c never falls in the interval [−1, 0), which you can verify by solving this inequality

\frac{r}{1-r} \ge -1

and realizing that for r > 1 (where 1 − r < 0 flips the inequality when you multiply through) it boils down to saying 0 ≤ −1, which is a contradiction.

That means to achieve target earnings under tax rates above 100%, you should charge the customer a negative amount (i.e. give away goods + money) in order to collect the negative taxes (subsidy).

e.g. At a 300% tax rate (r = 3, c = −1.5), if your sticker price (also target price) is $100, you’ll need to GIVE the customer $50 so the government will triple-match your loss as a subsidy ($150), which is enough to cover the $50 giveaway (to the customer) plus the $100, so you break even.

Having negative tax (subsidy) is a perverse incentive. Forget about maintaining a target earning (same as the original sticker price for the goods you are selling); just lose (give away) money as fast as you can, to suck up as much subsidy as you can before the system collapses. I still fondly recall my economics 301 (intermediate microeconomics) professor James Montgomery saying “if you can graph it, it can possibly happen” in the lecture.

### Even more sophisticated math for the geeks

Remember from above that the magnification factor m can be rewritten as the reciprocal of the shrinkage factor:

m = \frac{1}{1-r}

which reminds me of the geometric series, valid given |r| < 1:

\frac{1}{1-r} = 1 + r + r^2 + r^3 + \cdots

For non-perverted cases, we are considering 0 ≤ r < 1, which is well within the domain. This gives an idea of approximating the compensation factor c:

c = m - 1 = r + r^2 + r^3 + \cdots

after taking 1 off both sides. This explains exactly why people are wrong to assume c is just r: their false intuition kept ONLY the first-order term in the power-series approximation! The gap (how much they are off with the WRONG way) is

c - r = r^2 + r^3 + \cdots = \frac{r^2}{1-r}

So while r = 0.1 dies fast enough (0.01 by the 2nd-order term), r = 0.5 and above are still very significant up to, say, the 6th-order term, which makes the estimate off by a factor of 2. Yes, you are off by only 1% when the tax rate is 10%, but you can be wrong by a factor of 2 or more if you just multiply the tax rate with the target price to estimate the excess to collect to make up for the fees!

### Summary

• The total damage (bill) you should charge your customer can be calculated by dividing the target amount (the money you want to walk away with, home free) by the shrinkage, which is (1 − tax rate). e.g. a 30% tax rate means 70% shrinkage: divide the target by 0.7 to get the final bill.
• A shortcut to calculate how much you should put on the bill as a charge item (by multiplying with the target price) to make up for the fees is (tax rate)/(shrinkage). e.g. a 30% tax rate means 70% shrinkage: multiply the target by 30/70 (or 3/7, which is ≈43%) to get the excess which covers the fee.
• Do NOT attempt the rule of thumb of calculating the excess by just multiplying with the tax rate unless it’s 10% or less. The ‘rule of thumb’ assumes r² (and thus the higher powers) are insignificant, which is nowhere near true for r = 0.1 and above, as the graph above shows!

## The mess converting decibels to voltages in test instruments (dBm, dBW, W, dBV, V)

Complex conversions between decibels and physical quantities have always been a rich source of confusion. The reason is that dB(something) is actually a loaded word with hidden assumptions:

• dB always works on base 10
• dB is always a relative (dimensionless) POWER quantity. The POWER quantity can be expressed in terms of non-power quantities (like voltages).
• the scaling factor is always 10 for power. Adding one zero (multiplying by 10) to the power ratio means +10dB.
• dB(something) is always with respect to a reference quantity (the something), and the reference is often not written in full. Since there is an implicit reference, dB(something) can be mapped to absolute quantities.

If you are a diverse multi-disciplinary techie like me (math, electronics, programming, computers), it’d frustrate the hell out of you when you talk to people who have been working exclusively in a narrow field for at least a decade and have a table of commonly used numbers memorized: they act like you are supposed to know how to get the numbers in the dB-variant that they use, rather than explaining to you what the field-specific assumptions are (likely because they forgot about them).
I hope this post will clear up the confusion by working out an example common in test instrumentation (and in RF): converting dBm to Volts.

Before I start, I’ll clarify the most common form of beginner confusion in EE and physics, converting between dB and voltages:

dB = 20\log_{10}\left(\frac{V}{V_{ref}}\right)

This looks like a definition of decibel, except the scaling factor is magically 20 for Volts. It is correct because we always work in power, and power can be expressed as voltage-squared (the resistances in the divisor cancel out if they are the same). Most people take it as an equivalent definition of decibels and throw away these important assumptions behind it:

• the reference is 1V,
• and the resistance* (common to the voltage of interest and the reference voltage) gets cancelled

and run into trouble when they venture into those dB-variants like dBm. Technically the above should be written as dBV, but I have seen very few people use the clearer term.

The decibel formula for voltage came from

10\log_{10}\left(\frac{P}{P_{ref}}\right)

where P = V^2/R and P_{ref} = V_{ref}^2/R, so you get

10\log_{10}\left(\frac{V^2/R}{V_{ref}^2/R}\right)

The R gets cancelled out and you get

10\log_{10}\left(\frac{V}{V_{ref}}\right)^2

People moved the squaring out and lumped (multiplied) it with the scaling factor 10:

20\log_{10}\left(\frac{V}{V_{ref}}\right)

So the whole reason why it is 20 instead of 10 is simply because log(x²) = 2 log(x), and 2 × 10 = 20.

Now back to the business of converting dBm to dBV or Volts. First of all, dBm is dB(mW), NOT dB(mV). The RF/telecom people are just too lazy to write out the most important part, the physical quantity, expressly, because nearly all the time it’s the power that matters to them. However, I often need to connect an RF generator to a high-bandwidth oscilloscope, so the very self-centered RF/telecom nomenclature starts to become problematic when people of different fields need to talk to each other. Oscilloscopes see everything in volts. RF sees everything in power, often in dB.

Then we get to the (mW) part, which means the reference quantity in the definition is 1mW, a physical quantity with dimensions. Then how are we going to convert it to Volts?

You cannot jump to the shortcut formula I illustrated above with the 20 factor this time, because the reference is in mW and your quantity is in Volts. You’ll need to convert power to voltages. To do so, you’ll need to know the voltage induced by power ‘dissipated’ through a ‘resistance’ across a component (load). The missing gap is that you will need to know the load ‘resistance’ before the conversion. With that, you can use P = V²/R, or rewritten as V = √(PR) when it’s more convenient.

All RF-related test instruments and bench function generators typically have a 50Ω output impedance, which also assumes a matching 50Ω load, as mathematically it provides the maximum power transfer (sadly split evenly between the load and wasted at the instrument’s output impedance). For convenience, the amplitude you see on the instrument control panel refers to the amplitude you see at a 50Ω load, not what the instrument pumps out internally (that’s why you see 2Vpp when your function generator says 1Vpp if you hook it up to a low-end oscilloscope that serves 1MΩ by default).

Since we are dealing with continuous waves (not transient power), all amplitude quantities on RF test instruments are in RMS (power or voltage) unless otherwise specified.

So the quantity we have for dBm, when written in terms of voltages, is

10\log_{10}\left(\frac{V^2/R}{1\,\mathrm{mW}}\right)

Instead of splitting it into 3 terms and immediately grouping the constants, I’d like to first convert dBm to dBW:

\mathrm{dBW} = \mathrm{dBm} - 30

The linear quantity in dBm is artificially scaled 1000 times bigger than in dBW, to put it on a comfortable scale for us to work with smaller signals. Therefore dBm is always 30dB higher than dBW (the smaller the reference, the bigger the relative numbers look). So back to the above: we subtract 30dB to get to dBW,

10\log_{10}\left(\frac{V^2/R}{1\,\mathrm{W}}\right) = \mathrm{dBm} - 30

We can separate the load and put it on the other side:

20\log_{10}\left(\frac{V}{1\,\mathrm{V}}\right) = \mathrm{dBm} - 30 + 10\log_{10}(R)

The left-hand side is dBV, and you can think of the load as scaling the power up, inducing the voltage-squared quantity (V² = PR, or +10 log₁₀(R) in dB).
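The chain so far can be condensed into a few lines of Python (a sketch under the assumptions discussed: 50Ω load, RMS quantities; function names are mine):

```python
import math

def dbm_to_vrms(dbm, load_ohms=50.0):
    """dBm -> RMS volts across the load.
    Undo the milli- prefix (dBm - 30 = dBW), linearize to Watts,
    then use P = V^2/R  =>  V = sqrt(P*R)."""
    watts = 10 ** ((dbm - 30) / 10)
    return math.sqrt(watts * load_ohms)

def dbm_to_dbv(dbm, load_ohms=50.0):
    """dBm -> dBV: -30 dB for the milli- prefix, +10*log10(R) for the load."""
    return dbm - 30 + 10 * math.log10(load_ohms)

print(round(dbm_to_vrms(0), 4))  # 0.2236 (0 dBm into 50 ohms)
print(round(dbm_to_dbv(0), 2))   # -13.01
```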
10 log₁₀(50) is 16.9897dB; for most purposes I’ll just say the load lifts the dBW by 17dB when turning it into dBV. Putting it all together:

\mathrm{dBV} = \mathrm{dBm} - 30\,\mathrm{dB} + 17\,\mathrm{dB}

(This is how you should remember it, so you can replace the +17dB for 50Ω with 10 log₁₀(R) when you work on other applications, like 600Ω, 4Ω, 8Ω for audio.) Basically:

• −30dB to undo the milli- prefix at the reference power (with dBm, the power ratio is bloated by 1000 times, aka by 3 zeros)
• +17dB to account for the load inducing the voltage by burning Watts

The end result (for the 50Ω case):

\mathrm{dBV} \approx \mathrm{dBm} - 13\,\mathrm{dB}

Then you can convert dBV to V_rms:

V_{rms} = 10^{\mathrm{dBV}/20}

Phew! That’s a lot of steps to get to something this simple. So the moral of the story is that these assumptions cannot be ignored:

• The quantity is always power in dB, not voltages
• dB(mW) has a reference of 1mW. The smaller the reference, the bigger the numbers
• RMS voltages and power are used in RF
• 50Ω is the load required to convert from power to voltages

Keysight already has a derivation, but it’s just a bunch of equations. The missing gap I want to fill in this blog post is that people find this so confusing they’d rather believe a formula or a table pulled off the internet: it doesn’t have to be this way once you realize there’s a bunch of overlooked assumptions.

* Technically I should call it (load) impedance Z, as in RF, capacitive and inductive elements are nearly always involved, but I want to make it appealing to those with a high-school physics background.

## Synthesized/Arbitrary Waveform/Function Generators: sampling rate matters

It’s basic (signal processing) mathematics that square waves (or any waveform with sharp edges) carry harmonics that don’t die off fast enough, and are therefore not band-limited. The more sudden the transition is, the more harmonics you need to preserve to faithfully represent a signal in practice.
Unless the signal itself is known to be band-limited (like sine waves, classical modulation schemes, SRRC), it puts the burden on the test instruments involved to provide large bandwidths and the matching high sampling rates (they need to do better than Nyquist). The technology today provides generous bandwidths and sampling rates for oscilloscopes at reasonable prices, but synthesized function/waveform generators with the same bandwidth/sampling rate can easily cost 10 times as much, if not more. For the money to buy a used, not-too-old 500MHz pulse generator, you can buy a 4GHz 20GS/s oscilloscope!

For oscilloscopes, users are often aware that the rounded square waves/transitions they see on the screen are due to bandwidth limitations, and will account for the reality distortion in their mind. If your square-wave clocks are pushing the sampling rate of the scope and the combined bandwidth of the scope and the probes, you should very well expect that you cannot catch many glitches, and pretty much use the oscilloscope as a frequency counter or to check for modulations.

For synthesized signal generators, bandwidth and samplers are way more precious. Oscilloscopes generally have 8-bit vertical resolution (256 steps), but synthesizers typically have 12-bit vertical resolution (4096 steps) or better. There are imperfect techniques trading vertical resolution for sampling rate, but there is no free lunch.

I have on my bench these 12-bit function generators:

• an old Analogic 2045B (roughly US$1300, 400MHz, 800MS/s, 2Mpts) and
• a much more modern Agilent/HP 33120A (roughly US$600, 15MHz, 40MS/s, 16Kpts)

with which I’d like to illustrate the value of getting a higher-bandwidth (and therefore higher-sampling-rate) unit. The 33120A has much finer control over frequency/amplitude/offset steps (the 2045B only allows fixed-point increments) and might have better noise characteristics, in a much smaller form factor, considering the Analogic 2045B was made in the 1970s and the HP 33120A at least 20 years after that. I would have liked to keep only my 33120A or 33255B on my bench to save space, but once you’ve seen this screenshot, you’ll know why I’m still willing to cough up the space for the 2045B:

The upper waveform (Channel 1, yellow) is from the Analogic 2045B, while the lower waveform (Channel 2, green) is from the HP 33120A. They are both set at around 15MHz, and you can see that when approaching the limit of the synthesizer, the 33120A rounds off the square wave to almost a sinusoid. Square waves get ugly quickly above 10MHz, and this is as far as the 33120A is capable of.

On the other hand, the square waves are bang-on for the 2045B, and still decent at around 50MHz (where my BNC cable starts to come into question). That’s why it’s still worth getting a high-bandwidth synthesized function generator, even if your budget only allows for clumsy old models, if you use your function generator for anything more than sinusoids.

Note that function generators are typically designed to pump into 50Ω loads, and the amplitude displayed on the function generator assumes so. That’s why beginners get confused when they read nearly double the displayed amplitude on the scope: the function generator sends out nearly double the voltage, so that the potential divider formed between the 50Ω output impedance of the function generator and the 50Ω load impedance will split the voltage in half. If you set your oscilloscope to a 1MΩ load, you are getting almost the full open-circuit voltage, which is nearly double the displayed amplitude.
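The two readings fall straight out of the divider equation. A minimal sketch, assuming a purely resistive 50Ω source impedance (`v_oc` is the generator’s open-circuit voltage, twice the displayed amplitude):

```python
def scope_reading(v_oc, r_source=50.0, r_load=50.0):
    """Voltage across the load in a simple source/load potential divider."""
    return v_oc * r_load / (r_source + r_load)

v_oc = 2.0                               # generator drives ~2x the displayed 1.0
print(scope_reading(v_oc, r_load=50.0))  # 1.0 -> matched load reads as displayed
print(scope_reading(v_oc, r_load=1e6))   # ~2.0 -> 1 Mohm scope reads nearly double
```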

More importantly, if the load is not matched, the small capacitances in the chain will severely distort the waveform received by your oscilloscope at higher frequencies: you can barely get an undistorted square wave with a fundamental above 2MHz if you feed it into an oscilloscope with 1MΩ input impedance, while in reality the signal generator, cables, and oscilloscope can do much better.

Most cheap, low-bandwidth oscilloscopes (like 100MHz ones) do not have a 50Ω option. Nonetheless, the impedance needs to be matched if you work with square waves at 2MHz or above. Just buy a 50Ω feed-through BNC ‘termination’ adapter and plug it right into the 1MΩ input port. In reality, this forms a divider between 50Ω and (1MΩ‖50Ω) ≈ 49.9975Ω, with the scope seeing 49.9975/(50 + 49.9975) ≈ 49.999% of the open-circuit voltage. For all practical purposes, the oscilloscope sees half the voltage, or a 50Ω load.
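You can verify the feed-through termination numbers in two lines: the 50Ω adapter in parallel with the scope’s 1MΩ input is still essentially 50Ω.

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

r_eff = parallel(1e6, 50.0)        # 50-ohm feed-through || 1 Mohm scope input
print(round(r_eff, 4))             # 49.9975 ohms
ratio = r_eff / (50.0 + r_eff)     # divider against the generator's 50 ohms
print(round(ratio, 6))             # 0.499994 -- practically the ideal half
```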

With 50Ω feed-through BNC ‘termination’ adapters, make sure you work out the impedance matching whenever you split the signal, as it is no longer the simple, nearly 1:1 potential divider halving the voltage. At low frequencies only the amplitudes will be off, but once you go into the MHz range and above, your signal will be badly distorted as well.


## A genuine probability problem at Foxtrot


## Super-simplified: What is a topology

‘Super-simplified’ is my series of brief notes summarizing what I have learned so I can pick it up again in no time. That means boiling an hour of lecture down to a few takeaway points.

These lectures filled a gap in my understanding of open sets from undergrad real analysis, which I had understood only under the narrow world-view of the real line.

X: the universal set

Topology ≡ open sets + ∅ + X

Open ≡ preserved under arbitrary unions and finite intersections.

Why is finiteness needed for intersections only? Infinite intersections can squeeze open edge points into limit points, e.g. ⋂ₙ₌₁^∞ (−1/n, 1/n) = {0}, which is not open.

Never forget that the universal set X is always there, because it might have points that the ‘meat’ open sets don’t cover. E.g. the topology on universal set ℝ generated by the rational singletons {q} (a ‘discrete topology of ℚ’) means that for any irrational point, ℝ is the only open neighborhood (despite it looking far away), because irrational points cannot be ‘synthesized*’ from the {q} using operations that preserve openness.

* ‘synthesized’ here means constructed from unions and/or finite intersections.
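On a finite universal set, ‘arbitrary’ unions reduce to pairwise ones, so the closure axioms are easy to sanity-check mechanically. A toy sketch (the candidate collections are made up for illustration):

```python
from itertools import combinations

def is_topology(opens, universe):
    """Check the finite-case topology axioms on a candidate collection."""
    opens = {frozenset(s) for s in opens}
    if frozenset() not in opens or frozenset(universe) not in opens:
        return False                  # the null and universal sets must be there
    for a, b in combinations(opens, 2):
        if a | b not in opens:        # closed under unions
            return False
        if a & b not in opens:        # closed under (finite) intersections
            return False
    return True

U = {1, 2, 3}
print(is_topology([set(), {1}, {1, 2}, U], U))  # True
print(is_topology([set(), {1}, {2}, U], U))     # False: union {1, 2} is missing
```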

[Bonus] What I learned from real line topology in real analysis 101:

1. Normal intuitive cases
2. Null and universal set are clopen
3. Look into rationals (countably infinite) and irrationals (uncountable)
4. Blame Cantor (sets)!
