Radios, how do they work?

Original link: https://lcamtuf.substack.com/p/radios-how-do-they-work


Radio communications play a key role in modern electronics, but to a hobbyist, the underlying theory is hard to parse. We get the general idea, of course: we know about frequencies and can probably explain the difference between amplitude modulation and frequency modulation. Yet, most of us find it difficult to articulate what makes a good antenna, or how a receiver can tune in to a specific frequency and ignore everything else.

In today’s article, I’m hoping to provide an introduction to radio that’s free of ham jargon and advanced math. To do so, I’m leaning on the concepts discussed in four earlier articles on this blog:

If you’re rusty on any of the above, I recommend jogging your memory first.

If you’re familiar with the basics of electronics, a simple way to learn about antennas is to imagine a charged capacitor that’s being pulled apart until its internal electric field spills into the surrounding space:

Electric fields can be visualized by plotting the paths of hypothetical positively-charged particles placed in the vicinity. For our ex-capacitor, we’d be seeing arc-shaped lines that connect the plates — and strictly speaking, extend on both sides all the way to infinity.

An unchanging electric field isn’t very useful for radio — but if we start moving the charges back and forth between the poles of an antenna, we get a cool relativistic effect: a train of alternating fields propagating at the speed of light, sneaking away with some of the energy that we previously could always get back from the capacitor’s static field.

In other words, say hello to electromagnetic waves:

A perfectly uniform waveform is still not useful for communications, but we can encode information by slightly altering the wave’s characteristics — for example, tweaking its amplitude. And if we do it this way, then owing to a clever trick we’ll discuss a bit later, simultaneous transmissions on different frequencies can be told apart on the receiving end.

But first, it’s time for a reality check: if we go back to our dismantled capacitor and hook it up to a signal source, it won’t actually do squat. When we pulled the plates apart, we greatly reduced the device’s capacitance, so we’re essentially looking at an open circuit; a pretty high voltage would be needed to shuffle a decent number of electrons back and forth. Without this motion — i.e., without a healthy current — the radiated energy is negligible.

The most elegant solution to this problem is known as a half-wavelength (“half-wave”) dipole antenna: two rods along a common axis, driven by a sinusoidal signal fed at the center, each rod exactly ¼ wavelength long. If you’re scratching your head, the conversion from frequency (f, in Hz) to wavelength (λ) is:

\(\lambda = \frac{c}{f}\)

The third value — c — is the speed of light, expressed in your preferred unit of length per second.
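If you’d rather not do the arithmetic by hand, the conversion takes a few lines of Python; the 100 MHz example frequency is my pick, not the author’s:

```python
# Wavelength and half-wave dipole dimensions from frequency.
C = 299_792_458.0  # c, the speed of light in m/s

def wavelength(freq_hz):
    """lambda = c / f, in meters."""
    return C / freq_hz

def half_wave_dipole(freq_hz):
    """(total length, per-rod length) of a half-wave dipole, in meters."""
    lam = wavelength(freq_hz)
    return lam / 2, lam / 4  # two rods, each exactly 1/4 wavelength

total, rod = half_wave_dipole(100e6)  # a 100 MHz FM broadcast signal
print(f"dipole: {total:.2f} m total, {rod:.2f} m per rod")
```

(In practice, antennas are usually cut a bit shorter than the free-space figure to account for the velocity factor of the conductor, but the idealized numbers are fine for getting a feel of the scale.)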

The half-wave dipole has an interesting property: if we take signal propagation delays into account, we can see that every peak of the driving signal reaches the ends of the antenna perfectly in-phase with the bounce-back from the previous oscillation. This pattern of extra nudges results in a standing wave with considerable voltage swings at the far ends of the antenna. The other perk is a consistently low voltage — and low impedance — at the feed point. Together, these characteristics make the antenna remarkably efficient and easy to drive:

All dipoles made for odd multiples of half-wavelength (3/2 λ, 5/2 λ, …) exhibit this resonant behavior. Similar resonance is also present at even multiples (1 λ, 2 λ, …), but the standing wave ends up sitting in the wrong spot — constantly getting in the way of driving the antenna, rather than aiding the task.

Other antenna lengths are not perfectly resonant, although they might be close enough. An antenna that’s way too short to resonate properly can be improved with an in-line inductor, which adds some current lag. You might have seen antennas with spring-like sections at the base; the practice is called electrical lengthening. It doesn’t make a stubby antenna perform as well as the real deal, but it helps keep the input impedance in check.

Now that we have a general grasp of half-wave dipoles, let’s go back to the antenna’s field propagation animation:

Note the two dead zones along the axis of the antenna; this is due to destructive interference of the electric fields. See if you can figure out why; remember that it takes the signal precisely one half of a cycle to travel along the length of this dipole.

Next, let’s consider what would happen if we placed an identical receiving antenna some distance away from the transmitter. Have a look at receiver A on the right:

It’s easy to see that the red dipole is “swimming” through a coherent pattern of alternating electric fields; it experiences back-and-forth currents between its poles at the transmitter’s working frequency. Further, if the antenna’s length is chosen right, there should be constructive interference of the induced currents too, eventually resulting in much higher signal amplitudes.

The illustration also offers an intuitive explanation of something I didn’t mention before: that dipoles longer than ½ wavelength are more directional. If you look at receiver B on the left, it’s clear that even a minor tilt of a long dipole results in the ends being exposed to opposing electric fields, yielding little or no net current flow.

Not all antennas are dipoles, but most operate in a similar way. Monopoles are just a minor riff on the theme, trading one half of the antenna for a connection to the ground. More complex shapes usually crop up as a way to maintain resonance at multiple frequencies or to fine-tune directionality. You might also bump into antenna arrays; these devices exploit patterns of constructive and destructive interference between digitally-controlled signals to flexibly focus on a particular spot.

Compared to antenna design, signal modulation is a piece of cake. There’s amplitude modulation (AM), which changes the carrier’s amplitude to encode information; there’s frequency modulation (FM), which shifts the carrier up and down; and there’s phase modulation (PM) — well, you get the drift. We also have quadrature amplitude modulation (QAM), which robustly conveys information via the relative amplitude of two signals with phases offset by 90°.
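To make the differences concrete, here’s a small numpy sketch of AM, FM, and PM; the carrier, message, and deviation numbers are arbitrary picks of mine:

```python
import numpy as np

fs = 100_000                       # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)     # 10 ms of signal
fc, fm = 10_000, 440               # carrier and message frequencies, Hz
msg = np.sin(2 * np.pi * fm * t)   # the low-frequency message to send

# AM: the message scales the carrier's amplitude (modulation index 0.5).
am = (1 + 0.5 * msg) * np.sin(2 * np.pi * fc * t)

# FM: the message shifts the carrier's instantaneous frequency; phase is
# the running integral of frequency, hence the cumulative sum.
dev = 2_000  # peak frequency deviation, Hz
fm_sig = np.sin(2 * np.pi * fc * t + 2 * np.pi * dev * np.cumsum(msg) / fs)

# PM: the message nudges the carrier's phase directly.
pm_sig = np.sin(2 * np.pi * fc * t + 0.8 * msg)
```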

In any case, once the carrier signal is isolated, demodulation is typically pretty easy to figure out. For AM, the process can be as simple as rectifying the amplified sine wave with a diode, and then running it through a lowpass filter to obtain the audio-frequency envelope. Other modulations are a bit more involved — FM and PM benefit from phase-locked loops to detect shifts — but most of it isn’t rocket surgery.
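For AM specifically, the diode-and-filter detector is easy to simulate; in the sketch below, a moving average stands in for the RC lowpass, and all the frequencies are my arbitrary choices:

```python
import numpy as np

fs = 100_000
t = np.arange(0, 0.02, 1 / fs)
fc, fm = 10_000, 200
msg = np.sin(2 * np.pi * fm * t)
am = (1 + 0.5 * msg) * np.sin(2 * np.pi * fc * t)   # the received AM signal

# Step 1: rectify -- the diode passes only the positive half-waves.
rectified = np.maximum(am, 0)

# Step 2: lowpass -- averaging over one carrier period smooths away the
# 10 kHz ripple, leaving the audio-frequency envelope (scaled by 1/pi).
win = fs // fc
envelope = np.convolve(rectified, np.ones(win) / win, mode="same")

# The recovered envelope tracks 1 + 0.5 * msg, i.e., the original audio.
```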

Still, there are two finer points to bring up about modulation. First, the rate of change of the carrier signal must be much lower than its running frequency. If the modulation is too rapid, you end up obliterating the carrier wave and turning it into wideband noise. The only reason why resonant antennas and conventional radio tuning circuits work at all is that almost nothing changes cycle-to-cycle — so in the local view, you’re dealing with a nearly-perfect, constant-frequency sine.

The other point is that counterintuitively, all modulation is frequency modulation. Intuitively, AM might feel like a clever zero-bandwidth hack: after all, we’re just changing the amplitude of a fixed-frequency sine wave, so what’s stopping us from putting any number of AM transmissions a fraction of a hertz apart?

Well, no dice: recall from the discussion of the Fourier transform that any deviation from a steady sine introduces momentary artifacts in the frequency domain. The scale of the artifacts is proportional to the rate of change; AM is not special and takes up frequency bandwidth too. To illustrate, here’s a capture of a local AM station; we see audio modulation artifacts spanning multiple kHz on both sides of the carrier frequency:
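You can also reproduce the effect without a radio: modulate a carrier with a single tone and look at the FFT. The sidebands show up at the carrier ± the audio frequency (the numbers below are arbitrary):

```python
import numpy as np

fs = 100_000
n = fs                 # one second of samples -> 1 Hz FFT bin spacing
t = np.arange(n) / fs
fc, fm = 10_000, 1_000
am = (1 + 0.5 * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

spectrum = np.abs(np.fft.rfft(am)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)

# Energy sits at fc and at fc +/- fm: a 1 kHz tone makes the supposedly
# "zero-bandwidth" AM signal occupy a 2 kHz slice of spectrum.
peaks = freqs[spectrum > 0.01]
print(peaks)  # peaks at 9000, 10000, and 11000 Hz
```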

Indeed, all types of modulation boil down to taking a low-frequency signal band — such as audio — and transposing it in one way or another to a similarly-sized slice of the spectrum in the vicinity of some chosen center frequency.

At this point, some readers might object: the Fourier transform surely isn’t the only way to think about the frequency spectrum; just because we see halos on an FFT plot, it doesn’t mean they’re really real. In an epistemological sense, this might be right. But as it happens, radio receivers work by doing something that walks and quacks a lot like Fourier…

As foreshadowed just moments ago, the basic operation of almost every radio receiver boils down to mixing (multiplying) the amplified antenna signal with a sine wave of a chosen frequency. This is eerily similar to how Fourier-adjacent transforms deconstruct complex signals into individual frequency components.

From the discussion of the discrete cosine transform (DCT) in the earlier article, you might remember that if a matching frequency is present in the input signal, the multiplication yields a waveform with a DC bias proportional to the magnitude of that frequency component. For all other input frequencies, the resulting waveforms average out to zero, if analyzed on a sufficiently long timescale.
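This is easy to check numerically: multiply a test sine by a reference of the same or a different frequency and average the product (the frequencies are arbitrary):

```python
import numpy as np

fs = 10_000
t = np.arange(fs) / fs   # one second of samples

def mix_and_average(f_in, f_ref):
    """Multiply an input sine by a reference sine; return the average."""
    product = np.sin(2 * np.pi * f_in * t) * np.sin(2 * np.pi * f_ref * t)
    return product.mean()

print(mix_and_average(440, 440))  # matching: a DC bias of 1/2 the amplitude
print(mix_and_average(440, 523))  # non-matching: averages out to ~0
```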

But that averaging timescale is of interest too: in the aforementioned article, we informally noted that the resulting composite waveforms have shorter periods if the original frequencies are far apart, and longer periods if the frequencies are close. Well, as it turns out, for DCT, the low-frequency cycle is always |f1 - f2|, superimposed on top of a (less interesting) high-frequency component f1 + f2:

This behavior might seem puzzling, but it arises organically from the properties of sine waves. Let’s start with the semi-well-known angle sum identity, which has a cute and easy proof involving triangles. The formula for that identity is:

\(\cos(a + b) = \cos(a) \cdot \cos(b) - \sin(a) \cdot \sin(b)\)

From there, we can trivially show the following:

\(\cos(a - b) - \cos(a + b) = 2 \cdot \sin(a) \cdot \sin(b)\)

Divide both sides by two, flip it around, and you end up with a formula that equates the product of two sine frequencies to a sum of cosines running at f1 - f2 and f1 + f2:

\(\sin(a) \cdot \sin(b) = \frac{\cos(a - b) - \cos(a + b)}{2}\)
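Trigonometry skeptics can also just check the identity numerically at a handful of points:

```python
import math

def product_of_sines(a, b):
    return math.sin(a) * math.sin(b)

def sum_of_cosines(a, b):
    return (math.cos(a - b) - math.cos(a + b)) / 2

# The two sides agree to floating-point precision for any a, b:
for a, b in [(0.3, 1.7), (2.0, -0.5), (12.5, 3.0)]:
    assert math.isclose(product_of_sines(a, b), sum_of_cosines(a, b),
                        abs_tol=1e-12)
```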

Heck, we don’t even need to believe in trigonometry. A closely-related phenomenon has been known to musicians for ages: when you simultaneously play two very similar tones, you end up with an unexpected, slowly-pulsating “beat frequency”. Here’s a demonstration of a 5 Hz beat produced by combining 400 Hz and 405 Hz:
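Since an audio clip doesn’t survive in text form, here’s a numpy sketch of the same beat; the sum-to-product formula predicts a 402.5 Hz tone throbbing at the 5 Hz difference frequency:

```python
import numpy as np

fs = 8_000
t = np.arange(fs) / fs   # one second of audio
beat = np.sin(2 * np.pi * 400 * t) + np.sin(2 * np.pi * 405 * t)

# sin(A) + sin(B) = 2 * sin((A + B) / 2) * cos((A - B) / 2), so the sum is
# a 402.5 Hz tone whose amplitude pulsates with |cos(2 * pi * 2.5 * t)| --
# a slow throb at the 5 Hz difference frequency.
equivalent = 2 * np.sin(2 * np.pi * 402.5 * t) * np.cos(2 * np.pi * 2.5 * t)
print(np.allclose(beat, equivalent))  # → True
```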

In any case, back to radio: it follows that if one wanted to receive transmissions centered around 10 MHz, a straightforward approach would be to mix the input RF signal with a 10 MHz sine. According to our formulas, this should put the 10.00 MHz signal at DC, downconvert 10.01 MHz to a 10 kHz beat (with an extra 20.01 MHz component), turn 10.02 MHz into 20 kHz (+ 20.02 MHz), and so forth. With the mixing done, the next step would be to apply a lowpass filter to the output, keeping only the low frequencies that are a part of the modulation scheme - and getting rid of everything else, including the unwanted f1 + f2 components.

The folly of this method becomes evident when you consider that beat frequencies exhibit symmetry around 0 Hz on the output side. In the aforementioned example, the input component at 9.99 MHz produces an image at 10 kHz too — precisely where 10.01 MHz was supposed to go. To avoid this mirroring, receivers mix the RF input with a frequency lower than the signal of interest, shifting it to a constant non-zero intermediate frequency (f_IF), and then use bandpass filters to pluck out the relevant bits.
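Both effects — the intended downconversion and the unwanted image — are easy to see in a simulation. I’m sampling at 50 MHz here so the frequencies from the article can be used as-is:

```python
import numpy as np

fs = 50_000_000              # 50 MHz sample rate
n = 500_000                  # 10 ms of samples -> 100 Hz FFT bin spacing
t = np.arange(n) / fs
lo = np.sin(2 * np.pi * 10_000_000 * t)   # the 10 MHz mixing sine

def beats_below_1mhz(f_rf):
    """Mix an RF sine with the 10 MHz reference; list strong components
    below 1 MHz (the f1 + f2 products land far higher and get filtered)."""
    mixed = np.sin(2 * np.pi * f_rf * t) * lo
    spectrum = np.abs(np.fft.rfft(mixed)) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return freqs[(spectrum > 0.1) & (freqs < 1_000_000)]

print(beats_below_1mhz(10_010_000))  # 10.01 MHz lands at a 10 kHz beat...
print(beats_below_1mhz(9_990_000))   # ...but so does the 9.99 MHz image
```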

In this design — devised by Edwin Armstrong around 1919 and dubbed superheterodyne — the fundamental mirroring behavior is still present, but the point of symmetry can be placed far away. With this trick up our sleeve, accidental mirror images of unrelated transmissions become easier to manage — for example, by designing the antenna to have a narrow frequency response and not pick up the offending signals at all, or by putting an RF lowpass filter in front of the mixer. The behavior of superheterodynes is sometimes taken into account for radio spectrum allocation purposes, too.

For a thematic catalog of articles on electronics, click here.
