> Then we might as well start to call all classical phenomena repetition codes

All classical phenomena are repetition codes (e.g., https://arxiv.org/abs/0903.5082), and this is perfectly compatible with the meaning in communication theory, except that the symbols we're talking about are the states of the fundamental physical degrees of freedom. In the exact same sense, the von Neumann entropy of a density matrix is the Shannon entropy of its spectrum, and no one says "we shouldn't call that the Shannon entropy because Shannon originally intended to apply it to macroscopic signals on a communication line".
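A minimal numpy sketch of that identity (the diagonal density matrix here is just an illustrative example, not from anything above):

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -tr(rho log2 rho) = Shannon entropy of rho's eigenvalue spectrum.
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]  # drop numerically-zero eigenvalues
    return float(-np.sum(p * np.log2(p)))

rho = np.array([[0.9, 0.0],
                [0.0, 0.1]])  # a qubit density matrix with spectrum {0.9, 0.1}
print(von_neumann_entropy(rho))  # ~0.469, the Shannon entropy H(0.9, 0.1)
```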
Yeah, I agree it's unusual to describe "increased brightness" as "bigger distance repetition code". But I think it'll be a useful analogy in context, and I'd of course explain that.
> Stuff like DRAM storing a 0 or 1 via the presence or absence of 40K electrons

I'd assume that these days it's a couple of orders of magnitude fewer than that (the cited source is from 1996). Incidentally, 40k e- is roughly the capacity of a single electron well ("pixel") in a modern CMOS image sensor [1] – but those 40k electrons are able to represent a signal of up to ~14 bits, around 10k distinct luminance values, depending on temperature and other noise sources.

[1] https://www.princetoninstruments.com/learn/camera-fundamenta...
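A back-of-the-envelope check on that ~14-bit figure, treating bit depth as dynamic range (full-well capacity over read noise). The read-noise value is an assumption I've picked as typical for a modern sensor, not a number from the linked source:

```python
import math

full_well = 40_000   # electrons (from the comment above)
read_noise = 2.5     # electrons RMS -- an assumed, typical modern-sensor value
levels = full_well / read_noise
print(levels, math.log2(levels))  # ~16000 distinguishable levels, ~14 bits
```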
> And to be clear, you can absolutely send a classical signal with individual quanta.

Yes. Though how 'classical' your understanding of that system is remains open to question?
You might be making the mistake of thinking that quantum mechanics runs on probabilities, which work in the way you are used to, when in fact it runs on amplitudes, which work quite differently.
> How many photons are received per bit transmitted from Voyager 1?

Wouldn't you also want to know how many photons are transmitted, and how many of the transmitted bits are received?
I've often wondered why MIMO is such a heavily investigated topic. It would make sense if the Shannon limit were higher for this channel. Is there a foundational paper or review that shows this?
In principle, you can send more than one bit per photon on average. Photons have a lot of ways they can encode additional bits, e.g. frequency, polarisation, timing.

You are right that you can randomly lose some photons. That's what error correcting codes are for. See https://en.wikipedia.org/wiki/Error_correction_code

As an example, assume every photon can encode 10 bits without losses, but you lose 10% of your photons. Then with a clever error correcting code you can encode just shy of 9 bits per photon. You can think of the error correcting code 'smearing' 9 * n bits of information over n photons (10 * n raw bits), and as long as you collect 0.9 * n of the photons, you can recover the 9 * n bits of information. The arithmetic is sketched below.

It's the same reason your CD still plays even if you scratch it. In fact, you can glue a paper strip of about 1 cm width to the bottom of your CD, and it'll still play just fine. Go wider, and it won't, because you'll be exceeding the error correcting capacity of the code CDs use.
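The sketch, using the standard erasure-channel capacity formula with the numbers from the example above:

```python
# Capacity of an erasure channel: if each photon carries b raw bits and
# survives with probability (1 - loss), a good erasure code delivers
# about b * (1 - loss) message bits per photon sent, on average.
b, loss = 10, 0.10
print(b * (1 - loss))  # 9.0 bits per photon, matching the example above
```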
Special relativity will give limits on how aligned the clocks can be (which should be a function of the distance between the clocks). So there will be precision limits.
Maybe I’m mixing Shannon’s limit up with the sampling rate imposed by the Nyquist–Shannon sampling theorem:

> Around 2004, Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho proved that given knowledge about a signal's sparsity, the signal may be reconstructed with even fewer samples than the sampling theorem requires.[4][5] This idea is the basis of compressed sensing …

> However, if further restrictions are imposed on the signal, then the Nyquist criterion may no longer be a necessary condition. A non-trivial example of exploiting extra assumptions about the signal is given by the recent field of compressed sensing, which allows for full reconstruction with a sub-Nyquist sampling rate. Specifically, this applies to signals that are sparse (or compressible) in some domain.

From: https://en.m.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_samp...
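A toy numpy demonstration of sub-Nyquist recovery, using orthogonal matching pursuit as a simple stand-in for the fancier solvers the compressed-sensing literature uses. All the sizes here are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5   # signal length, measurements (m << n), sparsity

x = np.zeros(n)        # a k-sparse signal
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x                                     # only m measurements of x

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit the selected columns by least squares.
idx, r = [], y.copy()
for _ in range(k):
    idx.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    r = y - A[:, idx] @ coef

x_hat = np.zeros(n)
x_hat[idx] = coef
print(np.allclose(x_hat, x))  # typically True: exact recovery from m << n samples
```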
An interesting thing about photons (which may not be true – I just enjoy this stuff amateurishly, that is, without the effort or rigor to actually understand it) is that they might not exist. The EM field is not quantized, or at least is not quantized at the level of photons. A "photon" only exists where the EM field interacts with matter, where the electrons that create the disturbance can only pulse in discrete levels.

https://www.youtube.com/watch?v=ExhSqq1jysg

Not that this changes anything – we can only detect or create light with matter. But it does make me curious about single photon experiments and what they are actually measuring.
Isn't that simply wave–particle duality? When a particle/wave in field X interacts with field X itself, it behaves like a wave, but its interactions with other fields are quantised.
Didn’t realize the math would be that straightforward. Is there something the author isn’t taking into account, or is that a decent plausible range?
That’s because his results are about pure information (and in the limit of infinite string lengths), so sooner or later some hardware will hit those limits or tend toward them.
Not my field, but assuming transmitting hardware (including beam forming) is constant and that atmosphere can mostly be ignored (see comments about it usually being a non-impact at the transmission frequencies), two approaches suggest themselves:

1. Increase the effective receiving dish size, to capture more of the signal. Essentially, this would be effective in direct proportion to beam spread (the more beam spread, the bigger the dish you can use to capture signal). In practice, this would use multiple geographically-displaced dishes to construct a virtually-larger dish, allowing for better noise-cancellation magic (and at lower cost than one huge dish). I believe the Deep Space Network (DSN) already does this? Edit: It certainly has arrayed antennas [0], though I'm not sure how many are Voyager-tasked.

2. Increase the resilience of the signal, via encoding. The math is talking about bits and photons, but not encoded information. By trading lower bit-efficiency for increased error tolerance (i.e. including redundant information) we can extract a coherent signal even accounting for losses.

Someone please point out if I'm wrong, but afaik the Shannon–Hartley limit speaks to "lower" in the physical stack than error coding. I.e. one can layer arbitrary error coding on top of it to push limits (at the expense of rate)?

If the above understanding is correct, is there a way to calculate maximum signal distance assuming a theoretically maximally efficient error coding (is that a thing?)? Or is that distance effectively infinite, assuming you're willing to accept an increasingly slow bit receiving rate?

[0] https://en.m.wikipedia.org/wiki/NASA_Deep_Space_Network#Ante...
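On that last question, a sketch of how the Shannon–Hartley capacity behaves as distance grows (the bandwidth and the reference SNR are assumed values, purely for illustration). Received power falls as 1/d², so SNR drops fast, but capacity only shrinks toward zero without ever reaching it, which is the "infinite distance at an ever slower rate" intuition:

```python
import math

def capacity_bps(bandwidth_hz, snr):
    # Shannon-Hartley: C = B * log2(1 + S/N), in bits/s
    return bandwidth_hz * math.log2(1 + snr)

B = 1.0e6                   # assumed 1 MHz channel
for d in (1, 2, 4, 8, 16):  # relative distance
    snr = 1.0 / d**2        # received power (hence SNR) falls as 1/d^2; SNR = 1 at d = 1
    print(d, capacity_bps(B, snr))
```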
The resilience of the signal part...

https://www.allaboutcircuits.com/news/voyager-mission-annive...

> The uplink carrier frequency of Voyager 1 is 2114.676697 MHz and 2113.312500 MHz for Voyager 2. The uplink carrier can be modulated with command and/or ranging data. Commands are 16-bps, Manchester-encoded, biphase-modulated onto a 512 Hz square wave subcarrier.

The "Manchester encoding" brings us to https://www.allaboutcircuits.com/technical-articles/manchest... and https://en.wikipedia.org/wiki/Manchester_code (a toy encoder/decoder is sketched below).

Note the "16 bps" while the system runs at 160 bps. This suggests that the data is repeated ten times and xor'ed with a clock running at 10 Hz.

While there's no VOY set up now, https://eyes.nasa.gov/dsn/dsn.html will occasionally show it. When that happens, you will likely see two antennas set up for it. I've not seen them set up across multiple facilities - the facilities are 120° apart, and only one has a spacecraft above the horizon at any given time.

---

In the "sensitivity to photons" category, I'll also mention https://en.wikipedia.org/wiki/Lunar_Laser_Ranging_experiment...

> At the Moon's surface, the beam is about 6.5 kilometers (4.0 mi) wide[24][i] and scientists liken the task of aiming the beam to using a rifle to hit a moving dime 3 kilometers (1.9 mi) away. The reflected light is too weak to see with the human eye. Out of a pulse of 3×10^17 photons aimed at the reflector, only about 1–5 are received back on Earth, even under good conditions. They can be identified as originating from the laser because the laser is highly monochromatic.

While there's no signal there, we're still looking at very sensitive equipment.
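The promised Manchester sketch, using the IEEE 802.3 convention (0 → high–low, 1 → low–high; other sources use the opposite convention):

```python
def manchester_encode(bits):
    # Each bit becomes two half-bit "chips", guaranteeing a mid-bit
    # transition the receiver can recover its clock from.
    chips = []
    for b in bits:
        chips += [0, 1] if b else [1, 0]
    return chips

def manchester_decode(chips):
    # The first chip of each pair determines the bit.
    return [1 if first == 0 else 0 for first in chips[::2]]

data = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(data)) == data
```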
There's another major factor; I suppose it falls under the encoding: frequency stability. In a certain sense, having an extremely precise oscillator at both the receiver and the transmitter is the same thing as just having a better, more frequency-stable antenna, or less noise in the channel (because you know what the signal you're listening for should look like).

I'm no physicist here, so take this with a major grain of salt. I think the limit might ultimately arise from the uncertainty principle? Eventually the signal becomes so weak that measuring it overwhelms the signal. This is why the receivers of space telescopes are cooled down with liquid helium: the thermally-generated background RF noise (black bodies radiate right down into the radio spectrum) would drown everything else out otherwise.

Along those lines, while I'm still not quite sure where the limit is, things become discrete at the micro level, and the smallest possible physical state change appears to be discrete in nature: https://en.wikipedia.org/wiki/Landauer%27s_principle

Enough work must physically occur to induce a state change of some kind at the receiver, or no communication can occur. (But this interpretation is disputed!)
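For scale, Landauer's bound kT·ln 2 (the minimum energy to erase one bit), evaluated at a few temperatures. This is a sketch of the "thermal noise scales with T" intuition above, not a claim about any particular receiver:

```python
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
for T in (300.0, 77.0, 4.2):  # room temp, liquid nitrogen, liquid helium, K
    print(T, k_B * T * math.log(2))  # Landauer minimum energy per bit, J
```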
TIL from another comment that the Shannon limit assumes Gaussian noise, so it's not actually always the theoretical limit.

You can't work around the Shannon limit by using encoding: it's the theoretical information-content limit. But you can keep reducing the bit rate, and one way of doing that is adding error correction. So intuitively I'd say yes to your question: the distance can go to infinity, as long as you're willing to accept an increasingly low receive bit rate.

What's less clear to me is whether error correction on its own can be used to approach the Shannon limit for a given S/N ratio – I think the answer is no, because you're not able to use the entire underlying bandwidth. But you can still extract a digital signal from noise, given enough of a signal...

EDIT: There is a generalization of the Shannon limit to non-white Gaussian noise here: https://dsp.stackexchange.com/a/82840
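A sketch of the trade being circled here, using the standard AWGN-channel result: the minimum energy per bit Eb/N0 that Shannon allows at spectral efficiency η = R/B. As η → 0 (ever lower rates in the same bandwidth) the requirement bottoms out at ln 2 ≈ −1.59 dB, so there's a floor on energy per bit but no hard distance cutoff:

```python
import math

# Minimum Eb/N0 for reliable communication at spectral efficiency eta = R/B:
#   Eb/N0 >= (2**eta - 1) / eta    (from C = B * log2(1 + (Eb/N0) * eta))
for eta in (2.0, 1.0, 0.5, 0.1, 0.001):
    ebn0 = (2**eta - 1) / eta
    print(eta, 10 * math.log10(ebn0), "dB")

print(10 * math.log10(math.log(2)), "dB")  # the eta -> 0 limit, ~ -1.59 dB
```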
I guess we could just program Voyager 1 to lower the data bitrate, adding more redundancy / error correction. I think there's no real limit to how far it could get if we keep doing that.
Maybe some kind of relay in space (maybe one that follows Voyager 1, slightly faster), which amplifies the signal and retransmits it to Earth?
Could we e.g. decrease the bit rate, and in effect send less information but for longer?

E.g. send a continuous wave with little signal beyond its phase, and measure at a rate of (digital) bits per month?
You know, I never really thought of longer wavelengths than visible light as being carried by photons, but I suppose it's all EM. Antennas are technically just really red light bulbs.
I just wanted to chime in with a reminder that though Voyager 1 is speeding away from Sol at a roughly constant velocity, because of the Earth's revolution around the Sun it can be up to ±1 AU closer or further away, depending on the time of year.

This article is about Voyager 2, but the issue is the same. For a brief moment every year we actually get closer to Voyager 1, then we pivot away in our revolution around the Sun, and the distance between Earth and Voyager 1 or 2 increases sharply. So distance, when plotted over time, looks like a wobbly line.

https://earthsky.org/space/voyager-spacecraft-getting-closer...
> Also very surprised voyager is using 2.3ghz, that's crazy saturated on earth due to wifi

In addition to WiFi actually being at 2.4 GHz, this is the reason for having the National Radio Quiet Zone near Green Bank.
Nope. It's one of those things that can take a bit to get used to, but everything on the electromagnetic spectrum is just light in the general sense. The only difference between radio waves, x-rays, infra-red and (human-)visible light is the frequency/wavelength.

If the frequency is high enough, then the waves of light can be detected by things as small as the cells in the back of your eye, or the pixels in a camera sensor. If it is too low, then you need much larger detectors. Other animals have detectors for different frequencies/wavelengths, allowing them to see either infra-red (mosquitos) or ultraviolet (bees, butterflies etc). What we call "visible light" is just the particular range that our eyes can detect (about 400 to 800 THz).

If we were the size of a planet, and our eye cells were the size of a radio-telescope dish, we would be able to "see" in those wavelengths. In fact, when we see images taken by radio telescopes, those have essentially been pitch-shifted up to something we can see, like the reverse of what we do when listening for bat clicks (where the pitch is downshifted to our hearing range).

The wikipedia article has a nice little diagram putting the wavelengths into perspective: https://en.wikipedia.org/wiki/Electromagnetic_spectrum
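The only knob really is λ = c/f. A quick sketch putting this comment's numbers side by side (the specific frequencies chosen are just the ones mentioned in this thread):

```python
c = 299_792_458.0  # speed of light, m/s
for name, f_hz in [("Voyager S-band downlink", 2.3e9),
                   ("WiFi", 2.4e9),
                   ("red light", 400e12),
                   ("violet light", 800e12)]:
    print(f"{name}: {c / f_hz:.3e} m")  # wavelength = c / frequency
```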
WiFi is at 2.4 GHz. LTE band 30, satellite radio (XM/Sirius) and aeronautical telemetry all exist between the deep space downlink at 2290 to 2300 MHz and WiFi at 2400 MHz.
That's how many are sent by Voyager. Only about 1500 or 400 photons per bit are actually received by the radio dish (depending on which frequency is being used).
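For anyone who wants to sanity-check numbers like these: photons per bit is just received power divided by (photon energy × bit rate). The received power below is an assumed value I picked to roughly reproduce the figures above, not an official link-budget number:

```python
h = 6.62607015e-34       # Planck constant, J*s
p_rx = 3.6e-19           # assumed received power, W (~ -154 dBm)
bit_rate = 160           # bits/s

for f_hz in (2.3e9, 8.4e9):              # S-band and X-band downlinks
    photons_per_bit = p_rx / (h * f_hz * bit_rate)
    print(f_hz, round(photons_per_bit))  # ~1500 and ~400 photons per bit
```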
Does this number account for all photons, or just those in the rather narrow optical band?

Then again, RF photons just don't fit through the pupils and will get backscattered, I guess.
I work in quantum error correction, and was trying to collect interesting and quantitative examples of repetition codes being used implicitly in classical systems. Stuff like DRAM storing a 0 or 1 via the presence or absence of 40K electrons [1], undersea cables sending X photons per bit (don't know that one yet), some kind of number for a transistor switching (haven't even decided on the number for that one yet), etc.
A key reason quantum computing is so hard is that by default repetition makes things worse instead of better, because every repetition is another chance for an unintended measurement. So protecting a qubit tends to require special physical properties, like the energy gap of a superconductor, or complex error correction strategies like surface codes. A surface code can easily use 1000 physical qubits to store 1 logical qubit [2], and I wanted to contrast that with the sizes of implicit repetition codes used in classical computing.
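For contrast with the quantum case, here's a sketch of why classical repetition is so cheap: majority voting suppresses the logical error rate exponentially in the number of copies. The per-bit flip probability is an arbitrary example value:

```python
from math import comb

def logical_error_rate(n, p):
    # Probability that more than half of n independent copies flip,
    # defeating a majority vote (n odd).
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 11, 101):
    print(n, logical_error_rate(n, 0.01))
```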
1: https://web.mit.edu/rec/www/dramfaq/DRAMFAQ.html
2: https://arxiv.org/abs/1208.0928