A telescope's effectiveness arises from two factors: (1) its ability to collect light (a function of the size of its objective, the light-gathering lens or reflector), and (2) its ability to resolve faraway detail (the telescope's "power").
If your objective lens is proportionately too small for your telescope's power, you get a dim, noisy image. If the light-gathering capacity exceeds what the power can resolve, you're wasting available information.
The best possible telescope of arbitrary size would be one that gathers all the available light coming from a faraway object and resolves it into an image that is as accurately detailed as that quantity of light allows.
This means that a telescope's power, or "zoom," is always a function of its ability to gather light. Bigger lens = better zoom.
Let's compare two telescopes: the Hubble Space Telescope and the proposed James Webb Space Telescope. Hubble has a light-collecting area of 4.5 m^2 (imagine a square about 7 ft on a side). Webb's "will" be 25 m^2 (imagine a square about 16.5 ft on a side). That means Webb would have about 5.5 times more "zoom" than Hubble. However, because Webb embodies many technological improvements, its total performance boost is somewhere between 100x and 400x that of Hubble (in the IR range).
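A quick sanity check on those figures, using nothing but the numbers quoted above:

```python
# Sanity check on the Hubble/Webb comparison (figures from the paragraph above).
hubble_area_m2 = 4.5
webb_area_m2 = 25.0

print(webb_area_m2 / hubble_area_m2)     # ~5.6x the collecting area
print(hubble_area_m2 ** 0.5 * 3.28084)   # side of an equivalent square: ~7.0 ft
print(webb_area_m2 ** 0.5 * 3.28084)     # side of an equivalent square: ~16.4 ft
```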
I want you to imagine what would happen if we took this to an extreme.
Let's imagine the space telescope that might be built fifty years from now using micro-scale assembly of nano-engineered metamaterials, built in the weightlessness of space, far from the Sun.
Some factors to consider when we calculate the improvement:
First of all, metamaterials may enable us to create telescopes that don't require the light to be reflected to a single small collector (think camera or eyepiece). Instead, we may be able to multiply optical efficiency several times over by forgoing the inherent lossiness of reflection. Layers of specifically-tuned metamaterials may allow us to detect the direction from which a photon is arriving, filtering out or disallowing photons from undesired directions. Essentially, the light-gathering surfaces of the future could be thought of as trillions of nano-scale refractor telescopes that use quantum effects and electric fields in place of physical lenses. In effect, we're talking about an advanced form of the compound eye.
Instead of interacting with the light twice (once upon reflection, once upon detection), we may be able to interact with it just once, upping efficiency in the process.
Instead of curving the entire surface of the collector, small sections of it could be aimed independently. This would allow us to use a flat, essentially two-dimensional support structure instead of a far more complex 3D structure. Remember, we're building in space, not just because there's no atmosphere to gobble precious photons, but also because zero gravity means you can build big without the object crushing itself. If we build in a remote enough place (more on that later), the object's own mass, and the self-gravity it generates, matters more than any outside forces. Building in a plane confines those forces to the same plane.
Imagine a great flat sheet of high-strength composite honeycomb. The structure might be built in space by robot "bees" that employ some of the same tricks real bees use to build nearly-perfect hexagonal cells. In each of the cells is a gimbaled ring and, inside the ring, a small flat panel of optically-sensitive metamaterial. We'll call these panel sections "scales." Each scale can be aimed separately.
This gives us several advantages, which fall into three categories.
1. Perfect focus. Using feedback from known imagery (pointing the telescope at Earth, for instance), we can adjust individual scales to create the equivalent of a mathematically perfect surface. The only problem is that light reaching the edges of the gathering plane travels slightly farther, relative to a hypothetical spherical section, than light hitting the center. This means that frequency-scale science would require large amounts of computing power to reconstruct coherent data (the path-length bookkeeping is sketched just after this list). One might expect computing power to be very cheap in the year 2060, but it's good to remember that we're talking about single "images" equivalent to billions of megapixels. The best approach may be to build a computer into each scale and do all the heavy math locally.
2. Selectable focal distance. Our telescope would have one setting for focusing to its maximum detail level at infinity, where the stars and galaxies are, but it could also focus directly on objects within the solar system, or be reconfigured as either an over- or under-powered telescope.
3. Multiple focus. Different areas of the telescope could focus on different objects within the telescope's optimal field of view. In other words, our telescope could be repurposed, in a matter of seconds, from one extremely large telescope into thousands of merely large telescopes. This would allow the telescope to be shared by a large number of scientists studying a large number of different objects. Let's say that some interesting signals are coming from a distant star. According to some futuristic mathematical analysis, a supernova is suspected to take place within the next several years. So, part of the telescope is focused on that particular spot at all times. One day, things start to evolve. Within three seconds, 95% of the telescope is focused on the supernova. The remaining 5% continues observations that cannot be cut off without causing intolerable loss of data.
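To make the path-length issue from point 1 concrete, here is a minimal sketch (my own illustration; the 1000-mile radius of curvature is a placeholder, not a design figure) of the correction each scale would need so that data from a flat plane stays coherent with data from an ideal curved surface:

```python
import math

# A scale sitting a distance r from the center of a flat collecting plane
# lies "behind" an ideal spherical section of radius R by the sagitta
#   delta = R - sqrt(R^2 - r^2)   (roughly r^2 / 2R),
# so its data must be delayed by delta / c to stay coherent with the center.

C = 299_792_458.0   # speed of light, m/s
MILE = 1609.344     # meters per mile

def path_correction(r_m, R_m):
    """Extra optical path (meters) for a scale r_m from the axis,
    relative to a hypothetical spherical section of radius R_m."""
    return R_m - math.sqrt(R_m**2 - r_m**2)

R = 1000 * MILE     # placeholder effective radius of curvature
for r_miles in (1, 10, 50):
    delta = path_correction(r_miles * MILE, R)
    print(f"scale {r_miles:>2} miles out: {delta:8.1f} m extra path "
          f"(~{delta / C * 1e6:.3f} microseconds)")
```

Even at the 50-mile edge the correction is only a couple of kilometers of path, a few microseconds of delay, but for frequency-scale work it has to be tracked to a fraction of a wavelength, which is why doing the heavy math locally in each scale looks attractive.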
And now for the math you've all been waiting for. How much more powerful than Hubble would our 100-mile diameter telescope be?
First, let's base our calculations on an average optical efficiency of about 50 times that of Hubble. This is based on the suggestion that Webb is about 15 to 70 times more sensitive than Hubble (particularly in the IR range, where dim, small stars are much more visible). It's possible that the numbers could be significantly higher; well over 100x Hubble may be possible. However, metamaterials are almost entirely theoretical at this point in history, so let's not get carried away.
A circle 100 miles in diameter has an area of roughly 2.03 x 10^10 m^2 (about 20,000 square kilometers). Divide that by Hubble's 4.5 m^2 and you get a factor of about 4.5 billion in raw collecting area alone. Fold in the 50x efficiency factor and the total pushes past 200 billion, but let's stay conservative and stick with the area-only figure: 4.5 billion.
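Re-running that arithmetic in a few lines of Python (the 50x multiplier is the speculative efficiency factor from above, so treat the last number as the optimistic case):

```python
import math

# The headline arithmetic, re-run. Values come from the text above;
# the only assumption is the standard mile-to-meter conversion.
MILE = 1609.344
HUBBLE_AREA_M2 = 4.5

diameter_m = 100 * MILE
area_m2 = math.pi * (diameter_m / 2) ** 2   # ~2.03e10 m^2

area_ratio = area_m2 / HUBBLE_AREA_M2       # ~4.5 billion, from area alone
optimistic = area_ratio * 50                # if the 50x efficiency factor pans out

print(f"collecting area:       {area_m2:.2e} m^2")
print(f"area ratio vs. Hubble: {area_ratio:.2e}")
print(f"with 50x efficiency:   {optimistic:.2e}")
```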
What could you do with a telescope 4.5 billion times more powerful than Hubble? Crazy things. There are currently 461 known extrasolar planets. With a 100-mile telescope, you could eventually multiply that to over ten billion, some of which might be very interesting. For known, nearby extrasolar planets (less than 50 light-years away), you could map their continents and count their moons. You could measure the gases in their atmospheres to within 1 part in a million. You could study the Oort Cloud in precise detail. You could even study the Oort Clouds of nearby stars. You could study other galaxies with the same level of detail we currently study the Milky Way. Deep-field galaxies (the most distant galaxies we've ever seen) could be studied at the level we currently study nearby galaxies. You could study quasars, objects the size of solar systems, from across the universe. You could take a picture of Voyager as it leaves the solar system.
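To put "map their continents" on slightly firmer footing, here's a rough resolution estimate. It assumes the full 100-mile aperture could be made diffraction-limited at visible wavelengths, which is my assumption, not something the scale-based design above guarantees:

```python
# Rayleigh criterion: theta ~ 1.22 * wavelength / aperture_diameter.
MILE = 1609.344
LIGHT_YEAR_M = 9.4607e15
APERTURE_M = 100 * MILE
WAVELENGTH_M = 550e-9                      # green light

theta = 1.22 * WAVELENGTH_M / APERTURE_M   # ~4e-12 radians
for ly in (10, 50):
    feature_km = theta * ly * LIGHT_YEAR_M / 1000
    print(f"at {ly} light-years: smallest resolvable feature ~{feature_km:,.0f} km")
```

That works out to features a few hundred kilometers across at 10 light-years and roughly continent-sized features at 50, which is at least in the neighborhood of the claims above.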
For objects within our solar system, it would be like having a microscope. You could study cloud formation on Neptune. You could track microfissures on Europa from hundreds of millions of miles away. You could map every asteroid in 1:1 detail without sending a single probe.
And you would discover things that no one could possibly predict. For fans of SETI, this could be your key to finding the missing aliens, or proving conclusively that they're really missing. Instead of confining your search for lower-order life to our own solar system, you could expand your search for life-specific atmospheric oxygen to thousands of planets. And you could search for the apparently non-existent rock-rock-gas-gas-ice-ice solar systems (like ours) with Goldilocks planets (not too hot, not too cold, just right).
And that brings us to the next big question.
Q. Why 100 miles?
A. Because that's the title of this blog entry.
Q. Is there any reason we couldn't use the same modular construction technique to build a 1000-mile telescope?
A. No.
In fact, as you build a 100-mile telescope with individually aimable scales, you'd start with a 0.01-mile telescope and then build outward. After a while, you'd have a 1-mile telescope, and then a 10-mile telescope. There's no reason you couldn't use these as you continue construction. And there's no reason you couldn't continue construction beyond 100 miles.
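Using the same area arithmetic as before, each construction stage is already a serious instrument in its own right (the stage sizes are the ones mentioned above, plus the 1000-mile case from the question):

```python
import math

# Raw collecting-area advantage over Hubble at each construction stage.
MILE = 1609.344
HUBBLE_AREA_M2 = 4.5

for d_miles in (0.01, 1, 10, 100, 1000):
    area_m2 = math.pi * (d_miles * MILE / 2) ** 2
    print(f"{d_miles:>7} mi diameter: {area_m2 / HUBBLE_AREA_M2:.2e} x Hubble's area")
```

Even the 0.01-mile starter patch has roughly 45 times Hubble's collecting area, so the telescope earns its keep long before it's finished.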
At some point, you will reach the physical limits of your construction technique. At that point, you'll have to stop building.
Using the kinds of macroscale construction regimes I discussed in a previous post, Replacement Earths for $1, there's no reason you couldn't continue construction all the way up to the physical limit. So, let's go ahead and do so.
However, there is a way to go beyond the physical limit, one that solves another problem at the same time.
Here's the problem: the bigger you build, the less feasible it is to change your optimum viewing angle, that is, to aim the whole telescope in a new direction.
Instead of explaining the solution, I'm going to leave this up to the reader to figure out.
Q. How do you build an extremely large (hundreds or even thousands of miles in diameter), single-surface telescope that never needs to be aimed as a whole?
A. See comments below.
2 comments:
Word "diameter" have certain customary meaning. You should use words like "broadness" or "width" to avoid suggesting the answer.
But I'm trying to suggest the answer!