Thursday, 30 September 2010

telescope - What's a good step up from 7x50 binoculars?

For higher magnifications, you are going to get into the territory where a tripod becomes a must. I mention this because you didn't say anything about a tripod, so the implication may not have crossed your mind. Unfortunately, the cost of a tripod will eat into your budget.



I am inclined to think that your best option would be a telescope. (Unless you also want to view terrestrial objects, in which case a cheap spotting scope might be what you want.) Maybe your best bet for getting a good one would be to ask around on forums like cloudynights or iceinspace that have some support from astronomers local to you, let them know you are looking to buy, and maybe you can get one second-hand from someone who is trading up. You might get a better telescope for your money than by buying new.



If you want a shorter telescope, you may be put off by something like a Newtonian reflector or another simple two-mirror reflector. Still, try one out (maybe you can attend a local star party and see some up close, even if there isn't a telescope store handy). A 114mm Newtonian on a tripod would meet almost all your requirements, including budget, even purchased new. It will be a bit less than 2' long; slightly bigger Newtonians, like 130mm, start getting longer than 2'.



Otherwise, what might fit the bill is a small cassegrain-style reflector - these usually also have a corrector lens, making them a catadioptric type telescope - such as a Maksutov-Cassegrain. The extra manufacturing cost of making the lenses pushes up the price of these well past the simple reflecting telescopes, however.



Refracting telescopes with an objective of 100+mm are probably in your "too big" basket, and you're also not likely to find one with that budget.



You use different eyepieces to provide different magnifications using the same telescope. If you buy a telescope as a kit, it will come with at least one eyepiece, and it's likely the magnification provided will be more than 10x (divide the focal length of the telescope by the focal length of the eyepiece to find out).
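That division is a one-liner; as a quick sanity check (the 900 mm telescope and 25 mm eyepiece figures below are just illustrative, not from the question):

```python
def magnification(telescope_fl_mm, eyepiece_fl_mm):
    """Magnification = telescope focal length / eyepiece focal length."""
    return telescope_fl_mm / eyepiece_fl_mm

# A typical 114 mm Newtonian with a 900 mm focal length and a 25 mm eyepiece:
print(magnification(900, 25))  # 36.0, i.e. 36x
```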

planet - Do planetary rings have geometric bounds?

Saturn's Phoebe ring, which has an orbital inclination of 173° (i.e. a retrograde orbit) and is tilted about 27° to Saturn's inner rings, shows that there clearly isn't any limit on orbital inclination. It is fed by material from Saturn's moon Phoebe (possibly via micrometeorite impacts), which indicates that as long as a planetary ring has a source of material on an already inclined orbit, the ring will follow that orbit, preserving the angular momentum of its parent body. That of course doesn't exclude polar orbits, but we have yet to see any rings like that. Theoretically, though, such an orbit is just as stable as any other, provided it doesn't cross other gravitational disturbances, such as Lagrange points or the paths of other bodies that would stop it from accreting into a complete ring (something Iapetus does only partially by intersecting the Phoebe ring). The majority of planetary rings, however, are probably made of the planet's own material, and would then more or less follow its own rotation (depending on how they formed), again preserving angular momentum.



The Phoebe ring is also enormous, calculated to stretch from 59 to 300 Saturn radii (NASA's Spitzer Space Telescope observed it in the infrared between 128 and 207 Saturn radii; more can be read in this blog post by Emily Lakdawalla). This attests that as long as the ring's material does not reach escape velocity, or become more strongly attracted to another celestial body (beyond the L1 point), there isn't much of a limit on size either. In theory, if we imagine some rogue brown dwarf that isn't gravitationally bound to any star system and floats freely in the interstellar medium, far from any stars, its ring size would be limited only by the pressure of the interstellar medium itself. At least until the brown dwarf in question roamed closer to some stronger gravitational influence and lost its ring, that is.



So the maximum size of a ring would in theory indeed be limited only by the L1 point (where the gravitational attractions of the two massive bodies cancel). As for the minimum distance, that depends on the size of the ring particles, the extent of the planet's atmosphere, and the strength of the radiation pressure counteracting gravitational attraction, so it is difficult to give a fair minimum value. If we take a fast-spinning planet (e.g. the dwarf planet Haumea) as an example, one spinning so fast that material at its equator actually reaches escape velocity, and if we assume this process can be sustained long enough, then there isn't any minimum at all: the disk would essentially touch the planet's surface. Eventually the planet would shed much of its rotation by shifting mass to a larger radius, much as a spinning figure skater slows by stretching out her arms, and would cease replenishing the disk's material.
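The "limited by the L1 point" claim can be put in rough numbers with the Hill-sphere approximation, $r_H \approx a\,(m/3M)^{1/3}$; the values below are standard figures for Saturn and the Sun, inserted by me for illustration:

```python
# Hill-sphere radius: the rough outer limit for material gravitationally
# bound to a planet rather than its star.
def hill_radius(a_m, m_planet_kg, m_star_kg):
    return a_m * (m_planet_kg / (3 * m_star_kg)) ** (1 / 3)

AU = 1.496e11        # m
R_SATURN = 6.027e7   # m, Saturn's equatorial radius

# Saturn: a ~ 9.58 AU, mass ~ 5.683e26 kg; Sun: ~ 1.989e30 kg
r_h = hill_radius(9.58 * AU, 5.683e26, 1.989e30)
print(r_h / R_SATURN)  # ~1100 Saturn radii: the Phoebe ring's ~300 fits well inside
```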

Saturday, 25 September 2010

observation - how does redshift prove expansion is accelerating?

I don't know the exact details (so I'll leave the math out), but this is my understanding, and I hope it helps you get some idea of what's being done.



When you look at galaxies farther away, you're looking at them farther back in time (because the speed of light is finite). So, to sample the velocities of galaxies over time, you don't need to observe them for a long time; you can instead work with galaxies at different distances, and hence at different times in the history of the universe.



Now, if you notice that the recession velocities of galaxies farther away are slower than you would expect in a constant-expansion scenario (Hubble's law, for example), that means the universe was expanding more slowly back then than it is now. If there is a clear trend of this happening, then you can say that the speed of expansion of the universe has been increasing over time. This means that the universe is accelerating.
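The "constant-expansion scenario" baseline is just Hubble's law, $v = H_0 d$; a two-line sketch, with an assumed $H_0 = 70$ km/s/Mpc (my own illustrative value):

```python
H0 = 70.0  # km/s per Mpc, assumed value for illustration

def hubble_velocity(distance_mpc):
    """Recession velocity expected if the expansion rate were constant."""
    return H0 * distance_mpc  # km/s

# If galaxies at 100 Mpc systematically recede slower than this baseline,
# the expansion rate in the past was lower than it is today.
print(hubble_velocity(100))  # 7000.0 km/s
```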



Of course, it is a lot more complicated than this, since if the universe is accelerating, Hubble's law itself becomes shaky, if not invalid (as it assumes a direct proportionality between distance and recession velocity). Also, if you cannot find distance using Hubble's law, then you need an independent distance measurement at those distances, which is also problematic, since, as far as I know, there are no distance indicators valid that far away (the physics differs due to changes in stellar metallicities etc.; a lot of problems start showing up). So I don't know how this is done in practice, but in theory what I described above is how one would arrive at the conclusion of an accelerating universe. I would like to know how this is really done in practice, as well.

Thursday, 23 September 2010

stellar dynamics - How can the life time of a multiple star system, such as for example the trinary system PSR J0337+1715 be derived?

As explained, for example, at the beginning of this blog post, the trinary system consists of a millisecond pulsar ($1.438$ times the mass of the sun) orbited by two white dwarfs. One of the white dwarfs ($0.198$ solar masses) is very close to the pulsar and has an orbital period of $1.6$ d, whereas the other one ($0.410$ solar masses) is farther away and needs about a year ($327$ d) to orbit the central pulsar.



Such a three-body system is in principle expected to show chaotic behavior sooner or later, which means that collisions between these three celestial bodies can be expected and a finite lifetime of the system can be assumed.



Making what are, in my opinion, rather hand-waving arguments, the blog post further explains that collisions are nevertheless not to be expected any time soon: the distant white dwarf "sees" the inner white dwarf and the pulsar as a single central body, and the relative motion of the inner white dwarf around the pulsar is rather stable and elliptic too.



Thinking about such multiple star systems as chaotic dynamical systems, another approach to estimating the lifetime could be to use chaos-theoretic methods, for example involving the Lyapunov exponent of the system: a large exponent would mean that collisions happen soon and the stellar system has a rather short lifetime, whereas the converse would be true if the Lyapunov exponent is small (which is what I would expect for the system in my question).
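As a toy illustration of the chaos-theoretic idea (not the actual three-body calculation), here is a numerical Lyapunov-exponent estimate for the logistic map, whose exponent at $r=4$ is known analytically to be $\ln 2$; the inverse of the exponent sets the "Lyapunov time" over which predictability is lost:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.123, n=200_000, burn_in=1_000):
    """Estimate the Lyapunov exponent of the map x -> r*x*(1-x)
    as the long-run average of ln|f'(x)| = ln|r*(1 - 2x)|."""
    x = x0
    for _ in range(burn_in):  # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic()
print(lam)  # ~0.693, i.e. ln 2; 1/lam is the "Lyapunov time" in iterations
```

For a star system, one would integrate the N-body equations for two nearly identical initial configurations and measure the divergence rate in the same way.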



So, in short, my question is: how can the lifetime of a multiple star system be calculated in a way that is not just hand-waving?



This question is interestingly related to my issue, but it does not yet answer it...

Wednesday, 22 September 2010

star - stellar metallicity - Astronomy

The idea of self-pollution in a cluster may be viable. When you talk about metallicity, there are two things: one is what you see, the photospheric metallicity; the second is the bulk metallicity.



To alter the former only requires that adjacent stars accrete enriched material and are then unable to mix and dilute it. The second is much harder. If you want to form a star with high metallicity, the material it forms from must be enriched first, e.g. a supernova progenitor must have lived and died first. As the earliest a supernova can explode is about 5 million years after its progenitor forms, this means having an extended period of star formation in a single cluster. This is controversial: there is a lot of evidence that young star clusters form very quickly, in which case the supernova mechanism is unviable. Nearby open clusters also show no evidence of star-to-star composition variations.



However, ancient globular clusters may be different. There is evidence for multiple populations with differing compositions.



Your final question is too broad. Yes, there are many nuances and gradations in populations, though the catch-all terms of Populations I and II are still used. But for example we also talk about thin and thick disc populations or bulge populations.

Why is landing on a comet more important than landing on an asteroid?

This is a very brief answer but I hope it helps.



Distance



Hayabusa collected samples from a near-Earth asteroid, while Philae landed on a comet after travelling a distance of over 6.4 billion kilometres.



The Type Of Mission



Hayabusa did not literally land on the asteroid; it just touched down and collected samples. Philae, on the other hand, had to land, steady itself, drill, and study the composition. Moreover, the asteroid Hayabusa visited was a near-Earth asteroid, so its nature could be established relatively easily, whereas the Rosetta team only came to know the exact nature of 67P weeks before the landing.



The Complexity



The Rosetta mission used four gravity assists (three of Earth and one of Mars) and performed two asteroid flybys. It was no easy task to plan a journey of 10 years. Also, once the probe reached the comet, radio signals took a long time to reach it, making it very difficult to intervene if something went wrong. And the surface of an asteroid is very different from the surface of a comet. The Hayabusa mission was not the first to touch down on an asteroid; that had been done before by NEAR Shoemaker, so we knew what to expect. See this for a detailed report.
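The communication-delay point is easy to quantify: at the time of Philae's landing, 67P was roughly 3.3 AU from Earth (an approximate figure of mine, not from the answer), so a one-way radio signal took nearly half an hour:

```python
AU = 1.496e11  # m
C = 2.998e8    # m/s, speed of light

def one_way_light_time_minutes(distance_au):
    """One-way radio signal travel time over a given distance."""
    return distance_au * AU / C / 60

# ~27 minutes each way: no joysticking the lander in real time,
# everything had to be pre-programmed.
print(one_way_light_time_minutes(3.3))
```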



I think this sums up how the Rosetta mission was a major breakthrough.

the sun - How do large solar flares compare to flares on other stars?

This is a partial answer: a comparison of superflare behaviour on Sun-like stars with our Sun.



According to the article "Superflares on Solar-Type Stars Observed with Kepler I. Statistical Properties of Superflares" (Shibayama et al. 2013), observations were made of solar-like (G-type) stars over 500 days.



One key observation was that they




found 1547 superflares on 279 G-type dwarfs




Despite this seemingly massive number, they deduced that




the occurrence frequency of superflares with energy of $10^{34}$–$10^{35}$ erg is once in 800–5000 years.




and




in some G-type dwarfs the occurrence frequency of superflares was extremely high, ~57 superflares in 500 days (i.e., once in 10 days). In the case of Sun-like stars, the most active stars show a frequency of one superflare (with $10^{34}$ erg) in 100 days.




These are associated with very large star-spots, far larger than those on our sun.



There was an earlier theory that the presence of hot Jupiters was a major contributor to superflares, hence the reason that our Sun does not often exhibit this phenomenon. However, there is some evidence that a past superflare may have occurred on our Sun:




an occurrence of an energetic cosmic-ray event in the 8th century recorded in the tree rings of Japanese cedar trees. There is a possibility that this event was produced by a superflare (with energy of ~$10^{35}$ erg) on our Sun.




and since no hot Jupiters were detected around many of the stars observed either, this theory is largely ruled out by the authors. Rather, they postulate that G-type stars 'store' magnetic energy.

Tuesday, 21 September 2010

star - Does the Sun have hard radiation?

The Sun outputs several different kinds of things.



Electromagnetic radiation



The Sun is (approximately) a black-body radiator at a temperature of nearly 6000 K, and therefore emits electromagnetic energy across the spectrum, including UV and X-rays.
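That near-6000 K blackbody temperature fixes where the emission peaks, via Wien's displacement law $\lambda_{max} = b/T$; a quick check (the constants are standard values, inserted by me):

```python
WIEN_B = 2.898e-3  # m*K, Wien's displacement constant

def peak_wavelength_nm(temperature_k):
    """Wavelength of peak blackbody emission, in nanometres."""
    return WIEN_B / temperature_k * 1e9

# ~500 nm: visible light, with tails stretching into UV and X-rays on one side
# and infrared/radio on the other.
print(peak_wavelength_nm(5800))
```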






UV is mostly stopped in the upper atmosphere. X-rays are absorbed by the atmosphere as a whole, and the Sun's X-ray output is pretty weak anyway.



The Sun's output of harsh EM radiation increases greatly when a solar flare erupts. Even then, we're pretty safe on Earth.



High energy particles



The solar wind is basically a flux of charged particles (nuclei of hydrogen, etc.) streaming out of the Sun at fairly high energies. As they approach the Earth, the Earth's magnetic field deflects most of them. If you're on Earth, you're safe. Even in space, the flux is not very great; you'd be safe inside a spacecraft.
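The deflection works because a charged particle spirals around magnetic field lines with gyroradius $r = mv/(qB)$, which is tiny for solar-wind protons; a rough estimate with typical values I've assumed (400 km/s wind speed, a near-surface field of ~50 microtesla):

```python
M_PROTON = 1.673e-27  # kg
Q_PROTON = 1.602e-19  # C

def gyroradius_m(mass_kg, speed_ms, charge_c, b_tesla):
    """Radius of a charged particle's spiral around a magnetic field line."""
    return mass_kg * speed_ms / (charge_c * b_tesla)

# A 400 km/s solar-wind proton in a 50 microtesla field:
r = gyroradius_m(M_PROTON, 4.0e5, Q_PROTON, 50e-6)
print(r)  # on the order of 100 m, utterly negligible next to Earth's ~6400 km radius
```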






However, during a solar flare, the flux of particles increases greatly. The Earth is safe, but an astronaut en route to Mars would be in a pretty unsafe situation.

Sunday, 19 September 2010

the sun - Temperature of the Sun's rays

Yes, you *can* assign a temperature to the radiation from the Sun: it is approximately a blackbody radiation field with a temperature of 5800 Kelvin. But no, it doesn't "cool" on the way to the Earth.



The radiation field is still a blackbody radiation field with a temperature of 5800 K when it reaches the Earth, but its intensity (or power per unit area) has decreased from $6.4 \times 10^{7}~\mathrm{W\,m^{-2}}$ at the surface (photosphere) of the Sun to about $1.4 \times 10^{3}~\mathrm{W\,m^{-2}}$ when it reaches the Earth.
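That drop in intensity is just the inverse-square law applied between the photosphere and 1 AU; a quick check (the solar radius and AU values are standard figures, inserted by me):

```python
R_SUN = 6.957e8       # m, solar radius
AU = 1.496e11         # m, Earth-Sun distance
SURFACE_FLUX = 6.4e7  # W/m^2 at the photosphere

# Total luminosity is conserved, so flux falls off as (R_sun / D)**2.
flux_at_earth = SURFACE_FLUX * (R_SUN / AU) ** 2
print(flux_at_earth)  # ~1.4e3 W/m^2, the familiar "solar constant"
```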



It is this latter figure that is important in terms of how much the Earth is heated, not the temperature of the radiation field, because it determines how much energy per second is fed into the Earth's biosphere. An analogy is an electric bar heater: the element emits radiation at a temperature of 1000°C, but you don't burst into flames sitting more than a metre away from it, because the power is distributed over a larger area.



In the absence of an atmosphere, the Earth would reach an equilibrium temperature at which it radiates away as much energy as it absorbs. This equilibrium temperature is much lower than the temperature of the Sun's surface because the power per unit area incident on the Earth is much less than the power per unit area emitted at the Sun's surface. The calculation, which I won't bore you with, can be found here:
$$ T = T_{\odot}\,(1-a)^{1/4}\left(\frac{R_{\odot}}{2D}\right)^{1/2},$$
where $T_{\odot}$ is the temperature of the Sun, $R_{\odot}$ is the radius of the Sun, $D$ is the distance between the Earth and the Sun, and $a$ is the "albedo": the fraction of the Sun's light that is reflected back into space.



Putting in numbers ($a=0.3$, $T_{\odot}=5800~\mathrm{K}$, $R_{\odot}=7\times10^{8}~\mathrm{m}$, $D=1.5\times 10^{11}~\mathrm{m}$), we find $T = 256~\mathrm{K}$, or about $-17\,^{\circ}\mathrm{C}$. Of course, the presence of an atmosphere warms us above this simple estimate by some tens of degrees.
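The same numbers dropped into the formula, as a check:

```python
def equilibrium_temp_k(t_sun_k, albedo, r_sun_m, d_m):
    """Airless-planet equilibrium temperature:
    T = T_sun * (1 - a)**(1/4) * (R_sun / (2*D))**(1/2)."""
    return t_sun_k * (1 - albedo) ** 0.25 * (r_sun_m / (2 * d_m)) ** 0.5

# a = 0.3, T_sun = 5800 K, R_sun = 7e8 m, D = 1.5e11 m
print(equilibrium_temp_k(5800, 0.3, 7e8, 1.5e11))  # ~256 K
```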

Saturday, 18 September 2010

star - How did Bradley arrive at the +/- correct speed of light when he calculated it?

He didn't need to know the absolute positions of the stars. The change in their apparent position was sufficient to get a good estimate.
For velocities small in comparison to the speed of light, the shift in angle is still small.



For small angles the sine is proportional to the angle, with the error appearing only at third order. The first Taylor summands are
$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$$



The first-order approximation $\sin x \approx x$ can be applied to stars perpendicular, or at least apparently perpendicular, to the plane of the Earth's orbit, where the method is most accurate.
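This is how a speed-of-light estimate falls out: the aberration angle for such a star is $\theta \approx v/c$, so $c \approx v/\theta$. A sketch with Bradley's ~20.5 arcsecond aberration constant and Earth's orbital speed (both standard values, inserted by me):

```python
ARCSEC_PER_RAD = 206_265

def speed_of_light_from_aberration(v_orbit_ms, aberration_arcsec):
    """Estimate c from the stellar aberration angle, using sin(theta) ~ theta."""
    theta = aberration_arcsec / ARCSEC_PER_RAD  # radians
    return v_orbit_ms / theta

# Earth's orbital speed ~29.8 km/s, aberration constant ~20.5":
print(speed_of_light_from_aberration(29_800, 20.5))  # ~3.0e8 m/s
```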



Hence the underlying reason the method works is the smoothness of the sine.



This argument can be adjusted to less optimal conditions.

Thursday, 16 September 2010

Is there an element of chance/chaos in stellar evolution?

This is not an answer that stems from vast cosmological knowledge and experience, but from an intuitive mathematical understanding of stochasticity. I would point out that stars do not have the visual appearance of stability or a predictable nature. They are churning, belching balls of nuclear plasma. On a macroscopic scale, I would expect stability to look like smoother surfaces or uniform activity. Any object that you might describe as "constantly exploding" or even "on fire", or particularly, "undergoing nuclear reaction", I would ordinarily say involves an element of chance.



The flare and CME activity of our Sun is governed by a roughly 11-year cycle, but it's not exactly Old Faithful: the activity of the Sun follows patterns, but they are noisy patterns. However, even though stars are the very image of chaos to our eyes, they are actually quite predictable. One reason is that they live in a vacuum. Literally, or very nearly one. Particularly since your question eliminates external interactions, we are talking about stars like ours, surrounded mostly by a lot of nothing. So when plasma erupts from the star at less than escape velocity, we know that ultimately that plasma is coming right back home to the star. It's not going to smear onto some other object, and the goop from other objects isn't going to smear onto the star.



Second, the time-scale of a star's evolution is enormous. Imagine watching a timelapse movie of our Sun where each frame of the movie shows the average (mean) appearance of the sun over 100 eleven-year solar cycles. Now you would see something that looks quite smooth, stable. And you could watch that movie of a stable glowing ball all day long without seeing any dramatic change. (Assuming 24 frames per second, watching the movie all day long would take you through 2.3 billion years of the Sun's life.) Because the timescale in question is so long, the bubbling, churning, and exploding we see on our timescale amounts to unfathomably insignificant blips in a stable, predictable, burn process.



In general, any stochastic (random) event reproduced enough times, in the same way (with the same kind of randomness), will lead to stable, predictable outcomes. A man has solar panels on his roof. Some days are sunny, and some are overcast. Some days the panels get covered up completely in snow. But over the course of a year, we can still predict approximately how much power his panels will produce with about 98% accuracy, assuming there is no external influence, like the house burning down.



So yes, excluding external interactions, if you know the total mass and composition of a star at conception, it makes sense that the chaos and noise of the atomic fireball would have a net zero effect on the outcome.

Saturday, 11 September 2010

light - How to calculate the limiting magnitude of Hubble?

The table you linked to gives limiting magnitudes for direct observations through a telescope with the human eye, so it's definitely not what you want to use.



The quoted number for HST is an empirical one, determined from the actual "Extreme Deep Field" data (total exposure time ~ 2 million seconds) after the fact; the Illingworth et al. PDF you linked to explains how it was done. Briefly, they decided the limit would be defined by a signal-to-noise ratio of 5, and then figured out what the noise level in the final combined images was by integrating the total flux in many circular apertures of diameter 0.35 arc seconds placed in regions of blank sky background, and then computing the standard deviation ($\sigma$) of these measurements. Five times $\sigma$ was then the limiting flux. Knowing the photometric calibration (the conversion between observed counts and apparent magnitude; see the paper for specific details), they converted this $S/N = 5$ limit into an apparent magnitude.



If, instead, you want to estimate in advance what the limiting magnitude should be, then you have to consider several factors. Simplifying a bit, the equation for long-exposure, background-limited $S/N$ is
$$
S/N \approx \frac{s t}{ \sqrt{ s t + n_{\rm pix}\, b t } }
$$
where $s$ is the detected flux from the source (star, galaxy, etc.) in electrons (i.e., detected photons) per second, $t$ is the integration time in seconds, $b$ is the flux from the background in electrons per pixel per second, and $n_{\rm pix}$ is the number of pixels you integrate over for the measurement. $s$ can be computed from the combination of the object's apparent magnitude, the diameter of the telescope's mirror, and the overall efficiency of the telescope + detector system. $b$ can be determined in a similar fashion, using the estimated background brightness (which, for a telescope like HST, is mostly sunlight reflected from dust in the inner solar system). $n_{\rm pix}$ is a number chosen as a compromise between maximizing the amount of light from the source (higher $n_{\rm pix}$) and minimizing the noise from the background (lower $n_{\rm pix}$). This depends on the telescope's resolution (better resolution = smaller images of stars and other point-like sources = fewer pixels needed); the empirical calculations by Illingworth et al. used an aperture size amounting to about 100 pixels.
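As a sketch of how such a forecast might look, the relation can be solved in closed form for the faintest source with $S/N = 5$. All the specific numbers below (the zero-point count rate `s0`/`m0`, the background rate, the aperture size) are hypothetical placeholders of mine, not HST's actual values:

```python
import math

def limiting_magnitude(sn=5.0, t=2.0e6, b=0.05, n_pix=100, s0=1.0, m0=25.0):
    """Solve S/N = s*t / sqrt(s*t + n_pix*b*t) for the faintest detectable source.

    s0 is an assumed count rate (e-/s) for a source of magnitude m0.
    Substituting x = s*t turns the S/N condition into the quadratic
    x**2 - sn**2 * x - sn**2 * n_pix * b * t = 0.
    """
    sn2 = sn * sn
    x = (sn2 + math.sqrt(sn2 ** 2 + 4 * sn2 * n_pix * b * t)) / 2
    s = x / t  # limiting count rate in e-/s
    return m0 - 2.5 * math.log10(s / s0)

print(limiting_magnitude())  # ~30th magnitude for these made-up inputs
```

The encouraging part is that with a 2-million-second exposure and a plausibly faint background, the answer lands in the right ballpark for the Extreme Deep Field.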

Friday, 10 September 2010

Why is the speed of light 299,792,458 meters/sec?

Ok, I am majoring in physics (4th year) and I never understood this fundamental (kinda) question. Maybe I haven't explored it enough.



For example, why does it take 8 min 20 sec for the light from the sun to get to us?
I know the answer to this question on a 'surface' level. The sun is 1 AU away, c = 3E8 m/s, and t = d/c gives approx 8 min 20 sec.



My question is on a deeper level.



Say you could "ride a photon" (I KNOW THIS IS IMPOSSIBLE, but just say you could). Or a better question: what does a photon experience? The photon, as per my understanding, would leave the sun and (if it is on the right trajectory) hit the Earth instantaneously. A photon leaving Alpha Centauri would see the universe all at once, in an infinitesimally small unit of time (if directed out to space).



If a photon sees everything all at once, why do we perceive it to have a speed? I am sure this has something to do with frames of reference, special relativity, Lorentz transforms, but it just seems strange. Why is the speed of light finite to us, and if it were infinite, would that be problematic?

Wednesday, 8 September 2010

orbit - Review Content for my Comet Infographic

I've been hard at work in recent months on an infographic that details the anatomy of a comet... I'm a graphic designer and this is a fun little side-project of mine. It's only quite basic in terms of information (it's the design itself that I've been slaving over).



Astronomy is something I've always found pretty fascinating (hence me deciding to work on this project) but I'm far from an expert. So I thought I'd post the content of my infographic on here just to check I've got the right idea, and not got anything drastically wrong! The content that I've come up with is based on my own research that I've done online.



If anyone could let me know what they think that'd be much appreciated:




01/ Nucleus — Sometimes referred to as a ‘dirty snowball’, the nucleus
is the centre of a comet—made up of rock, dust, and frozen gases. As
the comet gets closer to the sun, these gases sublimate and form the
‘coma’ around the comet.



02/ Coma — The high-energy particles from the solar wind interact with
the ice and dust from the comet’s nucleus, resulting in a fuzzy
appearance known as a coma. A coma will form a cloud around the comet
and will typically grow in size as the comet approaches the sun.



03/ Ion Tail — A bright tail composed of ionised gas molecules. The ion
tail is pushed by the solar wind, so it is straight and always points
directly away from the sun. Like dust tails, ion tails can also be very
large, with the longest recorded being over 570 million kilometres.



04/ Dust Tail — A tail composed mainly of dust particles, released from
the nucleus as the ices are vaporised by the sun. The dust tail curves
around the orbital path of the comet and can extend hundreds of
millions of kilometres in length.




Does that all sound about right? I'm a little confused about the lengths of the tails: I'm not sure if that world record is strictly for an ion tail or a dust tail, or both?



And in case you're wondering what the infographic looks like... here it is!




orbit - Is the ratio between earth's distance from the sun and the speed of light just a coincidence?

I was doing some calculations to see how hard it would be to observe the speed of light, and discovered an interesting correlation between the speed of light and the average distance from the Earth to the Sun. In my example, an object would be placed in orbit around the Sun so that its shadow would pass over the Earth at the speed of light. After doing some math, I found that an object roughly 1 km above the Sun, orbiting at 2 m/s, would achieve this. I thought these numbers were a bit odd, so I took a closer look at the speed of light (299,792,458 m/s) and the AU (149,597,871 km). I found that 299,792,458 / 149,597,871 = 2 to within about 0.2%. To me, this seems too close to be chance. Is there an established relationship here, or is this just luck?
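For what it's worth, the arithmetic checks out, but only because of the mixed units; a couple of lines make the unit-dependence explicit:

```python
C = 299_792_458      # m/s, speed of light
AU_KM = 149_597_871  # km, astronomical unit

mixed = C / AU_KM                # m/s divided by km: ~2.004
consistent = C / (AU_KM * 1000)  # both quantities in metres: ~2.004e-3
print(mixed, consistent)
# The near-integer "2" only appears when c is in m/s and the AU is in km;
# expressed in consistent units the ratio is ~0.002, so the "2" is a
# unit artefact rather than a physical relationship.
```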

Tuesday, 7 September 2010

solar system - Do we get any benefit from being in a galaxy?

I've had my eye on this question for some time, but I haven't had the time to answer it. Sorry for not getting to it sooner.



As zibadawa timmy said, you'd be hard-pressed to find the materials necessary for a star system if you're outside a galaxy. For one thing, there aren't any stellar nurseries in the intergalactic void (well, none that we know of. But I'd guess it's highly unlikely that they exist plentifully). So you have no hydrogen or helium, or dust, or other elements - in fact, you have nothing that you could use. That's a wee bit of a problem.



There is a way you could get around this, though - if your star system is in a globular cluster. They're groups of stars that orbit a galaxy. We don't know just how they got out there. They could be from dwarf galaxies, which then lost them, or they could have been partially chucked out of the larger galaxy but stayed in an orbit. The trouble is, as Wikipedia says,




However, they are not thought to be favorable locations for the survival of planetary systems. Planetary orbits are dynamically unstable within the cores of dense clusters because of the perturbations of passing stars. A planet orbiting at 1 astronomical unit around a star that is within the core of a dense cluster such as 47 Tucanae would only survive on the order of $10^8$ years.




Yep. Not a great place to live.



But I take it you didn't intend for our solar system to be in a globular cluster, so I'll go back into intergalactic space. It's not actually a vacuum - not that most parts of outer space are. There's a lot of plasma and hot gas there (the intergalactic medium), sometimes getting to a toasty $10^5$ to $10^7\text{ K}$. Could it harbor a stellar system? That depends on just what material the star brings with it. If planets haven't formed around it yet, and the star has a protoplanetary disk, then perhaps that disk could dominate over the intergalactic medium. Remove the IGM and maybe planetary formation could proceed normally.



What happens then? Well, there would be some interesting effects. For example, the Oort Cloud might not have become spherical, because it was partly deformed by the galactic tide. Here, tidal forces probably wouldn't be as strong (although this depends in part on just how far away we are from the galaxy). We probably wouldn't have a lot of negative effects, because we don't have a lot of extreme benefits here in the Milky Way. Our galaxy does not appear to have an AGN, there aren't any monstrous black holes nearby, and we aren't undergoing any catastrophic galactic collisions. And the Milky Way-Andromeda collision won't be too bad.



We probably wouldn't be in too bad a position if we were in intergalactic space. There aren't a lot of benefits to being in a galaxy - well, no significant ones, at least - besides star formation depositing all the materials we need for the solar system to form. That's a biggie.

Sunday, 5 September 2010

the moon - Is it possible the Nebular Hypothesis and Planetesimal Theory are not correct?

TidalWave says it perfectly:




"What is the question? So far, all I can think of as an answer is explaining the meaning of scientific theory as an integral part of the scientific method. Of course it is possible it is not correct, science doesn't function on dogmatism. Theories are frequently corrected, expanded, or even completely dismissed. Having a status of the most accepted theory often means it's just the second best thing to the one that'll withstand the trial of reality better. Theory != Axiom."




The nebular hypothesis is simply the best model that fits our observations - that's how the scientific method works. In the last decade, we have been discovering and analysing increasing numbers of planetary and protoplanetary systems. Our models of the formation of these systems are continually refined as both our observations improve and our computing power increases.




"When I ask Senior Astronomers in the U.S. who specialize in Planetary Evolution to review the concept, they personally insult me for even asking rather than employ any scientific theory or scientific methods to evaluate it."




You're insulting them, by asking them to justify why the work they base their livelihood upon isn't a load of rubbish.



You cannot justifiably criticize an accepted scientific theory without coming up with an equally plausible idea (unless you manage to prove that something is wrong). And unless someone does come up with a better idea, the nebular theory will continue to expand and evolve to better match our observations. That is the scientific process.

Friday, 3 September 2010

star - Is gravitational energy released when a body contracts?

Gravitational contraction will always release gravitational potential energy. In most systems, where this happens slowly, you can apply the virial theorem to say that half the released energy is radiated and half is used to heat the contracting gas.



The real question is whether the release of gravitational PE is significant compared with other sources. In most stars it is not. Nuclear fusion is responsible for extending their lives way beyond the Kelvin-Helmholtz timescale - the amount of GPE divided by their luminosity.
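For the Sun, that Kelvin-Helmholtz timescale ($t_{KH} \sim GM^2/(RL)$) comes out to tens of millions of years, far short of its roughly ten-billion-year main-sequence life; a quick estimate with standard solar values (inserted by me):

```python
G = 6.674e-11     # m^3 kg^-1 s^-2
M_SUN = 1.989e30  # kg
R_SUN = 6.957e8   # m
L_SUN = 3.828e26  # W
YEAR = 3.156e7    # s

# Gravitational potential energy ~ G*M^2/R, divided by the luminosity:
t_kh = G * M_SUN ** 2 / (R_SUN * L_SUN) / YEAR
print(t_kh)  # ~3e7 years: GPE alone cannot power the Sun for billions of years
```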



In some stars, though, GPE is hardly released at all. The ability to contract depends on how the gas's pressure behaves as a function of temperature. If pressure is independent of temperature, then no contraction is possible, even as the object radiates away its energy and even if it has no other energy sources.



This is the case for white dwarfs supported by electron degeneracy pressure. They cannot contract further and will simply cool down at constant radius.



Jupiter is an interesting intermediate case. Yes it is contracting, but not at the rate you would expect for a perfect gas. Eventually, its contraction will slow right down and it will just cool.

Thursday, 2 September 2010

photography - What is the relationship between the focal length and f number of a telescope compared to a guide scope?

Don't worry about the main telescope: there doesn't have to be any relationship between the focal length and f-ratio of the telescope and those of the guidescope. This is demonstrated by off-axis guiders, which guide at the same focal length as the main telescope itself. They just come with the headache of finding guide stars.



What is important is: can the guidescope provide data for the controlling software to accurately automate the telescope?



You want a wide, fast field of view, so that you are guaranteed to be able to find a bright star to guide by.



You want a camera on the guidescope that has sufficiently small pixels that the guide star that gets picked will at least cover a few pixels - the more the better. To some extent, this conflicts with the wish for a wide, fast view, as that will result in smaller stars on the sensor. This can be mitigated by ever so slightly de-focusing the guidescope so the stars form fuzzier, larger circles on the sensor.



What matters more than the focal length of your telescope will be: (a) what the limit of your seeing tends to be, (b) the accuracy with which your mount can be controlled, and (c) how accurately the guiding software can locate the guide star's centroid (usually a fraction of a pixel - use maybe 0.2 pixels as a worst case, if unknown).



There is not much point in being more accurate than the seeing permits.



The formula for figuring out how much each pixel sees is:
$$
\mathrm{px_{fov}} = \frac{206 \times \mathrm{px_{\mu m}}}{f_l}
$$
where $f_l$ is the focal length of the telescope in millimetres and $\mathrm{px_{\mu m}}$ is how large each pixel is on the sensor, in microns; the result is the sky coverage of one pixel in arcseconds.
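As a quick sketch of that formula in Python (the pixel size and focal length below are assumed example values, not from the answer):

```python
def pixel_fov_arcsec(pixel_microns, focal_length_mm):
    """Sky coverage of one pixel in arcseconds per pixel.

    The factor ~206 comes from 206265 arcsec per radian, with the
    micron/millimetre unit mismatch folded in.
    """
    return 206.0 * pixel_microns / focal_length_mm

# Example (assumed values): 3.75 micron pixels behind a 200 mm guidescope.
print(pixel_fov_arcsec(3.75, 200.0))  # ~3.86 arcsec per pixel
```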



You can therefore estimate a suitable focal length guide scope this way:
$$
f_l = \frac{\mathrm{px_{\mu m}}}{res_{seeing}/G_{acc}} \times 206
$$
where $\mathrm{px_{\mu m}}$ is the size of the sensor pixels in microns, $res_{seeing}$ is how accurate you need the guiding to be (in arcseconds), and $G_{acc}$ is the guider software's centroid accuracy in pixels (use 0.2 as a worst case if unknown).
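The same estimate can be sketched in Python; the pixel size and seeing figure below are assumed example values chosen for illustration:

```python
def guide_focal_length_mm(pixel_microns, seeing_arcsec, centroid_acc_px=0.2):
    """Estimate a suitable guidescope focal length (mm) from the formula
    f_l = px_microns / (res_seeing / G_acc) * 206, where res_seeing is
    the required guiding accuracy in arcsec and G_acc is the guider
    software's centroid accuracy in pixels (0.2 px as a worst case)."""
    return pixel_microns / (seeing_arcsec / centroid_acc_px) * 206.0

# Example (assumed values): 3.75 micron pixels, 2 arcsec seeing limit.
print(guide_focal_length_mm(3.75, 2.0))  # ~77 mm
```

In other words, with a sub-pixel centroiding guider there is no need for the guidescope's focal length to approach that of the imaging telescope.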

Wednesday, 1 September 2010

constellations - Which is really larger, Big Dipper or Small Dipper, in 3D

This obviously depends on which stars you choose, but in general Ursa Major is likely to win by a factor of about five(ish), since it occupies a much larger chunk of the sky (1280 square degrees) than Ursa Minor (256 square degrees). At a fixed limiting magnitude, and therefore at a fixed visibility radius for each intrinsic luminosity, the volume will be (very roughly) proportional to the area of each constellation on the celestial sphere. (There are subtleties, and this only rigorously holds when there are many stars, but it is a good rule of thumb.)
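As a quick sanity check of that rule of thumb, compare the ratio of the quoted constellation areas with the ratio of the convex-hull volumes computed later in this answer:

```python
# Figures quoted in this answer: areas in square degrees,
# bright-star convex-hull volumes in AU^3.
area_uma, area_umi = 1280.0, 256.0
vol_uma, vol_umi = 9.20133e20, 1.7105e20

print(area_uma / area_umi)  # 5.0
print(vol_uma / vol_umi)    # ~5.4
```

The two ratios agree to within about ten percent, which is as good as one could hope for from such a rough scaling argument.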



The volumes themselves can quite easily be calculated in Mathematica using the curated data. There is one easy way, which selects what Wolfram Research thinks are the 'bright stars' in each constellation. If you do that, you get



$$\mathrm{Vol}(\mathrm{UMa})=9.20133\times 10^{20}\,\mathrm{AU}^3,\quad
\mathrm{Vol}(\mathrm{UMi})=1.7105\times 10^{20}\,\mathrm{AU}^3$$



which is the volume of the convex hull of the stars shown here:



Mathematica graphics



If you have Mathematica (v10+ only, I think) and you want to tinker with the code, here it goes:



RegionMeasure[
  ConvexHullMesh[
    StarData[
      ConstellationData[Entity["Constellation", #],
        EntityProperty["Constellation", "BrightStars"]],
      "HelioCoordinates"]]
  ] & /@ {"UrsaMajor", "UrsaMinor"}



What one should really do is set a threshold magnitude and only count stars brighter than this, and then see the dependence of the volumes on the magnitude threshold. If you do this, you get pretty similar results:



Mathematica graphics



That is, Ursa Major has a consistently higher volume at all naked-eye magnitude thresholds.



For the curious, here's the code.



nakedEyeStarProperties =
  StarData[#, {"Name", "Constellation", "ApparentMagnitude",
      "HelioCoordinates", "RightAscension", "Declination"}
    ] & /@ StarData[EntityClass["Star", "NakedEyeStar"]]

constellationVolume[constellation_, magnitude_] :=
  Block[{selectedStars},
    selectedStars =
      Select[nakedEyeStarProperties,
        And[#[[2, 2]] == constellation, #[[3]] < magnitude] &];
    RegionMeasure[ConvexHullMesh[selectedStars[[All, 4]]]]
  ]

ListLogPlot[
  Table[
    {{m, constellationVolume["UrsaMajor", m]}, {m,
      constellationVolume["UrsaMinor", m]}}
    , {m, 3, 6.5, 0.1}]\[Transpose]
  , AxesLabel -> {"magnitude threshold",
    "constellation volume/\!\(\*SuperscriptBox[\(AU\), \(3\)]\)"}
  , PlotLegends -> {"Ursa Major", "Ursa Minor"}
  ]