Tuesday, 31 January 2012

Do magnetic fields affect planetary rings?

(I will assume that these planets orbit a star.) Planetary rings are formed by the gravitational capture of small objects, mostly ice and dust. These particles can be magnetic, and in such cases they would be affected by the planetary magnetic field. The field is not really uniform, but let us assume that it is. The presence of a magnetic field could matter for the sustenance of a ring, but the stability of the ring is an entirely different issue. If the ring does consist of magnetically sensitive material, then large enough fluctuations in the magnetic field can in principle make the ring unstable. So, depending on the answer to the stability question, I would say yes, the ring does depend on the magnetic field. Calculating the stability would be a complex n-body problem, but it can be averaged over.

n body simulations - What is a possible software for simulating binary star systems?

mikeonly,



I stumbled across the PHOEBE code the other day in another response on this forum and started looking into it.



"PHOEBE stands for PHysics Of Eclipsing BinariEs. It is a tool for modeling eclipsing binary stars based on photometric, spectroscopic, interferometric and/or polarimetric data." -- abstracted from About Phoebe webpage.



Perhaps this can fulfill your needs. I've not yet installed and looked at it.
It is developed by a group of professional astronomers from around the world.



Tom Kosvic

Friday, 27 January 2012

observational astronomy - What study profiles could land me the job of astronomer?

My career path: 3-year Bachelor's degree in "Physics with Astrophysics"; PhD in X-ray astronomy; 5 years as a postdoctoral research assistant (two separate posts); then a lectureship at a UK university doing teaching and research in Physics and Astrophysics.



This is reasonably typical. These days, the content of the first degree is not so important - Physics, Astrophysics, Applied Maths all would be ok. "Astronomy" would put you at a disadvantage, since the implication is a non-physical, observational approach; but you would have to look at the course content.



A masters degree or 4-year first degree is usually necessary to get onto PhD programmes in the best places (this has changed since my day). Doing your PhD quickly and writing several publications is usually necessary to proceed any further.



The normal next step is to get a postdoctoral position; preferably somewhere other than your PhD institute. Then after 2-3 years of producing more research papers (2-3 per year), you could try for personal research fellowships. If you can get one of these, or perhaps a second/third postdoc position, and your research is going well and is productive, then there is a few year window in which to get into a tenured or tenure-track position. Getting some teaching experience at this stage is probably important.



For someone on a "normal" career path, it would be unusual to get a University position before the age of 30 (i.e. 8-9 years after your first degree). The large majority of people with a PhD in Astrophysics do not end up doing that for a living.

Thursday, 26 January 2012

newtonian mechanics - Elastic collisions and conservation of momentum

In an elastic collision the masses of both objects, the total kinetic energy, and the total linear momentum are conserved. The kinetic energy has contributions from the motions of the objects as well as their rotations. If we assume that no exchange between these two forms of kinetic energy occurs, i.e. that both forms are separately conserved, we have
$$
m_1\boldsymbol{v}_1 + m_2\boldsymbol{v}_2 = m_1\boldsymbol{w}_1 + m_2\boldsymbol{w}_2
$$
and
$$
\tfrac{1}{2}m_1\boldsymbol{v}_1^2 + \tfrac{1}{2}m_2\boldsymbol{v}_2^2 = \tfrac{1}{2}m_1\boldsymbol{w}_1^2 + \tfrac{1}{2}m_2\boldsymbol{w}_2^2
$$
where $\boldsymbol{v}$ and $\boldsymbol{w}$ denote the velocities before and after the collision, respectively. Thus, we have 4 equations (3 components of momentum and the energy) for 6 unknowns (3+3 components of the post-collision velocities). Consequently, the above equations (in conjunction with $\boldsymbol{v}_{1,2}$) do not uniquely constrain the $\boldsymbol{w}_{1,2}$. Some other information is required to determine the relative direction. This depends on the properties of the objects and the point of impact.



The above system of equations is invariant under a Galilean transformation, i.e. a change of the origin of velocity. They become particularly simple in the frame in which the total momentum vanishes. Let a prime denote velocities in that frame, then
$$
\boldsymbol{v}' = \boldsymbol{v} - \boldsymbol{V},\qquad
\boldsymbol{V} = \frac{m_1\boldsymbol{v}_1+m_2\boldsymbol{v}_2}{m_1+m_2}
$$
and we have
$$
m_1 \boldsymbol{w}'_1+m_2 \boldsymbol{w}'_2=0,\quad
|\boldsymbol{w}'_1| = |\boldsymbol{v}'_1|,\quad
|\boldsymbol{w}'_2| = |\boldsymbol{v}'_2|
$$
(these equations are not independent; there are still only 4 independent scalar constraints). In particular, the speeds remain the same in this frame.



In 1D we have 2 equations for 2 unknowns and hence the solution is completely determined (also, there is no rotation in 1D). Since a collision requires $w'_{1,2}\neq v'_{1,2}$, we have
$w'_{1,2}=-v'_{1,2}$. Transforming back to the original frame, this gives
$$
w_{1,2} = 2 V - v_{1,2}.
$$
Requiring the speeds to remain the same ($w_{1,2}=-v_{1,2}$) gives $V=0$. Thus in 1D, the speeds remain the same if and only if the total momentum vanishes.
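As a quick numerical illustration of the 1D result (a minimal sketch; the masses and velocities below are arbitrary values I have chosen, not part of the original answer):

```python
# Minimal sketch: 1D elastic collision, checking w = 2V - v.
# Masses and velocities are arbitrary illustrative values.

def elastic_1d(m1, v1, m2, v2):
    """Return post-collision velocities (w1, w2) for a 1D elastic collision."""
    V = (m1 * v1 + m2 * v2) / (m1 + m2)   # centre-of-momentum velocity
    # In the zero-momentum frame the velocities simply reverse: w' = -v'.
    # Transforming back to the original frame gives w = 2V - v.
    return 2 * V - v1, 2 * V - v2

m1, v1 = 2.0, 3.0    # kg, m/s (arbitrary)
m2, v2 = 1.0, -1.0

w1, w2 = elastic_1d(m1, v1, m2, v2)

# Check conservation of momentum and kinetic energy.
assert abs(m1 * v1 + m2 * v2 - (m1 * w1 + m2 * w2)) < 1e-12
assert abs(0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
           - (0.5 * m1 * w1**2 + 0.5 * m2 * w2**2)) < 1e-12
print(w1, w2)   # 0.333..., 4.333...
```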

galactic dynamics - Could discoid galaxies be expanding?

I understand that astronomers once thought that the material in the disc of a galaxy was moving around the galactic centre (where most of the mass was thought to be) in roughly circular orbits. The circular motions would be explained by the combination of inertia and gravitational acceleration towards the centre. This follows the Kepler/Newton model which describes/explains the orbits of planets around the Sun. In a Kepler/Newton system, the tangential or transverse velocity of a low-mass orbiting object depends on its radial distance from the central high-mass object. Objects further out must have lower velocities so that the weaker centrally-directed acceleration at that distance can keep the object moving in a roughly circular track.



In the centre of the galaxy the velocities are low and increase rapidly with distance away from the centre. This is understandable as the mass of the central bulge of the galaxy is not concentrated at the centre, it is spread out over a relatively large volume.



However it is now well-known that the measured velocities of visible material in the disks of discoidal galaxies do not vary as expected for a Kepler/Newton system. At a certain distance where the velocities would be expected to start decreasing (as per curve A) they either level out or continue increasing at a slow rate (as per curve B).



[Figure: galaxy velocity rotation curve]
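Since the figure itself is missing, here is a minimal sketch of the two curves being contrasted, a Keplerian $v \propto 1/\sqrt{r}$ decline (A) versus a roughly flat curve (B); the point-like central mass and all numbers are purely illustrative assumptions, not taken from the post:

```python
# Minimal sketch (illustrative values only): Keplerian decline vs. a flat curve.
import numpy as np
import matplotlib.pyplot as plt

G = 6.674e-11                  # m^3 kg^-1 s^-2
M = 1e11 * 1.989e30            # assume ~1e11 solar masses concentrated centrally
r = np.linspace(1, 30, 300) * 3.086e19    # 1-30 kpc in metres

v_kepler = np.sqrt(G * M / r) / 1e3       # curve A: v ~ 1/sqrt(r), in km/s
v_flat = np.full_like(r, v_kepler[50])    # curve B: observed-like flat profile

plt.plot(r / 3.086e19, v_kepler, label="A: Keplerian expectation")
plt.plot(r / 3.086e19, v_flat, "--", label="B: flat (observed-like) curve")
plt.xlabel("radius (kpc)"); plt.ylabel("circular velocity (km/s)")
plt.legend(); plt.show()
```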



The current explanation is that a large mass of Dark Matter extends throughout the galaxy in such a way as to produce non-Keplerian behavior.



QUESTION



But isn't there another possible explanation of the non-Keplerian velocity profile, namely that the ordinary detectable material (e.g. gas, dust, stars) in the disk is not completely gravitationally bound to the galactic centre and is actually gradually moving outwards along a spiral path?

Wednesday, 25 January 2012

gravity - Why do we have the cosmological constant?

Let's look at the Friedmann equations without the cosmological constant.



$$ \frac{\dot{a}^2}{a^2} = \frac{8 \pi G \rho}{3}-\frac{kc^2}{a^2}$$



The term on the LHS is just the Hubble parameter squared, $H^2$, which can be measured by direct measurement of the recession velocities of galaxies.



The density term can be said to be a combination of $\rho_{matter}+\rho_{dark\,matter}$, both of which can be measured directly: $\rho_{matter}$ by observation of matter in our galaxy and other galaxies, and $\rho_{dark\,matter}$ from the rotation curves of galaxies.



The curvature constant $k$ can be estimated today by the anisotropy measurements in the CMBR.



As it turns out, the parameters don't fit, and we need more mass-energy in the universe (almost 2-3 times what we had estimated).
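To put rough numbers on that budget, here is a minimal sketch; the Hubble constant and density fractions are commonly quoted values that I am assuming, not figures from the original answer:

```python
# Minimal sketch: how much mass-energy the measured densities leave unaccounted for.
# H0 and the Omega values below are assumed, commonly quoted figures.
import math

G = 6.674e-11                        # m^3 kg^-1 s^-2
H0 = 70 * 1e3 / 3.086e22             # 70 km/s/Mpc converted to s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)    # critical density for a flat universe
print(f"critical density ~ {rho_crit:.2e} kg/m^3")   # ~9e-27 kg/m^3

omega_matter = 0.05 + 0.27           # assumed baryonic + dark matter fractions
omega_missing = 1.0 - omega_matter   # what a flat universe still needs
print(f"matter supplies ~{omega_matter:.2f} of rho_crit;"
      f" ~{omega_missing:.2f} is unaccounted for (dark energy / Lambda)")
print(f"missing / matter ~ {omega_missing / omega_matter:.1f}x")   # roughly 2x
```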



So along comes dark energy, or basically the cosmological constant. The cosmological constant and dark energy are just two ways of looking at the same term in the equation, either as a mere constant or as a form of mass-energy (though we have solid reasons to believe the latter).



And this is our picture of the universe today.



Now historically the cosmological constant was necessary for an altogether different reason.



The second Friedmann equation without the cosmological constant reads:



$$ \frac{\ddot{a}}{a} = -\frac{4 \pi G}{3}\left(\rho+\frac{3p}{c^2}\right) $$



Now this predicts that, for normal types of matter, the universe must decelerate ($\ddot{a}<0$).



Then people measured the redshifts of Type Ia supernovae and found the quite paradoxical result that the expansion of the universe is accelerating.






Since normal matter can't explain this type of behaviour, we again have to look to dark energy (or the cosmological constant). With the cosmological constant, the equation becomes:



$$ \frac{\ddot{a}}{a} = -\frac{4 \pi G}{3}\left(\rho+\frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3} $$



Thus $\ddot{a}>0$ is possible.
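A one-line check (my own sketch, reusing the assumed density fractions from above) that the $\Lambda$ term indeed flips the sign of $\ddot{a}$ today:

```python
# Minimal sketch: sign of the acceleration today, q0 = Omega_m/2 - Omega_Lambda.
# For pressureless matter p = 0 and for Lambda rho = -p, so the second
# Friedmann equation makes a''/a proportional to -(Omega_m/2 - Omega_Lambda).
omega_m, omega_lambda = 0.32, 0.68      # assumed present-day values
q0 = omega_m / 2 - omega_lambda
print(q0)    # about -0.52: negative, so a'' > 0 (accelerating expansion)
```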



Therefore the cosmological constant is necessary to both explain the current rate of expansion and the accelerated expansion.



So finally the accelerated expansion can be explained, and today we have the $\Lambda$CDM model of the universe.



References:



1:http://en.wikipedia.org/wiki/Friedmann_equations



2:http://hyperphysics.phy-astr.gsu.edu/hbase/astro/univacc.html



3:http://en.wikipedia.org/wiki/Lambda-CDM_model

astrophysics - Pulsars: How do astronomers measure minute changes in period (~picoseconds per year)?

Let us suppose that the pulsar is spinning down at a uniform rate. So it has a period $P$ and a rate of change of period $dP/dt$ that is positive and constant (in practice there are also second, third, fourth etc. derivatives to worry about, but this doesn't change the principle of my answer).



Now let's assume you can measure the period very accurately - say you look at the pulsar today and measure its radio signals for a few hours, do a Fourier transform of the signal and get a nice big peak with a period of 0.1 seconds (for example).



With that period, you can "fold" the data to create an average pulse profile. This pulse profile can then be cross-correlated with subsequent measurements of the pulse to determine an offset between the predicted time of "phase zero" in the profile, calculated using the 0.1 s period, and the actual time of phase zero. This is often called an "O-C" curve or a residuals curve.



If you have the correct period and $dP/dt=0$, then the residuals will scatter randomly around zero with no trend as you perform later and later observations (see plot (a) from Lorimer & Kramer 2005, The Handbook of Pulsar Astronomy). If the initial period was in error, then the residuals would immediately begin to depart from zero on a linear trend.



If however, you have the period correct, but $dP/dt$ is positive, then the residuals curve will be in the form of a parabola (see plot (b)).



If you have second, third etc. derivatives in the period, then this will affect the shape of the residuals curve correspondingly.



The residuals curve is modelled to estimate the size of the derivatives of $P$. The reason that $dP/dt$ can be measured so precisely is that pulsars spin fast and have repeatable pulse shapes, so changes in the phase of the pulse quickly become apparent and can be tracked over many years.
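A minimal simulation sketch (my own illustration, with made-up pulsar parameters rather than values from the answer) of how a constant $dP/dt$ produces a parabolic residuals curve:

```python
# Minimal sketch: simulated O-C (timing residual) curve for a spinning-down pulsar.
# The pulsar parameters below are illustrative assumptions.
import numpy as np

P0 = 0.1           # folding period measured at t = 0, in seconds
Pdot = 1e-18       # assumed constant spin-down rate (dimensionless, s/s)
one_year = 3.156e7

# True pulse arrival times: the N-th pulse arrives after N periods whose
# length drifts linearly, so t_N ~ N*P0 + 0.5*N**2*P0*Pdot.
N = np.arange(0, int(one_year / P0), 100_000)    # sample every 100,000th pulse
t_true = N * P0 + 0.5 * N**2 * P0 * Pdot

# Predicted arrival times if we (wrongly) assume the period stays P0.
t_pred = N * P0

residual = t_true - t_pred    # the O-C curve, growing like 0.5*Pdot*t^2/P0
# A parabola in time, as in plot (b); ~5 ms of accumulated drift after a year here.
print(residual[-1], 0.5 * Pdot * t_true[-1]**2 / P0)
```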



[Figure: pulsar timing residual curves]



Mathematically it works something like this. The phase $\phi(t)$ is given by
$$\phi(t) \simeq \phi_0 + 2\pi \frac{\Delta t}{nP} - \frac{2\pi}{2}\frac{(\Delta t)^2}{nP^2} \frac{dP}{dt} + ...,$$
where $\phi_0$ is an arbitrary phase zero, $\Delta t$ is the time between the first and last observation and $n$ is the integer number of full turns the pulsar has made during that time. If the period is approximately correct, then $n = \mathrm{int}(\Delta t/P)$.



The "residual curve" would be given by
$$phi_0 - 2pifrac{Delta t}{nP} -phi(t) simeq frac{2pi}{2}frac{(Delta t)^2}{nP^2} frac{dP}{dt} + ...,$$



For example, if the period of a $P \sim 0.01$ second pulsar changed by a picosecond in a year, then there would be an accumulated residual of almost $10^{-4}$ seconds after 1 year of observation. Depending on how "sharp" the pulse is, this shift of about 1% in the phase of the pulse might be detectable.



Perhaps needless to say, but there are a host of small effects and corrections to make in order to get this very high precision timing. You need to know exactly how the Earth is moving in its orbit. The proper motion of the pulsar on the sky also has an effect. These and more can be found in Lorimer and Kramer's book, but there is also a summary here.

Tuesday, 24 January 2012

big bang theory - Could the universe have evolved differently?

I don't think your attempt to distinguish the micro and macro scales is helpful. As it stands, the physical theories (quantum mechanics etc.) are deterministic, that is, the initial conditions pre-determine the entire future: there is no random element ("Gott würfelt nicht" -- God does not throw dice -- to quote Albert Einstein).



Of course, this is debatable, but it is certainly beyond astronomy, more a matter of (meta)physics and philosophy ...




Edited to respond to the comment, effectively rephrasing the question to consider a "slightly different" early state of the universe.



Well, it pretty much depends what you mean by "slightly different". If the laws of physics are completely untouched, I would think that the universe would evolve statistically in exactly the same way. This is because all the statistical characteristics of the early universe (the power spectra and correlation functions of the various energy densities, for example) come about from the laws of physics (via natural instabilities, for example) rather than the initial state.



However 1, our understanding of the laws of physics and their deeper origin is still very limited. This means that it is not clear whether a slightly different early state with exactly the same laws of physics is actually at all possible. This is particularly true for the nature of gravity and the vacuum, neither of which is currently properly understood at the quantum mechanical level.



However 2, one has to bear in mind that we don't know whether the universe is finite or infinite. If it is infinite (and the observational evidence from our finite patch of the universe indicates that it has a flat or open, but not closed, spacetime), we cannot really make any sensible conclusions from our finite observable patch. This includes this answer!
Except, of course, if we invoke an extended version of the cosmological principle, something like "an infinite universe must be the same everywhere", which is (meta)physical nonsense. This is the main reason why I don't like cosmology too much.

Monday, 23 January 2012

Fourier Transform of Galaxy Images

I'm working on detecting optically overlapping galaxies for future astronomical surveys. I'm looking into feature extraction for HST images and have been computing the Fourier transform of images in the hope of constructing a metric signifying overlapping vs non-overlapping light profiles.



For example, I have the following image after having filtered out noise:



[Figure: the noise-filtered image]



I've been having trouble interpreting the results after taking the Fourier transform, taking the magnitude of each element, and then plotting the logarithm of the magnitude.



[Figure: logarithm of the magnitude of the Fourier transform]

Saturday, 21 January 2012

cosmology - How long would it take for a rogue planet to evaporate in the late stages of the Universe?

This page by physicist John Baez explains what will happen in the long term to bodies that aren't massive enough to collapse into black holes, like rogue planets and white dwarfs, assuming they don't cross paths with preexisting black holes and get absorbed. Short answer: they'll evaporate, for reasons unrelated to Hawking radiation. It's apparently just a thermodynamic matter, presumably due to the internal thermal energy of the body periodically causing particles on the surface to randomly get enough kinetic energy to achieve escape velocity and escape the body (the wiki article here mentions this is known as 'Jeans escape'). Here's the full discussion:




Okay, so now we have a bunch of isolated black dwarfs, neutron stars, and black holes together with atoms and molecules of gas, dust particles, and of course planets and other crud, all very close to absolute zero.



As the universe expands these things eventually spread out to the point where each one is completely alone in the vastness of space.



So what happens next?



Well, everybody loves to talk about how all matter eventually turns to iron thanks to quantum tunnelling, since iron is the nucleus with the least binding energy, but unlike the processes I've described so far, this one actually takes quite a while. About $10^{1500}$ years, to be precise. (Well, not too precise!) So it's quite likely that proton decay or something else will happen long before this gets a chance to occur.



For example, everything except the black holes will have a tendency to "sublimate" or "ionize", gradually losing atoms or even electrons and protons, despite the low temperature. Just to be specific, let's consider the ionization of hydrogen gas — although the argument is much more general. If you take a box of hydrogen and keep making the box bigger while keeping its temperature fixed, it will eventually ionize. This happens no matter how low the temperature is, as long as it's not exactly absolute zero — which is forbidden by the 3rd law of thermodynamics, anyway.



This may seem odd, but the reason is simple: in thermal equilibrium any sort of stuff minimizes its free energy, E - TS: the energy minus the temperature times the entropy. This means there is a competition between wanting to minimize its energy and wanting to maximize its entropy. Maximizing entropy becomes more important at higher temperatures; minimizing energy becomes more important at lower temperatures — but both effects matter as long as the temperature isn't zero or infinite.




[I'll interrupt this explanation to note that any completely isolated system just maximizes its entropy in the long term, this isn't true for a system that's in contact with some surrounding system. Suppose your system is connected to a much bigger collection of surroundings (like being immersed in a fluid or even a sea of cosmic background radiation), and the system can trade energy in the form of heat with the surroundings (which won't appreciably change the temperature of the surroundings given the assumption the surroundings are much larger than the system, the surroundings being what's known as a thermal reservoir), but they can't trade other quantities like volume. Then the statement that the total entropy of system + surroundings must be maximized is equivalent to the statement that the system alone must minimize a quantity called its "Helmholtz free energy", which is what Baez is talking about in that last paragraph--see this answer or this page. And incidentally, if they can trade both energy and volume, maximizing the total entropy of system + surroundings is equivalent to saying the system on its own must minimize a slightly different quantity called its "Gibbs free energy" (which is equal to Helmholtz free energy plus pressure times change in volume), see "Entropy and Gibbs free energy" here.]




Think about what this means for our box of hydrogen. On the one hand, ionized hydrogen has more energy than hydrogen atoms or molecules. This makes hydrogen want to stick together in atoms and molecules, especially at low temperatures. But on the other hand, ionized hydrogen has more entropy, since the electrons and protons are more free to roam. And this entropy difference gets bigger and bigger as we make the box bigger. So no matter how low the temperature is, as long as it's above zero, the hydrogen will eventually ionize as we keep expanding the box.



(In fact, this is related to the "boiling off" process that I mentioned already: we can use thermodynamics to see that the stars will boil off the galaxies as they approach thermal equilibrium, as long as the density of galaxies is low enough.)



However, there's a complication: in the expanding universe, the temperature is not constant — it decreases!



So the question is, which effect wins as the universe expands: the decreasing density (which makes matter want to ionize) or the decreasing temperature (which makes it want to stick together)?



In the short run this is a fairly complicated question, but in the long run, things may simplify: if the universe is expanding exponentially thanks to a nonzero cosmological constant, the density of matter obviously goes to zero. But the temperature does not go to zero. It approaches a particular nonzero value! So all forms of matter made from protons, neutrons and electrons will eventually ionize!



Why does the temperature approach a particular nonzero value, and what is this value? Well, in a universe whose expansion keeps accelerating, each pair of freely falling observers will eventually no longer be able to see each other, because they get redshifted out of sight. This effect is very much like the horizon of a black hole - it's called a "cosmological horizon". And, like the horizon of a black hole, a cosmological horizon emits thermal radiation at a specific temperature. This radiation is called Hawking radiation. Its temperature depends on the value of the cosmological constant. If we make a rough guess at the cosmological constant, the temperature we get is about $10^{-30}$ Kelvin.



This is very cold, but given a low enough density of matter, this temperature is enough to eventually ionize all forms of matter made of protons, neutrons and electrons! Even something big like a neutron star should slowly, slowly dissipate. (The crust of a neutron star is not made of neutronium: it's mainly made of iron.)
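The box-of-hydrogen argument quoted above can be made quantitative with the Saha equation, which in LTE encodes exactly the free-energy competition Baez describes. The sketch below is my own illustration, not part of Baez's text: it uses a modest temperature (so the exponentials don't underflow, unlike the $10^{-30}$ K of the far future) and shows that at fixed temperature the ionized fraction tends to 1 as the gas is diluted.

```python
# Minimal sketch: Saha equation for pure hydrogen at fixed T, varying density.
# Shows the ionized fraction x -> 1 as the gas is diluted, for any T > 0.
import math

k = 1.380649e-23      # J/K
h = 6.62607015e-34    # J s
m_e = 9.1093837e-31   # kg
chi = 13.6 * 1.602176634e-19    # hydrogen ionization energy, J

def ionized_fraction(n_total, T):
    """Solve x^2/(1-x) * n_total = S(T) for the ionized fraction x."""
    S = (2 * math.pi * m_e * k * T / h**2) ** 1.5 * math.exp(-chi / (k * T))
    # n*x^2 + S*x - S = 0  ->  take the positive root of the quadratic
    a, b, c = n_total, S, -S
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

T = 3000.0    # K, chosen only so the exponential stays representable
for n in (1e16, 1e12, 1e8, 1e4, 1.0):       # total hydrogen number density, m^-3
    print(f"n = {n:8.1e} m^-3  ->  ionized fraction = {ionized_fraction(n, T):.3g}")
```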


Thursday, 19 January 2012

planet - How do kepler orbits account for planetary migration?

Kepler orbits apply only to 2-body systems.



They apply only if the masses aren't too large and the orbits aren't too close; otherwise relativistic effects occur, as for the Sun-Mercury system.



Kepler orbits only apply if the bodies are sufficiently spherical.



As soon as a third body comes into play, the system can become chaotic.



Too many conditions, especially for our early solar system.



Strange orbits, like those around Lagrangian points or horseshoe orbits, can occur. Orbital resonance can stabilize as well as destabilize orbits.

cosmology - Strong force and metric expansion

Excepting a Big Rip scenario, there is no eventual 'clash'.



Consider a Friedmann–Lemaître–Robertson–Walker universe:
$$\mathrm{d}s^2 = -\mathrm{d}t^2 + a^2(t)\left[\frac{\mathrm{d}r^2}{1-kr^2} + r^2\left(\mathrm{d}\theta^2 + \sin^2\theta\,\mathrm{d}\phi^2\right)\right]\text{,}$$
where $a(t)$ is the scale factor and $k\in\{-1,0,+1\}$ corresponds to the spatially open, flat, or closed cases, respectively. In a local orthonormal frame, the nonzero Riemann curvature components are, up to symmetries:
$$\begin{eqnarray*}
R_{\hat{t}\hat{r}\hat{t}\hat{r}} = R_{\hat{t}\hat{\theta}\hat{t}\hat{\theta}} = R_{\hat{t}\hat{\phi}\hat{t}\hat{\phi}} =& -\frac{\ddot{a}}{a} &= \frac{4\pi G}{3}\left(\rho + 3p\right)\text{,} \\
R_{\hat{r}\hat{\theta}\hat{r}\hat{\theta}} = R_{\hat{r}\hat{\phi}\hat{r}\hat{\phi}} = R_{\hat{\theta}\hat{\phi}\hat{\theta}\hat{\phi}} =& \frac{k+\dot{a}^2}{a^2} &= \frac{8\pi G}{3}\rho\text{,}
\end{eqnarray*}$$
where the overdot denotes differentiation with respect to coordinate time $t$ and the Friedmann equations were used to rewrite them in terms of density $\rho$ and pressure $p$. From the link, note in particular that $\dot{\rho} = -3\frac{\dot{a}}{a}\left(\rho+p\right)$.



If dark energy is described by a cosmological constant $\Lambda$, as it is in the standard ΛCDM model, then it contributes a constant density and pressure $\rho = -p = \Lambda/8\pi G$, and so no amount of cosmic expansion would change things. Locally, things look just the same as they ever did: for a universe dominated by dark energy, the curvature stays the same, so the gravitational tidal forces do too. Through those tiny tidal forces, the dark energy provides some immeasurably tiny perturbation on the behavior of objects, including atomic nuclei, forcing a slightly different equilibrium size than would occur otherwise. But it is so small that it has no relevance to anything on that scale, nor do those equilibrium sizes change in time. The cosmological constant holds true to its name.



On the other hand, if dark energy has an equation of state $p = w\rho$, then a flat expanding universe dominated by dark energy has
$$\dot{\rho} = -3\frac{\dot{a}}{a}\left(\rho+p\right) = -3\sqrt{\frac{8\pi G}{3}}\left(1+w\right)\rho^{3/2}\text{,}$$
and immediately one can see that there is something special about $w<-1$, leading to an accumulation of dark energy, while the cosmological constant's $w = -1$ leads to no change. This leads to a Big Rip more generally, as zibadawa timmy's answer explains.
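A minimal numerical sketch of that continuity equation (my own, in arbitrary units with an assumed initial density), contrasting $w=-1$ with a phantom $w<-1$:

```python
# Minimal sketch: integrate d(rho)/dt = -3 * H * (1 + w) * rho for a flat
# dark-energy-dominated universe, in units where 8*pi*G/3 = 1 so H = sqrt(rho).
import math

def evolve(w, rho0=1.0, dt=1e-3, t_end=2.0):
    rho, t = rho0, 0.0
    while t < t_end:
        H = math.sqrt(rho)                    # Friedmann: H^2 = rho in these units
        rho += -3.0 * H * (1.0 + w) * rho * dt
        t += dt
    return rho

print(evolve(w=-1.0))    # unchanged: a true cosmological constant
print(evolve(w=-0.9))    # density decays slowly
print(evolve(w=-1.1))    # density grows: the phantom case that leads to a Big Rip
```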





If the metric expands, surely objects get further away from one another, and that would include the stars inside galaxies as well as the galaxies themselves?




Not at all. It wouldn't even make any sense: if you have an object like an atom or a star in gravitational freefall, by the equivalence principle only tidal forces across it are relevant. The tidal forces stretch the object until the internal forces balance them. But for a Λ-driven accelerated expansion, the dark energy contribution to the tidal forces is constant. Hence, an object already in equilibrium has no reason to change its size further, no matter how long the cosmic acceleration goes on. This also applies to galaxies, except that there the internal forces are also gravitational and balance the dark energy contribution.



Looking at this in more detail, imagine a test particle with four-velocity $u$, and a nearby one with the same four-velocity, separated by some vector $n$ connecting their worldlines. If they're both in gravitational freefall, then their relative acceleration is given by the geodesic deviation equation, $\frac{D^2n^a}{d\tau^2} = -R^\alpha{}_{\mu\beta\nu}u^\mu u^\nu n^\beta$. The gravitoelectric part of the Riemann tensor, $\mathbb{E}^\alpha{}_\beta = R^\alpha{}_{\mu\beta\nu}u^\mu u^\nu$, represents the static tidal forces in a local inertial frame comoving with $u$, which will drive those particles apart (or together, depending). Hence, keeping those particles at the same distance would require a force between them, but it's not necessary for this force to change unless the tidal forces also change.



Galaxies don't change size through cosmological expansion. Stars don't either, nor do atoms. Not for Λ-driven expansion, at least. It would take an increase in tidal forces, such as those provided by a Big Rip, for them to do so.



A related way of looking at the issue is this: according to the Einstein field equation, the initial acceleration of a small ball of initially comoving test particles with some four-velocity $u$ (in the comoving inertial frame), which is given by the Ricci tensor, turns out to be:
$$\lim_{V\to 0}\left.\frac{\ddot{V}}{V}\right|_{t=0}\!\!\!\!= -\underbrace{R_{\mu\nu}u^\mu u^\nu}_{\mathbb{E}^\alpha{}_\alpha} = -4\pi G(\rho + 3p)\text{,}$$
where $\rho$ is density and $p$ is the average of the principal stresses in the local inertial frame. For a positive cosmological constant, $\rho = -p>0$, and correspondingly a ball of test particles will expand. That's cosmic expansion on a local scale; because of the uniformity of the FLRW universe, the same proportionality works on the large scale as well, as we can think of distant galaxies as themselves test particles if their interactions are too minute to make a difference.



Again, we are led to the same conclusion: if the ball has internal forces that prevent expansion, then those forces don't need to change in time unless the dark energy density also changes, which doesn't happen for cosmological constant. If they're not 'clashing' now, then they won't need to in the future either.

Tuesday, 17 January 2012

Meteors made of differing substances. Will they all burn up?

I can give a partial answer to this. In general, larger objects are more likely to break up than smaller ones, and comets are more likely to break up than meteors. A feather would likely burn up too; here's why:



At 30,000 km above the surface, or roughly 4.7 Earth radii, the escape velocity is smaller by the square root of that factor, roughly 2.17 times less, or 46% of the escape velocity from the surface. So a feather dropped from that height would be travelling at nearly 54% of Earth's escape velocity when it hit the atmosphere, or about 6 km per second. That's about 7 times as fast as a bullet, and I'd wager that would burn up a feather pretty quickly.



Space collisions are faster than free fall alone can produce, because anything that hits the Earth from space is itself in free fall, and you can add to that any relative velocity between the objects. The slowest meteors hit the Earth at about 11 km per second, which is, not surprisingly, the Earth's escape velocity.



http://www.amsmeteors.org/fireballs/faqf/#12



Now, if you drop a feather from the highest hot air balloon, then it probably would float down to the earth, accelerating faster at first in the thinner atmosphere, then more slowly as the air got thicker.



Small point to add, angle of approach matters too. A glancing blow and the object can effectively bounce off the atmosphere, not burn up in it.

Questions about time and space (from beginner)

A)




Is it right that what we see/feel in every moment is projection of
some 4D reality into our 3D world noticed by our senses.




It's more similar to a slice, hyperplane or hypermanifold, less to a projection.




If my eyes were able to "somehow" see in 4D and I pointed my eyes to a
person, would I see that person in all stages of his/her life from
birth to death as a single "entity"?




In a classical world, yes.



B)




Can I put infinite number of 3D spaces into a single 4D space?




Yes, a possible mathematical description of such a space-time is a 3+1-Minkowski space-time.




If we considered that 5th dimension exists and that we could put
infinite number of 4D spaces into 5D space, then we would end up with
infinite number of parallel realities, in which Caesar did not cross the
Rubicon, or did not even exist. Is it right or complete nonsense? Can
it somehow be proved that this is right/nonsense?




That's close to the many-worlds interpretation of quantum theory.
But there it is thought of as an infinite-dimensional Hilbert space, which is one of the simpler ways of thinking about it.



Thinking of a 5D space-time as a collection of all possible 4D space-times is less suitable, since it would be a mere set of 4D space-times without meaningful structure. Take Caesar crossing the Rubicon a day earlier or a day later, or a metre to the left or a metre to the right, with or without a leaf falling from a tree, or a huge number of other versions. Which would be the correct order in which to pile up these versions into a 5D world? The infinite-dimensional Hilbert space model resolves this ambiguity, and provides a meaningful metric for the set of all possible worlds.



The 10-dimensional models of some string theories are something else: the 6 extra dimensions are thought to be curled up, and tiny.



We have no access to parallel worlds, so it cannot be proven by observational evidence. It's a mathematical model. It's possible to prove theorems within the mathematical theory, but that's usually based on axioms, or assumptions. Mathematical theories may, but don't need to match with observation, depending on how the axioms match to physical "reality".

Sunday, 15 January 2012

How fast does Venus move as seen from the earth?

The Earth rotates on its axis once every 24 hours. (Well, actually a little less, by about 4 minutes, but close enough.) That's $360^\circ$ in 24 hours, or about $15^\circ$ per hour. $15^\circ$ is 30 times the width of the Sun or Moon. The Sun moves exactly that fast (on average), the stars a smidgen faster, and the Moon a tad slower. (The Moon lags behind the stars by approximately its own width every hour.)



Since Venus is never more than about $45^\circ$ from the Sun, it always sets at most 3 hours after the Sun does, and rises at most 3 hours before.
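A quick numeric check of those rates (a minimal sketch of mine; the sidereal and synodic periods used are standard round values I have assumed):

```python
# Minimal sketch: apparent angular rates across the sky, in degrees per hour.
solar_rate = 360.0 / 24.0                     # the Sun: 15 deg/hr on average
sidereal_rate = 360.0 / (23.0 + 56.0 / 60)    # the stars: slightly faster
moon_lag = 360.0 / (27.32 * 24)               # Moon drifts east vs. stars, deg/hr
moon_rate = sidereal_rate - moon_lag          # so it crosses the sky a bit slower

print(f"Sun   : {solar_rate:.2f} deg/hr")
print(f"Stars : {sidereal_rate:.2f} deg/hr")
print(f"Moon  : {moon_rate:.2f} deg/hr (lags the stars by {moon_lag:.2f} deg/hr,"
      " about its own 0.5-degree width each hour)")
```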

solar system - Is it possible to disprove geocentrism without telescopes?

The development of telescopes enabled the discovery of moons orbiting around Jupiter, and the existence of a full set of phases of Venus, as opposed to a limited set which would be predicted by geocentrism.



But is it possible to disprove, or at least make implausible, geocentrism without a telescope? Preferably without calculus or more advanced maths.



I'm curious for two reasons. One is knowing whether it was lack of technology, or humanity's attitude towards the universe, that was responsible for geocentrism. The other is that I've heard that some astronomical facts, such as the size of the earth, were calculated before the advent of telescopes, and I'd find it interesting if more facts could have been deduced even with a limited set of technology.

Saturday, 14 January 2012

earth - Inclination of planets

While the other answer here and the link provided by Jeremy give excellent explanations, I believe a bit more nuanced reasoning is required.



Although the theory of planet formation is currently still incomplete, it is generally accepted that planets form in a so-called proto-planetary disk as part of the stellar formation process. This is backed up by several observations of such disks, and even by directly observed planets in systems where remnants of the disk are still present (most notably Fomalhaut b).



However, exactly how these planets form is still unclear. A popular theory is that these planets form in place and therefore keep the orientation and inclination of the disk. This beautifully explains the Solar System and specifically why all planets have roughly the same orbital inclination. But that's not surprising as the model was specifically designed to do just that! Yet the Solar System does not have to be the only possible type of planetary system, or even the most common for that matter. Indeed the study of exoplanets has confirmed that very different planetary systems are possible.



An alternative theory is that planets are created far away from the star and migrate inwards. During this migration, multiple planets can interact with one another and be pushed out of the disk plane to higher inclinations. They could even flip over completely and become retrograde.



Now to come back to the actual question: the Solar System's planets probably have about the same inclination simply because they were created that way.



You are also right in saying that gravity will tend to align the planets. How this works physically is that the star can exert a torque on the planet, which produces a tidal effect (like how the Moon creates tides on Earth). Over time this aligns the angular momentum of the planet with that of the star by pushing the planet towards the equatorial plane of the star.



As for your second question: yes, there are probably planetary systems that are misaligned with respect to their star. At least there are certainly planets that are not aligned with the equatorial plane of the star. The well-named HAT-P-11b is a known example, but there are a lot more.



Note that for exoplanets the important parameter is not the inclination (which for exoplanets is the angle between the normal of the orbit and the line of sight), but rather the obliquity: the tilt of the planet's orbit with respect to the rotation of the star.



As for a third twist to this story, very recently there was a discovery of an entire proto-planetary disk that is misaligned with respect to the star, which opens the door to even stranger configurations.

star - What is the difference between LMC and SMC?

The LMC has an apparent size of about 645x550 arc mins, the SMC 320x205.



Both contain several hundred million stars each. The LMC is about 14000ly in size, and is about 10 billion solar masses; the SMC is 7000ly in size, and is about 7 billion solar masses.



The visual magnitude of the LMC is +0.28, the SMC is +2.23.



Both feature a number of interesting clusters, nebulae, and supernova remnants. Notably, LMC is home to NGC 2070, the Tarantula Nebula, and Supernova 1987A.
The LMC is usually considered an irregular galaxy, though it has a prominent bar, somewhat warped, and a spiral arm. The SMC is a dwarf irregular galaxy, or maybe a barred disc.



The LMC is the fourth largest galaxy in our Local Group, and the third closest to us. The SMC is probably the fifth largest galaxy in the Local Group, and the fourth closest to us.



{In a previous version of this response I said: A number of websites proclaim that the LMC and SMC are gravitationally bound to the Milky Way. This has been known to be incorrect since 2007 when Hubble observations showed they are travelling too fast to be orbiting the Milky Way. See also.}



A number of websites proclaim that the LMC and SMC are gravitationally bound to the Milky Way. That was based on older information. More recent measurements have challenged the assumption, though further analysis has left the door open to the possibility that they are bound after all. We can, however, safely say that the LMC and SMC may be gravitationally bound to the Milky Way.

Friday, 13 January 2012

stellar - about H$\alpha$ and H$\beta$ lines in astronomical objects' spectra

In a normal stellar atmosphere, where the temperature decreases with height, I think this is not possible.



H-alpha absorption arises from the $n=2$ to $n=3$ transition; H-beta from the $n=2$ to $n=4$ transition. Thus both transitions are governed by the number of atoms in the $n=2$ lower level.



The absorption coefficient for a transition can be written in proportionality terms (in local thermodynamic equilibrium) as
$$ \alpha_\nu \propto \nu B_{lu} n_l \left( 1 - \exp[-h\nu/kT]\right),$$
where $B_{lu}$ is the Einstein absorption coefficient, $n_l$ is the population of the lower energy level and $\nu$ is the photon frequency corresponding to the transition.



Using the well-known relationship between the Einstein emission and absorption coefficients and values for the emission coefficients found here, we can estimate a ratio of absorption coefficients for a given temperature.



$$ \frac{\alpha_{H\beta}}{\alpha_{H\alpha}} = \frac{g_{n=4}}{g_{n=3}} \frac{A_{H\beta}}{A_{H\alpha}} \left(\frac{\nu_{\alpha}}{\nu_{\beta}}\right)^2 \left(\frac{1 - \exp(-h\nu_{\beta}/kT)}{1-\exp(-h\nu_{\alpha}/kT)}\right),$$
where the $A_X$ are the Einstein emission coefficients and the statistical weights $g_n$ are given by $2n^2$.



Thus if we take $\nu_\alpha = 4.57\times10^{14}$ Hz, $\nu_{\beta}=6.17\times10^{14}$ Hz, $A_{H\alpha} \simeq 10^{8}$ s$^{-1}$, $A_{H\beta} \simeq 3\times 10^{7}$ s$^{-1}$, then
$$\frac{\alpha_{H\beta}}{\alpha_{H\alpha}} = 0.29 \left( \frac{1 - \exp(-29642/T)}{1 - \exp(-21956/T)}\right)$$



So when $T$ (in Kelvin) is small, the ratio is about 0.3. When $T$ becomes large the ratio increases, but above $T \sim 12,000$ K all the hydrogen is ionised.
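A small sketch (my own check of the expression above, reusing the same assumed frequencies and Einstein coefficients) evaluating the ratio at a few temperatures:

```python
# Minimal sketch: evaluate alpha_Hbeta / alpha_Halpha vs. temperature,
# using the same approximate constants as quoted in the text above.
import math

h = 6.626e-34           # J s
k = 1.381e-23           # J/K
nu_a, nu_b = 4.57e14, 6.17e14      # Halpha, Hbeta frequencies, Hz
A_a, A_b = 1e8, 3e7                # approximate Einstein A coefficients, s^-1
g3, g4 = 2 * 3**2, 2 * 4**2        # statistical weights 2n^2

def ratio(T):
    stim = (1 - math.exp(-h * nu_b / (k * T))) / (1 - math.exp(-h * nu_a / (k * T)))
    return (g4 / g3) * (A_b / A_a) * (nu_a / nu_b) ** 2 * stim

for T in (3000, 6000, 10000, 12000):
    print(f"T = {T:6d} K  ->  alpha_Hbeta/alpha_Halpha = {ratio(T):.3f}")
# At low T the ratio is ~0.29; it rises only modestly before hydrogen ionizes.
```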



Thus in thermodynamic equilibrium it is very difficult to manufacture a circumstance where the optical depth in the H$\beta$ line is larger than that in the H$\alpha$ line.



[I'd be grateful if someone can tell me whether the ratio I calculated above is correct, since I could not find it in any reference and need to work it out from scratch].

time dilation - Age of the universe vs. its contents

Models are uncertain



I'm guessing that you are referring to age estimates of very old stars. The age of a star can be determined from certain observables, among them its absolute luminosity, which in turn requires a precise measurement of its distance. Even today, the uncertainties involved yield estimates with mean values greater than the age of the Universe, which of course is impossible. For instance, the age of the star HD 140283 was found as late as 2013 to be $14.46\pm0.8$ billion years (Gyr). This is not really in conflict with the estimated age of the Universe, $13.798\pm0.037$ Gyr; all it tells us is that our methods are still not perfect, and that HD 140283 was formed shortly after the Big Bang.



Time dilation is small



General relativity predicts that time evolves more slowly in a gravitational field. This is with respect to an observer outside the gravitational field, and since there is no external observer wrt. the Universe, the global time of the Universe can be said to be the "right" time. You're right that in principle the total mass of the Universe makes time run slower than in a hypothetical, empty universe, but almost anywhere in the Universe, the gravitational field is way too weak to cause any noticeable dilation. Close to massive objects, time does run slow. On the surface of a star, the time dilation factor is roughly $(1 - 2GM/rc^2)^{-1/2} = 1.000002$. Inside the star, the factor is even closer to unity, because only the mass between the centre and you contributes, so the farther you dive into the star (kids, don't try this at home), the less the time dilation. Only if you compress the star, so as to squeeze it into a neutron star or a black hole, will you get a larger time dilation. For instance, at the "surface" of a black hole, time stops (but again, only wrt. an external observer; for the person at the surface, time evolves as expected).
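As a quick check of that number (a minimal sketch of mine, using standard solar values as the assumed inputs):

```python
# Minimal sketch: gravitational time dilation factor at the surface of the Sun,
# dt/dtau = (1 - 2GM/(r c^2))^(-1/2) from the Schwarzschild metric.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
R_sun = 6.957e8      # m

factor = (1 - 2 * G * M_sun / (R_sun * c**2)) ** -0.5
print(f"{factor:.7f}")   # ~1.0000021: clocks on the solar surface run slower
                         # than distant ones by about 2 parts in a million
```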

Thursday, 12 January 2012

amateur observing - How to measure the altitude and azimuth of a star?

For the second question: if you know the coordinates of the zenith, your latitude is exactly the zenith's declination. For your longitude you cannot rely on the zenith: the same star will be at the zenith at the same sidereal time for every place at the same latitude, so you need a clock besides the telescope. cf. http://en.wikipedia.org/wiki/History_of_longitude#Problem_of_longitude



For the first question: The altitude of a star that is crossing the local meridian is also quite easy.



In the Northern Hemisphere, for a star crossing the local meridian to the north (that is, between the zenith and the north horizon), the latitude satisfies
$90^\circ-\mathrm{alt}=\mathrm{dec}-\mathrm{lat}$



In the Northern Hemisphere, for a star crossing the local meridian to the south (that is, between the zenith and the south horizon), the latitude satisfies
$90^\circ-\mathrm{alt}=\mathrm{lat}-\mathrm{dec}$



The same goes for the Southern Hemisphere if you also swap the north and south horizons.
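A tiny sketch (mine, with hypothetical example numbers) applying those meridian-crossing relations:

```python
# Minimal sketch: latitude from the altitude of a star at meridian transit.
# The example values are hypothetical.

def latitude_from_transit(alt_deg, dec_deg, culminates_south_of_zenith):
    """Northern-hemisphere observer; altitude and declination in degrees."""
    if culminates_south_of_zenith:
        # 90 - alt = lat - dec
        return 90.0 - alt_deg + dec_deg
    else:
        # 90 - alt = dec - lat
        return dec_deg - (90.0 - alt_deg)

# A star with dec = +10 deg transiting the southern meridian at alt = 48.66 deg
print(latitude_from_transit(48.66, 10.0, True))    # ~51.34 deg latitude
```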



You did not ask specifically about the azimuth, but again you need a clock for that.

the moon - Historical astronomical lunar tables

I hope this is the right place to ask this question, and I apologise if not.



I've come across a date written in an old book, but it is written using what seems to be a lunar/zodiac system.
Below is a picture of the date (ignore the text above it):



[picture of the date]



I'm trying to work out what this date would have been in our modern calendar, but I can't seem to find any tables or almanacs that go back that far.



Are there any publicly available tables that do go back to 1587? If not, how exactly could I go about calculating the answer for myself? I am a mathematician, so a bit of wacky maths should be within my capabilities.
This is the answer I'm probably expecting from this site: some kind of program/algorithm/formula to be able to work out on what date the Moon would have been in Taurus in 1587.



If not, any ideas as to where best for me to ask/look next?

Wednesday, 11 January 2012

orbit - How can one explain the apparent motion of the Sun from a heliocentric point of view?

From a horizontal frame of reference, the path of the Sun looks like a part of a circle in a tilted plane, as shown in this picture:



[picture of the Sun's apparent daily path]



If you go to a heliocentric system, one would say that this is because the Earth is rotating around its axis. Is there any more detailed explanation of this change of reference?



So how can one explain in detail the following facts (seen from a geocentric point of view, in a horizontal reference system), starting from a heliocentric system and the fact that the Earth is rotating around its axis:



  • Why is the path of the Sun a part of a circle?

  • Why is this path in a plane?

  • Why is this plane tilted?

  • How to get this "tilt-angle"?

Why does this motion seem to be uniform from an equatorial point of view, and can I neglect the ecliptic to understand the points above?



I tried to model it using a ball as the Earth, attached a camera to it, and used another ball as the Sun, but I didn't succeed in reproducing the apparent motion in this model. Perhaps there is an animation which makes the points above clear.

Tuesday, 10 January 2012

Can a person at a photon sphere of a black hole decide where the black hole is?

For an observer hovering [I say hover because by definition there are no orbits for an observer at the photon sphere] at the photon sphere for a non-maximal de Sitter-Schwarzschild black hole, a simple accelerometer will tell an observer which horizon is which. The reason for this is that in order to hover at the photon sphere the observer must take up a non-inertial local frame with a directional bias decided by the direction of the BH event horizon.



In the maximal de Sitter-Schwarzschild spacetime the BH singularity disappears and the observer will not be able to tell the difference between the two horizons as there is an automorphism of that spacetime which will map one horizon to the other.

Sunday, 8 January 2012

Why does the face of the moon 'sync' with the earth?

It is called "tidal locking", or "gravitational locking" or "captured rotation". According to the Wikipedia page on "tidal locking" (check the references for sources), it is due to Earth's gravity raising a small tidal bulge on the Moon, which affects its rotation. Over time, the Moon's rotation, affected by Earth's gravity, has synchronised its rotation period with its orbital period.

Friday, 6 January 2012

How are the newly discovered Janus/Epimetheus rings different from the other rings of Saturn?

What are the newly discovered ring systems of Saturn, and the circumstances relating to the discovery? Is there something that makes them different from the old well-known rings, like their formation?



The ring in question is the Janus/Epimetheus ring system:



[Figure: the Janus/Epimetheus ring]

Thursday, 5 January 2012

gravity - If 50 tons or more of debris falls to earth everyday, is Earth getting heavier?

The Earth loses mass because hydrogen and helium (plus other elements in trace amounts compared to hydrogen and helium) escapes the Earth's atmosphere. The Earth gains mass because incoming asteroids (most of them very small) impact the Earth's atmosphere; a few make it all the way to the surface of the Earth. Whether those incoming asteroids burn up in the atmosphere or make it all the way to the surface is irrelevant; the Earth gains mass.



Whether the net result is a mass gain or mass loss is a bit up in the air; the uncertainties on both are rather large, and they overlap. The Earth might be gaining or losing a tiny bit of mass every year. That said, most research leans toward atmospheric losses being greater than mass gain from impacting asteroids, comets, and dust.



Whichever is the case, it's a bit irrelevant. Even the most extreme upper estimates on mass loss or mass accumulation are incredibly tiny compared to the mass of the Earth itself.

Tuesday, 3 January 2012

orbit - Leonid meteor showers and the Tempel-Tuttle comet

When material is shed from a comet, it mostly continues to follow the same orbit as the comet, but drifts out ahead or behind the comet in a complex way due to interactions with solar wind and the gravity of other objects.



If you imagine the elliptical orbit of 55P/Tempel-Tuttle, it will have patches of denser dust in clumps spaced out around the ellipse corresponding to different perihelion approaches. The 1466 clump will be in a different place than the 1500 clump, and they will both be travelling more or less around the comet's orbit.



Since the comet's orbit only crosses Earth's at one place, and we only cross that spot once a year, we will only encounter whatever clumps of dust happen to be at that point at the same time the Earth is.



Calculating exactly where these clumps of comet dust are is non-trivial. And that is a big reason why predicting how good a meteor shower will be is a rather tricky business.

Monday, 2 January 2012

general relativity - What are the exact physical parameters used to calculate Mercury precession with Einstein theory?

In a very general post-Newtonian metric for a two-body system with the first body oblate, where $M\equiv m_1+m_2$ is the total mass, $\mu\equiv m_1m_2/M$ is the reduced mass, and $p\equiv a(1-e^2)$ is the semi-latus rectum of the orbit, the perihelion advance per orbit is
$$\small\delta\varpi = 6\pi\frac{GM}{pc^2}\left[\underbrace{\frac{2-\beta+2\gamma}{3}}_{\text{GTR}=1} + \underbrace{\frac{2\alpha_1-\alpha_2+\alpha_3+2\zeta_2}{6}}_{\text{GTR}=0}\frac{\mu}{M} + \frac{J_2R^2c^2}{2GMp}\right]\text{.}$$
According to GTR, $\beta=\gamma=1$ exactly. The second term contains various parameters relating to preferred-frame effects and energy-momentum non-conservation. In GTR, they are all identically zero, while experimentally, $\alpha_1\lesssim10^{-4}$ and the rest are many orders of magnitude less than that. Finally, in the third term, $J_2$ is the quadrupole moment of the first body; $J_2R^2 = (C-A)/m_1$, where $C$ and $A$ are moments of inertia about the rotational and equatorial axes, respectively. A derivation can be found in Will §7.3.




I am trying to find out what parameter values, such as $G$, $M_{sun}$, $\omega_{min}$ at aphelion radius, etc., are used to get this exact 42.98.




This is just the front coefficient in the above: if $T$ is the orbital period of Mercury, then
$$6\pi\frac{GM/c^2}{a(1-e^2)}\frac{1}{T} = \frac{42.98''}{\mathrm{century}}\text{,}$$
and we can restate the GTR prediction for Mercury as
$$\dot\varpi_{\text{☿}}^{\scriptsize\text{GTR}} = \frac{42.98''}{\mathrm{century}}\left[1+2958\,J_2\right]\text{,}$$
where $J_2$ is the gravitational quadrupole moment of the Sun.
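For concreteness, a small sketch (my own, using standard orbital elements for Mercury as the assumed inputs) that evaluates that front coefficient:

```python
# Minimal sketch: the GTR perihelion-advance coefficient for Mercury,
# 6*pi*G*M / (c^2 * a * (1 - e^2)) per orbit, converted to arcsec per century.
import math

GM_sun = 1.32712e20        # m^3/s^2
c = 2.99792458e8           # m/s
a = 5.7909e10              # semi-major axis of Mercury, m
e = 0.20563                # eccentricity
T_orbit = 87.969           # orbital period, days

advance_per_orbit = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))   # radians
orbits_per_century = 36525.0 / T_orbit
arcsec = math.degrees(advance_per_orbit) * 3600 * orbits_per_century
print(f"{arcsec:.2f} arcsec/century")       # ~42.98

J2 = 2.18e-7                                # Pijpers (1998) helioseismic value
print(f"{arcsec * (1 + 2958 * J2):.2f} arcsec/century including solar J2")   # ~43.01
```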




NASA measured 43.13 arc seconds per century. General relativity predicts 42.98 arc seconds per century.




If you're referring to one of Anderson et al.'s analyses, the value is $43.13\pm0.14$. Under GTR, this value is compatible with any $J_2$ up to about
$$\frac{J_2R^2c^2}{2GMp}\lesssim 0.0067 \quad\Longleftrightarrow\quad J_2\lesssim 2.3\times10^{-6}\text{.}$$
But what is the Sun's quadrupole moment? This is a somewhat sticky question, but the results of lunar laser ranging experiments are incompatible with $J_2$ of more than about $3\times10^{-6}$, while using helioseismology, Pijpers (1998) estimates $J_2 = (2.18\pm0.06)\times 10^{-7}$. Using Pijpers' value yields a GTR prediction of $43.01''$ per century, which is compatible with Anderson et al.'s analysis.




With a new model developed by R. Plamondon, we get exactly 43.13, and it also matched perfectly with Mercury's period of revolution around the Sun.




I have no idea what Plamondon's model is, but I would be rather suspicious of any model that gives "exactly" $43.13$ when that's just the middle of an interval of statistical uncertainty according to one of the many prior analyses of Mercury's orbit.




When I put my parameters into Einstein metric, I got something unexpected.




If we ignore the post-Newtonian parameters other than $\beta$ and $\gamma$, the static post-Newtonian metric is
$$\mathrm{d}s^2 = -\left(1+2\Phi+2\beta\Phi^2\right)\mathrm{d}t^2 + \left(1-2\gamma\Phi\right)\mathrm{d}S_\text{Euclid}^2\text{,}$$
where the Newtonian potential of the Sun is modeled up to the quadrupole term
$$\Phi = -\frac{M_\odot}{r}\left[1-J_2\frac{R_\odot^2}{r^2}\frac{3\cos^2\theta-1}{2}\right]\text{.}$$
Deriving the perihelion advance of Mercury with this metric is Exercise 40.5 in MTW, and it matches the general result referenced above with $M\equiv M_\odot+M_{☿} \approx M_\odot$ and sans the middle term. For the Sun-Mercury system, $\mu/M\sim 10^{-7}$, so the middle term is even more irrelevant than bounds on the PPN parameters would suggest.




References:



  1. Will, C. M., Theory and experiment in gravitational physics, (Cambridge University Press, Cambridge, 1993), 2nd edition.

  2. Anderson, J. D., Campbell, J. K., Jurgens, R. F., Lau, E. L., Newhall, X. X., Slade, M. A., Standish Jr, E. M., in Proceedings of the Sixth Marcel Grossmann Meeting on General Relativity, ed. by Sato, H. and Nakamura, T., (World Scientific, Singapore, 1992).

  3. Pijpers, F. P., Mon. Not. R. Astron. Soc. 297, L76 (1998). [arXiv:astro-ph/9804258]

  4. Misner, C. W., Thorne, K. S., and Wheeler, J. A., Gravitation, (Freeman, San Francisco, 1973).