Friday, 27 April 2012

astrophotography - Are there any photographs online that approximate what the Milky Way looks like to the unaided eye?

It may be difficult to find photos that emulate this, both because, given the tools and the subject, most people opt for longer exposures, and because of the general difficulty of emulating how our brain processes visual information into an image, especially in low-light, high-contrast situations like this. I think the photo at the bottom of this article may be a decent recreation, though it's a bit fuzzy and not very wide. Note that star trails are forming in the image, which indicates a longer exposure. The image (unless cropped) doesn't appear to be from a wide-angle lens; I would guess somewhere in the 35-50mm range. Perhaps a roughly 10-second exposure, though my judgement is based more on the result than the technique.



What you can see will depend a lot upon the conditions of the night. You probably already expect light pollution to matter, but keep in mind that your eyes adjust to light as well: a cell phone screen or a full moon can be enough to prevent you from seeing well. If conditions are perfect (little to no light pollution; no light from cars, flashlights, etc.; a new moon; low humidity and no clouds) and the Milky Way is in a good position, you will be able to pick out the galaxy quite well. You should be able to notice both the high concentration of stars and the 'milky' lighter shade caused by the many stars too dim or distant to appear bright.

Thursday, 26 April 2012

universe - Can there be a 2nd possibility deduced from the fact that galaxies are moving away from us?

Galaxies are moving away from us in every direction.



If it were like the race you describe, galaxies at the same distance from the central point as us would be slowly approaching us. That is, we would note that galaxies nearer to the center than us move away from us (since they are faster), galaxies on the opposite side of the sky move away from us (since they are slower), but galaxies in the plane between these two directions slowly approach us (since we converge with them toward the same point at the same speed).



So the answer is no, that is not a possibility.

Wednesday, 25 April 2012

How would vehicles travel through the interstellar medium with such low density?

I've heard from a number of different places that the interstellar medium can have an average density of 1 atom per cubic centimeter. Perhaps I have a wrong understanding of what 'nothing' is (I am very new to astronomy/astrophysics), but how would a vehicle, like Voyager for example, be able to travel through it?



My (perhaps very flawed) thinking behind this is that if there are so few atoms, none in close proximity to one another, there couldn't be any (meaningful) forces interacting between them to provide any sort of way to travel through it.

Tuesday, 24 April 2012

Is there a theoretical maximum size limit for a star?

According to current knowledge, yes. If the gas cloud is too massive, the pressure of the radiation prevents the collapse and thus star formation.



According to the article Stars Have a Size Limit by Michael Schirber, the limit is about 150 solar masses. However, there is the Pistol Star, which is speculated to be around 200 solar masses.



In the article 'Das wechselhafte Leben der Sterne' ("The changeable life of stars") by Ralf Launhard (Spektrum 8/2013) there is a diagram indicating that when the mass is over 100 solar masses, the star can't form because of radiation pressure. The exact value of the limit is not specified in the article.

Monday, 23 April 2012

solar system - Why do (most of) the planets rotate counterclockwise, i.e. the same way the Sun does?

Referring to the mechanisms explaining solar system formation and the initial rotation of the gaseous cloud that collapsed, I easily understand why the planets orbit the Sun the same way it rotates (say counterclockwise), but I can't figure out why this applies to the planets' rotation too. Thinking about it from the point of view of Kepler's laws and conservation of angular momentum, I might conclude that the planets should rotate clockwise, because the velocity of the particles that aggregated during planet formation was higher closer to the Sun...



Apart from a short explanation, I would like to have a good reference from the literature if possible.



Edit, to make my reasoning more explicit: following Kepler's laws, the particles that aggregate on the "day side" of the proto-planets, moving in the east-west direction relative to the ground, are faster than the ones hitting the "night side" in the west-east direction. If we add up all of these contributions, the planets should rotate in the opposite direction relative to the initial cloud (i.e. relative to the actual Sun's rotation). I guess something is wrong or missing there (to counterbalance the phenomenon I just described) but I can't see what it is...



New edit: References. I found some published articles dealing with this kind of question but I don't have the time right now to read them carefully. If someone is motivated to do so, do not hesitate ;-) If I find the answer to my question amongst these papers, I will post it here later. Of course, you may need to use the network of an institution with a subscription to these publishers to access them:



R.T. Giuli (1968a) in Icarus: http://www.sciencedirect.com/science/article/pii/0019103568900821



R.T. Giuli (1968b) in Icarus: http://www.sciencedirect.com/science/article/pii/0019103568900122



A.W. Harris (1977) in Icarus: http://www.sciencedirect.com/science/article/pii/0019103577900793



J.J. Lissauer, D.M. Kary (1991) in Icarus: http://www.sciencedirect.com/science/article/pii/001910359190145J

telescope - Calculate Dec and RA of a star from Euler angles and GPS data

Obtain from the publisher Willmann-Bell the book "Astronomical Algorithms" by Jean Meeus. If obtaining it elsewhere, be sure to get the 2nd edition with corrections as of August 10, 2009. The equations you want are in Chapter 13, "Transformations of Coordinates".



Some variables must be defined:



$\alpha$ = right ascension; if obtained from the formula it is in radians
$\delta$ = declination, positive north, negative south
$h$ = altitude, positive above the horizon, negative below the horizon
$A$ = azimuth, measured westward from the South (other sources often measure from the North)
$\psi$ = observer's latitude
$H$ = local hour angle
$\theta$ = local sidereal time



The first step is to transform horizon coordinates (azimuth and altitude) to equatorial coordinates (local hour angle and declination).



$$
\tan H = \frac{\sin A}{\cos A \sin \psi + \tan h \cos \psi}\\
\sin \delta = \sin \psi \sin h - \cos \psi \cos h \cos A
$$



Then the local hour angle $H$ is transformed to right ascension $\alpha$:



$$
\alpha = \theta - H
$$
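These two steps are easy to put into code. Here is a minimal Python sketch (the function name and degree-based interface are my own; azimuth is measured westward from the South, as in Meeus):

```python
import math

def horizontal_to_equatorial(A_deg, h_deg, lat_deg, lst_deg):
    """Convert azimuth/altitude to (right ascension, declination), in degrees.

    A_deg   : azimuth, measured westward from the South (Meeus convention)
    h_deg   : altitude above the horizon
    lat_deg : observer's latitude (psi)
    lst_deg : local sidereal time (theta), in degrees
    """
    A, h, psi = map(math.radians, (A_deg, h_deg, lat_deg))
    # tan H = sin A / (cos A sin psi + tan h cos psi)
    H = math.atan2(math.sin(A),
                   math.cos(A) * math.sin(psi) + math.tan(h) * math.cos(psi))
    # sin delta = sin psi sin h - cos psi cos h cos A
    delta = math.asin(math.sin(psi) * math.sin(h)
                      - math.cos(psi) * math.cos(h) * math.cos(A))
    # alpha = theta - H
    alpha = (lst_deg - math.degrees(H)) % 360.0
    return alpha, math.degrees(delta)
```

For instance, plugging in A = 68.0337°, h = 15.1249°, latitude 38.9214°, and local sidereal time 51.6714° (the book's Venus-from-Washington example, as best I recall it) gives back roughly α ≈ 347.32° and δ ≈ -6.72°.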

Sunday, 22 April 2012

star gazing - Is the location that published calendars use to calculate the seasons one specific location on the earth?

Astronomical events such as the solstices and equinoxes occur at a specific instance in time on a specific day, often given in Universal Time.



For example, this winter's solstice occurs on 21 Dec 2014 at 23:03 Universal Time. Since most of Europe is an hour ahead, the same instant falls on 22 Dec 2014 at 00:03 local time (or later still, in time zones further east). So British and American calendars will show the solstice occurring on 21 Dec, while calendars in countries east of Greenwich will show 22 Dec.

What is the difference between solar wind and solar radiation?

From Wikipedia's article on solar winds:




The solar wind is a stream of charged particles (a plasma) released
from the upper atmosphere of the Sun. It mostly consists of electrons
and protons with energies usually between 1.5 and 10 keV.




Solar radiation would be everything in general, including the solar wind. Everything else is electromagnetic radiation (radio, microwave, infrared, visible, ultraviolet, X-ray, gamma ray).

Saturday, 21 April 2012

Long term development of Comets

Ice would be sublimated away during closer approaches, potentially blasting non-ice bits off of the exocomet as it goes. This could affect the trajectory of the exocomet both through the reaction force of the separation and because the mass would be reduced.



Eventually it would run out of water, which means it would no longer be a comet: the streaming tail of gas wouldn't exist anymore, even on closer approaches to the Sun. What would remain, assuming anything remained at all, would be a hard rocky core.



From there, the orbit would either stabilize or decay to a point that it crashed into the sun. Even a fairly stable orbit could eventually bring it close to other gravitational bodies in the solar system which could throw it into an unstable orbit, or alternatively, it could collide with a planet or asteroid.

Friday, 20 April 2012

distances - Why is the observable Universe larger than its age would suggest?

The easiest explanation for why the maximum distance one can see is not simply the product of the speed of light and the age of the universe is that the universe is non-static.



Different components (i.e. matter vs. dark energy) have different effects on the expansion of the universe, and their influence can change with time.



A good starting point in all of this is to analyze the Hubble parameter, which gives the expansion rate at any point in the past or the future, given that we can measure what the universe is currently made of:



$$ H(a) = H_{0} \sqrt{\frac{\Omega_{m,0}}{a^{3}} + \frac{\Omega_{\gamma,0}}{a^{4}} + \frac{\Omega_{k,0}}{a^{2}} + \Omega_{\Lambda,0}} $$
where the subscripts $m$, $\gamma$, $k$, and $\Lambda$ on $\Omega$ refer to the density parameters of matter (dark and baryonic), radiation (photons and other relativistic particles), curvature (this only comes into play if the universe globally deviates from being spatially flat; evidence indicates that it is consistent with being flat), and lastly dark energy (which, as you'll notice, remains a constant regardless of how the dynamics of the universe play out). I should also point out that the ",0" subscript notation means "as measured today".



The $a$ in the above Hubble parameter is called the scale factor, which is equal to 1 today and zero at the beginning of the universe. Why do the various components scale differently with $a$? Well, it all depends upon what happens when you increase the size of a box containing the stuff inside. If you have a kilogram of matter inside of a cube 1 meter on a side, and you increase each side to 2 meters, what happens to the density of matter inside of this new cube? It decreases by a factor of 8 (or $2^{3}$). For radiation, you get a similar decrease of $a^{3}$ in number density of particles within it, and also an additional factor of $a$ because of the stretching of its wavelength with the size of the box, giving us $a^{4}$. The density of dark energy remains constant in this same type of thought experiment.



Because different components act differently as the coordinates of the universe change, there are corresponding eras in the universe's history where each component dominates the overall dynamics. It's quite simple to figure out, too. At small scale factor (very early on), the most important component was radiation. The Hubble parameter early on could be very closely approximated by the following expression:



$$H(a) = H_{0} \frac{\sqrt{\Omega_{\gamma,0}}}{a^{2}}$$



At around:



$$ \frac{\Omega_{m,0}}{a^{3}} = \frac{\Omega_{\gamma,0}}{a^{4}} $$
$$ a = \frac{\Omega_{\gamma,0}}{\Omega_{m,0}} $$
we have matter-radiation equality, and from this point onward matter dominates the dynamics of the universe. The same exercise can be done once more for matter and dark energy, where one finds that we are now living in the dark-energy-dominated phase of the universe. One prediction of living in such a phase is an accelerating expansion of the universe, something which has been confirmed (see: 2011 Nobel Prize in Physics).
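Plugging rough round numbers into that equality condition (an assumed $\Omega_{\gamma,0} \approx 9\times10^{-5}$ for radiation including neutrinos, and $\Omega_{m,0} \approx 0.3$; these are illustrative, not precise measurements):

```python
# Scale factor and redshift of matter-radiation equality,
# a_eq = Omega_gamma0 / Omega_m0 (assumed illustrative values below).
Om_gamma = 9e-5   # radiation density parameter today (photons + neutrinos)
Om_m = 0.3        # matter density parameter today

a_eq = Om_gamma / Om_m          # about 3e-4
z_eq = 1.0 / a_eq - 1.0
print(round(z_eq))              # equality at a redshift of a few thousand
```

That is, equality happened when the universe was a few thousand times smaller than today, in broad agreement with the commonly quoted $z_{eq}$ of a few thousand.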



So you see, it is a bit more complicated to find the distance to the cosmological horizon than just multiplying the speed of light by the age of the universe. In fact, if you'd like to find this distance (formally known as the comoving distance to the cosmic horizon), you would have to perform the following integral:



$$ D_{h} = \frac{c}{H_{0}} \int_{0}^{z_{e}} \frac{dz}{\sqrt{\Omega_{m,0}(1+z)^{3} + \Omega_{\Lambda,0}}} $$



where the emission redshift $z_{e}$ is usually taken to be $\sim 1100$, the surface of last scatter. It turns out this is the true horizon we have as observers. Curvature is usually set to zero since our most successful model indicates a flat (or very nearly flat) universe, and radiation is unimportant here since it dominates at a higher redshift. I would also like to point out that this relationship is derived from the Friedmann–Lemaître–Robertson–Walker metric, a metric which includes curvature and expansion. This is something that the Minkowski metric lacks.
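As a sanity check, that integral is easy to evaluate numerically. A sketch with assumed round-number parameters ($H_0 = 70$ km/s/Mpc, $\Omega_{m,0} = 0.3$, $\Omega_{\Lambda,0} = 0.7$, radiation and curvature neglected):

```python
import math

def E(z, Om=0.3, OL=0.7):
    """Dimensionless expansion rate H(z)/H0 for a flat matter + Lambda universe."""
    return math.sqrt(Om * (1.0 + z) ** 3 + OL)

def comoving_distance_mpc(z_e=1100.0, H0=70.0, n=200_000):
    """Trapezoidal evaluation of D_h = (c/H0) * integral_0^z_e dz / E(z), in Mpc."""
    c = 299792.458                       # speed of light, km/s
    dz = z_e / n
    total = 0.5 * (1.0 / E(0.0) + 1.0 / E(z_e))
    total += sum(1.0 / E(i * dz) for i in range(1, n))
    return (c / H0) * total * dz

d_gly = comoving_distance_mpc() * 3.2616e-3  # Mpc -> billions of light years
print(round(d_gly))  # roughly 45
```

This lands near the familiar 45-46 billion light years quoted for the comoving distance to the horizon; the exact value shifts slightly with the assumed parameters.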

Thursday, 19 April 2012

history - Was the progress of astronomy in the 1800s surprisingly slow, and if so why?

I feel this question is so broad, it might well be the topic of an interesting book on the history of astronomy, should someone be inclined to write it. :)



Anyway, I think a few points could be made briefly.



1. Collecting data



In astronomy, that means observing the cosmos. That means using an instrument of some kind, typically a telescope, and gathering information through it. Telescope performance is dictated by many factors, but the most important one is size (or aperture).



Telescope size grew rapidly through the 1600s and 1700s, from Galileo's 1.5 cm refractor in the early 1600s to Herschel's 40-foot reflector, which surpassed the 1-meter aperture around 1800. There was a steady stream of improvements throughout that 200-year period. One could say that the first golden age of the telescope aperture race both culminated and ended with Herschel and his giant telescopes.



Then there was a lull, briefly interrupted by Lord Rosse's 1.83 meter telescope, the Leviathan of Parsonstown, in the mid-1800s. Then nothing again.



The aperture race resumed only in the early 1900s, with the 2.5 meter Hooker reflector on Mt. Wilson. Afterwards, throughout the 20th century and now into the early 21st, the race has been going strong, with the 10.4 meter Gran Telescopio Canarias segmented reflector currently in the lead and the 39 meter E-ELT reflector under construction at Cerro Armazones.



https://en.wikipedia.org/wiki/List_of_largest_optical_telescopes_historically



2. Interpreting data



The year 1900 marks the boundary between classical physics and the new physics. After that year, relativity and quantum mechanics took off. This is what enabled the new cosmology to emerge in the 20th century.



In other words, with the 1800s science, even with tons of data, there would have been no way to figure out, basically, everything. Supernovas? The expansion of the universe? Dark matter and the rotation of galaxies? This is all based on 20th century physics. 19th century physics would have been clueless.



Astronomy used classical physics to derive interpretations from data pretty quickly, and that process had already achieved great success by well into the 1700s. That's when the structure of the solar system was figured out, going back to Kepler in the 1600s. Herschel found Uranus in the late 1700s.



There are some exceptions here. Stellar parallax was detected in the early 1800s, which enabled distance estimates to the nearest stars. In the 1850s, spectroscopy showed that distant stars are made of the same elements as the Earth. Around that same time, Neptune was discovered.



So the 1800s was not quite a completely dry period, in terms of theoretical progress.



In any case, a limit was reached in the late 1800s, because new paradigms in physics were needed to breathe new life into the interpretation process. That boost came after 1900, with relativity and quantum mechanics.



Cosmology is highly dependent on physics (and vice-versa).

Monday, 16 April 2012

the moon - What method is used to calculate the 'quality' of a solar/planetary image?

A little further research, and I found out that the basic principle, from which several variations derive, is to take the sum of the squares of the difference between adjacent pixels, to get a score.



The principle is that a higher quality picture has a higher probability that there will be significant differences in adjacent pixel values, i.e. that there can be a significant variation from pixel to pixel, whereas a lower quality picture will have similar valued adjacent pixels, as they have "blurred" together. Therefore, when taking the square of the difference of adjacent pixels and summing them together, higher quality pictures will show a significantly higher score, or summation.



I'm going to demonstrate this with a simplified example below, but I'm interpreting the meaning of "adjacent pixels" in a particular way, and I am hoping that if I am incorrect, someone will tell me and I can modify this answer appropriately.



Imagine we are imaging an object that exactly fills the frame of our small 16-pixel camera. In the middle of this object is a very dark square, that coincidentally happens to exactly fill the central 4 pixels of our camera. If the seeing for this object is perfect, then the camera will be exposed with a value of 255 for each of the light pixels, and 0 for each of the dark pixels.
[Image: a perfect image of our target object]



This is how the differences between adjacent pixels would be calculated (I'm assuming only horizontal and vertical differences, not diagonal):
[Image: differences between adjacent pixels]



However, if movement of the camera or poor seeing spread the light more between the pixels, the calculated differences between adjacent pixels would change:
[Image: differences between adjacent pixels when the image is blurred]



The score, that is the sum of the squares of the differences, for the first (perfect) image is 520200.



The score for the blurred image is 304200. Therefore the sum of the squares of the differences between adjacent pixels suggests that the first picture is the better one.
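The worked example can be reproduced in a few lines (a sketch using NumPy; the function name is my own):

```python
import numpy as np

def sharpness_score(img):
    """Sum of squared differences between horizontally and vertically
    adjacent pixels; a higher score suggests a sharper frame."""
    img = img.astype(np.int64)   # avoid overflow when squaring 8-bit values
    dx = np.diff(img, axis=1)    # horizontal neighbour differences
    dy = np.diff(img, axis=0)    # vertical neighbour differences
    return int((dx ** 2).sum() + (dy ** 2).sum())

# The 16-pixel example: a dark 2x2 square centred in a bright 4x4 frame.
perfect = np.full((4, 4), 255)
perfect[1:3, 1:3] = 0
print(sharpness_score(perfect))  # 520200, matching the score above
```

All eight light-dark boundary pairs contribute 255 squared, and every other pair contributes zero, which is where the 520200 comes from.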



This strategy isn't perfect, obviously. The presumption is that better images will exhibit a better score due to larger differences between adjacent pixels, but this is an assumption.



The squaring of the difference is intended to provide greater weight to larger differences, but I came across a comment on the PIPP support website that says that PIPP has modified the basic algorithm to give still greater weight to sharper contrast changes over lower contrast changes - so maybe squaring the differences isn't enough sometimes.



Apparently this quality measurement strategy doesn't work when the object is small on the sensor and may be bright or even over-exposed. The above method may then select blurred and smeared frames over frames that are actually better. In this case, a strategy that looks at the histogram of pixel values for each frame and selects the one with the higher peak may produce a better selection of 'best quality' frames. The assumption is that a sharper frame concentrates its pixel values in a smaller range (a higher, narrower histogram peak), rather than spreading the light out across a wider range of values.



An alternative quality measurement strategy simply adds up the values of all the pixels in an image. A higher score will indicate more light got through to the sensor, which implies that less cloud (for instance) got in the way.

universe - What would we find if we could travel 786 trillion light years in any direction

There is a principle that says the universe is homogeneous and isotropic: on large scales it looks the same from any point and in any direction.



This means, in turn, that if you could travel 786 trillion light years instantaneously in some direction, you'd find a very different sky but with exactly the same general structure: stars, galaxies, clusters of galaxies, and attractors.



Although you cannot travel so far so fast, you can do this thought experiment:



Send a spaceship that far at (almost) the speed of light with a camera; it takes a 360º photo there and comes back, arriving 1572 trillion years later.
While awaiting the spaceship's return, wait 786 trillion years, take a 360º photo here, then wait the other 786 trillion years.



Compare the two photos. They were taken at the same "cosmic time": one of them 786 trillion years ago, here, and the other one maybe two or three years of spaceship proper time ago, 786 trillion light years away.



The photos will look the same on a rough scale.

Sunday, 15 April 2012

earth - Does a week represent something in astronomy?

The concept of a month derived from the amount of time it takes for the moon to cycle from new moon to new moon (which is roughly 29.5 days). The modern month has experienced changes from this original concept due to trying to fit a standard number of months within a solar year.



If you divide the lunar month into quarters, each quarter is approximately 7 days long. This is believed to be the origin of the seven-day week.

Saturday, 14 April 2012

light - Why are (Type II) supernovae so bright?

The difference is based on the different efficiency of the processes.
We can describe the luminosity by:



$L = \eta m c^2$



where $\eta$ is the conversion efficiency, which describes how much matter can be converted into luminosity (photons).
Main sequence stars (if that is what you mean by "normal operation") extract energy from matter by nuclear fusion.
The efficiency of nuclear fusion in stars is of the order of $\eta \sim 1\%$.



The other competing mechanism, much more efficient than nuclear fusion, is conversion of gravitational potential energy.
In core-collapse SNe, this is the mechanism that drives the explosion.
Gravitational potential energy is defined as $U = -\frac{GMm}{r}$, where $m$ is the mass of a particle at a distance $r$ from a large object of mass $M$, and $G$ is the gravitational constant.



If you consider the SN scenario, $r$ is the distance of a particle falling from the outer layers toward the center (the center being at $r=0$). The particle goes from $r_1$ to $r_2$ with $r_2 < r_1$, so the system loses part of its potential energy, which is converted to kinetic energy.
All the outer material collapses in a very brief time (of the order of seconds), from the outer shell onto the compressed core, producing further compression and generating the energetic shock of the explosion.



The virial theorem says that half the change in gravitational energy stays with the star, while the other half is emitted as luminosity.
The efficiency coefficient for gravitational conversion is $\eta \sim 10\%$, i.e., 10 times larger than the nuclear fusion efficiency.
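For a rough sense of scale (illustrative values I am assuming here: a $1.4\,M_\odot$ core, roughly $2.8\times10^{30}$ kg, collapsing to a neutron star of radius $\sim 10$ km):

$$ \Delta U \sim \frac{GM^2}{R} \approx \frac{(6.7\times10^{-11})\,(2.8\times10^{30})^2}{10^{4}}\,\mathrm{J} \approx 5\times10^{46}\,\mathrm{J}, $$

which is indeed of order $10\%$ of the core's rest-mass energy $Mc^2 \approx 2.5\times10^{47}$ J, consistent with the quoted $\eta \sim 10\%$.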



From this reference, you can check that the ratio of invisible to visible emission is of the order of $\sim 10^7$ (I don't know what kind of reference that is, but the authors are entirely trustworthy).



Something more here and here.

Wednesday, 11 April 2012

protostar - Accretion of in-falling material for a young main sequence star

I'm reading material that is seemingly contradictory. Some sources indicate that the evolution of a protostar to a main sequence star is characterised by a stellar wind that precludes the accretion of further in-falling material. That is, the (young) star now has a constant mass. However, other sources suggest that material may continue to accrete for a (brief) period after the protostar has become a main sequence star.



Can someone please confirm the actual process?

Tuesday, 10 April 2012

space - If you're near a black hole and your time is slowed down, would a supernova be observable to both you and someone outside of the black hole's pull?

Let's say that Person A is on a planet orbiting a black hole (like the one from Interstellar) and time is slowed for Person A. Person B is back home on Earth. Now let's say a supernova happens and it's possible for both parties to see it happen at the same time (ignoring the time it would take for the light to get there). Would Person A see the supernova at the same time as Person B? And if so, if Person A is in 1,000 B.C. and Person B is in 2,000 A.D., would the supernova then happen at two different times at the same time?

black hole - What are the biggest problems about the numerical, finite-element GR models?

If you can provide examples of numerical methods in GR you've seen/heard of that would help focus the question.



From the article you linked to: "The technique keeps track of a vast number of quarks and gluons by describing the space and time inside a proton with a set of points that make up a 4D lattice". This almost gets to the main issue with Numerical Relativity. There is no natural computational grid on which to simulate space-time. The whole game with GR is that gravity is space-time so first you have to simulate the space-time and then you have to simulate the objects (neutron stars, black holes, gravitational waves) on top.



As the links below go into, it's very difficult to create a consistent computational grid, since the physical space-time you're trying to simulate for a black hole has "funny" things in it like singularities, or an event horizon past which we can't really know what's going on.



I think this article: http://astronomy.com/magazine/2016/02/putting-einstein-to-the-test?page=1



does a good job of summing up the field, and it's quite accessible.



For something more rigorous please see: http://arxiv.org/pdf/1010.5260v2.pdf
That paper gets into some of the math behind the article linked to above.

Sunday, 8 April 2012

galaxy - Are Gamma Ray Bursts of galactic or extragalactic origin?

Shortly after their discovery, astronomers realized there were at least two classes of GRB: short events (<2 seconds) and long events (>2 seconds). The long GRBs are widely believed to be hypernovae, the explosions of very massive stars in very distant galaxies. In fact, they are much farther away than even Paczynski and his followers believed at the time of the debate. Consensus on short GRBs has yet to be reached, though the neutron star-neutron star merger theory, no longer applicable to long GRBs, is still in play for the short ones. Also, some small percentage of short GRBs are almost certainly magnetar flares (such as the famous 5 March 1979 event), but in nearby galaxies.

Saturday, 7 April 2012

geology - Would it be possible to calculate the expected frequency of impact craters of all sizes on Earth

The simple answer to your question is, simply, yes.



All objects in the inner solar system are generally assumed to have been impacted by the same population of impactors, which is mostly composed of asteroids and perhaps up to 10% comets. The outer solar system likely has a much larger percentage of cometary impactors.



Of the five main inner solar system terrestrial bodies, the Moon and Mercury have the best-preserved crater records, aided by the death of volcanism early in their histories and no atmosphere to speak of; craters are therefore best preserved there. Mars is second-best (some atmosphere, localized volcanism over the last few billion years, and water in the first ~billion years). Venus is a mess for cratering (the entire planet was resurfaced ~400-800 million years ago, plus a thick atmosphere prevents craters <3-5 km across from forming), while Earth has the most modified crater record.



With the moon next door, it is our best historical record for what also likely hit Earth.



The factors that will alter the lunar crater population from what would have formed on Earth are primarily gravity, surface type, and atmosphere. Atmosphere will not only screen out the smallest impactors (and hence not make craters), but it will also fragment less competent objects, changing what could have made a single large impact on the moon into something that will more likely make numerous smaller, clustered craters on Earth (from the surviving fragments). A larger surface gravity will tend to decrease the size of the final crater caused by a given impactor, but the dependence is small (difference of around 35% between Earth and the Moon despite the factor of 6 difference in gravity).



Surface type is much less well understood in the cratering community. Earth has a stronger, denser crust than the Moon, but it's also ~70% covered by water. The surface pressure on Venus is like being under 1 km of water on Earth, so by analogy any ocean >1 km deep is going to prevent craters $\lesssim 3$-$5$ km across from forming.



These all combine for the much longer and more complicated answer of, "yes, but it's hard to figure out." There are lots of knobs in the models that we still don't know how to turn correctly, but we can, to first order, use the lunar crater population and rate to estimate the population of craters that should have formed on Earth. At the time of writing, there are 184 confirmed terrestrial impact craters, which is certainly a tiny fraction of a percent of the number that have formed over Earth's history.

Thursday, 5 April 2012

How clearly would I be able to see a galaxy with the naked eye if viewed from a "close" distance?

Well, I didn't vote you down, but it's a strange question in that there are some uncertainties, though it's also not hard to do the math.



For example, which galaxy? They vary quite a bit. And from what angle? 10,000 light years from the side, or from the "top" for lack of a better word, above the brighter galactic core, or 10,000 light years from the tip of a spiral arm?



Andromeda is 10 times brighter than the Milky Way, and galaxies get a good deal larger than Andromeda and a good deal smaller than the Milky Way.



But the short answer to your question is: from 10,000 light years away, a galaxy wouldn't appear that bright, though it would appear quite large.



We can see the Milky Way as a kind of white smudge across the southern sky, but much of that bright smudge is made of stars closer than 10,000 light years, so that's not really apples to apples; much of that smudge is a lot closer to us than your example.



Let's take our Sun from 10,000 light years away; just for fun, say you had 10 billion Suns, each 10,000 light years away. 10 billion suns is a lot, but light decreases with the square of the distance, and there are 63,000 AU in a light year, so each Sun at that distance would be about 1/(397 million billion) as bright as the Sun is from the Earth. With 10 billion of them, that's still about 1/40 million times as bright as the Sun is from the Earth, or roughly 1/100th the brightness of the full moon. That's what 10 billion Suns would look like from that distance. All together, that's still about 10 times brighter than Venus, so if they were close together they'd shine clear as day; but spread out, as a galaxy would be at 10,000 light years away, not so much.
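That chain of arithmetic can be checked in a couple of lines (using the same rough numbers):

```python
AU_PER_LY = 63_000                # astronomical units in one light year (approx.)
d_au = 10_000 * AU_PER_LY         # 10,000 light years expressed in AU
dimming = d_au ** 2               # inverse-square dimming relative to 1 AU
n_suns = 10 ** 10                 # ten billion Suns

print(f"{dimming:.3g}")           # 3.97e+17  ("397 million billion")
print(f"{dimming / n_suns:.3g}")  # 3.97e+07  ("1/40 million")
```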



Now a galaxy is more complicated, because some of the starlight would be blocked by dust, some stars would be much brighter than the Sun and others much dimmer, and a single supernova can outshine an entire galaxy for a couple of weeks. There's no precise answer, but in general a galaxy would look very large but not very bright from 10,000 light years away.



Another, and perhaps better, way to look at this is to consider Andromeda, which is about 10 times the brightness of the Milky Way, and of which we have a pretty clear view; but it's also about 2.5 million light years away from us. If it were, as in your scenario, 250 times closer, it would be 62,500 times brighter and much larger. It would spread out over much of the sky, though, like the Milky Way, we'd see a smudge of brightness, or we might be able to make out the spiral arms depending on the angle.



Andromeda's apparent brightness, or apparent magnitude, is 3.4, and each step on the magnitude scale is about 2.5 times in brightness, so 62,500 times as bright is between 12 and 12.1 magnitudes brighter, or -8.6 to -8.7, which for an object in the sky is fairly bright (yes, more negative numbers = brighter). But it's still quite a bit less bright than the full moon, and at the same time spread out over a much larger area. So if Andromeda were that close, the galactic center would probably be a fair bit more visible than Venus, brighter in total but also more spread out, and much less bright than the full moon.
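The magnitude arithmetic checks out (a quick sketch with the values assumed above):

```python
import math

m_now = 3.4               # Andromeda's current apparent magnitude
closer = 250              # bring it 250 times closer
flux_ratio = closer ** 2  # brightness scales with 1/distance^2: 62,500x

delta_m = 2.5 * math.log10(flux_ratio)  # magnitude change
print(round(delta_m, 1))                # 12.0 magnitudes brighter
print(round(m_now - delta_m, 1))        # -8.6
```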



Short answer - the photos are probably brighter by a healthy amount.



Here's a very cool and loosely related article. http://www.slate.com/blogs/bad_astronomy/2014/01/01/moon_and_andromeda_relative_size_in_the_sky.html

Tuesday, 3 April 2012

gravity - What is the difference between our time and space time?

What is the difference between time and space-time?



Time: a change of position and/or energy of matter.



Space-time: a configuration in which changes of energy and/or position are stored in a space; e.g. an iron (Fe) atom vs. a photon, the Andromeda galaxy vs. the Milky Way, a black hole vs. a star.



How does gravity affect the passage of time?



A space-time set (such as a black hole or a star) can change other space-time sets; in this change it accelerates position and energy (time).



What is the speed of light and how does it relate to time?



The speed of light is a specific space-time set (configuration): a position/energy that has a limit within space.



How do scientists deal with timescales on the order of billions of years if time is not constant for all observers in the universe?



It is an illusion, because the speed of light, or an energy emission such as gamma radiation or infrared (space-time sets), has different configurations and interactions; it is only a reference.



How is time, or for example the age of the universe, actually measured experimentally?



They would have to create a parent space-time set relating the child space-time sets (I'm not sure whether they do this).

Monday, 2 April 2012

Could dark matter particles be unstable?

Dark matter is actually matter. Real matter - we just cannot observe it.



Part of it is actually cold hydrogen, and even cold dust.



Other parts of it are different particles that may decay. But that does not mean the formation of new hydrogen. To form hydrogen you need a proton and an electron to become bound together, and this requires them to meet, which is very difficult in low-density regions like those of extragalactic dark matter.

galaxy - what is the highest throughput astronomy project in pixels?

I was having a discussion about big scientific projects with colleagues, and I postulated that astronomy is still one of the scientific fields with the biggest projects by amount of data handled, after high-energy physics.



I would like to know what is the highest throughput astronomy project as defined by the number of pixels in the images analysed in the project.



Something like "project XYZ" with an estimated "1.2 trillion pixels" worth of pictures taken and analysed over 5 years.

Is Earth's 1g solid surface gravity unusually high for exoplanets?

Ultimately we don't know enough about exoplanets to be sure; for now all our data is skewed toward more massive planets, which are easier to detect using Doppler wobble, or large-diameter planets (almost certainly gas giants), which are easy to detect by their host star dimming when they eclipse it relative to us. More data is coming in every day, and as fantastic as Kepler has been, I think we need to at least hold out for the James Webb to be online before we draw any really hard conclusions from the data.



Without the data, all we can rely on are our theories of planet formation, which are fairly well developed.
Earth is probably denser than an average planet of its size, as a result of colliding with a roughly Mars-sized object (nicknamed Theia) early in its development. Theia's core would have been absorbed into Earth's core, but the outer layers of both were stripped away, creating a ring which would coalesce into our Moon. This would leave Earth with a higher-mass core than a planet forming at its distance would otherwise have.



We can see this in the densities of the terrestrial planets:

Object    | Mean density (g/cm³) | Uncompressed density (g/cm³) | Semi-major axis (AU)
Mercury   | 5.4                  | 5.3                          | 0.39
Venus     | 5.2                  | 4.4                          | 0.72
Earth     | 5.5                  | 4.4                          | 1.0
Mars      | 3.9                  | 3.8                          | 1.5



Credit, Wikipedia



Planets closer to their star are naturally going to have higher densities as a result of mass differentiation; denser material settling to the core of a planet or the center of a solar accretion disk.



Looking at it from the perspective of habitability:
we know that density is positively correlated with surface gravity, so we can expect that Earth has a slightly higher than average surface gravity for a planet in the habitable zone around a star of about one solar mass.
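That correlation follows directly from Newtonian gravity: g = GM/R² with M = (4/3)πρR³ reduces to g = (4/3)πGρR, so at a fixed radius, surface gravity scales linearly with mean density. A quick check with Earth's figures:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho = 5514           # Earth's mean density, kg/m^3
R = 6.371e6          # Earth's mean radius, m

# g = G*M/R^2 with M = (4/3)*pi*rho*R^3 simplifies to:
g = (4/3) * math.pi * G * rho * R
print(f"{g:.2f} m/s^2")   # close to the familiar 9.8
```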



That being said, most stars do not have one solar mass: most of the stars in the universe are red dwarfs, which are much dimmer and lighter than our Sun, and which have a closer, narrower habitable zone. A habitable planet around a red dwarf would probably be smaller and lighter, but denser, due to its lower-mass accretion cloud and closer proximity to its star, respectively.



I think we could expect the majority of exoplanets to be planets similar to Mercury orbiting red dwarfs.
If this is the case, we can expect that Earth has a high surface gravity relative to terrestrial planets (although FAR more massive terrestrial planets of similar diameter exist) and about average gravity when taking all planets into account.

Sunday, 1 April 2012

Mass, Radius, Colour, Size, Type of a Star from the Hipparcos Catalog

The apparent magnitude (in V ~ visual) is in column 42-46. Other magnitudes are BT magnitude (columns 218-223) and VT magnitude (columns 231-236). These are magnitudes recorded with different (colour) filters.
The colour ($B-V$) can be found in columns 246-251, expressed in magnitudes. This is the difference between V (visual) magnitude and B (blue) magnitude also known as the colour index. Blue stars have a low colour index ($B-V<0$) as the star will be brighter in the blue (B) passband (filter), and will therefore have a low B magnitude, while red stars will have a higher $B-V$ colour index.



Mass, Radius, and Temperature are not in the catalogue. Mass can only be directly determined for multiple star systems and requires a lot of observations per system. Radius is determined from stellar models. Temperature can be determined from the spectrum of the star; a crude indication can be obtained from the colour index of the star (see for instance this webpage).
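As an example of such a crude colour-to-temperature estimate, Ballesteros (2012) fitted a simple empirical formula to the $B-V$ index by treating stars as black bodies. A small Python version (the formula is an approximation, good to a few percent for ordinary stars, and breaks down for very reddened or peculiar objects):

```python
def temp_from_bv(bv):
    """Ballesteros (2012) empirical B-V -> temperature relation, in kelvin."""
    return 4600.0 * (1.0 / (0.92 * bv + 1.7) + 1.0 / (0.92 * bv + 0.62))

print(round(temp_from_bv(0.65)))    # Sun-like colour, ~5800 K
print(round(temp_from_bv(1.24)))    # red K giant like Arcturus, ~4200 K
```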



Finally, the type can be found in the star's spectral type, which is in columns 436-447. These spectral types are not always easy to read, but they consist of a spectral class (e.g. O- or F-type stars) and a luminosity class (plus some other information). The luminosity class is given in Roman numerals. Below is the list from the Wikipedia page on stellar classification.



  • 0 or Ia+ (hypergiants or extremely luminous supergiants). Example:
    Cygnus OB2#12 (B3-4Ia+)[12]

  • Ia (luminous supergiants). Example: Eta
    Canis Majoris (B5Ia)[13]

  • Iab (intermediate luminous supergiants).
    Example: Gamma Cygni (F8Iab)[14]

  • Ib (less luminous supergiants).
    Example: Zeta Persei (B1Ib)[15]

  • II bright giants. Example: Beta
    Leporis (G0II)[16]

  • III normal giants. Example: Arcturus (K0III)[17]

  • IV subgiants. Example: Gamma Cassiopeiae (B0.5IVpe)[18]

  • V main-sequence stars (dwarfs), Example: Achernar (B6Vep)[15]

  • sd (prefix) subdwarfs. Example: HD 149382 (sdB5)[19]

  • D (prefix) white
    dwarfs.[nb 2] Example: van Maanen 2 (DZ8)[20]

The Sun's spectral type is G2V, which means that the spectral class is G2 and the luminosity class is V, which means that the Sun is a main-sequence star.
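Splitting the simple cases programmatically is straightforward. Here's a hypothetical helper (the function name and regex are mine, and real Hipparcos entries can be messier, with composite or peculiar types that this sketch won't handle):

```python
import re

# Luminosity classes are alternated longest-first, since Python's re
# picks the first alternative that matches, not the longest one.
_PATTERN = re.compile(r'([OBAFGKM]\d(?:\.\d)?)(IV|I{2,3}|Ia\+|Iab|Ia|Ib|V|0)?')

def parse_spectral_type(spec):
    """Split e.g. 'K2IIIp' into ('K2', 'III'); (None, None) if unrecognised."""
    m = _PATTERN.match(spec)
    if not m:
        return (None, None)
    return (m.group(1), m.group(2))

print(parse_spectral_type("G2V"))       # ('G2', 'V')   - the Sun
print(parse_spectral_type("K2IIIp"))    # ('K2', 'III') - Arcturus
```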



An example:



The following record is for the star Arcturus (α Boötis):



H|       69673|H|14 15 40.35|+19 11 14.2|-0.05|1|G|213.91811403|+19.18726997|*|  88.85|-1093.45|-1999.40|  0.92|  0.42|  0.74|  0.60|  0.52|-0.52| 0.16|-0.13|-0.07| 0.05|-0.01| 0.03|-0.12|-0.03|-0.31|  7| 2.01| 69673| 1.629|0.010| 0.286|0.009|*| 1.239|0.006|G|1.22|0.02|A|*| 0.1114|0.0020|0.015| 70|*| 0.09| 0.14|       |U|2| |14157+1911|H| 1| 2|C| |A|AB|198|  0.255|0.039| 3.33|0.31|S| | |124897|B+19 2777 |          |          |1.22|K2IIIp      |X 


The apparent visual (V) magnitude is $-0.05$. The colour index is $B-V = 1.239$, which means that it is probably a red K-type star with temperature $T \approx 4000\ \textrm{K}$ (from figure 2 on this webpage). The spectral type is K2IIIp, i.e. the spectral class is K2 and the luminosity class is III, a normal giant star.



An indication for the radius can be found from the Hertzsprung-Russell diagram in which radii are added (see Figure here). The luminosity of Arcturus can be calculated as follows:
$$ d = \frac{1}{\varpi}$$
where $d$ is in parsec and $\varpi$ is the parallax in arcseconds. The parallax is in columns 80-86 and is $\varpi = 88.85\ \textrm{mas} = 0.08885''$ for Arcturus. The distance is, therefore, $d=11.25\ \textrm{pc}$. You can then calculate the absolute magnitude ($M_V$):
$$ M_V = m_V - 5(\log{d}-1)$$
where $m_V$ is the apparent visual (V) magnitude. The absolute magnitude of Arcturus is: $M_V=-0.05 - 5(\log{11.25}-1)=-0.31$.
The luminosity is then given by:
$$M_V-M_{\textrm{sun}} = -2.5\log{\left( \frac{L}{L_{\textrm{sun}}}\right)}$$
where the absolute magnitude of the Sun is $M_{\textrm{sun}} = 4.83$ and $L$ is the luminosity of Arcturus. This gives:
$$\log{\left( \frac{L}{L_{\textrm{sun}}}\right)} = 2.056 $$
$$L = 114\ \textrm{L}_{\textrm{sun}}$$
From the Hertzsprung-Russell diagram we get a radius for Arcturus of about $R \approx 10\ \textrm{R}_{\textrm{sun}}$.
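The whole worked example can be reproduced in a few lines of Python, going from the catalogue's parallax (in mas) and apparent V magnitude to a luminosity in solar units:

```python
import math

def luminosity(parallax_mas, m_v, M_sun=4.83):
    """Distance from parallax, then absolute magnitude, then L/Lsun."""
    d = 1000.0 / parallax_mas              # distance in parsec
    M_v = m_v - 5 * (math.log10(d) - 1)    # absolute magnitude
    return 10 ** ((M_sun - M_v) / 2.5)     # luminosity in solar units

# Arcturus: parallax 88.85 mas, apparent V magnitude -0.05
print(f"{luminosity(88.85, -0.05):.0f} L_sun")   # close to the ~114 above
```

The small difference from the hand calculation (which rounded $M_V$ to $-0.31$ partway through) is just rounding.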

Is the universe simply a mass of atoms in 'void'; what does it look like?

No, beyond the universe there does not lie anything - not even space itself. You cannot be outside the universe because there is nothing to be in. The three (or four) dimensions you think of as space do not exist outside of the Universe. According to the Big Bang theory, not only was matter created during the Big Bang, but, most importantly, space itself was created.



From CERN:




There's another important quality of the Big Bang that makes it
unique. While an explosion of a man-made bomb expands through air, the
Big Bang did not expand through anything. That's because there was no
space to expand through at the beginning of time. Rather, physicists
believe the Big Bang created and stretched space itself, expanding the
universe.




That is also why an inflationary phase where space expanded faster than the speed of light was possible. Matter cannot travel faster than the speed of light, but space can expand faster than the speed of light.
So it is possible for two galaxies to 'appear' to have a relative velocity larger than the speed of light because in actual fact the space between those galaxies is expanding.



As to the shape of the Universe: You can only assign a shape to something relative to the space it is in. As there is no space outside the Universe, the answer to your question would be that the Universe has no shape.

How many planetary systems exist in our galaxy?

According to observations by the Kepler space telescope and other ground based observations, it seems that about 5% of the stars in our galaxy have giant gas planets, similar to Jupiter (but often larger). Smaller planets are difficult to detect, but it is estimated that 40% of the stars have small planets orbiting them.



All in all it is said that on average, 1.6 planets are orbiting each star. So almost all stars form a kind of solar system with planets. However, there is still a lot of uncertainty regarding the numbers, since small planets are very, very hard to detect. And other aspects of solar systems, like our asteroid belt, Kuiper belt and Oort cloud are hard to detect even in our own solar system.



But my own opinion is that it is reasonable to assume that we live in a pretty average solar system (because the odds are obviously suggesting this). So I think that almost every star will have a solar system with a couple of planets, some rocky, some gas giants, plus stuff like comets, asteroids and so on.



Source: Wikipedia Extrasolar planets