13 Oct 2009 2:45:50 UTC

Topic 194574


After a long break, I am back to Einstein. I installed BOINC on my new PC and waited about a week just so I could post this topic! This is the only forum I could think of for asking such questions.

My understanding of the expansion of the universe is that space itself is stretching. That brings up a couple of questions.

The most obvious is what is causing it. I can only come up with two possible answers. Perhaps string theory is right with regard to branes colliding. If this is true, then there is a brane hurtling through something towards our brane. Perhaps that is causing our brane to stretch (is our brane = our universe?). My other answer is this. Perhaps the universe is flat, yet the mass of all the objects causes spacetime to warp. Would it be possible that this warping causes the universe to bend upon itself, like a cylinder? As the universe got closer and closer to itself, the rate of expansion would increase until collision.

Here is another question I thought of. As the universe grows, what is being lost? Is there a friction of sorts of spacetime? Could the passing of time (if there is such a thing) have anything to do with this friction? If it does, then time cannot be constant, and time is speeding up with the rate of change in the growth of the universe. Wouldn't it make sense for time and space to be connected this way? The same as matter can be turned into energy. I think it would be elegant, dare I say symmetrical, if space could be converted to time.



## About the Expansion of the Universe


Welcome back!

Yes, that's the latest and greatest interpretation. Started with a bang and it's still going some ~ 14 billion years later.

Your question is right on the bullseye! There is a model which seems to be agreed upon by many cosmologists. It's not necessarily right, it's just what most cosmologists seem to be able to agree upon, given known observations. But it tends to raise as many questions as it answers. As Rocky Kolb says : we don't need a consensus model, we need a correct one. According to this model the current stretch is apparently from two components - a sort of momentum from the big bang ( thus from whatever caused that ) plus this thing they call 'dark energy'. Dark energy really means 'a mechanism of which we know little' and refers to the possible presence of a 'negative pressure' in space. It is also known as the 'cosmological constant' because of how this effect is entered into the accounting within Einstein's GR equations.

Alas this is where it gets squirrelly in everyday terminology. A positive pressure is like blowing up a balloon so that it pushes things apart, a negative one is like a rubber band that pulls things together. The statement is that Einstein's General Relativity predicts a solution for the Universe's expansion such that expansion is caused by a negative pressure. If you find that confusing, you're certainly not alone! Personally I think 'dark energy' is an elaborate way of saying "don't know". An epicycle. I've always blown up balloons by adding in air, not sucking it out - so we have a disconnect in the meanings here. One suspects that either we need a better theory than GR for cosmology and/or we need to find a better way to interpret it.
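The 'accounting' Mike mentions can be written down. In the standard cosmological treatment the acceleration of the scale factor a of the universe obeys ( this is the textbook Friedmann acceleration equation, not anything specific to this thread ):

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}
```

A sufficiently negative pressure ( p < -ρc²/3, even with Λ = 0 ) flips the sign of the right-hand side and the expansion accelerates - which is exactly the 'negative pressure drives expansion' statement that reads so oddly in balloon language.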

The brane business is a way of saying "where is the energy to do all this coming from?", so a brane is a dimensional model of that external influence. We like conservation of energy, you see.

The universe is overall very flat, or barely different from flat. Meaning that as far as observations permit we believe that very large triangles have 180 degrees as the sum of their internal angles ( at least that is one way of expressing the results ). However on 'smaller' scales there's a lot of bumpiness of spacetime due to particular concentrations of matter. Black holes et al.

The thinking is that the dark energy is causing the expansion to accelerate, in a way that makes it impossible for it to bend back and converge in the future. While GR allows universes to do that ( that behaviour is a solution to the equations ) it would seem we are not in one of those. GR doesn't state what a universe must be, only how it evolves. Possibilities abound.

'Dunno' by four times there! Excellent queries. :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter. Blaise Pascal

## I ought add that if one


I ought to add that if one thinks about the dark energy 'explanation' closely one could conclude the following :

- we have repulsive behaviour

- the GR equations say that goes in with a minus sign

- if I call that a negative energy ( as I can because the pressure is negative )

- then I can also say it has an equivalent negative mass

- so I have 'antigravity' ?

To a certain extent we are just playing with words, but the effect is real. The idea of a repulsive aspect to gravity just doesn't sit right with many. But it's not like we can do local experiments to measure its exact value. The proposed actual amount of energy per volume is very, very small, but since space is so big it all adds up to affect the overall behaviour of the universe. Here on Earth the effect upon us personally, the dark energy in your living room so to speak, is a teensy tiny acceleration which is swamped by much larger effects.

[ Note the phrase 'dark energy density' is also used in the sense of what fraction of 'everything' in the universe does it represent - currently said to be about 75% plus or minus a bit depending upon who you ask. That's not the same meaning as an energy per some volume. ]
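To put rough numbers on 'teensy tiny', here is a back-of-envelope sketch. The Hubble constant and the 70% dark energy fraction are round illustrative figures, not precise survey values:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8             # speed of light, m/s
H0 = 70e3 / 3.086e22    # Hubble constant: 70 km/s/Mpc converted to 1/s

# critical density of the universe: rho_c = 3 H0^2 / (8 pi G)
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # ~ 9e-27 kg/m^3

# assume dark energy makes up ~70% of the total energy budget
u_dark = 0.70 * rho_crit * c**2            # energy per volume, J/m^3

print(f"critical density : {rho_crit:.2e} kg/m^3")
print(f"dark energy      : {u_dark:.2e} J/m^3")
```

That works out to a few times 10^-10 joules per cubic metre - a living room's worth is utterly negligible next to, say, the thermal energy of the air inside it.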

So while not knowing what the heck dark energy is as an exact form ( a particle? a wave? a Caesar salad? all those lost ballpoint pens? ) the requirement is that it must be uniformly distributed around the universe. Otherwise it would have contributed to formation of structures ( by being locally concentrated and thus causing local variations - which is the province of that other major unknown : 'dark matter' or unseen attractive mass ).

Cheers, Mike.


## RE: The idea of a repulsive


Many of these topics don't sit right with me. Dark matter, dark energy, mysterious acceleration of space, unknown cause of inflation, singularities (I have said previously that I believe infinities are theoretical). I think these are shots in the dark to answer questions that we have no clue how to answer.

A while back there was a Science Channel show with a guy who said that if the speed of light changes over time, that would explain away the need for inflation. He says the speed of light used to be much faster (slower?). To me, it seems much more plausible that c can be a variable (over great periods of time) than the universe unexpectedly growing astronomically fast. What say you?

I thought this percentage pertained to dark matter, not dark energy?

## RE: Many of these topics


Yep, I hear that! I guess as one tries to explain things that are further and further from our immediate experiences there is the risk of loss of meaning. So the various words we have been using ( 'matter', 'dark', 'energy' .... ) have had traditional meanings which have more-or-less an everyday correspondence. So is this 'dark energy' really the same sort of physical quantity as the 'kinetic energy' of your car, or the 'electrical energy' that heats the water for your coffee? Just because one uses the same units of description ( Joules or whatever ) and places them similarly in the mathematical expressions doesn't make them 'the same' in my view. The 'negative pressure' label is purely based upon where it appears in Einstein's GR equations and for no other reason : specifically within the stress/energy tensor, but you can put it on the other side of the master equation and call it a cosmological constant too. It depends on how one chooses to express the maths and interpret the terms in descriptive physical language. I think that these difficulties are a signal that we need a pretty fundamental re-think of the basic definitions that we use in science.

A terrific thought. It's this sort of thing that is needed - having a crack at the basics. The idea of accelerating expansion actually comes from a fairly tedious chain of logic that seeks to explain the appearances of some fairly remote events. Supernovae that are seen in distant galaxies, which thus have occurred a long, long time ago and we are only seeing now because the universe has been expanding while the light has been traveling towards us from those sources. But the logic is roughly like this : "IF I assume everything that is believed to be valid in science so far THEN there must be this effect which I label as 'dark energy' to account for what is seen".

So the issue then becomes, for the theorist trying to explain, what to keep and what to throw out. If I keep 'science to date' then I have to accept dark energy. If I throw something out of science to date ( like the constancy of the speed of light over time ) then the dark energy may no longer be required.

One can also say if supernovae from long ago don't behave the way we think the more recent ones do ( which we reckon we understand ) then the acceleration disappears too. But then I have to chuck out a raft of astrophysics - the bit that tells me how stars work and why some finally blow up.

Or I can say that the GR framework needs a re-jig. It might be fine for the scales we've studied, where we feel it works well for those observations. But this fudging that we are clearly doing with the equations suggests a broader solution is required ( of which GR then becomes a narrower special case, in a similar vein to how special relativity is a specialisation of GR ).

Hence these dark issues are also the turf for competing areas of study. So when you examine the various arguments from this or that speaker you need to find out what assumptions they are starting from, as much as you need to understand what final deduction they have reached.

It's all pretty rough, and depends on who's talking. Dark energy is generally said to be about three times dark matter. I've seen 70:25:5 for dark-energy : dark-matter : the-rest. So one can pick a source and quote a pie-slicing scheme. :-)

Cheers, Mike.


## RE: A while back there


That was Dr. Michio Kaku on his "SciQ" on the Science Channel on Sundays.

Cosmic_Time

Dr_Michio_Kaku

## RE: That was Dr. Michio


I quite like Michio - when I can understand him! He tilts at windmills. :-)

I think the easiest way to 'escape' the dark energy issue is the earlier/later supernova behaviour question. I overstated before : you don't have to chuck out the astrophysics but you need to modify a fair bit. The early universe had different starting constituents for stars, alot simpler than now, hydrogen/deuterium/helium/lithium and precious little else. But later stars like our Sun form from clouds which have the 'ash' of many heavier elements from fusion reactions within those prior stars. Those early stars didn't have those heavier nuclei, they made them. That's got to change the entire energetics of the stellar evolutionary pathways right through to blowing up.

So whatever calibration(s) we have of distance versus intensity ( intrinsic luminosity if you like ) with supernovae are based on nearer, and thus more recent generations of stars. You see the argument of the cosmic expansion accelerating leans heavily on a belief of what light curves/intensity these most distant ( hence earlier ) supernovae should have. If we're calibrating wrong, the distance estimates to them are wrong, and the rate of change of distance ( ie. expansion ) is out as well.
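The sensitivity to calibration can be seen in the standard distance-modulus relation, m - M = 5 log10( d / 10 pc ). The magnitudes below are made-up illustrative numbers, not real survey data:

```python
import math

def luminosity_distance_pc(m_apparent, M_absolute):
    """Invert the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10 ** ((m_apparent - M_absolute) / 5.0 + 1.0)

# a hypothetical distant supernova observed at apparent magnitude 24.0
m = 24.0

# distance implied by the assumed intrinsic brightness (nearby calibration)
d1 = luminosity_distance_pc(m, -19.3)

# if early-universe progenitors were really half a magnitude dimmer,
# the very same observation implies a noticeably smaller distance
d2 = luminosity_distance_pc(m, -18.8)

print(f"{d1:.3e} pc vs {d2:.3e} pc")
```

A half-magnitude shift in assumed intrinsic brightness moves the inferred distance by about 20% - which is the sort of systematic that the acceleration argument has to rule out.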

So that's essentially no change to the basics of physics. But then you have to try to sort out the astrophysics of those stars which, by definition, we get way fewer photons from on account of their extreme distance. One can distinguish a bright supernova standing out alone in a galaxy far, far away. It's radiating by a factor of a billion above the background stars in the galaxy. But it's hard to catalog the progenitors - as you can't discriminate them individually. It's the progenitors you need to measure and characterise to get their evolution right.

There is also an expectation, on separate grounds, that the really early stars also began their fusion lifetimes with much bigger masses. Like 50 to 100 solar masses, or even more, I think.

Cheers, Mike.


## RE: RE: A while back


The one I saw was by Joao Magueijo. His theory is called vsl.

http://www.weeklyscientist.com/ws/articles/constants.htm

Another interesting article.

http://www.newscientist.com/article/dn6092-speed-of-light-may-have-changed-recently.html

Edit - If light escapes our atmosphere, before it leaves it is traveling less than c. When it leaves the atmosphere, will it accelerate to c? I presume that it does, but how is Newton's (second?) law preserved? Would the light push back on Earth? The thought of a massless particle pushing Earth, wow... Does Newton's law not apply since mass=0?

## RE: Edit-If light escapes


Ah, what is quoted as the speed of light is really measured from some starting point A to some final point B. In 'free space' or 'vacuum' this is the textbook value of ~ 3 x 10^8 meters/second. Basically it goes in a straight line and there's nothing in the road.

If you fire light into some material then its path is not 'straight' - it gets absorbed/re-emitted/bounces about by interacting ( mainly ) with the outer electrons of the atoms within. It's a messy business. So there is a wiggly path, plus some delay between absorption and re-emission to slow the process. So if you fire a hockey puck directly from one goal to the other you'd get a certain speed ( distance / time ), but if it ricocheted off the players/sticks/umpire/walls then the measured distance over time is less - it takes longer to get to the other end. So the speed of light in a material is then always less than in a vacuum.

There's a quantity called the 'refractive index' ( RI ) which describes this. The refractive part comes from its historical origin in the study of light as it passes from one material to a different one - light bends. Stick a pencil in a glass of water and you will see that effect clearly. The index is simply the ratio of the speed of light in vacuum to the speed of light in the substance of interest. This number will be greater than one. So if I have a material with RI = 1.5 then the speed of light in it is 2/3 ( flip 1.5 = 3/2 over ) of that in vacuum. So that would be 2 x 10^8 m/s.
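That flip-the-ratio arithmetic is trivial to check ( RI = 1.5 is roughly ordinary glass, 1.33 is water ):

```python
c_vacuum = 2.998e8   # speed of light in vacuum, m/s

def speed_in_medium(refractive_index):
    """v = c / n : light is slower in any medium with n > 1."""
    return c_vacuum / refractive_index

v_glass = speed_in_medium(1.5)    # ~ 2e8 m/s, i.e. 2/3 of c
v_water = speed_in_medium(1.33)   # ~ 2.25e8 m/s

print(f"glass: {v_glass:.3e} m/s, water: {v_water:.3e} m/s")
```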

The photon leaving the atmosphere would be like a car leaving a congested city road onto the intercity freeway. You can speed up and make tracks in some consistent direction rather than the stop/start/left/right/collision character of city traffic. Actually the space between the atoms in a material is vacuum, so light going from one to the next will go at 3 x 10^8 m/s! It just doesn't get a free run. Light going through is likely to heat the material up, adding energy to the atoms, due to the collisions. Note the frequency/energy of the light as it goes in generally may not be what comes out.

Coming home near sundown recently, with drizzle to the east, we came over a hill and saw this amazing sight: an outstanding rainbow. There was the primary rainbow so bright, and other secondary ones ( the same shape but further out with different color schemes ) that you can get if the conditions are right. We watched in awe for 20 minutes until sundown. I mark that down as a once in a lifetime - like when I saw the light/dark diffraction bands streaking across the landscape during the total solar eclipse I watched in 1975 from a mountaintop!

Newton's Law does apply, though in this case between the light and the atoms of the atmosphere. Light does exert a push/pressure! Einstein's original derivation of E = mc^2 used this property. Not a lot of force per photon in everyday terms, but it's there. This effect has to be taken into account and adjusted for with the mirrors in the LIGO interferometers.

Cheers, Mike.

( edit ) A subtle point, but in some discussions you need to be careful of the distinction between phase velocity ( = c ) of a single photon/frequency and the group velocity ( roughly an average ) of a gaggle of them.


## RE: If you fire light into


So what keeps the light in a straight line?...

Doesn't your description suggest that light only 'appears' to be slowed because it is forced to take an extremely tortuous path through a material? Similar to how it can take something between 10,000 and 170,000 years for light to escape from the centre of our Sun (and then only 8 minutes to get onwards to Earth)!

Does assuming a change in propagation velocity between two distinct media require a light beam to have a "width"? How does a single photon come to be deflected from its straight-line course, or must a "diameter" be assumed for the photon?

Would refraction occur at all if a particle could be "infinitely thin"?

(To be particular (particles) about it all, that is...)

Regards,

Martin

(For a little illumination! :-) )

See new freedom: Mageia Linux

Take a look for yourself: Linux Format

The Future is what We all make IT (GPLv3)

## RE: So what keeps the light


By cheating. Define the path that light takes ( when not interacting with stuff ) to be 'straight'! :-)

In everyday terms the idea is to think of three points, and how they can be viewed in certain ways. If I look directly from the first point to the second point ( so the second point lies exactly behind the first ) and see the more distant third point precisely overlapped also, then I say they all lie along one and the same line. If I see the third point offset ( above/below/left/right ) then they aren't. We do this when we measure things 'by eye' ie. close one eye and position our head to look in a given direction at some objects/features of interest. A rifle scope does much the same. At each end of the optics the junction of the perpendicular lines ( crosshairs ) make the first and second points as above ( there's a crosshair at each end ). Your aiming point on the target becomes the third.

Einstein basically said the shortest distance between two points is the path light takes. Always. However Pythagoras's Theorem and similar derived traditional geometric properties of triangles don't always apply.

- I call a region 'flat' if the sum of a triangle's three interior angles is 180 degrees. Roughly this means the sides are 'straight' in our usual sense. All points along a given side will overlap by a suitable choice of viewing point. That is, a line viewed 'end on' looks like a single point.

- I call a region 'positively curved' if the sum of the interior angles of a triangle is greater than 180 degrees. Any line forming a triangle's side however will still be able to be seen as a single point if I choose the right viewing point. If I could take this triangle and put it in flat space, while preserving angles and lengths, then its sides would have to bow outwards. The sides would not look 'straight' in our everyday sense.

- I call a region 'negatively curved' if the sum of the interior angles of a triangle is less than 180 degrees. Any line forming a triangle's side however will still be able to be seen as a single point if I choose the right viewing point. If I could take this triangle and put it in flat space, while preserving angles and lengths, then its sides would have to bow inwards. The sides would not look 'straight' in our everyday sense.

Note that if I'm in these curved regions myself I won't be 'seeing' the curvature! If I am at one corner of a triangle and look from that point along to another corner - all the points on that side will overlap in my sight! I won't see some curved/bowed shape.
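A concrete instance of the positively-curved case: on a sphere of radius R, Girard's theorem says a triangle's angle sum exceeds 180 degrees by its area divided by R squared. The 'octant' triangle below ( north pole plus two equator points 90 degrees of longitude apart ) is the standard example:

```python
import math

R = 1.0                              # sphere radius (unit sphere)
area = (4 * math.pi * R**2) / 8      # the octant triangle covers 1/8 of the sphere
excess = area / R**2                 # Girard's theorem: excess = area / R^2, in radians
angle_sum = 180.0 + math.degrees(excess)

print(f"angle sum = {angle_sum:.1f} degrees")   # each corner is a right angle
```

The three corners are each 90 degrees, so the sum is 270 degrees - yet to an inhabitant sighting along each side, every side looks perfectly straight.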

Quite right.

Ah, this is a deep one. There are several ways of answering. Try these .....

The classical way, really done properly first by Huygens, was to have the width idea - but of a wavefront. Consider some straight wavefront approaching some ( also straight ) boundary between two materials. To be definite, let's say the medium it is going towards/into has the slower speed ( thus higher refractive index ). The first portion of the wavefront to hit the boundary will cross it and thus slow down, while other portions haven't yet. If I take that part of the wavefront that hasn't yet met the boundary, and project it into the second material, that projection will lie ahead of the actual portion of the wavefront which is travelling slower in that second material. So the portions of the wavefront entering earlier will always lag that line defined by those that enter later.

The effect is to turn the wavefront to become more parallel to the boundary than it was initially. Another way to say that is : if one considers an arrow perpendicular to the wavefront to be its direction of propagation, and the line perpendicular to the boundary to be the 'normal', then the propagation vector turns towards the normal. If the second medium had the faster speed, the portion of the wavefront already in it speeds up, thus travelling ahead of the projection of the portion yet to enter, and the wavefront becomes less aligned with the boundary, or I can say the propagation vector turns away from the normal.

[ Snell's Law expresses the geometry of this, relating ( sines of ) those angles with the normal to the two speeds. ]
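As a sketch of that relation, using textbook refractive indices ( air ~ 1.00, water ~ 1.33 ):

```python
import math

def refraction_angle(theta_incident_deg, n1, n2):
    """Snell's law: n1 sin(theta1) = n2 sin(theta2), angles from the normal.
    Returns None beyond the critical angle (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# entering a slower medium: the ray turns towards the normal
theta_water = refraction_angle(30.0, 1.00, 1.33)   # less than 30 degrees

print(f"30 degrees in air -> {theta_water:.1f} degrees in water")
```

Going the other way ( water to air at a steep enough angle ) the formula has no solution, which is exactly total internal reflection.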

OK ... the second explanation probably comes out of the depths of quantum mechanics, but also has the classical Fermat's Principle of Least Time too. I like Richard Feynman's explanation so I'll shamelessly lift it. Have a point A in the first medium and a point B in the second, thus with the boundary in between. A photon is going to take a straight path from A to some point C on the boundary, then another straight path from C to B. Each path in its respective medium will be straight for a given constant speed, as that's what takes the least time. Now suppose I look at a point near C and consider what is the time of travel from A to this point and then on to B. It turns out that the most probable overall path for the photon is the one that takes the least time ( all nearby paths take longer ) and this path yields angles corresponding to the relationship that is Snell's Law.

This least time principle can be related to The Principle of Least Action, where 'action' is a quantity having units of momentum times distance ( or energy times time ). Planck's constant has these units and is thus an amount of action. What action 'is' no-one really knows, nor why minimising is a key issue. You can express many particle propagation problems in terms of finding for any conceivable path the sum of the action along it, and then find which path out of the whole set of conceivable paths has the lowest value of that sum.
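The least-time argument can be checked numerically: minimise the A-to-boundary-to-B travel time over the crossing point, and the resulting angles satisfy Snell's Law. The geometry and speeds below are arbitrary illustrative values:

```python
import math

v1, v2 = 3.0e8, 2.0e8    # propagation speeds above/below the boundary (illustrative)
a, b, d = 1.0, 1.0, 1.0  # A sits height a above the boundary, B depth b below, d apart

def travel_time(x):
    """Time for the path A -> (crossing point x on the boundary) -> B."""
    return math.hypot(a, x) / v1 + math.hypot(b, d - x) / v2

# ternary search: travel_time is convex in x, so this homes in on the minimum
lo, hi = 0.0, d
for _ in range(200):
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x_min = 0.5 * (lo + hi)

# sines of the angles measured from the normal to the boundary
sin1 = x_min / math.hypot(a, x_min)
sin2 = (d - x_min) / math.hypot(b, d - x_min)

# the least-time path satisfies Snell's Law: sin1 / v1 == sin2 / v2
print(sin1 / v1, sin2 / v2)
```

No refraction law was put in by hand - only "pick the crossing point that minimises the time" - yet Snell's ratio falls out, which is Feynman's point.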

Well, we can't have you in the dark! :-)

Cheers, Mike.
