Here we discuss different science-related topics: chapters and content, innovations in the science world, and other bits of knowledge. Come and see, and let us know how you feel.

Better Batteries | Acquisition of Maxwell Technologies by Tesla

Tesla has grown rapidly over the past decade, since it became the first American automotive company to go public since Ford in 1956. The attraction of Tesla is undeniable. Their cars are slick, their acceleration is insane and, perhaps most importantly, their brand represents a movement towards renewable energy. Tesla has attracted thousands of well-intentioned people who want to play their part in saving the world, but there has been a niggling question on the minds of many EV owners and EV naysayers alike: when is that expensive battery going to need to be replaced, and at what cost?

As existing Teslas begin to age and more exotic and demanding models, like the Tesla Truck and the Roadster 2, come to the fore, these issues are going to become more prominent. These batteries do NOT come cheap, but they are getting cheaper. The cost per kilowatt-hour for Tesla's battery packs, and the market average, has dropped dramatically as technology has advanced and manufacturing volumes have increased. But that storage capacity slowly creeps away as the battery is used, gradually degrading the range of your electric vehicle.



Tesla currently offers all Model 3 owners a warranty that covers the battery for 8 years or 160,000 kilometres, whichever comes first, guaranteeing retention of at least 70% of its capacity under normal use. If it falls below that, they will replace your battery for free. Finding out what is considered normal use is pretty difficult, but going by customer satisfaction reports they seem to be reasonable about it.

It is estimated that Tesla is achieving a cost of about $150 per kWh for its battery packs, so the 50 kWh pack of the base model would cost around $7,500 to replace. They must be pretty confident in those numbers, as a massive recall of the approximately 193 thousand Model 3s currently shipped would ruin Tesla.
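As a quick sanity check on that replacement figure, here is the back-of-the-envelope arithmetic; the $150/kWh number is the estimate quoted above, not an official Tesla price:

```python
# Rough pack replacement cost for a base Model 3, using the estimated
# pack-level cost quoted above (~$150 per kWh). Purely illustrative.
cost_per_kwh_usd = 150
pack_capacity_kwh = 50

replacement_cost = cost_per_kwh_usd * pack_capacity_kwh
print(f"Estimated pack replacement cost: ${replacement_cost:,}")  # -> $7,500
```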



Ultimately these batteries are unlikely to drop below the warranty's guarantee within those 160,000 kilometres, but even so, improving batteries is obviously a wise business decision to retain those customers in future. This is just one of a myriad of factors that influenced Tesla's recent landmark acquisition of Maxwell Technologies for $218 million. A rare acquisition for Tesla, it sets the company up for not just cheaper batteries, but better batteries: lighter, with greater range and a longer life.

It wouldn't be the first time an automotive company underestimated its battery degradation. When the Nissan Leaf debuted in 2010, the battery production it needed simply did not exist, and neither did the technical expertise required to design battery packs. In those days lithium-ion batteries cost about $400 per kWh for laptop-grade cells and up to $1,000 per kWh for ones with the longevity needed for an electric vehicle. To minimise costs, Nissan decided to start producing its own batteries and opted for a small 24 kWh pack, giving the car a range of just over 100 kilometres: suitable for city driving and that's about it.

But customers soon realised that this paltry range was dwindling quickly. Within just 1-2 years of driving, the Leaf's battery capacity was dropping by up to 27.5% under normal use. Despite careful in-house testing, Nissan overlooked some crucial test conditions when developing its battery, and because of this it made some serious design errors.



To learn why this degradation happens, we first need to understand how lithium-ion batteries work. A lithium-ion battery, like all batteries, contains a negative electrode, the anode, and a positive electrode, the cathode, separated by an electrolyte. Batteries power devices by transporting positively charged lithium ions from one electrode to the other, creating an electric potential between the two sides of the battery and forcing electrons to travel through the device being powered to equalise that potential.

Critically, this process is reversible for lithium-ion batteries, as the lithium ions are held loosely, sitting in spaces in the anode's and cathode's crystal structures. This is called intercalation. So, when an opposite electric potential is applied to the battery, it forces the lithium ions back across the electrolyte bridge to lodge themselves in the anode once again.



This process determines a huge amount of the energy storage capability of the battery. Lithium is a fantastic material for batteries: with an atomic number of 3, it is the third lightest element and the lightest of the metals, allowing its ions to provide fantastic energy-to-weight characteristics for any battery. But the energy capacity of the battery is not determined by this; it is determined by how many lithium ions can fit into those spaces in the anode and cathode. For example, the graphite anode requires 6 carbon atoms to store a single lithium ion, forming LiC6. This gives a theoretical maximum capacity of 372 mAh per gram.

Silicon, however, can do better. A single silicon atom can bind 4.4 lithium ions, giving it a theoretical maximum capacity of 4,200 mAh per gram. This seems great and can provide increases in battery capacity, but it also comes with drawbacks: those 4.4 lithium ions lodging themselves into the silicon crystal lattice cause a volume expansion of around 400% when charging from empty to full. This expansion creates stress within the battery that damages the anode material and will eventually destroy its capacity over repeated cycles.
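As a sanity check on those two capacity figures, the theoretical specific capacity follows from Faraday's constant and the host material's molar mass. A minimal sketch, using the lithium-per-atom ratios mentioned above:

```python
# Theoretical specific capacity (mAh/g) = x * F / (3.6 * M_host), where x is the
# number of lithium ions stored per host atom, F is Faraday's constant (C/mol),
# and M_host is the molar mass of the host material (g/mol).
F = 96485.0  # Faraday's constant, C/mol

def specific_capacity_mah_per_g(li_per_host_atom: float, molar_mass_g_mol: float) -> float:
    return li_per_host_atom * F / (3.6 * molar_mass_g_mol)

graphite = specific_capacity_mah_per_g(1 / 6, 12.011)  # LiC6: one Li per six carbons
silicon = specific_capacity_mah_per_g(4.4, 28.086)     # ~4.4 Li per silicon atom

print(f"Graphite: ~{graphite:.0f} mAh/g")  # ~372 mAh/g
print(f"Silicon:  ~{silicon:.0f} mAh/g")   # ~4,200 mAh/g
```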



Battery designers are constantly looking for ways to maximise the energy density of their batteries without sacrificing longevity. So what exactly is being damaged in these batteries that causes them to slowly wither away?

When researchers began investigating what caused the Nissan Leaf's rapid battery degradation, they started by opening the battery and unrolling its contents. They found that the electrode coatings had become coarse over their life; clearly, a non-reversible reaction was occurring within the cell, but this change was expected. In fact, the chemical process that causes it is vital to the operation of the battery. When a battery is charged for the very first time, a chemical reaction occurs at the electrolyte-electrode interface, where electrons and ions combine. This causes the formation of a new layer between the electrode and electrolyte called the solid electrolyte interphase.



The name is exactly what it suggests: a layer formed by the liquid electrolyte reacting with electrons to form a solid. Thankfully, this layer is permeable to ions but not to electrons, so it initially forms a protective coating over the electrode that allows ions to enter and insert themselves via intercalation while preventing further reaction with the electrolyte. At least, that's the idea under normal conditions. The problem is that, under certain conditions, this layer can grow well beyond a thin protective coating and permanently lock away the lithium that provides the battery with its energy storage.

This process is not entirely well understood, but we can identify some factors that increase the rate of this formation. The expansion of the silicon electrode we mentioned earlier fractures the SEI layer, exposing fresh electrode material to react with the electrolyte. Charging rate and temperature can also accelerate the thickening of this layer.



NASA performed their own in-depth study of this effect and released a report in 2008 titled “Guidelines on Lithium-ion Battery Use in Space Applications”, sharing their findings. The temperature at which the battery is charged and discharged plays a massive role in its performance.

Lowering the temperature lowers chemical activity, but this is a double-edged sword. Lower chemical activity hurts the battery's ability to deliver its stored energy, which is why batteries have lower ranges in cold countries, but it also decreases the formation rate of that SEI layer.



This is one of the reasons that the Nissan Leaf’s battery lost a huge amount of capacity over just 2 years in many countries. Nissan performed most of its testing in stable laboratory conditions, not over a range of possible temperatures. Because of this, they failed to realise the disastrous effect temperature would have on the life of the battery and failed to include a thermal management system, which is commonplace in any Tesla.

This, of course, reduces the energy density of the battery pack. The tubing, the glycol needed to exchange heat, and the heat pumps and valves needed to make a thermal management system not only add weight but also draw energy away from the battery to operate. Yet the system plays a vital part in maintaining the performance of the battery. Nissan's decision not to include a thermal management system, even in the 2019 version, makes the Leaf a poor choice for anyone living in anything but a temperate climate.



Of course, simply cycling the battery through its charged and discharged states is one of the biggest factors in degrading it. Every time you cycle the battery you give the SEI layer an opportunity to grow. Minimising the number of times a cell is cycled will increase its life, and maintaining an ideal charge and discharge voltage of about 4 volts minimises any resistive heating that might cause an increase in chemical activity.

This is where Maxwell Technologies comes into play. Maxwell has two primary technologies that Tesla will be taking advantage of. The first is what Maxwell is best known for: its ultracapacitors. Ultracapacitors serve the same fundamental job as batteries, storing energy, but they function in an entirely different way and are used for entirely different purposes. The fundamental difference between a capacitor and a battery is that a battery stores energy through chemical reactions; as we saw for lithium-ion batteries earlier, this is done through insertion into the crystal lattice. Capacitors instead store their energy through ions clinging onto the surface of the electrode.



On each side of a standard ultracapacitor there is an aluminium current collector coated with a thin graphite electrode, separated by an electrolyte and an insulating separator that prevents the passage of electrons. In the uncharged state, ions float freely in the electrolyte. When a voltage is applied during charging, ions drift towards the electrode of opposite charge and cling to its surface, holding the charge in place. When a device is then connected to the capacitor, this charge can quickly leave while the ions drift back into the electrolyte.

The key limiting factor for ultracapacitors is the surface area available for this to happen and nanotechnology has allowed for amazing advances in the field. The inside of an ultracapacitor contains hundreds of layers of these electrode pairs. But even with this enormous surface area, ultracapacitors simply can't compete with batteries when it comes to energy density. Even Maxwell’s best ultracapacitors have an energy density of just 7.4 Wh/kg while the best guess for Tesla’s current energy density is about 250 Wh/kg. 
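To put that energy density gap in perspective, here is a rough comparison of the mass each technology would need to store the 50 kWh of the base Model 3 pack mentioned earlier; the 7.4 Wh/kg and ~250 Wh/kg figures come from the article, the rest is arithmetic:

```python
# Mass required to store a fixed amount of energy at a given energy density.
pack_energy_wh = 50_000  # 50 kWh, the base Model 3 pack size from earlier

def mass_kg(energy_wh: float, energy_density_wh_per_kg: float) -> float:
    return energy_wh / energy_density_wh_per_kg

ultracap_mass = mass_kg(pack_energy_wh, 7.4)   # Maxwell's best ultracapacitors
battery_mass = mass_kg(pack_energy_wh, 250.0)  # estimated Tesla cell-level density

print(f"Ultracapacitors: ~{ultracap_mass:,.0f} kg")  # ~6,760 kg
print(f"Li-ion cells:    ~{battery_mass:,.0f} kg")   # ~200 kg
```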



At this point in time, ultracapacitors are not intended to be a replacement for batteries. They are intended to work in conjunction with batteries. Ultracapacitors' strength is their ability to charge and discharge quickly without being worn down. This makes them a great buffer to place between the motors and the battery. Their high discharge rate will allow them to deliver surges of electricity to the motors when rapid acceleration is needed, and to charge quickly when braking, saving the battery from unnecessary cycles and boosting its ability to quickly provide current when needed for acceleration.

This is going to be a massively important technology for two upcoming Tesla vehicles. The first is the Tesla Roadster, which will boast 0-60 acceleration in just 1.9 seconds, a discharge rate a normal battery would struggle to achieve without damaging itself. The second is the Tesla Truck, which is going to be limited in its range and cargo-hauling ability as a result of the heavy batteries it will need, since trucks are limited to about 40 metric tonnes in most countries. Ultracapacitor technology will significantly boost its ability to regain energy from braking, allowing its battery capacity to decrease and in turn letting the truck swap batteries for cargo.
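To get a feel for how much energy such a braking buffer would need to absorb, here is a rough sketch; the 40-tonne limit comes from the article, while the braking speed and recovery efficiency are illustrative assumptions:

```python
# Kinetic energy available when a fully laden truck brakes from highway speed.
truck_mass_kg = 40_000     # ~40 metric tonnes, the legal limit mentioned above
speed_m_s = 90 / 3.6       # assume braking from 90 km/h (illustrative)
recovery_efficiency = 0.6  # assumed fraction of that energy actually recaptured

kinetic_energy_j = 0.5 * truck_mass_kg * speed_m_s ** 2
recovered_kwh = kinetic_energy_j * recovery_efficiency / 3.6e6  # joules -> kWh

print(f"Kinetic energy at 90 km/h: {kinetic_energy_j / 1e6:.1f} MJ")  # ~12.5 MJ
print(f"Recovered per stop:        ~{recovered_kwh:.1f} kWh")         # ~2.1 kWh
```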



The second technology Maxwell has been touting as its next big breakthrough is dry-coated batteries. This is a manufacturing advancement that Maxwell claims will reduce the cost of manufacturing, a factor Tesla has been working fervently to minimise with the growth of the Gigafactory. So, what are dry-coated batteries?

Currently, in order to coat their current collectors with the electrode material, Tesla, using Panasonic's patented technology, must first dissolve the electrode material in a solvent, which is then spread over the current collector; both are then passed through an oven for drying, where the solvent evaporates, leaving just the electrode material behind. This adds to the cost of the manufacturing procedure, as the solvent is lost in the process and the baking takes energy. On top of this, the solvent is toxic, so removing it from the process would benefit the environment.



Maxwell instead uses a binding agent and a conductive agent, which I assume work similarly to electrostatic painting, where the metal being painted is given a negative charge while the paint is given a positive charge as it is sprayed, attracting it to the metal, where it clings. That painting process likewise eliminates the solvents needed in the paint.

In a paper published by Maxwell Technologies, they detail how their dry-coating manufacturing technique could result in higher energy storage capacity in the electrodes, thanks to a denser and thicker coating, with a potential increase in energy density to 300 watt-hours per kilogram, 20% up from our best estimates of Tesla's current cells. Only time will tell whether this claim can be realised at an industrial scale. Perhaps more importantly for Tesla, they now own this manufacturing technique.
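A quick look at what that density bump would mean in practice, taking the article's ~250 Wh/kg estimate for today's cells and Maxwell's claimed 300 Wh/kg; the 100 kWh pack size is just an illustrative example:

```python
# Cell mass needed for a 100 kWh pack at two different energy densities.
pack_energy_wh = 100_000

current_wh_per_kg = 250.0     # estimate of Tesla's current cells (from the article)
dry_coated_wh_per_kg = 300.0  # Maxwell's claimed dry-coated figure

current_mass = pack_energy_wh / current_wh_per_kg        # 400 kg of cells
dry_coated_mass = pack_energy_wh / dry_coated_wh_per_kg  # ~333 kg of cells

saving = 1 - dry_coated_mass / current_mass
print(f"Today's cells:    {current_mass:.0f} kg")
print(f"Dry-coated cells: {dry_coated_mass:.0f} kg ({saving:.0%} lighter)")
```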



Currently, Panasonic owns the cell manufacturing process for Tesla; there is literally a line of demarcation in the Gigafactory separating Panasonic and Tesla, denoting the point at which ownership of the batteries changes hands. Having to buy its batteries from Panasonic adds cost that Tesla will want to avoid in future, and this step could allow for full vertical integration of its battery manufacturing, thereby making electric vehicles more affordable to the everyday consumer.





Why It Is So Hard To Predict An Earthquake

San Francisco has been hit by a big earthquake at least once every hundred years going back as far as we know. So the people of San Francisco know that sometime in the next 100 years, they are likely to get shaken again by a big quake. But we can't say exactly when the quake might hit. Right now, all we can do is construct shake-proof buildings and put out seismic sensors. That way, when an earthquake sends out underground waves, which travel from its epicentre eight times faster than the destructive surface waves, we can detect the underground waves with enough time to give a warning like: “Uh oh! An earthquake is about to hit us!”... which is, surprisingly, enough time to turn off gas pipelines and stop trains and find cover.
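As a rough sketch of why even a short head start is possible, here is the warning-time arithmetic; the eight-to-one speed ratio is the one quoted above, and the absolute wave speed is an illustrative assumption:

```python
# Warning time = arrival of the slow destructive surface waves
#              - arrival of the fast underground waves.
# The speeds are illustrative; only the 8:1 ratio comes from the text above.
FAST_WAVE_KM_S = 8.0                  # assumed speed of the fast underground waves
SLOW_WAVE_KM_S = FAST_WAVE_KM_S / 8   # surface waves, eight times slower per the article

def warning_time_s(distance_km: float) -> float:
    return distance_km / SLOW_WAVE_KM_S - distance_km / FAST_WAVE_KM_S

for d in (20, 50, 100):  # distance from the epicentre in kilometres
    print(f"{d:3d} km from the epicentre: ~{warning_time_s(d):.0f} s of warning")
```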

But it doesn’t help people get out of town. For people to evacuate safely from natural disasters, it’s not helpful to give a really short warning or a really big window within which a disaster might happen. According to disaster experts, 2 days is just right. But if we want to be able to predict earthquakes with that amount of precision, we need to understand a LOT more about how they work.



We have tried looking backwards at quakes that have already happened and identifying events that occurred in the days before they hit, like multiple mini-quakes, big releases of radon gas, changes in magnetism, and even weird animal behaviour, to see if any of these were predictors of an impending quake. But lots of times these things happen without accompanying earthquakes, and lots of times earthquakes happen without these things, so, so far, we have not been able to find any reliable predictors.

Another approach is to build an accurate model of the earth beneath our feet. We know that over time, as tectonic plates rub against each other, the stress that builds up is sometimes violently released as an earthquake. If we had a really good model and reliable measurements of the forces on the plates, maybe then we could predict when and where an earthquake was going to happen. But plates are often more than 15 miles thick. That's about twice as deep as humans have ever dug, so it would be pretty difficult to get monitoring equipment deep enough. So, we are creating mini-faults in the lab, to better understand the forces on moving plates and to help identify reliable ways to measure those forces from the surface of the earth.



But in order to test our models, we need to be able to compare them to actual gigantic earthquakes, which, as we mentioned, do not happen that often. Luckily for researchers, a few ocean faults are more productive and frequently cause large but relatively harmless quakes, giving us a regular way to calibrate and fine-tune our models.

One big thing they have helped us learn is that the interactions between fault segments are really important: for example, when one segment slips, it increases the chances that its neighbour will slip, letting us predict where the next quake will happen. In some faults, we can even say that it will happen within a couple of years. Compared to a hundred-year window, that's really precise, but there are still two big problems.



First, these ocean faults are relatively simple, so we still have to figure out how to apply what we have learned from them to more complicated faults, like the ones near San Francisco. And second, even if we could do that, we would still be a long way away from the ideal two-day notice. And unfortunately, our existing methods probably are not going to help us get there. What we need is an earth-shattering breakthrough.

Thanks to Matt Wei, a professor in URI’s Graduate School of Oceanography. Dr. Wei uses seismic data and simulations to study the physics of plate tectonics and earthquakes. His research on fast-spreading oceanic transform faults - like the Discovery fault in the East Pacific - has helped us start to understand the importance of earthquake cycles as we work to crack the code of earthquake physics.

Also Read: Are we Ready To predict another Carrington like event Accurately?



Internet

Today about 4.2 billion people have access to a world of information never before seen. Such an extraordinary level of connectedness has revolutionized everything from science and technology to commerce and romance, and virtually every aspect of our lives. 

Of all the technological innovations in history, few have made as strong an impact as the internet. Comprising a global network of computers, the internet allows for the transmission of information and connectivity at an unprecedented speed and scale. Some of the first computer networks began in the 1950s and 60s, but unlike today's global network, these early networks were centralized within certain businesses and agencies. It wasn't until 1969 that these centralized computer networks first became connected to one another.


Funded by the US Department of Defense and developed by universities, this host-to-host network connection was called ARPANET. A direct ancestor of the internet, ARPANET was the first of its kind. The network grew, and by the 1980s it incorporated networks at research institutions and other US federal agencies, such as the National Science Foundation, or NSF. The NSF connected these disparate networks into one large one, NSFNET, which shifted over from being a federally run network to a commercial enterprise for internet service providers.

By the late 1990s, this shift, along with the rise of personal computers, the World Wide Web and web browsers, allowed the general public to access the internet for the very first time. Today, computers, smartphones, televisions, video game consoles and other devices all tap into the network and transmit and receive data almost instantly.


When you click send in a messaging app, text, audio and video are converted into pieces of electronic data called packets. These packets are then tagged with a port number and an IP address, much like the mailing address on an envelope. The port number and IP address direct the packets to a given destination on the internet. From there the packets may travel over Wi-Fi, cellular data, or an Ethernet or phone line, through a series of routers, modems and servers, then through fibre-optic cables or satellites, and finally through a similar process in reverse to reach the packets' destination. Once the packets arrive, their data is reassembled into the text, audio or video that was originally sent.
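As a minimal illustration of the "address on an envelope" idea, here is a small sketch that hands a message to the operating system along with a destination IP address and port; the address is a reserved documentation address and the port number is arbitrary, so this is illustrative rather than a real service:

```python
import socket

# Tag a small payload with a destination IP address and port number, the
# "mailing address on the envelope" described above, and send it over UDP.
DESTINATION = ("192.0.2.10", 9999)  # placeholder documentation address, arbitrary port

message = "hello over the internet".encode("utf-8")  # text converted into bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP socket
sock.sendto(message, DESTINATION)  # the OS wraps this into addressed packets
sock.close()
```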

Since the days of the earliest computer networks, the internet has undergone a tremendous transformation, while also transforming the world that created it. From a closed off network to one that covers the globe, the Internet has provided access to information to every continent connecting people and ideas like never before.




Mystery Around Hypatia Stone

From snazzy banded agates to volcanic elephants, there are some pretty weird rocks out there. But the weirdest ones geologists find might be those that fall from space. One of them, called the Hypatia stone, might be the strangest of them all. In fact, all signs currently suggest that this rock's origin story is older than the solar system itself, and if it is not, well, we are going to have to rethink what we know about our cosmic neighbourhood.

The Hypatia stone was found in 1996 by a geologist in the southwest Egyptian Sahara. It is named after Hypatia of Alexandria, the first female astronomer and mathematician to make the history books.



The stone was discovered in fragments no bigger than a centimetre across, and in total the pieces added up to a volume of only about 20 cubic centimetres. Technically, it isn't considered a meteorite, because the Meteoritical Society requires 20% of a rock's original mass to be present to earn that title, and scientists have chipped this thing apart and sent pieces to so many labs that it no longer fits the bill.

But it's definitely from space. When its extraterrestrial origins were confirmed in 2013, scientists assumed it was the very first comet nucleus, the rocky central part of a comet, to be found on Earth. But its story is a bit more complicated and interesting.

First, exactly when the stone struck our planet is hard to pin down. It was found in an area of the Sahara which is full of these special rocks called Libyan Desert Glass, which are believed to have been created by a meteorite impact 28 million years ago. But the relationship between the Hypatia stone and this glass is far from certain. We are also not positive how big this rock was when it initially entered Earth’s atmosphere.



Based on its amount of a certain type of neon, we think it could not have been more than several meters in diameter. Or if it were bigger, the Hypatia stone itself had to have come from the upper few meters. These basic details are important to figure out, but what’s really strange about the Hypatia stone is what researchers discovered once they started analysing its composition. Because from what we can tell, Hypatia’s chemical makeup isn’t just out of this world.

It’s out of the entire solar system! See, everything in our neighbourhood formed out of the same cloud of dust and gas. And since astronomers believe that the cloud was relatively homogeneous, the rocky bits that formed should all have roughly the same chemical makeup.



But in 2015, scientists revealed that the Hypatia stone is different. It has a composition unlike any other solar system object studied in a lab. For example, its amount of the isotope nitrogen-15, a type of nitrogen with an extra neutron, was way off from what you would expect of a standard comet.

Astronomers also found a type of xenon that’s created when one iodine isotope, one that predates the solar system, undergoes radioactive decay. So something about this thing totally isn’t normal. And in 2018, we got an even deeper analysis.

In February 2018, a team of astronomers announced that they had identified two separate, yet intermingled, matrices in the Hypatia stone, kinda like finding two different batters in the same cake. The matrices themselves had to have formed when the solar system did, because Hypatia needed a cloud of dense interstellar dust to form. But they have the opposite balance of carbon and silicon to common meteorites.



The ones we normally see, called chondritic meteorites, are low in carbon and high in silicon, but Hypatia has lots of carbon and basically no silicon. So again, not normal. But what was even more surprising about this analysis is that one of those matrices was also chock-full of deposits or inclusions. And each of them likely existed before the entire solar system! This includes moissanite grains, which are commonly a small part of some meteorites but are considered to be mostly pre-solar.

They also found a nugget of pure metallic aluminium in Hypatia, which is super rare in solar system rocks. And there were also a lot of these organic molecules called polycyclic aromatic hydrocarbons or PAHs, which are a big part of interstellar dust. PAHs are also inside certain comets and asteroids, so finding them in the Hypatia stone wasn’t unusual, but the abundance of them was. Conveniently, these PAHs were also a big reason we are able to study the stone today.



Many of them were turned into a crust of tiny diamonds, likely when Hypatia crashed into the Earth, and they protected and preserved the inside of the rock for millions of years. But that doesn’t explain where they came from. And there were other compounds found that haven’t been observed in any studied space rock, too. So the Hypatia stone is still completely unique.

At least as far as we know. Although it’s a pretty compelling case, we will still need further analysis of certain isotopes before we can definitively say that parts of this rock existed before the Sun. But the exciting news is, the authors of that 2018 paper hope to get that research out ASAP.

So, even if Hypatia turns out not to be pre-solar, that might be even weirder. That would imply that the early solar system wasn’t homogeneous after all, despite the generally accepted view. So we would have to change the way we think about our neighbourhood’s history.



Based on what we know so far, astronomers can at least tell that the stone had to have formed in a super cold environment, one below about -200°C. So if it is from around here after all, that likely means Hypatia had to have formed out in the Kuiper Belt where Pluto lives, or even farther away, like in the distant, mysterious Oort cloud.

We don't actually know a lot about the composition of all the bodies that far out there, so it could totally turn out that there are other Hypatia-like space rocks. Mostly, all this means is that we just have to keep looking. But no matter what, the answer to this mystery is going to be a cool one.

Also Read: Pulsed Plasma Thrusters



Pulsed Plasma Thrusters

While travelling in space, one of the hardest things to do is to stop or change direction. Without anything to push against or friction to slow things down, spacecraft need to do all the hard work of changing their speed or path with their thrusters. And sometimes they do that in ways you would never expect: like by vaporizing Teflon. They are called pulsed plasma thrusters, and they can use the same stuff that's on your frying pan to make spacecraft zoom around the universe. And they have been doing it since the 1960s.

To make basically any move in space, satellites rely on Isaac Newton’s famous Third Law of Motion, which is probably on a poster in every high school physics classroom: For every action, there is an equal and opposite reaction. Put another way: throw stuff backwards and you will go forward. In fact, you can boil down every rocket design, no matter how complicated, to this basic idea. When thinking of a rocket, you might normally imagine the chemical propulsion. That’s the “fire-coming-out-the-end” kind, which uses a controlled explosion to hurl material out the back of the rocket. 
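The "throw stuff backwards" idea is just conservation of momentum, and a tiny sketch makes the trade-off concrete; all the numbers here are illustrative, not from any particular mission:

```python
# Conservation of momentum for one idealized "throw":
#   exhaust_mass * exhaust_velocity = spacecraft_mass * delta_v
spacecraft_mass_kg = 500.0     # illustrative small satellite
exhaust_mass_kg = 0.010        # 10 grams of propellant thrown backwards
exhaust_velocity_m_s = 15_000  # illustrative exhaust speed for an EM thruster

delta_v = exhaust_mass_kg * exhaust_velocity_m_s / spacecraft_mass_kg
print(f"Speed change from this throw: {delta_v:.2f} m/s")  # 0.30 m/s
```

The faster you can throw the propellant, the less of it you need for the same change in speed, which is why electromagnetic thrusters can be so fuel-efficient.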



But once in space, another kind, electromagnetic or EM propulsion, also becomes available. They are not strong enough to get rockets off the ground, but they are great once you are past most of Earth’s atmosphere. These rockets work kind of like railguns, accelerating charged particles or ions, out the back with electric or magnetic fields. Today, we have all kinds of EM thrusters, but pulsed plasma thrusters, or PPTs, were the first ones ever flown in space.

They were used in 1964 on the Soviet Zond 2 mission to Mars. Like some other engines, PPTs specifically use plasma to generate thrust, instead of a random collection of ions. Plasma is a super hot substance made of charged ions and it is the fourth state of matter. In some ways, it behaves kind of like gas, because its atoms are pretty spread out. But unlike the other states of matter, plasmas can be shaped and directed by electric and magnetic fields.



To generate their plasma, PPTs eat Teflon! Which is pretty awesome. A pulsed plasma thruster places a block of polytetrafluoroethylene, what we know as Teflon, between a pair of metal plates. Then, connected wires charge up those plates with electricity until, set off by a spark plug, the charge arcs across the Teflon block. That arc delivers thousands of volts into the block, vaporizing the nearby Teflon and ionizing it into a plasma. The sudden burst of plasma effectively creates a circuit connecting the metal plates, which allows electricity to flow as if it were travelling through a wire.

One neat side effect of flowing electricity is that it generates a magnetic field. And everything in the thruster is already arranged so that this field pushes the plasma out into space. At this point Newton’s third law springs into action, pushing the spacecraft in the opposite direction of the departing particles.



Well, this kind of thruster produces only the tiniest bit of thrust. A pulsed plasma thruster deployed by NASA in 2000 produced an amount of force equal to the weight of a single Post-it note sitting on your hand. That might not seem very exciting, but it has some big implications. Like other forms of electromagnetic propulsion, these engines require a lot of electricity to run, but in exchange, they offer incredible efficiency with their fuel.
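To put "the weight of a Post-it note" into numbers, here is a quick conversion; the note's mass is an assumption, so treat the result as an order-of-magnitude figure:

```python
# Thrust comparable to the weight of a Post-it note resting in your hand.
postit_mass_kg = 0.001  # assume roughly 1 gram for a single Post-it note
g = 9.81                # gravitational acceleration, m/s^2

thrust_n = postit_mass_kg * g
print(f"Thrust: ~{thrust_n * 1000:.0f} mN")  # on the order of millinewtons
```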

Pulsed plasma thrusters can produce up to five times more impulse, or change in momentum, for every gram of fuel than a typical chemical rocket. They do it very, very slowly, but they get the job done. PPTs also offer exceptional simplicity and safety. The only "moving part" is a spring that constantly pushes the Teflon block forward, and without the need to store pressurized liquid or gas fuel, there is no chance of explosion. So it makes sense that pulsed plasma thrusters were so useful back in the 1960s. Since then, their lack of power has meant that most spacecraft main engines have remained chemical. And when missions really need some kind of EM drive, like the Dawn mission to the asteroid belt, they tend to choose more sophisticated designs. But that doesn't mean we are done with these thrusters just yet.


Recently, their extreme simplicity has made them a natural fit for one of the most up-and-coming fields of exploration: CubeSats. CubeSats are tiny, shoebox-sized satellites designed for simple missions and built on the smallest of budgets, often by research labs or universities. Earth-orbiting CubeSats seem almost tailor-made for the strengths of pulsed plasma thrusters. Lots of sunlight gives them ample electric power, but since they are so small, space and weight are at an absolute premium. And right now, most CubeSats don't have any kind of propulsion system of their own.

So one solution is micro pulsed plasma thrusters, which can weigh just a few hundred grams and measure under 10 centimetres on a side. That might not sound like much, but even a tiny amount of thrust could double the useful life of some kinds of CubeSats. They will likely need to undergo more testing and development before they are ready for primetime, but someday, we could have a whole fleet of Teflon-eating satellites.

Also Read: Let's Understand Black Hole



Let's Understand Black Hole

Black holes are among the most fascinating objects in our universe, and also the most mysterious. A black hole is a region in space where the force of gravity is so strong that not even light, the fastest known entity in our universe, can escape.

The boundary of a black hole is called the event horizon, a point of no return beyond which we truly cannot see. When something crosses the event horizon, it collapses into the black hole's singularity, an infinitely small, infinitely dense point where space-time and the laws of physics as we know them no longer apply.


Scientists have theorized several different types of black holes, with stellar and supermassive black holes being the most common. Stellar black holes form when massive stars die and collapse. They are roughly 10 to 20 times the mass of our Sun and are scattered throughout the universe; there could be millions of these stellar black holes in the Milky Way alone.

Supermassive black holes are giants by comparison, measuring millions, even billions, of times more massive than our Sun. Scientists can only guess how they form, but we do know they exist at the centre of just about every large galaxy, including our own. Sagittarius A*, the supermassive black hole at the centre of the Milky Way, has a mass of roughly 4 million Suns and an event horizon roughly 24 million kilometres across, about a sixth of the distance between the Earth and our Sun.


Because black holes are invisible, the only way for scientists to detect and study them is to observe their effect on nearby matter. This includes accretion disks, the disks of particles that form when gas and dust fall toward a black hole, and quasars, jets of particles that blast out of supermassive black holes.

Black holes remained largely unknown until the 20th century. In 1916, using Einstein's general theory of relativity, a German physicist named Karl Schwarzschild calculated that any mass could become a black hole if it were compressed tightly enough. But it wasn't until 1971 that theory became reality, when astronomers studying the constellation Cygnus discovered the first black hole.
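Schwarzschild's result boils down to a single formula, the Schwarzschild radius: squeeze a mass inside that radius and it becomes a black hole. A minimal sketch with approximate physical constants (the Sgr A* figure assumes the ~4-million-solar-mass value quoted above):

```python
# Schwarzschild radius: r_s = 2 * G * M / c^2
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def schwarzschild_radius_m(mass_kg: float) -> float:
    return 2 * G * mass_kg / c ** 2

print(f"Sun:    r_s ~ {schwarzschild_radius_m(M_SUN) / 1e3:.1f} km")    # ~3 km
print(f"Earth:  r_s ~ {schwarzschild_radius_m(5.97e24) * 100:.1f} cm")  # ~0.9 cm

sgr_a_diameter_au = 2 * schwarzschild_radius_m(4e6 * M_SUN) / 1.496e11
print(f"Sgr A*: event horizon ~ {sgr_a_diameter_au:.2f} AU across")     # ~0.16 AU
```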


An untold number of black holes are scattered throughout the universe, constantly warping space and time, altering entire galaxies, and endlessly inspiring both scientists and our collective imagination.




Can Gravity Beat Dark Energy?

Although it might not seem obvious when you look at the night sky, we live in a universe that is expanding faster by the instant. Every day, stars fall over the horizon of what we can see, as the space between us stretches faster than their light can reach us. And we can never know what exists past that horizon. So you might imagine, or you might have heard about, a far-off future where space is stretching faster and faster and where all of the stars and galaxies are over that edge; a future where Earth will be left with a dark, empty sky. But luckily for us, or at least for hypothetical future earthlings, that's not actually the case. Because while the universe is expanding, not all of it is.

We have known that the universe is expanding since the 1920s, but we only discovered that the expansion is accelerating in the 1990s, thanks to the Hubble Space Telescope. Hubble was the first tool to measure really precise distances to supernovas out near the edge of the observable universe. And it showed us that out there, ancient galaxies and the supernovas in them are zooming away from us faster than anywhere else. In fact, astronomers realized that they were flying away even faster than expected. Which, at first, didn’t make sense.


At the time, we thought the universe was dominated by gravity, which pulls things together. So seeing everything accelerate apart was weird. It would kind of be like if you kicked a ball uphill and saw it speed up instead of coming back down to you. Because of this, scientists concluded that there had to be something else going on, something pushing these galaxies apart. They came to call that thing dark energy. Decades later, dark energy is still really mysterious and there is a lot we don’t understand about it.

One explanation is that it's a property of empty space. This means that space itself, with no stuff in it at all, has dark energy. And that energy pushes space apart, creating new space, which in turn has dark energy, which pushes space apart, creating new space, which in turn has dark energy, which... you get it. If dark energy is a property of space, that also means you can't dilute it. Its density will always be the same, no matter how much space expands.


Of course, that density is also pretty small. If you borrow Einstein’s “E=mc2” trick and express energy as mass, it is equivalent to about one grain of sand in a space the size of the entire Earth. But if you average that over the whole universe, which is mostly empty space, there is more dark energy than anything else. So it dominates and the universe as a whole expands. That’s why the most ancient galaxies are also moving away faster than before.
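To check the "grain of sand in a space the size of the Earth" picture, here is a rough calculation using a commonly quoted value for the dark energy density; the numbers are approximate:

```python
import math

# Approximate mass-equivalent density of dark energy (via E = mc^2):
rho_dark_energy_kg_m3 = 6e-27  # roughly 70% of the cosmological critical density

earth_radius_m = 6.371e6
earth_volume_m3 = 4 / 3 * math.pi * earth_radius_m ** 3  # ~1.1e21 m^3

mass_kg = rho_dark_energy_kg_m3 * earth_volume_m3
print(f"Dark energy in an Earth-sized volume: ~{mass_kg * 1e6:.0f} mg")  # a few milligrams
```

A few milligrams is indeed about the mass of a small grain of sand, so the comparison holds up.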

It has taken a long time for their light to reach us, so the universe has had more time to stretch. Now, this might all make dark energy seem super strong. After all, it makes up more than two-thirds of all the stuff in the universe and it’s pushing apart entire galaxies. But it is only powerful because there is a lot of it.


Within small spaces, especially those full of planets and stars, dark energy is actually pretty weak. Like, the gravity between the Sun and the Earth or the Earth and the Moon is more than enough to overpower the repulsive dark energy between them. In fact, most of the universe’s mass is concentrated in galaxy clusters and these pockets of matter are completely immune to dark energy. They are simply not expanding. 

It's not that the expansion is merely negligible, like how, technically, your gravity pulls ever so slightly on the Earth but not enough to actually notice. It's that, as far as we know, dark energy is truly not stretching our galaxy at all. This is because it's not a force like gravity, so it works a little differently.


To understand how, think about pushing on a heavy door. If you push lightly, it won’t open. Push a little harder and it still won’t. But if you push hard enough, once you cross a certain threshold of pushing, it will open. That door is gravity and within a galaxy, there’s just not enough dark energy to push it open. 

In other words, gravity is too strong. So our galaxy will never expand, because if it can’t stretch even a little, then it can’t create more space. And that means the amount of dark energy inside will never grow. Of course, this isn’t something we have been able to directly observe, like by looking at other galaxies. But multiple observations have shown us what dark energy is like and they all suggest this should be true.


Eventually, in the really distant future, fewer and fewer galaxies will be visible from Earth. And in 100 billion years or so deep space will be almost empty. But if Earth were still around by then, which, admittedly, is pretty unlikely, we would still have a beautiful night sky. Even as the universe stretches, the glow of our galaxy will still be overhead and we will have stars, constellations and even a handful of galaxies bound by gravity to ours. All because dark energy just can’t get a foothold around here. Of course, this will only last until the heat death of the universe but that’s another story.




The Big Rip Due To Dark Energy

Even though nobody else will be around to see it, scientists are fascinated by the end of the universe. It is kind of like the Big Bang: there's just something so interesting about knowing where your atoms came from and where they are ultimately going to go billions of years from now. Right now, there are a few ideas about how everything could end; the leading one is the so-called Big Freeze, where everything is spread so thin that activity basically stops.

Except, based on the results from a paper published in Nature Astronomy, that might not actually be true. Instead, there is a chance that everything in existence will eventually be ripped apart. And it would all be thanks to dark energy. Scientists think it makes up about 70% of the stuff in the universe and that it is the reason the expansion of the universe is accelerating. But there is a lot they are still figuring out.


Some of their research into dark energy has involved tools called standard candles. Standard candles are objects or events of known brightness that are used to measure distance in the far-off universe. Essentially, if you know how bright something should be up close, then how bright it actually looks indicates how far away it is.
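The relation behind this is just the inverse-square law: measured brightness falls off with the square of distance, so a known true luminosity plus a measured flux gives you a distance. A minimal sketch with illustrative numbers (the luminosity and flux values are placeholders, not measurements):

```python
import math

# Inverse-square law: flux = L / (4 * pi * d^2)  =>  d = sqrt(L / (4 * pi * flux))
def distance_m(luminosity_w: float, measured_flux_w_m2: float) -> float:
    return math.sqrt(luminosity_w / (4 * math.pi * measured_flux_w_m2))

luminosity = 1e36      # very rough peak luminosity of a type Ia supernova, in watts
measured_flux = 1e-15  # illustrative flux measured at a telescope, in W/m^2

d = distance_m(luminosity, measured_flux)
print(f"Distance: ~{d / 9.461e15:.1e} light-years")  # on the order of a billion light-years
```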

For decades, the most important standard candle has been a special kind of exploding star called a type Ia supernova. These events always peak at very nearly the same brightness, and in the 1990s they allowed scientists to discover that the universe's rate of expansion was accelerating. But what is really important for this recent study is that all the estimates provided by type Ia supernovas also indicate that the density of dark energy is fixed.



There is a lot of math involved, but this fact is a big reason they believe the Big Freeze is most likely. The problem is, you can only see so far with any given candle before it gets too dim, and type Ia supernovas can't take us back to the beginning of the universe. Because light can only move so fast, looking deep into space is like looking back in time, and these supernovas only allow us to see what things were like from about 4.5 billion years after the Big Bang onwards.

Admittedly, there are some data sources, like one called the Cosmic Microwave Background, that can tell us what things were like around 400 thousand years after the Big Bang. But that Background actually seems to disagree with what supernovas say about the expansion rate, which has had astronomers debating different options for years. There has also been a roughly 4-billion-year gap between the two data sources, so it has been hard to figure out what's going on.



That’s where last week’s news comes in. In their paper, a pair of astronomers proposed a new kind of standard candle, one that can let us peer back to that sweet spot just 1-2 billion years after the Big Bang. Their idea relies on quasars, rapidly-growing black holes that are among the universe’s brightest objects. Although quasars vary a lot in brightness, the authors claim that the ratio of ultraviolet brightness to X-ray brightness is not only more predictable but also reliable enough to indicate a quasar’s distance.

They point out that, at distances where both type Ia supernovas and quasars are visible, they provide comparable results, too. But the key is that farther from Earth, and further back in time, only quasars are visible. And after looking at some of those super-distant objects, the authors claim to have made a surprising observation: in the first couple of billion years after the Big Bang, the growth rate of the universe didn't match the predictions made by the supernova-based models. Back then, things seemed to be getting bigger more slowly than expected.



That implies that the amount of dark energy driving that expansion hasn't been constant after all. Instead, it has been increasing over time. It sounds like a wild idea, but it would help explain why there isn't a perfect match between the expansion rate we see from supernovas and the one inferred from the Cosmic Microwave Background, so it is not as if there is no foundation for it. But still, before anyone rewrites your astronomy textbook, it is important to remember two things. One, these results will need a lot of confirmation before they are accepted into mainstream theory. And two, scientists have effectively no clue what dark energy actually is.

So it is not even worth asking questions like what would be generating more and more of it, because we don’t even know what IT is. But if these results are true, there is one thing we do know, instead of ending in the Big Freeze, the universe would eventually end in the so-called Big Rip, where ever-increasing dark energy tears apart every particle until there’s nothing left and no one to see it. But the assumption is that it’s not such a big deal, because there is no way we would be around by then.

Reference: Quasars as standard candles
Also Read: Let's Understand Wormholes



Let's Understand Wormholes

Whether it's Star Trek, Stargate or Babylon 5, wormholes have been showing up in science fiction for a long time. They are just a super convenient tunnel to another part of the universe, a way for sci-fi writers to send their characters across huge distances in the blink of an eye. And it turns out that they are not just science fiction: wormholes could really exist. But if they do, they are much weirder than anything we could make up.

In physics, a wormhole is known as an Einstein-Rosen bridge. It is named after Albert Einstein and another physicist, Nathan Rosen. They came up with the idea together in 1935 and showed that according to the general theory of relativity, wormholes are a definite possibility. A wormhole acts like a tunnel between two different points in spacetime, which is just the continuum of space and time that makes up the fabric of the universe.


According to general relativity, gravity works by bending spacetime. Planets and stars act like weights in the fabric of the universe, creating curves. It can be kind of hard to picture what spacetime is, let alone what it would mean for it to bend, so physicists often talk about it using weights on a stretched bedsheet as an analogy. Earth would be like a big bowling ball making a big dip in the sheet, and when something gets too close to the planet and is pulled in by its gravity, it's like it's falling into that dip in the sheet.

But if spacetime can be curved, it can also be twisted and shaped in other ways, like by connecting two different places with a tunnel. It's kind of like poking two holes into that bedsheet, folding it over, stretching the fabric so that the edges of the holes meet, and sewing them together into a tunnel. That's a wormhole in a bedsheet. But just because wormholes don't seem to violate the laws of physics does not mean that they actually exist; they are just technically possible. And unfortunately, we haven't yet detected any, and we aren't even sure how they would form.


If wormholes do exist, one reason we might not have spotted them is that they could be hiding behind black holes. A black hole is what happens when there is so much mass squeezed into an object that it ends up with such a strong force of gravity that even light can't escape its pull. Once you get too close to a black hole, you are toast: there is no escaping from being smashed into oblivion. In the bedsheet model, black holes and wormholes look very similar; they both have a steep falloff that seems to go on forever. Except, with a wormhole, the steep drop actually leads somewhere.

According to general relativity, wormholes could have black holes at each end, meaning that after diving into a black hole on one end, the energy that was once your body could get spewed out somewhere totally different in the universe. Of course, you would not survive that trip; all that would be left is radiation and subatomic particles. Then there are white holes, which are the opposite of black holes: they spew out matter with such force that it would be impossible to enter them. If black holes are infinite weights on a bedsheet, white holes would be like hills: objects pushing up on the bedsheet.


Like wormholes, white holes are something that could exist; the math does check out, we are just not sure how they would form. But we know that if they exist, they could be found at either end of a wormhole, too. So, maybe, if there were a black hole at one end of the wormhole and a white hole at the other, we could go in the black hole end and be blasted out the white hole end. Maybe. But you would still probably be crushed by the black hole in the process, not to mention it would definitely be a one-way trip.

There are a few other problems with wormholes. For one thing, they would probably be dangerous. Sudden unexpected collapse, weird exotic particles, a ton of radiation. In fact, travelling through a wormhole could instantly collapse it, because they would probably be unstable. And then there is the fact that wormholes might not be a shortcut at all. A random wormhole could easily be a longer-than-normal path. Size is also a problem. A real-life wormhole could be too small for us to travel through. Not to mention the travel time, which could be millions or billions of years, making some wormholes pretty useless.


So, that’s a lot of problems. The biggest hope actually comes from how little we know. A lot of this depends on physics that we haven’t quite worked out yet or on facts about our universe’s history and geometry that we just don’t know for sure. Once we have all that figured out, the final barrier would be technology and opportunity. Right now, we definitely don’t know how to make a wormhole and we would have to be super lucky to find one that is useful to us if they exist at all.

So, it's pretty clear that we won't be sliding through any wormholes anytime soon. But we know that they could be out there, hiding in some of the most extreme places in the universe. And who knows? Maybe our ideas about wormholes will be totally different in the future. People living just a few hundred years ago couldn't have even imagined particle accelerators or the internet. Until we find one, or build one, let's keep exploring the universe.



