
Better Batteries | Acquisition of Maxwell Technologies by Tesla

Tesla has grown rapidly in the decade since it became the first American automotive company to go public since Ford in 1956. The attraction towards Tesla is undeniable. Their cars are sleek, their acceleration is insane, and, perhaps most importantly, their brand represents a movement towards renewable energy. Tesla has attracted thousands of well-intentioned people who want to play their part in saving the world, but there have been niggling questions on the minds of many EV owners and EV naysayers. When is that expensive battery going to need to be replaced, and at what cost?

As existing Teslas begin to age and more exotic and demanding models, like the Tesla Truck and the Roadster 2, come to the fore, these issues are going to become more prominent. These batteries do NOT come cheap, but they are getting cheaper. The cost per kilowatt-hour for Tesla's battery packs, and for the market average, has been dropping dramatically as technology advances and manufacturing volumes increase. But storage capacity slowly creeps away as the battery is used, gradually degrading the range of your electric vehicle.



Tesla currently offers a warranty to all Model 3 owners that covers the battery for 8 years or 160,000 kilometres, whichever comes first, guaranteeing retention of at least 70% of its capacity under normal use. If it falls below that, Tesla will replace your battery for free. Pinning down what counts as normal use is pretty difficult, but customer satisfaction reports suggest Tesla is reasonable about it.

It is estimated that Tesla is achieving a cost of $150 per kWh for its battery packs, so the 50 kWh pack of the base model would cost around $7,500 to replace. They must be pretty confident in those numbers, as a massive recall of the approximately 193,000 Model 3s shipped so far would ruin Tesla.
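
To put that exposure in perspective, here is the arithmetic as a quick sketch. Both inputs are the estimates quoted above, not confirmed Tesla specifications:

```python
# Quick arithmetic behind the figures above. Both inputs are the
# article's estimates, not confirmed Tesla specifications.

cost_per_kwh = 150       # estimated pack cost, $/kWh
pack_kwh = 50            # base Model 3 pack size, kWh
cars_shipped = 193_000   # approximate Model 3s shipped so far

replacement_cost = cost_per_kwh * pack_kwh
print(replacement_cost)                 # 7500 -> $7,500 per pack
print(replacement_cost * cars_shipped)  # 1447500000 -> ~$1.45 billion
```

Roughly $1.45 billion in a worst-case full recall, which is why battery longevity matters so much to Tesla.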



Ultimately these batteries are unlikely to drop below the warranty's guarantee within those 160,000 kilometres, but even so, improving batteries is obviously a wise business decision to retain those customers in the future. This is just one of a myriad of factors behind Tesla's recent landmark acquisition of Maxwell Technologies for $218 million. A rare Tesla acquisition, it sets the company up not just for cheaper batteries, but for better batteries: lighter, with greater range, and with a longer life.

It wouldn't be the first time an automotive company underestimated its battery degradation. When the Nissan Leaf debuted in 2010, the battery production capacity it needed simply did not exist, and neither did the technical expertise required to design battery packs. In those days lithium-ion batteries cost about $400 per kWh for laptop-grade cells, and up to $1,000 per kWh for cells with the longevity needed for an electric vehicle. To minimise costs, Nissan decided to produce its own batteries and opted for a small 24 kWh pack, giving the Leaf a range of just over 100 kilometres. Suitable for city driving, and that's about it.

But customers soon realised that this paltry range was dwindling quickly. Within just 1-2 years of driving, the Leaf's battery capacity was dropping by up to 27.5% under normal use. Despite careful in-house testing, Nissan overlooked some important test conditions when developing the battery, and because of this made some crucial design errors.



To learn why this degradation happens, we first need to understand how lithium-ion batteries work. A lithium-ion battery, like all batteries, contains a negative electrode, the anode, and a positive electrode, the cathode, separated by an electrolyte. When the battery powers a device, positively charged lithium ions travel from the anode to the cathode, creating an electric potential between the two sides of the battery and forcing electrons through the device being powered to equalise that potential.

Critically, this process is reversible in lithium-ion batteries, because the lithium ions are held loosely, slotting into spaces in the crystal structures of the anode and cathode. This is called intercalation. So, when an opposite electric potential is applied to the battery, the lithium ions are forced back across the electrolyte bridge and lodge themselves in the anode once again.



This process determines a huge amount of the energy storage capability of the battery. Lithium is a fantastic material for batteries: with an atomic number of 3, it is the third lightest element and the lightest of the metals, allowing its ions to provide fantastic energy-to-weight characteristics. But the energy capacity of the battery is not determined by this; it is determined by how many lithium ions can fit into those spaces in the anode and cathode. For example, a graphite anode requires 6 carbon atoms to store a single lithium ion, forming the compound LiC6. This gives a theoretical maximum capacity of 372 mAh per gram.

Silicon, however, can do better. A single silicon atom can bind 4.4 lithium ions, giving it a theoretical maximum capacity of 4,200 mAh per gram. This seems great and can provide real increases in battery capacity, but it also comes with drawbacks: those 4.4 lithium ions lodging themselves into the silicon crystal lattice cause a volume expansion of 400% when charging from empty to full. This expansion creates stress that damages the anode material and, over repeated cycles, eventually destroys the battery's capacity.
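
Those capacity figures fall out of simple stoichiometry: the charge carried per lithium ion, divided by the mass of the host material. A minimal sketch of the arithmetic (the molar masses are standard values, not from the article):

```python
# Theoretical gravimetric capacity: (Li ions stored * Faraday constant)
# per gram of host material, converted from coulombs to mAh.

FARADAY = 96485.0            # coulombs per mole of electrons
C_TO_MAH = 1000.0 / 3600.0   # coulombs -> milliamp-hours

def capacity_mah_per_g(li_per_host: float, host_molar_mass_g: float) -> float:
    """Theoretical capacity per gram of host material."""
    return li_per_host * FARADAY * C_TO_MAH / host_molar_mass_g

print(capacity_mah_per_g(1.0, 6 * 12.011))  # LiC6 graphite: ~372 mAh/g
print(capacity_mah_per_g(4.4, 28.086))      # lithiated silicon: ~4,200 mAh/g
```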



Battery designers are constantly looking for ways to maximise the energy density of their batteries without sacrificing longevity. So what exactly is being damaged in these batteries that causes them to slowly wither away?

When researchers began investigating what caused the Nissan Leaf's rapid battery degradation, they began by opening the battery and unrolling its contents. They found that the electrode coatings had become coarse over their life; clearly, a non-reversible reaction was occurring within the cell. That change was expected, though. In fact, the chemical process that causes it is vital to the operation of the battery. When a battery is charged for the very first time, a chemical reaction occurs at the electrolyte-electrode interface, where electrons and ions combine. This causes the formation of a new layer between the electrode and electrolyte called the solid electrolyte interphase (SEI).



The name is exactly what it suggests: a layer formed by the liquid electrolyte reacting with electrons to form a solid. Thankfully, this layer is permeable to ions but not to electrons, so it initially forms a protective coating over the electrode that allows ions to enter and insert themselves via intercalation while preventing further reaction between electrons and the electrolyte. At least, that's the idea under normal conditions. The problem is that, under certain conditions, this layer can grow far beyond a thin protective coating, permanently locking away the lithium that provides the battery with its energy storage.

This process is not entirely well understood, but we can identify some factors that increase the rate of SEI formation. The expansion of the silicon electrode we mentioned earlier fractures the SEI layer, exposing fresh electrode material to react with the electrolyte. Charging rate and temperature can also accelerate the thickening of this layer.
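
While the detailed chemistry is murky, the battery literature often approximates diffusion-limited SEI growth as scaling with the square root of cycle count. A toy model along those lines (the fade coefficient is purely illustrative, not from any Tesla or Nissan data):

```python
# Toy capacity-fade model assuming diffusion-limited SEI growth, so
# lithium loss scales with sqrt(cycles). The coefficient k is
# illustrative only, not a measured value.

import math

def remaining_capacity(initial_kwh: float, cycles: int, k: float = 0.004) -> float:
    """Capacity left after a number of full charge/discharge cycles."""
    return initial_kwh * (1.0 - k * math.sqrt(cycles))

for n in (100, 500, 1000):
    print(n, round(remaining_capacity(50.0, n), 1), "kWh")
# 100 -> 48.0, 500 -> 45.5, 1000 -> 43.7 (~87% retained)
```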



NASA performed its own in-depth study of this effect and shared its findings in a 2008 report titled “Guidelines on Lithium-ion Battery Use in Space Applications”. The temperature at which the battery is charged and discharged plays a massive role in its performance.

Lowering the temperature lowers chemical activity, but this is a double-edged sword. Lower chemical activity reduces the battery's ability to store energy, which is why batteries deliver less range in cold countries, but it also slows the formation of that SEI layer.
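
Chemical reaction rates typically follow the Arrhenius relation, rate ∝ exp(-Ea/RT), which makes the temperature sensitivity concrete. A sketch with an assumed activation energy (the value is illustrative, not taken from the NASA report):

```python
# Relative SEI growth rate vs temperature via the Arrhenius relation:
# rate(T) / rate(T_ref) = exp(Ea/R * (1/T_ref - 1/T)).
# The activation energy EA is an assumed, illustrative value.

import math

R = 8.314      # gas constant, J/(mol*K)
EA = 50_000.0  # assumed activation energy for SEI growth, J/mol

def relative_rate(temp_c: float, ref_c: float = 25.0) -> float:
    """SEI growth rate at temp_c relative to the rate at ref_c."""
    t, ref = temp_c + 273.15, ref_c + 273.15
    return math.exp(EA / R * (1.0 / ref - 1.0 / t))

print(round(relative_rate(45.0), 2))  # ~3.5x faster at 45 °C
print(round(relative_rate(0.0), 2))   # ~0.16x, i.e. slower, at 0 °C
```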



This is one of the reasons the Nissan Leaf's battery lost a huge amount of capacity over just two years in many countries. Nissan performed most of its testing in stable laboratory conditions, not across the range of temperatures its cars would actually face. Because of this, it failed to realise the disastrous effect temperature would have on the life of the battery, and it failed to include a thermal management system, which is commonplace in any Tesla.

This, of course, reduces the energy density of the battery. The tubing, the glycol needed to exchange heat, and the pumps and valves that make up a thermal management system not only add weight but also draw energy away from the battery to operate. Yet the system plays a vital part in maintaining the battery's performance. Nissan's choice not to include one, even in the 2019 Leaf, makes the car a poor choice for anyone living in anything but a temperate climate.



Of course, simply cycling the battery through its charged and discharged states is one of the biggest factors in degrading it. Every time you cycle the battery, you give the SEI layer another opportunity to grow. Minimising the number of times a cell is cycled will increase its life, and maintaining an ideal charge and discharge voltage of about 4 volts minimises the resistive heating that can raise chemical activity.

This is where Maxwell Technologies comes into play. Maxwell has two primary technologies that Tesla will be taking advantage of. The first is what Maxwell is best known for: its ultracapacitors. Ultracapacitors serve the same fundamental job as batteries, storing energy, but they function in an entirely different way and are used for entirely different purposes. The fundamental difference is that a battery stores energy through chemical reactions (for lithium-ion batteries, as we saw earlier, insertion into the crystal lattice), while a capacitor stores energy through ions clinging to the surface of the electrode.



On each side of a standard ultracapacitor is an aluminium current collector coated with a thin graphite electrode, the two sides separated by an electrolyte and an insulating separator that blocks the passage of electrons. In the uncharged state, ions float freely in the electrolyte. When a voltage is applied during charging, the ions drift towards the oppositely charged electrode and cling to its surface, holding the charge in place. When a device is then connected to the capacitor, this charge can leave quickly while the ions drift back into the electrolyte.

The key limiting factor for ultracapacitors is the surface area available for this to happen, and nanotechnology has enabled amazing advances here. The inside of an ultracapacitor contains hundreds of layers of these electrode pairs. But even with this enormous surface area, ultracapacitors simply can't compete with batteries on energy density. Even Maxwell's best ultracapacitors store just 7.4 Wh/kg, while the best guess for Tesla's current batteries is about 250 Wh/kg.
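
That 7.4 Wh/kg figure follows from the capacitor energy formula E = ½CV². A back-of-the-envelope check, using cell parameters that approximate Maxwell's 3,400 F cell (the mass and voltage here are assumptions, not quoted specs):

```python
# Energy density of an ultracapacitor cell from E = 1/2 * C * V^2.
# Parameters approximate a Maxwell 3,400 F cell; treat as assumptions.

capacitance_f = 3400.0  # farads
voltage_v = 2.85        # rated voltage
mass_kg = 0.52          # cell mass

energy_wh = 0.5 * capacitance_f * voltage_v ** 2 / 3600.0  # J -> Wh
print(round(energy_wh / mass_kg, 1), "Wh/kg")  # ~7.4 Wh/kg
```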



At this point in time, ultracapacitors are not intended to replace batteries; they are intended to work in conjunction with them. Ultracapacitors' strength is their ability to charge and discharge quickly without wearing down, which makes them a great buffer between the motors and the battery. Their high discharge rate allows them to deliver surges of electricity to the motors when rapid acceleration is needed, and they can recharge quickly when braking, saving the battery from unnecessary cycles while boosting the car's ability to provide current on demand.

This is going to be a massively important technology for two upcoming Tesla vehicles. The first is the Tesla Roadster, which will boast 0-60 mph acceleration in just 1.9 seconds, a feat requiring a discharge rate that a normal battery would struggle to deliver without damaging itself. The second is the Tesla Truck, whose range and cargo-hauling ability will be limited by the heavy batteries it needs, since trucks are restricted to about 40 metric tonnes of total weight in most countries. Ultracapacitors will significantly boost its ability to regain energy from braking, allowing its battery capacity to shrink and, in turn, letting the truck trade battery weight for cargo.



The second technology Maxwell has been touting as its next big breakthrough is dry-coated electrodes. This is a manufacturing advance that Maxwell claims will reduce the cost of battery production, a factor Tesla has been working fervently to minimise through the growth of the Gigafactory. So, what are dry-coated batteries?

Currently, to coat the current collectors with electrode material, Tesla, using Panasonic's patented technology, must first dissolve the electrode material in a solvent, which is then spread over the current collector; both are then passed through an oven, where the solvent evaporates, leaving just the electrode material behind. This adds to the cost of manufacturing, as the solvent is lost in the process and the baking consumes energy. On top of this, the solvent is toxic, so removing it from the process would benefit the environment.



Maxwell instead uses a binding agent and a conductive agent, in a process I assume works similarly to electrostatic painting, where the metal being painted is given a negative charge while the paint is given a positive charge as it is sprayed, attracting it to the metal so it clings there. Electrostatic painting likewise eliminates the solvents needed in conventional paint.

In a paper published by Maxwell Technologies, the company details how its dry-coating manufacturing technique could yield higher energy storage capacity in the electrodes, thanks to a denser and thicker coating. The result could be an increase in battery capacity to 300 watt-hours per kilogram, 20% up from our best estimate of Tesla's current specs. Only time will tell whether this claim can be realised at industrial scale. Perhaps more importantly for Tesla, it now owns this manufacturing technique.



Currently, Panasonic owns the battery manufacturing process for Tesla; there is literally a line of demarcation in the Gigafactory separating Panasonic from Tesla, marking the point at which ownership of the batteries changes hands. Having to buy its batteries from Panasonic adds cost that Tesla will want to avoid in future, and this acquisition could allow full vertical integration of its battery manufacturing, making electric vehicles more affordable for the everyday consumer.





Why It Is So Hard To Predict An Earthquake

San Francisco has been hit by a big earthquake at least once every hundred years, going back as far as we know. So the people of San Francisco know that sometime in the next 100 years they are likely to be shaken again by a big quake, but we can't say exactly when it might hit. Right now, all we can do is construct shake-proof buildings and put out seismic sensors. That way, when an earthquake sends out underground waves, which travel from its epicentre eight times faster than the destructive surface waves, we can detect the underground waves early enough to give a warning like “Uh oh! An earthquake is about to hit us!”, which is, surprisingly, enough time to turn off gas pipelines, stop trains, and find cover.
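
To see where that warning time comes from, here is a rough sketch using the eight-to-one speed ratio above. The 8 km/s underground-wave speed is a typical textbook value, assumed here rather than stated in the article:

```python
# Early-warning time: underground (P) waves arrive first; the gap
# before the destructive surface waves arrive is the warning window.
# V_P is an assumed typical speed; the 8x ratio is the article's claim.

V_P = 8.0            # underground P-wave speed, km/s (assumption)
V_SURFACE = V_P / 8  # surface-wave speed implied by the 8x ratio

def warning_seconds(distance_km: float) -> float:
    """Seconds between detecting the P wave and surface-wave arrival."""
    return distance_km / V_SURFACE - distance_km / V_P

print(round(warning_seconds(100.0)))  # ~88 s for a quake 100 km away
```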

But it doesn't help people get out of town. For people to evacuate safely from a natural disaster, it's not helpful to give a really short warning, or a really big window within which the disaster might happen. According to disaster experts, two days is just right. But if we want to predict earthquakes with that precision, we need to understand a LOT more about how they work.



We have tried looking backwards at quakes that have already happened, identifying events that occurred in the days before they hit, like swarms of mini-quakes, big releases of radon gas, changes in magnetism, and even weird animal behaviour, to see if any of these could predict an impending quake. But these things often happen without an accompanying earthquake, and earthquakes often happen without these things, so we have yet to find any reliable predictors.

Another approach is to build an accurate model of the earth beneath our feet. We know that over time, as tectonic plates rub against each other, the stress that builds up is sometimes violently released as an earthquake. If we had a really good model, and reliable measurements of the forces on the plates, maybe we could predict when and where an earthquake was going to happen. But plates are often more than 15 miles thick, twice as deep as humans have ever dug, so it would be pretty difficult to get monitoring equipment deep enough. Instead, we are creating mini-faults in the lab to better understand the forces on moving plates, and to identify reliable ways to measure those forces from the surface of the earth.



But in order to test our models, we need to compare them to actual gigantic earthquakes, which, as we mentioned, don't happen that often. Luckily for researchers, a few ocean faults are more productive, frequently causing large but relatively harmless quakes and giving us a regular way to calibrate and fine-tune our models.

One big thing they have helped us learn is that the interactions between fault segments are really important: when one segment slips, it increases the chances that its neighbour will slip, letting us predict where the next quake will happen. On some faults, we can even say that it will happen within a couple of years. Compared to a hundred-year window, that's really precise, but there are still two big problems.



First, these ocean faults are relatively simple, so we still have to figure out how to apply what we have learned from them to more complicated faults, like the ones near San Francisco. And second, even if we could do that, we would still be a long way from the ideal two-day notice. Unfortunately, our existing methods probably aren't going to get us there. What we need is an earth-shattering breakthrough.

Thanks to Matt Wei, a professor in URI's Graduate School of Oceanography. Dr. Wei uses seismic data and simulations to study the physics of plate tectonics and earthquakes. His research on fast-spreading oceanic transform faults, like the Discovery fault in the East Pacific, has helped us begin to understand the importance of earthquake cycles as we work to crack the code of earthquake physics.

Also Read: Are We Ready To Predict Another Carrington-like Event Accurately?



Internet

Today, about 4.2 billion people have access to a world of information never before seen. Such an extraordinary level of connectedness has revolutionized everything from science and technology to commerce and romance, touching virtually every aspect of our lives.

Of all the technological innovations in history, few have made as strong an impact as the internet. Comprising a global network of computers, the internet allows for the transmission of information and connectivity at an unprecedented speed and scale. Some of the first computer networks appeared in the 1950s and 60s, but unlike today's global network, these early networks were centralized within particular businesses and agencies. It wasn't until 1969 that these centralized computer networks first became connected to one another.


Funded by the US Department of Defense and developed by universities, this host-to-host network connection was called ARPANET. A direct ancestor of the internet, ARPANET was the first of its kind. The network grew, and by the 1980s it incorporated networks at research institutions and other US federal agencies, such as the National Science Foundation (NSF). The NSF connected these disparate networks into one large one, NSFNET, which eventually shifted from a federally run network to a commercial enterprise serving Internet service providers.

By the late 1990s, this shift, along with the rise of personal computers, the World Wide Web, and web browsers, allowed the general public to access the internet for the very first time. Today, computers, smartphones, televisions, video game consoles, and other devices all tap into the network, transmitting and receiving data almost instantly.


When you click send in a messaging app, the text, audio, or video is converted into pieces of electronic data called packets. These packets are tagged with a port number and an IP address, much like the mailing address on an envelope; together they direct the packets to their destination on the internet. From there, the packets may travel over Wi-Fi, cellular data, or an Ethernet or phone line, through a series of routers, modems, and servers, and across fibre-optic cables or satellite links, with a similar process running in reverse at the receiving end. Once the packets arrive, their data is reassembled into the text, audio, or video that was originally sent.
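
The addressing described above is easy to see in code. A minimal sketch sending one UDP packet tagged with a destination IP address and port (the localhost address here is purely illustrative):

```python
# One packet, one address: the OS wraps the payload in a UDP packet
# stamped with the destination IP and port, like an envelope.

import socket

DEST_IP = "127.0.0.1"  # IP address: which machine the packet should reach
DEST_PORT = 9999       # port number: which program on that machine gets it

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP socket
payload = "Hello, internet!".encode("utf-8")             # data -> bytes
sock.sendto(payload, (DEST_IP, DEST_PORT))               # send the packet
sock.close()
```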

Since the days of the earliest computer networks, the internet has undergone a tremendous transformation while also transforming the world that created it. From a closed-off network to one that spans the globe, the internet has brought information to every continent, connecting people and ideas like never before.




Mystery Around Hypatia Stone

From snazzy banded agates to volcanic elephants, there are some pretty weird rocks out there. But the weirdest ones geologists find might be those that fall from space. One of them, called the Hypatia stone, might be the strangest of them all. In fact, all signs currently suggest that this rock's origin story is older than the solar system itself, and if it's not, well, we are going to have to rethink what we know about our cosmic neighbourhood.

The Hypatia stone was found in 1996 by a geologist in the southwest Egyptian Sahara. It is named after Hypatia of Alexandria, the first female astronomer and mathematician to make the history books.



The stone was discovered in fragments no bigger than a centimetre across, and in total the pieces add up to a volume of only about 20 cubic centimetres. Technically, it isn't considered a meteorite, because the Meteoritical Society requires 20% of a rock's original mass to be present to earn that title, and scientists have chipped this thing apart and sent pieces to so many labs that it no longer fits the bill.

But it's definitely from space. When its extraterrestrial origin was confirmed in 2013, scientists assumed it was the very first comet nucleus, the rocky central part of a comet, to be found on Earth. But its story is a bit more complicated, and more interesting.

First, exactly when the stone struck our planet is hard to pin down. It was found in an area of the Sahara strewn with special rocks called Libyan Desert Glass, which are believed to have been created by a meteorite impact 28 million years ago. But the relationship between the Hypatia stone and this glass is far from certain. We are also not sure how big this rock was when it first entered Earth's atmosphere.



Based on its content of a certain type of neon, we think it could not have been more than several metres in diameter, or, if it were bigger, that the Hypatia fragments had to have come from its upper few metres. These basic details are important to figure out, but what's really strange about the Hypatia stone is what researchers discovered once they started analysing its composition. Because from what we can tell, Hypatia's chemical makeup isn't just out of this world.

It's out of the entire solar system! See, everything in our neighbourhood formed out of the same cloud of dust and gas, and since astronomers believe that cloud was relatively homogeneous, the rocky bits that formed from it should all have roughly the same chemical makeup.



But in 2015, scientists revealed that the Hypatia stone is different. It has a composition unlike any other solar system object studied in a lab. For example, its level of the isotope nitrogen-15, a type of nitrogen with an extra neutron, was way off from what a standard comet should have.

Astronomers also found a type of xenon that's created when a particular iodine isotope, one that predates the solar system, undergoes radioactive decay. So something about this thing really isn't normal. And in 2018, we got an even deeper analysis.

In February 2018, a team of astronomers announced that they had identified two separate yet intermingled matrices in the Hypatia stone, kind of like finding two different batters in the same cake. The matrices themselves had to have formed when the solar system did, because Hypatia needed a cloud of dense interstellar dust to form. But their carbon and silicon composition was the opposite of common meteorites'.



The ones we normally see, called chondritic meteorites, are low in carbon and high in silicon, but Hypatia has lots of carbon and basically no silicon. So again, not normal. What was even more surprising about this analysis is that one of those matrices was also chock-full of deposits, or inclusions, each of which likely existed before the entire solar system! These include moissanite grains, which commonly make up a small part of some meteorites and are considered to be mostly pre-solar.

They also found a nugget of pure metallic aluminium in Hypatia, which is super rare in solar system rocks. And there were a lot of organic molecules called polycyclic aromatic hydrocarbons, or PAHs, which are a big component of interstellar dust. PAHs are also found inside certain comets and asteroids, so finding them in the Hypatia stone wasn't unusual, but their abundance was. Conveniently, these PAHs are also a big reason we are able to study the stone today.



Many of them were turned into a crust of tiny diamonds, likely when Hypatia crashed into the Earth, and that crust protected and preserved the inside of the rock for millions of years. But it doesn't explain where they came from. Other compounds were found, too, that haven't been observed in any studied space rock. So the Hypatia stone is still completely unique.

At least, as far as we know. Although it's a pretty compelling case, we will still need further analysis of certain isotopes before we can definitively say that parts of this rock existed before the Sun. The exciting news is that the authors of that 2018 paper hope to get that research out ASAP.

So, even if Hypatia turns out not to be pre-solar, that might be even weirder: it would imply that the early solar system wasn't homogeneous after all, despite the generally accepted view. We would have to change the way we think about our neighbourhood's history.



Based on what we know so far, astronomers can at least tell that the stone had to have formed in a super cold environment, below about -200°C. So if it is from around here after all, Hypatia likely formed out in the Kuiper Belt, where Pluto lives, or even farther away, in the distant, mysterious Oort cloud.

We don't actually know much about the composition of the bodies that far out, so it could well turn out that there are other Hypatia-like space rocks. Mostly, all this means we just have to keep looking. But no matter what, the answer to this mystery is going to be a cool one.

Also Read: Pulsed Plasma Thrusters


