
Fat Can Be Healthy

Maybe you have been pouring skim milk on your cereal and spritzing non-fat dressing on your salad for years. But it turns out, eating fat won't make you fat.

In fact, research shows that low-fat diets don’t seem to aid in weight loss or in reducing the risk of disease compared to higher fat diets. And all those refined carbs you have been eating to replace that fat might be the real issue. 



To understand how fat can be healthy, it’s first helpful to understand what’s going on with carbs in your body. 

When you eat a simple carbohydrate, like a slice of bread, enzymes in your saliva immediately start breaking that food down into sugar. That surge of sugar triggers the release of a hormone called insulin, which tells your body to store the available energy from the bloodstream in fat tissue and other forms. And the crash that follows the surge makes you feel hungry, encouraging you to eat more.



But fats are another story. Fat isn't processed the same way as carbs. It can't be broken down by saliva or fully digested by stomach acid.

Instead, your small intestine, with the aid of bile secreted by your liver, breaks it down. This happens much later in the digestive process, so fat digestion is much slower.



Different fats interact with your hormones in complex ways that, unlike carbs, don't cause a massive spike in insulin. And good fats are really important for your body to function properly.

Monounsaturated fats can be found in olive oil and avocados. This good fat helps reduce inflammation and levels of LDL or “bad” cholesterol in the blood. 



Polyunsaturated fats in foods like sunflower seeds, walnuts, and fish also have significant health benefits. Fish oil, for example, consists of one type of polyunsaturated fat called omega-3 fatty acids -- which have been found to decrease blood pressure, increase HDL or “good” cholesterol, and may also protect against heart disease. 

But saturated fats found in red meat and dairy are a different story. An extensive study found that replacing a small percentage of calories coming from saturated fats with calories from unsaturated fats reduced the risk of death, heart disease, and a number of neurodegenerative diseases.



At the same time, some studies suggest full-fat dairy may be healthier than reduced-fat dairy. One recent study found that drinking full-fat dairy was associated with a lower risk of diabetes.

So while unsaturated fats are the better choice, saturated fats aren't entirely off-limits either. And not only are unsaturated fats essential for your body, avoiding fat in the name of weight loss isn't actually a helpful way to shed unwanted pounds.



A study by the Women’s Health Initiative assigned women to low-fat diets for eight years. They found the participants didn’t seem to gain protection against breast cancer, colorectal cancer, or cardiovascular disease. And their weights were generally the same as those of women following their usual diets.

And in the carb vs fat debate, an extensive 2017 study found no association between dietary fat and heart disease. In fact, the researchers found that high-carb diets were linked to a higher risk of death. 



So, if studies show that fat doesn’t make us fat or increase our risk of heart disease… and carbs make us hungry and are linked to a higher risk of death, should we all just ditch carbs altogether?
Probably not. 

Recent research seems to advocate a balanced diet that includes a combination of healthy fats and complex carbs.

Researchers found that diets high in fibre and low in refined grains, meat, and sugars resulted in less weight gain. 



So what should you eat? The good news is that you can find healthy fats and complex carbs in a variety of tasty foods. 

You can find unsaturated fats in fish, olives, nuts, and seeds, and still have a place on your plate for so-called "good carbs."

You should probably avoid eating lots of refined carbs like white bread and rice.

Foods like sweet potatoes, raw apples, and legumes are a different story, though. These foods don't cause the same sudden peaks in blood sugar.



And like healthy fats, they contribute to a balanced diet to keep your body running. So go forth and toss some oil on that salad!





Nuclear Battery

Batteries are necessities of modern life, powering everything from cell phones to vehicles. Most batteries operate via chemical reactions that convert stored chemical energy into electrical energy. These electrochemical reactions are triggered when a load (e.g. a light bulb) is connected to a battery, causing electrons to flow.

Though traditional batteries are ubiquitous and can be very cheap, future batteries may harness the power of radioactive isotopes for electricity generation. Batteries that use the decay products of radioactive isotopes are known as atomic batteries or radioisotope generators. 



The first atomic battery was developed in 1913 by Henry Moseley. His battery consisted of a spherical glass globe with a silver coating lining the interior. At the center of the sphere was an emitter containing a radioactive isotope of radium. The emitted charged particles deposited on the silver, causing a build-up of charge. This effectively created a radioactively powered capacitor, from which electric current could be extracted.

The overwhelming advantage of atomic batteries is their long life with minimal external maintenance. Radioisotope generators are therefore ideal for space missions lasting several years or for power generation in remote locations. Furthermore, atomic batteries are lightweight, independent of sunlight (unlike solar cells), and unaffected by radiation belts in space (e.g. the Van Allen belts). Nonetheless, atomic batteries have proliferated only slowly, primarily due to their prohibitive cost compared to traditional batteries and public health concerns regarding radioactivity.



Just as electrochemical batteries can be fueled by reactions between various chemicals, atomic batteries can be fueled by the emissions of many radioactive isotopes. However, the electric conversion principles employed divide atomic batteries into two categories: thermal and non-thermal. Power output from a thermal atomic battery depends on temperature, whereas that of a non-thermal battery does not.

NASA and the Department of Energy have extensively studied radioisotope power systems to produce heat and electricity for space missions lasting over a decade and extending into regions of the solar system where sunlight is too faint to permit solar energy conversion as a viable power source. As of 2005, the United States had launched 44 radioisotope thermoelectric generators (RTGs) on 25 different missions, including the Cassini mission to Saturn and the Galileo mission to Jupiter.



The radioisotope of primary interest to NASA is plutonium-238 (Pu-238), a radioisotope not used for nuclear weapons. Plutonium decay not only serves to generate electricity to power the spacecraft; its heat also fuels radioisotope heater units, which warm the instruments aboard the spacecraft. Keeping electronics and instruments warm is essential in the frigid environment of space, since most electronic equipment has a relatively narrow operating temperature range.

NASA and the DOE continue to fund research on RTGs with goals to increase power conversion efficiency above 35% while maintaining the reliability of electricity generation in space. More than ten government contracts have been awarded to private companies relating to various aspects of RTGs in an effort to produce more cost-effective long-term science missions for NASA.



A non-thermal atomic battery that generates electricity from electron (beta-particle) emission is known as a betavoltaic. A viable radioisotope for use in betavoltaics is an isotope of hydrogen with two additional neutrons, known as tritium. Since tritium loses half its radioactive energy every 12.3 years (its half-life), betavoltaics made with tritium can last well over a decade.
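
That half-life figure can be turned into numbers with the standard exponential decay law. A minimal sketch (the 12.3-year half-life is from the text; the function name is just for illustration):

```python
def remaining_fraction(years, half_life=12.3):
    """Fraction of tritium's radioactivity left after `years`,
    via the decay law N(t) = N0 * 2**(-t / half_life)."""
    return 2 ** (-years / half_life)

print(remaining_fraction(12.3))  # 0.5  (one half-life)
print(remaining_fraction(24.6))  # 0.25 (two half-lives)
```

So even after a full decade, a tritium source still retains more than half its original activity, which is why decade-plus battery lifetimes are plausible.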

Furthermore, beta particles emitted from tritium do not penetrate human skin, alleviating potential public health concerns. These two characteristics of tritium make it well-suited for medical devices. For example, pacemaker recipients often outlive the pacemaker battery, requiring risky, invasive replacement surgery. Pacemakers using a tritium power source could have battery lifetimes on the order of decades, obviating the need for replacement surgery. 



Another advantage of tritium is that it can be easily obtained from Canadian nuclear reactors that produce heavy water as a by-product. Though betavoltaics are still more expensive than traditional batteries, they could provide a future power source for electronics requiring minimal power in poorly accessible locations.





What Is a Radioisotope Power System?

Power is the one thing a spacecraft can't work without. Without technology to power them reliably, no long-term space missions would be possible, and our knowledge of the solar system would be only a fraction of what it is today. It might sound surprising, but there are currently only two practical options for providing a long-term source of electrical power for exploring space: the light of the sun, or heat from a nuclear source such as a radioisotope.

Solar power is an excellent way to generate electricity for most Earth-orbiting spacecraft and for certain missions to the moon and beyond that offer sufficient sunlight and natural heat. However, many potential space missions given a high priority by the scientific community would visit some of the harshest, darkest, and coldest locations in the solar system, and these missions could be impossible or extremely limited without the use of nuclear power.


Radioisotope power systems (RPS) are a type of nuclear energy technology that uses heat to produce electric power for operating spacecraft systems and science instruments. That heat is produced by the natural radioactive decay of plutonium-238.

Choosing between solar and nuclear power for a space mission has everything to do with where a spacecraft needs to operate and what the mission must accomplish when it gets there. Radioisotope power is used only when it will enable or significantly enhance the ability of a mission to meet its science goals.


Radioisotope power systems (RPS) offer several important benefits. They are compact, rugged and provide reliable power in harsh environments where solar arrays are not practical. For example, Saturn is about ten times farther from the sun than Earth, and the available sunlight there is only one hundredth, or 1%, of what we receive at Earth. At Pluto, the available sunlight is only six hundredths of a percent of the amount available at Earth. The ability to utilize radioisotope power is important for missions to these and other incredibly distant destinations, as the size of solar arrays required at such distances is impractically large with current technology.
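
Those sunlight figures follow directly from the inverse-square law: intensity falls off with the square of distance from the sun. A quick sketch (the distances in AU are rounded, assumed values, not from the text):

```python
def sunlight_fraction(distance_au):
    """Sunlight intensity relative to Earth (1 AU), by the inverse-square law."""
    return 1.0 / distance_au ** 2

# Rounded mean distances: Saturn ~9.6 AU, Pluto ~39.5 AU.
print(f"Saturn: {sunlight_fraction(9.6):.1%}")   # Saturn: 1.1%
print(f"Pluto: {sunlight_fraction(39.5):.3%}")   # Pluto: 0.064%
```

That is, roughly 1% of Earth's sunlight at Saturn and about six hundredths of a percent at Pluto, matching the figures above.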

Radioisotope power systems (RPS) offer the key advantage of operating continuously over long-duration space missions, largely independent of changes in sunlight, temperature, charged particle radiation, or surface conditions like thick clouds or dust.



In addition, some of the excess heat produced by radioisotope power systems can be used to enable spacecraft instruments and on-board systems to continue operating effectively in extremely cold environments.

The latest RPS to be qualified for flight, called the Multi-Mission Radioisotope Thermoelectric Generator, provides both power and heat for the Mars Science Laboratory rover.

In 2011 the National Academy of Sciences completed a major study of the priorities for the next decade of U.S. exploration of the solar system, and several of the highest-ranked missions may require the use of an RPS.


As part of an ongoing partnership between NASA and the Department of Energy (DOE), NASA is conducting a mission-driven RPS program—a technology development effort that is strategically investing in nuclear power technologies that would maintain NASA's current space science capabilities and enable future space exploration missions.

NASA works in partnership with DOE to maintain the capability to produce the Multi-Mission Radioisotope Thermoelectric Generator (or MMRTG) and to develop higher-efficiency energy conversion technologies, such as more efficient thermoelectric converters as well as Stirling converter technology.


In the future, radioisotope power systems could continue to support missions to some of the most extreme environments in the solar system, probing the secrets of Jupiter's ocean moon Europa, floating in the liquid lakes of Saturn's moon Titan or touring the rings and moons of the ice giant planet Uranus. With this vital technological capability, the possibilities for exploration and discovery are limited only by our imaginations.




Space Garden

Lettuce, peas and radishes are just a few vegetables that are found in a summer garden. But did you know these same vegetables also can be grown in space? Crew members aboard the International Space Station have been growing such plants and vegetables for years in their "space garden." 

A space station study is helping investigators develop procedures and methods that allow astronauts to grow and safely eat space-grown vegetables. The experiment also is investigating another benefit of growing plants in space: the non-nutritional value of providing comfort and relaxation to the crew. 



"Growing food to supplement and minimize the food that must be carried to space will be increasingly important on long-duration missions," said Shane Topham, an engineer with Space Dynamics Laboratory at Utah State University in Logan. "We also are learning about the psychological benefits of growing plants in space -- something that will become more important as crews travel farther from Earth." 

The experiment, known as Lada Validating Vegetable Production Unit -- Plants, Protocols, Procedures and Requirements -- uses a very simple chamber similar to a greenhouse. Water and light levels are controlled automatically. 



The experiment has four major objectives: (1) Can produce grown in space be consumed safely? (2) What types of microorganisms might grow on the plants, and what can be done before launch to reduce the threat of microorganisms in the hardware? (3) What can be done to clean or sanitize the produce after it has been harvested? (4) How can production be optimized relative to the resources required to grow it?

Since 2002, the Lada greenhouse has been used to perform almost continuous plant growth experiments on the station. Fifteen modules containing root media, known as root modules, have been launched to the station, and 20 separate plant growth experiments have been performed.



One recent "crop" -- a type of Japanese lettuce called Mizuna -- has been returned to Earth. It was the first time two chamber experiments were conducted simultaneously, allowing a side-by-side comparison of plants grown using different fertilizers and treatments.

"The idea was to validate in space the results of ground tests, to show that minimizing water usage and salt accumulations would produce healthier plants in space," said Topham. "For years we have used the same method for packing root modules, so this was a comparison study between old and potential improvements and so far we have found a couple of surprising results."



First, a sensor failure in the traditional root module on the station caused the plants to receive higher than specified water levels. Investigators believed the overwatering would disrupt nutrients and oxygen in the traditional module, making the newer improved module look better in the comparison. 

Surprises in microgravity research are not unusual, though, and it turned out that the overwatered traditional module sprouted and developed leaves about twice as fast. "This suggests the conservative water level we have been using for all our previous experiments may be below optimal for plant growth in microgravity," said Topham.



The second surprising result was discovered when the root modules were unpacked on the ground. The new fertilizer being tested had a slower and more even release rate, which had helped lower the plants' accumulation of salts during ground studies. Investigators expected to see higher salt accumulation in the space modules, but the opposite occurred. 

"The current theory is that the extra water and larger plant uptake of fertilizer caused the root modules to remove nutrients faster and release fertilizer faster, thus preventing the salt accumulations that were observed in the slower-growing ground studies," said Topham. 



"The space station's ability to provide on-the-spot adjustments to experimental conditions or opportunities to quickly repeat microgravity experiments with new conditions are a big plus for researchers," said Julie Robinson, International Space Station program scientist at Johnson Space Center. "This work also shows the surprising results that investigators find when they take a well-understood experiment on Earth and reproduce it on the space station." 

Data from this investigation also will help advance Earth-based greenhouses and controlled-environment agricultural systems and help farmers produce better, healthier crops in small spaces using the optimum amount of water and nutrients. 



The experiment takes advantage of a 20-year-old cooperative agreement between the Space Dynamics Laboratory and the Institute for Biomedical Problems in Moscow, Russia. Each organization benefits from resources provided by their respective national space programs -- the Space Dynamics Laboratory with NASA, and the Institute for Biomedical Problems with the Russian Federal Space Agency. 

Root modules with seeds are launched to the space station on Russian Progress supply vehicles. Russian crew members water the plant seeds and perform maintenance. They also harvest the vegetables and place them in a station freezer before transferring them to a space shuttle freezer for their safe return to Earth for analysis by investigators at the Space Dynamics Laboratory.





How Solar Panels Convert Solar Energy To Electrical Energy ?

The Earth intercepts a lot of solar power: 173 thousand terawatts. That's ten thousand times more power than the planet's population uses. So is it possible that one day the world could be completely reliant on solar energy?
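
As a quick sanity check on those numbers, dividing the intercepted solar power by the stated ratio gives the implied global demand:

```python
intercepted_tw = 173_000     # solar power intercepted by Earth, in terawatts (from the text)
multiple_of_demand = 10_000  # "ten thousand times more power than we use"

implied_demand_tw = intercepted_tw / multiple_of_demand
print(implied_demand_tw)  # 17.3
```

About 17 terawatts of human consumption, which is consistent with commonly quoted estimates of global power use.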

To answer that question, we first need to examine how solar panels convert solar energy to electrical energy. Solar panels are made up of smaller units called solar cells. The most common solar cells are made from silicon, a semiconductor that is the second most abundant element on Earth. 



In a solar cell, crystalline silicon is sandwiched between conductive layers. Each silicon atom is connected to its neighbours by four strong bonds, which keep the electrons in place so no current can flow. 

Here's the key: a silicon solar cell uses two different layers of silicon. N-type silicon has extra electrons, and p-type silicon has extra spaces for electrons, called holes. Where the two types of silicon meet, electrons can wander across the p/n junction, leaving a positive charge on one side and creating a negative charge on the other.



You can think of light as the flow of tiny particles called photons, shooting out from the Sun. When one of these photons strikes the silicon cell with enough energy, it can knock an electron from its bond, leaving a hole. The negatively charged electron and the positively charged hole are now free to move around. But because of the electric field at the p/n junction, they'll only go one way: the electron is drawn to the n-side, while the hole is drawn to the p-side. The mobile electrons are collected by thin metal fingers at the top of the cell. From there, they flow through an external circuit, doing electrical work, like powering a lightbulb, before returning through the conductive aluminium sheet on the back.

Each silicon cell only puts out half a volt, but you can string them together in modules to get more power. Twelve photovoltaic cells are enough to charge a cellphone, while it takes many modules to power an entire house. Electrons are the only moving parts in a solar cell, and they all go back where they came from. There's nothing to get worn out or used up, so solar cells can last for decades.
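
The series-stringing arithmetic is simple: cells wired in series add their voltages. A sketch, using the half-volt figure from the text (the 36-cell module count is a hypothetical illustration):

```python
CELL_VOLTAGE = 0.5  # volts per silicon cell, from the text

def module_voltage(cells_in_series):
    """Cells wired in series add their voltages."""
    return cells_in_series * CELL_VOLTAGE

# A hypothetical 36-cell module:
print(module_voltage(36))  # 18.0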



So what's stopping us from being completely reliant on solar power? There are political factors at play, not to mention businesses that lobby to maintain the status quo. But for now, let's focus on the physical and logistical challenges and the most obvious of those is that solar energy is unevenly distributed across the planet. Some areas are sunnier than others. It's also inconsistent. Less solar energy is available on cloudy days or at night. So a total reliance would require efficient ways to get electricity from sunny spots to cloudy ones and effective storage of energy.

The efficiency of the cell itself is a challenge, too. If sunlight is reflected instead of absorbed or if dislodged electrons fall back into a hole before going through the circuit, that photon's energy is lost. The most efficient solar cell yet still only converts 46% of the available sunlight to electricity and most commercial systems are currently 15-20% efficient.



In spite of these limitations, it actually would be possible to power the entire world with today's solar technology. We'd need the funding to build the infrastructure and a good deal of space. Estimates range from tens to hundreds of thousands of square miles, which seems like a lot, but the Sahara Desert alone is over 3 million square miles in area. Meanwhile, solar cells are getting better, cheaper and are competing with electricity from the grid. And innovations, like floating solar farms, may change the landscape entirely.

Thought experiments aside, there's the fact that over a billion people don't have access to a reliable electric grid, especially in developing countries, many of which are sunny. So in places like that, solar energy is already much cheaper and safer than available alternatives, like kerosene. For, say, Finland or Seattle, though, effective solar energy may still be a little way off.





Better Batteries | Acquisition of Maxwell Technologies by Tesla

Tesla has grown rapidly over the past decade, ever since it became the first American automotive company to go public since Ford in 1956. The attraction of Tesla is undeniable. Their cars are slick, their acceleration is insane, and perhaps most importantly, their brand represents a movement towards renewable energy. Tesla has attracted thousands of well-intentioned people who want to play their part in saving the world, but there has been a niggling question on the minds of many EV owners and EV naysayers alike: when is that expensive battery going to need to be replaced, and at what cost?

As existing Teslas begin to age and more exotic and demanding models, like the Tesla Truck and the Roadster 2, come to the fore, these issues are going to become more prominent. These batteries do not come cheap, but they are getting cheaper: the cost per kilowatt-hour for Tesla power packs, and the market average, has dropped dramatically as technology has advanced and manufacturing volumes have increased. But storage capacity slowly creeps away as a battery is used, gradually degrading the range of your electric vehicle.



Tesla currently offers a warranty to all Model 3 owners that covers the battery for 8 years or 160,000 kilometres, whichever comes first, guaranteeing retention of at least 70% of capacity under normal use. If it falls below that, they will replace your battery for free. Finding out what is considered normal use is pretty difficult, but going by customer satisfaction reports, they seem to be reasonable about it.

It is estimated that Tesla is achieving a cost of $150 per kWh for its battery packs, so the 50 kWh pack of the base model would cost around 7,500 dollars to replace. They must be pretty confident in those numbers, as a massive recall of the approximately 193 thousand Model 3s shipped so far would ruin Tesla.
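
The replacement-cost arithmetic from those estimates can be checked directly:

```python
COST_PER_KWH = 150  # dollars per kWh, the estimate quoted in the text
PACK_KWH = 50       # base Model 3 pack size, from the text

replacement_cost = COST_PER_KWH * PACK_KWH
print(replacement_cost)  # 7500
```

At that price, a warranty replacement costs Tesla roughly a fifth of the car's sticker price, which is why keeping degradation within the 70% guarantee matters so much.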



Ultimately these batteries are unlikely to drop below the warranty's guarantee within those 160,000 kilometres, but even so, improving batteries is obviously a wise business decision to retain those customers in the future. This is just one of a myriad of factors that influenced Tesla's recent landmark acquisition of Maxwell Technologies for $218 million. A rare Tesla acquisition, it sets Tesla up for not just cheaper batteries, but better ones: lighter, with greater range and a longer life.

It wouldn't be the first time an automotive company underestimated its battery degradation. When the Nissan Leaf debuted in 2010, the battery production capacity Nissan needed simply did not exist, and neither did the technical expertise required to design battery packs. In those days lithium-ion batteries cost about 400 dollars per kWh for laptop-grade cells and up to 1,000 dollars per kWh for ones with the longevity needed for an electric vehicle. To minimise costs, Nissan decided to start production of their own batteries and opted for a small 24 kWh battery, giving the car a range of just over 100 kilometres: suitable for city driving and not much else.

But customers soon realised that this paltry range was dwindling quickly. Within just 1-2 years of driving, the Leaf's battery capacity was dropping by up to 27.5% under normal use. Despite careful in-house testing, Nissan had overlooked some crucial test conditions when developing their battery, and because of this they made some crucial design errors.



To learn why this degradation happens, we first need to understand how lithium-ion batteries work. A lithium-ion battery, like all batteries, contains two electrodes, the anode (the negative electrode during discharge) and the cathode (the positive one), separated by an electrolyte. Batteries power devices by transporting positively charged lithium ions from the anode to the cathode, creating an electric potential between the two sides of the battery and forcing electrons to travel through the device being powered to balance that potential.

Critically, this process is reversible in lithium-ion batteries, because the lithium ions are held loosely, slotting into spaces in the crystal structures of the anode and cathode. This is called intercalation. So when an opposite electric potential is applied to the battery, it forces the lithium ions back across the electrolyte bridge to lodge themselves in the anode once again.



This process determines much of the battery's energy storage capability. Lithium is a fantastic material for batteries: with an atomic number of 3, it is the third lightest element and the lightest of the metals, so its ions give any battery excellent energy-to-weight characteristics. But the energy capacity of the battery is determined not by this, but by how many lithium ions can fit into those spaces in the anode and cathode. For example, a graphite anode requires 6 carbon atoms to store a single lithium ion, forming the compound LiC6. This gives a theoretical maximum capacity of 372 mAh per gram.

Silicon, however, can do better. A single silicon atom can bind 4.4 lithium ions, giving it a theoretical maximum capacity of 4,200 mAh per gram. This seems great, but it comes with drawbacks: those 4.4 lithium ions lodging themselves into the silicon crystal lattice cause a volume expansion of 400% when charging from empty to full. This expansion creates stress within the battery that damages the anode material and will eventually destroy its capacity over repeated cycles.
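
Both theoretical figures can be recovered from Faraday's constant: the capacity per gram of host material is n·F / (3.6·M), where n is the number of lithium ions stored per formula unit and M is the host's molar mass. A minimal sketch (the molar masses are standard values, an assumption beyond the text):

```python
F = 96485.33  # Faraday constant, coulombs per mole of electrons

def capacity_mah_per_g(li_per_host, host_molar_mass):
    """Theoretical capacity of a host material in mAh/g.
    Each lithium ion carries one electron's worth of charge;
    1 mAh = 3.6 C, so capacity = n * F / (3.6 * M)."""
    return li_per_host * F / (3.6 * host_molar_mass)

# Graphite: 1 Li per 6 carbon atoms (LiC6); host mass 6 * 12.011 g/mol.
print(round(capacity_mah_per_g(1, 6 * 12.011)))  # 372
# Silicon: 4.4 Li per Si atom (Li4.4Si); host mass 28.086 g/mol.
print(round(capacity_mah_per_g(4.4, 28.086)))    # 4199
```

The calculation makes the silicon advantage concrete: more lithium stored per gram of host means roughly eleven times the theoretical capacity of graphite.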



Battery designers are constantly looking for ways to maximise the energy density of their batteries without sacrificing longevity. So what exactly is being damaged in these batteries that causes them to slowly wither away?

When researchers began investigating what caused the Nissan Leaf's rapid battery degradation, they started by opening the battery and unrolling its contents. They found that the electrode coatings had become coarse over their life; clearly, a non-reversible reaction was occurring within the cell. That change itself was expected. In fact, the chemical process that caused it is vital to the operation of the battery. When a battery is charged for the very first time, a chemical reaction occurs at the electrolyte-electrode interface, where electrons and ions combine. This causes the formation of a new layer between the electrode and electrolyte called the solid electrolyte interphase (SEI).



The name is exactly what it suggests: it's a layer formed by the liquid electrolyte reacting with electrons to form a solid. Helpfully, this layer is permeable to ions but not to electrons, so it initially forms a protective coating over the electrode that allows ions to enter and insert themselves via intercalation while preventing further reaction with the electrolyte. At least, that's the idea under normal conditions. The problem is that under certain conditions this layer can grow beyond a thin protective coating, permanently locking up the lithium that provides the battery with its energy storage.

This process is not entirely well understood, but we can identify some factors that increase the rate of SEI formation. The expansion of the silicon electrode we mentioned earlier fractures the SEI layer, exposing fresh electrode surface to react with the electrolyte. Charging rate and temperature can also accelerate the thickening of this layer.



NASA performed their own in-depth study of this effect and released a report in 2008 titled "Guidelines on Lithium-ion Battery Use in Space Applications" sharing their findings. The temperature at which a battery is charged and discharged plays a massive role in the battery's performance.

Lowering the temperature lowers chemical activity, but this is a double-edged sword. Lower chemical activity reduces the battery's ability to store energy, which is why batteries have shorter ranges in cold countries, but it also decreases the formation rate of the SEI layer.



This is one of the reasons that the Nissan Leaf’s battery lost a huge amount of capacity over just 2 years in many countries. Nissan performed most of its testing in stable laboratory conditions, not over a range of possible temperatures. Because of this, they failed to realise the disastrous effect temperature would have on the life of the battery and failed to include a thermal management system, which is commonplace in any Tesla.

This, of course, reduces the energy density of the battery. Adding the tubing and glycol needed to exchange heat, along with the heat pumps and valves that make up a thermal management system, not only adds weight but draws energy away from the battery to operate. Still, it plays a vital part in maintaining the performance of the battery. Nissan's choice not to include a thermal management system, even in the 2019 version, makes the Leaf a poor choice for anyone living in anything but a temperate climate.



Of course, simply cycling the battery through its charged and discharged states is one of the biggest factors in degrading it. Every time you cycle the battery, you give the SEI layer an opportunity to grow. Minimising the number of times a cell is cycled will increase its life, and maintaining an ideal charge and discharge voltage of about 4 volts minimises the resistive heating that can increase chemical activity.

This is where Maxwell Technologies comes into play. Maxwell has two primary technologies that Tesla will be taking advantage of. The first is what Maxwell is known for: its ultracapacitors. Ultracapacitors serve the same fundamental job as batteries, storing energy, but they function in an entirely different way and are used for entirely different purposes. The fundamental difference between a capacitor and a battery is that a battery stores energy through chemical reactions; as we saw for lithium-ion batteries earlier, this is done through insertion into the crystal lattice. Capacitors instead store their energy as ions clinging onto the surface of the electrode.



On each side of a standard ultracapacitor, we have an aluminium current collector coated with a thin graphite electrode, the two sides separated by an electrolyte and an insulating separator that prevents the passage of electrons. In an uncharged state, ions float in the electrolyte. When a voltage is applied during charging, ions drift towards their opposite charge and cling to the surface, holding the charge in place. When a device is then connected to the capacitor, this charge can quickly leave while the ions drift back into the electrolyte.

The key limiting factor for ultracapacitors is the surface area available for this to happen and nanotechnology has allowed for amazing advances in the field. The inside of an ultracapacitor contains hundreds of layers of these electrode pairs. But even with this enormous surface area, ultracapacitors simply can't compete with batteries when it comes to energy density. Even Maxwell’s best ultracapacitors have an energy density of just 7.4 Wh/kg while the best guess for Tesla’s current energy density is about 250 Wh/kg. 
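To put those two density figures in perspective, here is a back-of-the-envelope comparison using the numbers quoted above; the 75 kWh pack size is an assumption for illustration, roughly that of a long-range Tesla:

```python
def storage_mass_kg(energy_wh, density_wh_per_kg):
    """Mass of storage needed to hold a given amount of energy."""
    return energy_wh / density_wh_per_kg

PACK_ENERGY_WH = 75_000  # assumed 75 kWh pack, for illustration only

battery_mass = storage_mass_kg(PACK_ENERGY_WH, 250)   # ~250 Wh/kg battery estimate
ultracap_mass = storage_mass_kg(PACK_ENERGY_WH, 7.4)  # Maxwell's best ultracapacitor

print(f"Battery pack:        {battery_mass:,.0f} kg")   # → 300 kg
print(f"Ultracapacitor pack: {ultracap_mass:,.0f} kg")  # → 10,135 kg
```

Storing a full car's worth of energy in ultracapacitors alone would weigh over thirty times as much, which is why they are paired with batteries rather than replacing them.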



At this point in time, ultracapacitors are not intended to be a replacement for batteries; they are intended to work in conjunction with them. Ultracapacitors' strength is their ability to quickly charge and discharge without being worn down. This makes them a great buffer to place between the motors and the battery. Their high discharge rate will allow them to give surges of electricity to the motors when rapid acceleration is needed, and to charge quickly during braking, saving the battery from unnecessary cycles and boosting the car's ability to quickly provide current when needed for acceleration.

This is going to be a massively important technology for two upcoming Tesla vehicles. The first is the Tesla Roadster, which will boast 0-60 acceleration in just 1.9 seconds; a normal battery would struggle to achieve the needed discharge rate without damaging itself. The second is the Tesla Truck, which is going to be limited in its range and cargo-hauling ability as a result of the heavy batteries it will need, since trucks are limited to a total weight of about 40 metric tonnes in most countries. This ultracapacitor technology will significantly boost its ability to regain energy from braking, allowing its battery capacity to decrease and, in turn, allowing the truck to swap batteries for cargo.



The second technology Maxwell has been touting as its next big breakthrough is dry-coated batteries. This is a manufacturing advancement that Maxwell claims will reduce the cost of manufacturing, a factor Tesla has been working fervently to minimize with the growth of the Gigafactory. So, what are dry-coated batteries?

Currently, in order to coat their current collectors with electrode material, Tesla, using Panasonic's patented technology, must first dissolve the electrode material in a solvent, which is then spread over the current collector; both are then passed through an oven for drying, where the solvent evaporates, leaving just the electrode material behind. This adds to the cost of the manufacturing procedure, as the solvent is lost in the process and the baking takes energy. On top of this, the solvent is toxic, so removing it from the process would benefit the environment.



Maxwell instead uses a binding agent and a conductive agent, in a process I assume works similarly to electrostatic painting, where the metal being painted is given a negative charge while the paint is given a positive charge as it is sprayed, attracting it to the metal, where it clings. Electrostatic painting likewise eliminates the solvents needed in paint.

In a paper published by Maxwell Technologies, they detail how their dry-coating manufacturing technique could give the electrodes a higher energy storage capacity, thanks to a denser, thicker coating, potentially increasing battery energy density to 300 watt-hours per kilogram, 20% up from our best estimate of Tesla's current specs. Only time will tell whether this claim can be realised at industrial scale. Perhaps more importantly to Tesla, they now own this manufacturing technique.



Currently, Panasonic owns the battery manufacturing process for Tesla; there is literally a line of demarcation in the Gigafactory separating Panasonic and Tesla, marking the point at which ownership of the batteries changes hands. Having to buy its batteries from Panasonic adds cost that Tesla will want to avoid in future, and this step could allow full vertical integration of its battery manufacturing, making electric vehicles more affordable to the everyday consumer.





Why Is It So Hard To Predict An Earthquake?

San Francisco has been hit by a big earthquake at least once every hundred years going back as far as we know. So the people of San Francisco know that sometime in the next 100 years, they are likely to get shaken again by a big quake. But we can't say exactly when the quake might hit. Right now, all we can do is construct shake-proof buildings and put out seismic sensors. That way, when an earthquake sends out underground waves, which travel out from its epicentre considerably faster than the destructive surface waves, we can detect the underground waves with enough time to give a warning like: “Uh oh! An earthquake is about to hit us!”... which is, surprisingly, enough time to turn off gas pipelines, stop trains, and find cover.
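The head start comes straight from the wave-speed difference. Assuming illustrative values of about 6 km/s for the fast underground P waves and 3 km/s for the destructive surface waves (real speeds vary considerably with geology and depth), the warning time is just the gap between the two arrival times:

```python
def warning_seconds(distance_km, v_p_kms=6.0, v_surface_kms=3.0):
    """Seconds between P-wave detection and surface-wave arrival.

    Wave speeds are rough crustal values for illustration;
    actual speeds depend on local geology.
    """
    return distance_km / v_surface_kms - distance_km / v_p_kms

# A quake 100 km away gives roughly 17 seconds of warning --
# enough to shut gas lines and stop trains, not to evacuate a city.
print(f"{warning_seconds(100):.1f} s")
```

Notice that the warning grows with distance, so a sensor right on top of the epicentre gives essentially no warning at all, which is why early-warning networks spread sensors across a whole region.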

But it doesn’t help people get out of town. For people to evacuate safely from natural disasters, it’s not helpful to give a really short warning or a really big window within which a disaster might happen. According to disaster experts, 2 days is just right. But if we want to be able to predict earthquakes with that amount of precision, we need to understand a LOT more about how they work.



We have tried looking backwards at quakes that have already happened and identifying events that occurred in the days before they hit, like multiple mini-quakes, big releases in radon gas, changes in magnetism, and even weird animal behaviour, to see if any of these were predictors of an impending quake. But lots of times these things happen without accompanying earthquakes and lots of times earthquakes happen without these things, so, so far we have not been able to find any reliable predictors.

Another approach is to build an accurate model of the earth beneath our feet. We know that over time, as tectonic plates rub against each other, the stress that builds up is sometimes violently released as an earthquake. If we had a really good model and reliable measurements of the forces on the plates, maybe then we could predict when and where an earthquake was going to happen. But plates are often more than 15 miles thick. That's twice as deep as humans have ever dug, so it would be pretty difficult to get monitoring equipment deep enough. So, we are creating mini-faults in the lab, to better understand the forces on moving plates, and to help identify reliable ways to measure those forces from the surface of the earth.



But in order to test our models, we need to be able to compare them to actual gigantic earthquakes, which, as we mentioned, do not happen that often. Luckily for researchers, a few ocean faults are more productive and frequently cause large but relatively harmless quakes, giving us a regular way to calibrate and fine-tune our models.

One big thing they have helped us learn is that the interactions between fault segments are really important: for example, when this particular segment slips, it increases the chances its neighbour will slip, letting us predict where the next quake will happen. In some faults, we can even say that it will happen within a couple of years. Compared to a hundred-year window, that’s really precise, but there are still two big problems.



First, these ocean faults are relatively simple, so we still have to figure out how to apply what we have learned from them to more complicated faults, like the ones near San Francisco. And second, even if we could do that, we would still be a long way away from the ideal two-day notice. And unfortunately, our existing methods probably are not going to help us get there. What we need is an earth-shattering breakthrough.

Thanks to Matt Wei, a professor in URI’s Graduate School of Oceanography. Dr. Wei uses seismic data and simulations to study the physics of plate tectonics and earthquakes. His research on fast-spreading oceanic transform faults - like the Discovery fault in the East Pacific - has helped us start to understand the importance of earthquake cycles as we work to crack the code of earthquake physics.

Also Read: Are We Ready To Predict Another Carrington-like Event Accurately?



Internet

Today about 4.2 billion people have access to a world of information never before seen. Such an extraordinary level of connectedness has revolutionized everything from science and technology to commerce and romance, and virtually every aspect of our lives. 

Of all the technological innovations in history, few have made as strong an impact as the internet. Comprising a global network of computers, the internet allows for the transmission of information and connectivity at unprecedented speed and scale. Some of the first computer networks began in the 1950s and 60s, but unlike today's global network, these early networks were centralized within certain businesses and agencies. It wasn't until 1969 that these centralized computer networks became connected.


Funded by the US Department of Defense and developed by universities, this host-to-host network connection was called ARPANET. A direct ancestor of the internet, ARPANET was the first of its kind. The network grew, and by the 1980s it incorporated networks at research institutions and other US federal agencies such as the National Science Foundation, or NSF. The NSF connected these disparate networks into one large network, NSFNET, which later shifted from being a federally run network to a commercial enterprise serving internet service providers.

By the late 1990s, this shift, along with the rise of personal computers, the world wide web, and web browsers, allowed the general public to access the internet for the very first time. Today, computers, smartphones, televisions, video game consoles, and other devices all tap into the network, transmitting and receiving data almost instantly.


When you click send in a messaging app, the text, audio, and video are converted into pieces of electronic data called packets. These packets are then tagged with a port number and an IP address, much like the mailing address on an envelope. The port number and IP address direct the packets to a given destination on the internet. From there, the packets may travel over Wi-Fi, cellular data, or an Ethernet or phone line, through a series of routers, modems, and servers, then through fibre-optic cables or satellites, and through a similar process in reverse to reach their destination. Once the packets arrive, their data is reassembled into the text, audio, or video that was originally sent.
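The split-number-reassemble step can be sketched in a few lines. This is a heavy simplification: real IP packets carry far more header information than a sequence number, and the field names here are invented for illustration:

```python
import random

def packetize(message: bytes, payload_size: int = 4):
    """Split a message into small, numbered packets."""
    return [
        {"seq": i, "payload": message[i:i + payload_size]}
        for i in range(0, len(message), payload_size)
    ]

def reassemble(packets):
    """Sort packets back into order by sequence number and rejoin them."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

packets = packetize(b"hello, internet!")
random.shuffle(packets)     # packets may arrive out of order over different routes
print(reassemble(packets))  # → b'hello, internet!'
```

The key point the sketch captures is that packets can take different routes and arrive in any order; the sequence numbers let the receiver rebuild the original message regardless.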

Since the days of the earliest computer networks, the internet has undergone a tremendous transformation, while also transforming the world that created it. From a closed off network to one that covers the globe, the Internet has provided access to information to every continent connecting people and ideas like never before.




Mystery Around Hypatia Stone

From snazzy banded agates to volcanic elephants, there are some pretty weird rocks out there. But the weirdest ones geologists find might be those that fall from space, and one of them, called the Hypatia stone, might be the strangest of them all. In fact, all signs currently suggest that this rock’s origin story is older than the solar system itself, and if it is not... well, we are going to have to rethink what we know about our cosmic neighbourhood.

The Hypatia stone was found in 1996 by a geologist in the southwest Egyptian Sahara. It is named after Hypatia of Alexandria, the first female astronomer and mathematician to make it into the history books.



The stone was discovered in fragments no bigger than a centimetre across, and in total the pieces added up to a volume of only about 20 cubic centimetres. Technically, it isn’t considered a meteorite, because the Meteoritical Society requires 20% of a rock’s original mass to be present to earn that title, and scientists have chipped this thing apart and sent pieces to so many labs that it no longer fits the bill.

But it’s definitely from space. When its extraterrestrial origins were confirmed in 2013, scientists assumed it was the very first comet nucleus or the rocky, central part of a comet, to be found on Earth. But its story is a bit more complicated and interesting.

First, exactly when the stone struck our planet is hard to pin down. It was found in an area of the Sahara which is full of these special rocks called Libyan Desert Glass, which are believed to have been created by a meteorite impact 28 million years ago. But the relationship between the Hypatia stone and this glass is far from certain. We are also not positive how big this rock was when it initially entered Earth’s atmosphere.



Based on its amount of a certain type of neon, we think it could not have been more than several meters in diameter, or, if it were bigger, the fragment that became the Hypatia stone had to have come from the parent body's upper few meters. These basic details are important to figure out, but what’s really strange about the Hypatia stone is what researchers discovered once they started analysing its composition. Because from what we can tell, Hypatia’s chemical makeup isn’t just out of this world.

It’s out of the entire solar system! See, everything in our neighbourhood formed out of the same cloud of dust and gas. And since astronomers believe that the cloud was relatively homogeneous, the rocky bits that formed should all have roughly the same chemical makeup.



But in 2015, scientists revealed that the Hypatia stone is different. It has a composition unlike any other solar system object studied in a lab. For example, its amount of the isotope nitrogen-15, a type of nitrogen with an extra neutron, was way off for it to be a standard comet.

Astronomers also found a type of xenon that’s created when one iodine isotope, one that predates the solar system, undergoes radioactive decay. So something about this thing totally isn’t normal. And in 2018, we got an even deeper analysis.

In February 2018, a team of astronomers announced that they had identified two separate, yet intermingled, matrices in the Hypatia stone, kind of like finding two different batters in the same cake. The matrices themselves had to have formed when the solar system did, because Hypatia needed a cloud of dense interstellar dust to form. But they had the opposite composition of carbon and silicon to common meteorites.



The ones we normally see, called chondritic meteorites, are low in carbon and high in silicon, but Hypatia has lots of carbon and basically no silicon. So again, not normal. But what was even more surprising about this analysis is that one of those matrices was also chock-full of deposits or inclusions. And each of them likely existed before the entire solar system! This includes moissanite grains, which are commonly a small part of some meteorites but are considered to be mostly pre-solar.

They also found a nugget of pure metallic aluminium in Hypatia, which is super rare in solar system rocks. And there were also a lot of these organic molecules called polycyclic aromatic hydrocarbons or PAHs, which are a big part of interstellar dust. PAHs are also inside certain comets and asteroids, so finding them in the Hypatia stone wasn’t unusual, but the abundance of them was. Conveniently, these PAHs were also a big reason we are able to study the stone today.



Many of them were turned into a crust of tiny diamonds, likely when Hypatia crashed into the Earth, and they protected and preserved the inside of the rock for millions of years. But that doesn’t explain where they came from. And there were other compounds found that haven’t been observed in any studied space rock, too. So the Hypatia stone is still completely unique.

At least as far as we know. Although it’s a pretty compelling case, we will still need further analysis of certain isotopes before we can definitively say that parts of this rock existed before the Sun. But the exciting news is, the authors of that 2018 paper hope to get that research out ASAP.

So, even if Hypatia turns out not to be pre-solar, that might be even weirder. That would imply that the early solar system wasn’t homogeneous after all, despite the generally accepted view. So we would have to change the way we think about our neighbourhood’s history.



Based on what we know so far, astronomers can at least tell that the stone had to have formed in a super cold environment, one below about -200°C. So if it is from around here after all, that likely means Hypatia had to have formed out in the Kuiper Belt where Pluto lives, or even farther away, like in the distant, mysterious Oort cloud.

We don’t actually know a lot about the composition of all the bodies that far out there, so it could totally turn out that there are other Hypatia-like space rocks. Mostly, all this means is that we just have to keep looking. But no matter what, the answer to this mystery is going to be a cool one.

Also Read: Pulsed Plasma Thrusters


