
New robot does superior job sampling blood



In the future, robots could take blood samples, benefiting patients and healthcare workers alike.

A Rutgers-led team has created a blood-sampling robot that performed as well as or better than people, according to the first human clinical trial of an automated blood drawing and testing device.

The device provides quick results and would allow healthcare professionals to spend more time treating patients in hospitals and other settings.

The results, published in the journal Technology, were comparable to or exceeded clinical standards, with an overall success rate of 87% for the 31 participants whose blood was drawn. For the 25 people whose veins were easy to access, the success rate was 97%.

The device includes an ultrasound image-guided robot that draws blood from veins. A fully integrated device, which includes a module that handles samples and a centrifuge-based blood analyzer, could be used at bedsides and in ambulances, emergency rooms, clinics, doctors’ offices and hospitals.


Venipuncture, which involves inserting a needle into a vein to get a blood sample or perform IV therapy, is the world’s most common clinical procedure, with more than 1.4 billion performed annually in the United States. But clinicians fail in 27% of patients without visible veins, 40% of patients without palpable veins and 60% of emaciated patients, according to previous studies.

Repeated failures to start an IV line boost the likelihood of phlebitis, thrombosis and infections, and may require targeting large veins in the body or arteries — at much greater cost and risk. As a result, venipuncture is among the leading causes of injury to patients and clinicians. Moreover, difficulty accessing veins can increase procedure time by up to an hour, require more staff and cost more than $4 billion a year in the United States, according to estimates.

“A device like ours could help clinicians get blood samples quickly, safely and reliably, preventing unnecessary complications and pain in patients from multiple needle insertion attempts,” said lead author Josh Leipheimer, a biomedical engineering doctoral student in the Yarmush lab in the biomedical engineering department in the School of Engineering at Rutgers University-New Brunswick.

In the future, the device could be used in such procedures as IV catheterization, central venous access, dialysis and placing arterial lines. Next steps include refining the device to improve success rates in patients whose veins are difficult to access. Data from this study will be used to enhance the robot’s artificial intelligence and improve its performance.

Rutgers co-authors include Max L. Balter and Alvin I. Chen, who both graduated with doctorates; Enrique J. Pantin at Rutgers Robert Wood Johnson Medical School; Professor Kristen S. Labazzo; and principal investigator Martin L. Yarmush, the Paul and Mary Monroe Endowed Chair and Distinguished Professor in the Department of Biomedical Engineering. A researcher at Icahn School of Medicine at Mount Sinai Hospital also contributed to the study.


Story Source:

Materials provided by Rutgers University. Note: Content may be edited for style and length.


Journal Reference:

  1. Josh M. Leipheimer, Max L. Balter, Alvin I. Chen, Enrique J. Pantin, Alexander E. Davidovich, Kristen S. Labazzo, Martin L. Yarmush. First-in-human evaluation of a hand-held automated venipuncture device for rapid venous blood draws. Technology, 2020; DOI: 10.1142/S2339547819500067

Accelerating chemical reactions without direct contact with a catalyst



“Improving our understanding of the catalyst-intermediary-reaction relationship could greatly expand the possibilities of catalytic reactions,” said Harold Kung, Walter P. Murphy Professor of Chemical and Biological Engineering at the McCormick School of Engineering, who led the research. “By learning that a chemical reaction can proceed without direct contact with a catalyst, we open the door to using catalysts from earth-abundant elements to perform reactions they normally wouldn’t catalyze.”

The study, titled “Noncontact Catalysis: Initiation of Selective Ethylbenzene Oxidation by Au Cluster-Facilitated Cyclooctene Epoxidation,” was published January 31 in the journal Science Advances. Mayfair Kung, a research associate professor of chemical and biological engineering, was a co-corresponding author on the paper. Linda Broadbelt, Sarah Rebecca Roland Professor of Chemical and Biological Engineering and associate dean for research, also contributed to the study.


The research builds on previous work in which the team investigated the selective oxidation of cyclooctene—a type of hydrocarbon—using gold (Au) as a catalyst. The study revealed that the reaction was catalyzed by dissolved gold nanoclusters. Surprised, the researchers set out to investigate how well the gold clusters could catalyze selective oxidation of other hydrocarbons.

Using a platform they developed called Noncontact Catalysis System (NCCS), the researchers tested the effectiveness of a gold catalyst against ethylbenzene, an organic compound prevalent in the production of many plastics. While ethylbenzene did not undergo any reaction in the presence of the gold clusters, the team found that when the gold clusters reacted with the cyclooctene, the resulting molecule provided the necessary intermediary to produce ethylbenzene oxidation.

Bending diamond at the nanoscale



The discovery opens up a range of possibilities for the design and engineering of new nanoscale devices in sensing, defence and energy storage but also shows the challenges that lie ahead for future nanotechnologies, the researchers say.

Carbon-based nanomaterials, such as diamond, were of particular scientific and technological interest because, “in their natural form, their mechanical properties could be very different from those at the micro and nanoscale,” said the lead author of the study, published in Advanced Materials, PhD student Blake Regan from the University of Technology Sydney (UTS).

“Diamond is the frontrunner for emerging applications in nanophotonics, microelectrical mechanical systems and radiation shielding. This means a diverse range of applications in medical imaging, temperature sensing and quantum information processing and communication.


“It also means we need to know how these materials behave at the nanoscale — how they bend, deform, change state, crack. And we haven’t had this information for single-crystal diamond,” Regan said.

The team, which included scientists from Curtin University and Sydney University, worked with diamond nanoneedles, approximately 20 nm in length, or 10,000 times smaller than a human hair. The nanoneedles were subjected to an electric field force from a scanning electron microscope. By using this unique, non-destructive and reversible technique, the researchers were able to demonstrate that the nanoneedles, also known as diamond nanopillars, could be bent in the middle to 90 degrees without fracturing.

As well as this elastic deformation, the researchers observed a new form of plastic deformation when the nanopillar dimensions and crystallographic orientation of the diamond occurred together in a particular way.

Chief Investigator UTS Professor Igor Aharonovich said the result was the unexpected emergence of a new state of carbon (termed O8-carbon) and demonstrated the “unprecedented mechanical behaviour of diamond.”

“These are very important insights into the dynamics of how nanostructured materials distort and bend and how altering the parameters of a nanostructure can alter any of its physical properties from mechanical to magnetic to optical. Unlike many other hypothetical phases of carbon, O8-carbon appears spontaneously under strain, with the diamond-like bonds progressively breaking in a zipper-like manner, transforming a large region from diamond into O8-carbon.

“The potential applications of nanotechnology are quite diverse. Our findings will support the design and engineering of new devices in applications such as super-capacitors or optical filters or even air filtration,” he said.

Induced flaws in metamaterials can produce useful textures and behaviour



A new Tel Aviv University study shows how induced defects in metamaterials — artificial materials the properties of which are different from those in nature — also produce radically different consistencies and behaviors. The research has far-reaching applications: for the protection of fragile components in systems that undergo mechanical traumas, like passengers in car crashes; for the protection of delicate equipment launched to space; and even for grabbing and manipulating distant objects using a small set of localized manipulations, like minimally invasive surgery.


“We’ve seen non-symmetric effects of a topological imperfection before. But we’ve now found a way to create these imperfections in a controlled way,” explains Prof. Yair Shokef of TAU’s School of Mechanical Engineering, co-author of the new study. “It’s a new way of looking at mechanical metamaterials, to borrow concepts from condensed-matter physics and mathematics to study the mechanics of materials.”

The new research is the fruit of a collaboration between Prof. Shokef and Dr. Erdal Oğuz of TAU and Prof. Martin van Hecke and Anne Meeussen of Leiden University and AMOLF in Amsterdam. The study was published in Nature Physics on January 27. “Since we’ve developed general design rules, anyone can use our ideas,” Prof. Shokef adds.

“We were inspired by LCD-screens that produce different colors through tiny, ordered liquid crystals,” Prof. Shokef says. “When you create a defect — when, for example, you press your thumb against a screen — you disrupt the order and get a rainbow of colors. The mechanical imperfection changes how your screen functions. That was our jumping off point.”

The scientists designed a complex mechanical metamaterial using three-dimensional printing, inserted defects into its structure and showed how such localized defects influence the mechanical response. The material they created was flat, made out of triangular puzzle pieces with sides that move by bulging out or dimpling in. When “perfect,” the material is soft when squeezed from two sides, but in an imperfect material, one side of the material is soft and the other stiff. This effect flips when the structure is expanded at one side and squeezed at the other: stiff parts become soft, and soft parts stiff.

“That’s what we call a global, topological imperfection,” Prof. Shokef explains. “It’s an irregularity that you can’t just remove by locally flipping one puzzle piece. Specifically, we demonstrated how we can use such defects to steer mechanical forces and deformations to desired regions in the system.”

The new research advances the understanding of structural defects and their topological properties in condensed-matter physics systems. It also establishes a bridge between periodic, crystal-like metamaterials and disordered mechanical networks, which are often found in biomaterials.

The research team plans to continue their research into three-dimensional complex metamaterials, and to study the richer geometry of imperfections there.

The shape of water: What water molecules look like on the surface of materials



Understanding the various molecular interactions and structures that arise among surface water molecules would enable scientists and engineers to develop all sorts of novel hydrophobic/hydrophilic materials or improve existing ones. For example, the friction caused by water on ships could be reduced through materials engineering, leading to higher efficiency. Other applications include, but are not limited to, medical implants and anti-icing surfaces for airplanes. However, the phenomena that occur in surface water are so complicated that Tokyo University of Science, Japan, has established a dedicated research center, called “Water Frontier Science and Technology,” where various research groups tackle this problem from different angles (theoretical analysis, experimental studies, material development, and so on). Prof Takahiro Yamamoto leads a group of scientists at this center, and they try to solve this mystery through simulations of the microscopic structures, properties, and functions of water on the surface of materials.


For this study in particular, which was published in the Japanese Journal of Applied Physics, the researchers from Tokyo University of Science, in collaboration with researchers from the Science Solutions Division, Mizuho Information & Research Institute, Inc., focused on the interactions between water molecules and graphene, a charge-neutral carbon-based material that can be made atomically flat. “Surface water on carbon nanomaterials such as graphene has attracted much attention because the properties of these materials make them ideal for studying the microscopic structure of surface water,” explains Prof Yamamoto. It had already been pointed out in previous studies that water molecules on graphene tend to form stable polygonal (2D) shapes in both surface water and “free” water (water molecules away from the surface of the material). Moreover, it had been noted that the probability of finding these structures was drastically different in surface water than in free water. However, the differences between surface and free water have yet to be fully established, and the transition between the two is difficult to analyze using conventional simulation methods.

Considering this situation, the research team decided to combine a method taken from data science, called persistent homology (PH), with simulations of molecular dynamics. PH allows for the characterization of data structures, including those contained in images/graphics, but it can also be used in materials science to find stable 3D structures between molecules. “Our study represents the first time PH was used for a structural analysis of water molecules,” remarks Prof Yamamoto. With this strategy, the researchers were able to obtain a better idea of what happens to surface water molecules as more layers of water are added on top.
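To make the persistent homology step more concrete, here is a minimal sketch, assuming numpy and the third-party ripser package, of computing persistence diagrams for a cloud of 3D atomic coordinates. It is not the authors’ code, and the random coordinates below merely stand in for positions extracted from a molecular-dynamics snapshot.

```python
# Minimal persistent-homology sketch (illustrative only, not the study's code).
# Assumes numpy and the third-party ripser package; the coordinates are random
# stand-ins for atomic positions from a molecular-dynamics snapshot.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 10.0, size=(200, 3))  # hypothetical N x 3 positions

# Persistence diagrams up to dimension 2:
#   H0 = connected clusters, H1 = rings (e.g., polygonal hydrogen-bond loops),
#   H2 = enclosed voids (e.g., tetrahedral cages).
diagrams = ripser(coords, maxdim=2)["dgms"]

for dim, dgm in enumerate(diagrams):
    lifetimes = dgm[:, 1] - dgm[:, 0]
    finite = lifetimes[np.isfinite(lifetimes)]
    longest = finite.max() if finite.size else float("nan")
    print(f"H{dim}: {len(dgm)} features, longest finite lifetime {longest:.2f}")
```

Long-lived features in the H1 and H2 diagrams correspond to persistent rings and enclosed voids; in the study’s setting, those are the stable polygonal and tetrahedral arrangements discussed next.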

When a single layer of water molecules is laid on top of graphene, the water molecules align so that their hydrogen atoms form stable polygonal structures with different numbers of sides through hydrogen bonds. This “fixes” the orientation and relative position of these first-layer water molecules, which are now forming shapes parallel to the graphene layer. If a second layer of water molecules is added, the molecules from the first and second layers form 3D structures called tetrahedrons, which resemble a pyramid but with a triangular base. Curiously, these tetrahedrons are mostly pointing downwards (towards the graphene layer), because this orientation is “energetically favorable.” In other words, the order from the first layer translates to the second one to form these 3D structures with a consistent orientation. However, as a third and more layers are added, the tetrahedrons that form don’t necessarily point downwards and instead appear to be free to point in any direction, swayed by the surrounding forces. “These results confirm that the crossover between surface and free water occurs within only three layers of water,” explains Prof Yamamoto.

The researchers have provided a video of one of their simulations where these 2D and 3D structures are highlighted, allowing one to understand the full picture. “Our study is a good example of the application of modern data analysis techniques to gain new and important insights,” adds Prof Yamamoto. What’s more, these predictions should not be hard to measure experimentally on graphene through atomic-force microscopy techniques, which would, without a doubt, confirm the existence of these structures and further validate the combination of techniques used. Prof Yamamoto concludes: “Although graphene is a rather simple surface and we could expect more complicated water structures on other types of materials, our study provides a starting point for discussions of more realistic surface effects, and we expect it will lead to the control of surface properties.”

Controlling light-with-light without nonlinearity



In 1678, Christiaan Huygens stipulated that ‘…light beams traveling in different and even opposite directions pass through one another without mutual disturbance’1 and in the framework of classical electrodynamics, this superposition principle remains unchallenged for electromagnetic waves interacting in vacuum or inside an extended medium.2 Since the invention of the laser, colossal effort has been focused on the study and development of intense laser sources and nonlinear media for controlling light with light, from the initial search for optical bistability3 to recent quests for all-optical data networking and silicon photonic circuits. However, interactions of light with nanoscale objects provide some leeway for violation of the linear superposition principle. This is possible through the use of coherent interactions, which have been successfully engaged in applications ranging from phased array antennas to the manipulation of light distributions and quantum states of matter.4,5,6,7,8,9,10,11

Consider a thin light-absorbing film of sub-wavelength thickness: the interference of two counter-propagating incident beams A and B on such a film is described by two limiting cases illustrated in Figure 1: in the first, a standing wave is formed with a zero-field node at the position of the absorbing film. As the film is much thinner than the wavelength of the light, its interaction with the electromagnetic field at this minimum is negligible and the absorber will appear to be transparent for both incident waves. On the other hand, if the film is at a standing wave field maximum, an antinode, the interaction is strong and absorption becomes very efficient. Altering the phase or intensity of one beam will disturb the interference pattern and change the absorption (and thereby transmission) of the other. For instance, if the film is located at a node of the standing wave, blocking beam B will lead to an immediate increase in loss for beam A and therefore a decrease in its transmitted intensity. Alternatively, if the film is located at an antinode of the standing wave, blocking beam B will result in a decrease of losses for beam A and an increase in its transmitted intensity. In short, manipulating either the phase or intensity of beam B modulates the transmitted intensity of beam A.

Figure 1

Interaction of light with light on a nanoscale absorber. Two coherent counter-propagating beams A and B are incident on an absorber of sub-wavelength thickness, for instance, a lossy plasmonic metamaterial film. Two limiting regimes of interaction exist wherein the beams at the film interfere either (a) destructively or (b) constructively to effect total transmission or total absorption, respectively.

To optimize the modulation efficiency, the film should absorb half of the energy of a single beam passing through it. Under such circumstances, 100% light-by-light modulation can be achieved when signal A is modulated by manipulating the phase of beam B and 50% modulation can be achieved if control is encoded in the intensity of beam B. Moreover, one will observe that when the intensities of the two beams are equal and the film is located at an antinode, all light entering the metamaterial will be absorbed, while at a node, light transmitted by the film will experience no Joule losses.
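As a rough numerical illustration of those figures (a sketch of ours, not taken from the paper), the snippet below assumes an idealized, infinitely thin film with amplitude reflection r = −0.5 and transmission t = 0.5, the condition under which a lone beam loses exactly half its energy, and scans the relative phase of beam B to show the absorbed fraction swinging between 0 and 1.

```python
# Idealized two-beam interference on a thin absorber (illustrative sketch only).
# r and t are assumed amplitude coefficients giving 50% single-beam absorption.
import numpy as np

r, t = -0.5, 0.5
for phi in np.linspace(0.0, np.pi, 5):      # relative phase of beam B
    a1, a2 = 1.0, np.exp(1j * phi)          # equal-intensity counter-propagating beams
    b1 = r * a1 + t * a2                    # outgoing wave on side 1
    b2 = t * a1 + r * a2                    # outgoing wave on side 2
    absorbed = 1.0 - (abs(b1) ** 2 + abs(b2) ** 2) / (abs(a1) ** 2 + abs(a2) ** 2)
    print(f"relative phase {phi:4.2f} rad -> absorbed fraction {absorbed:.2f}")
# phi = 0 (film at an antinode) gives total absorption; phi = pi (node) gives none.
```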

Here, it should be noted that for fundamental reasons, an infinitely thin film can absorb not more than half of the energy of the incident beam.12,13 At the same time, a level of absorption of 50% is difficult to achieve in thin unstructured metal films: across most of the optical spectrum, incident energy will either be reflected or transmitted by such a film. Recently reported much higher absorption levels have only been achieved in layered structures of finite thickness14,15,16,17,18 that are unsuitable for implementation of the scheme presented in Figure 1. However, in the optical part of the spectrum, a very thin nanostructured metal film can deliver strong resonant absorption approaching the 50% target at a designated wavelength. Such metal films, periodically structured on the sub-wavelength scale, are known as planar plasmonic metamaterials.
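For completeness, the one-half bound quoted above can be recovered with a short standard argument (our sketch, not a quotation from refs. 12 and 13): a deeply sub-wavelength sheet scatters the same field forward and backward, so if s is the scattered amplitude, the transmitted and reflected amplitudes are t = 1 + s and r = s, and the absorbed fraction is

$$A = 1 - |1+s|^{2} - |s|^{2}, \qquad \max_{s} A = A\!\left(s = -\tfrac{1}{2}\right) = \tfrac{1}{2},$$

so a free-standing film of vanishing thickness can dissipate at most half of a single incident beam’s power, which is exactly the operating point assumed above.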


Model beats Wall Street analysts in forecasting business financials



Knowing a company’s true sales can help determine its value. Investors, for instance, often employ financial analysts to predict a company’s upcoming earnings using various public data, computational tools, and their own intuition. Now MIT researchers have developed an automated model that significantly outperforms humans in predicting business sales using very limited, “noisy” data.

In finance, there’s growing interest in using imprecise but frequently generated consumer data — called “alternative data” — to help predict a company’s earnings for trading and investment purposes. Alternative data can comprise credit card purchases, location data from smartphones, or even satellite images showing how many cars are parked in a retailer’s lot. Combining alternative data with more traditional but infrequent ground-truth financial data — such as quarterly earnings, press releases, and stock prices — can paint a clearer picture of a company’s financial health on even a daily or weekly basis.


But, so far, it’s been very difficult to get accurate, frequent estimates using alternative data. In a paper published this week in the Proceedings of ACM Sigmetrics Conference, the researchers describe a model for forecasting financials that uses only anonymized weekly credit card transactions and three-month earnings reports.

Tasked with predicting quarterly earnings of more than 30 companies, the model outperformed the combined estimates of expert Wall Street analysts on 57 percent of predictions. Notably, the analysts had access to any available private or public data and other machine-learning models, while the researchers’ model used a very small dataset of the two data types.

“Alternative data are these weird, proxy signals to help track the underlying financials of a company,” says first author Michael Fleder, a postdoc in the Laboratory for Information and Decision Systems (LIDS). “We asked, ‘Can you combine these noisy signals with quarterly numbers to estimate the true financials of a company at high frequencies?’ Turns out the answer is yes.”

The model could give an edge to investors, traders, or companies looking to frequently compare their sales with competitors. Beyond finance, the model could help social and political scientists, for example, to study aggregated, anonymous data on public behavior. “It’ll be useful for anyone who wants to figure out what people are doing,” Fleder says.

Joining Fleder on the paper is EECS Professor Devavrat Shah, who is the director of MIT’s Statistics and Data Science Center, a member of the Laboratory for Information and Decision Systems, a principal investigator for the MIT Institute for Foundations of Data Science, and an adjunct professor at the Tata Institute of Fundamental Research.

Tackling the “small data” problem

For better or worse, a lot of consumer data is up for sale. Retailers, for instance, can buy credit card transactions or location data to see how many people are shopping at a competitor. Advertisers can use the data to see how their advertisements are impacting sales. But getting those answers still primarily relies on humans. No machine-learning model has been able to adequately crunch the numbers.

Counterintuitively, the problem is actually lack of data. Each financial input, such as a quarterly report or weekly credit card total, is only one number. Quarterly reports over two years total only eight data points. Credit card data for, say, every week over the same period is only roughly another 100 “noisy” data points, meaning they contain potentially uninterpretable information.

“We have a ‘small data’ problem,” Fleder says. “You only get a tiny slice of what people are spending and you have to extrapolate and infer what’s really going on from that fraction of data.”

For their work, the researchers obtained consumer credit card transactions — at typically weekly and biweekly intervals — and quarterly reports for 34 retailers from 2015 to 2018 from a hedge fund. Across all companies, they gathered 306 quarters-worth of data in total.

Computing daily sales is fairly simple in concept. The model assumes a company’s daily sales remain similar, only slightly decreasing or increasing from one day to the next. Mathematically, that means sales values for consecutive days are multiplied by some constant value plus some statistical noise value — which captures some of the inherent randomness in a company’s sales. Tomorrow’s sales, for instance, equal today’s sales multiplied by, say, 0.998 or 1.01, plus the estimated number for noise.

If given accurate model parameters for the daily constant and noise level, a standard inference algorithm can calculate that equation to output an accurate forecast of daily sales. But the trick is calculating those parameters.

Untangling the numbers

That’s where quarterly reports and probability techniques come in handy. In a simple world, a quarterly report could be divided by, say, 90 days to calculate the daily sales (implying sales are roughly constant day-to-day). In reality, sales vary from day to day. Also, including alternative data to help understand how sales vary over a quarter complicates matters: Apart from being noisy, purchased credit card data always consist of some indeterminate fraction of the total sales. All that makes it very difficult to know how exactly the credit card totals factor into the overall sales estimate.

“That requires a bit of untangling the numbers,” Fleder says. “If we observe 1 percent of a company’s weekly sales through credit card transactions, how do we know it’s 1 percent? And, if the credit card data is noisy, how do you know how noisy it is? We don’t have access to the ground truth for daily or weekly sales totals. But the quarterly aggregates help us reason about those totals.”

To do so, the researchers use a variation of the standard inference algorithm, called Kalman filtering or Belief Propagation, which has been used in various technologies from space shuttles to smartphone GPS. Kalman filtering uses data measurements observed over time, containing noise inaccuracies, to generate a probability distribution for unknown variables over a designated timeframe. In the researchers’ work, that means estimating the possible sales of a single day.

To train the model, the technique first breaks down quarterly sales into a set number of measured days, say 90 — allowing sales to vary day-to-day. Then, it matches the observed, noisy credit card data to unknown daily sales. Using the quarterly numbers and some extrapolation, it estimates the fraction of total sales the credit card data likely represents. Then, it calculates each day’s fraction of observed sales, noise level, and an error estimate for how well it made its predictions.
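The following is a minimal sketch of that idea, an illustration in the spirit of the description above rather than the authors’ code: daily sales evolve by multiplication by a constant plus noise, the credit-card signal is assumed to reveal only a small, noisy fraction of each day’s sales, and a scalar Kalman filter recovers daily estimates that can then be summed into quarterly totals. Every parameter value here is invented for illustration.

```python
# Scalar Kalman-filter sketch of the daily-sales model (illustrative only).
# Invented parameters: a = daily growth constant, f = observed sales fraction,
# q / r_obs = process / observation noise variances.
import numpy as np

rng = np.random.default_rng(1)
T, a, f = 90, 1.002, 0.01
q, r_obs = 50.0 ** 2, 20.0 ** 2

# Simulate "true" daily sales and the noisy credit-card signal.
x_true = np.empty(T)
x_true[0] = 100_000.0
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(0.0, np.sqrt(q))
y = f * x_true + rng.normal(0.0, np.sqrt(r_obs), size=T)

# Kalman filter: recover daily sales from the partial, noisy observations.
x_est, p_est, daily = 100_000.0, 1e8, []
for t in range(T):
    x_pred, p_pred = a * x_est, a * a * p_est + q           # predict
    k = p_pred * f / (f * f * p_pred + r_obs)                # Kalman gain
    x_est = x_pred + k * (y[t] - f * x_pred)                 # update
    p_est = (1.0 - k * f) * p_pred
    daily.append(x_est)

print(f"true quarterly sales:      {x_true.sum():,.0f}")
print(f"estimated quarterly sales: {sum(daily):,.0f}")
```

In the actual study, the observed fraction and the noise levels are themselves unknown and are inferred with the help of the quarterly aggregates; here they are simply assumed.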

The inference algorithm plugs all those values into the formula to predict daily sales totals. Then, it can sum those totals to get weekly, monthly, or quarterly numbers. Across all 34 companies, the model beat a consensus benchmark — which combines estimates of Wall Street analysts — on 57.2 percent of 306 quarterly predictions.

Next, the researchers are designing the model to analyze a combination of credit card transactions and other alternative data, such as location information. “This isn’t all we can do. This is just a natural starting point,” Fleder says.

Could every country have a Green New Deal? Report charts paths for 143 countries



Ten years after the publication of their first plan for powering the world with wind, water, and solar, researchers offer an updated vision of the steps that 143 countries around the world can take to attain 100% clean, renewable energy by the year 2050. The new roadmaps, publishing December 20 in the journal One Earth, follow up on previous work that formed the basis for the energy portion of the U.S. Green New Deal and other state, city, and business commitments to 100% clean, renewable energy around the globe — and use the latest energy data available in each country to offer more precise guidance on how to reach those commitments.

In this update, Mark Z. Jacobson of Stanford University and his team find low-cost, stable grid solutions in 24 world regions encompassing the 143 countries. They project that transitioning to clean, renewable energy could reduce worldwide energy needs by 57%, create 28.6 million more jobs than are lost, and reduce energy, health, and climate costs by 91% compared with a business-as-usual analysis. The new paper makes use of updated data about how each country’s energy use is changing, acknowledges lower costs and greater availability of renewable energy and storage technology, includes new countries in its analysis, and accounts for recently built clean, renewable infrastructure in some countries.


“There are a lot of countries that have committed to doing something to counteract the growing impacts of global warming, but they still don’t know exactly what to do,” says Jacobson, a professor of civil and environmental engineering at Stanford and the co-founder of the Solutions Project, a U.S. non-profit educating the public and policymakers about a transition to 100% clean, renewable energy. “How would it work? How would it keep the lights on? To be honest, many of the policymakers and advocates supporting and promoting the Green New Deal don’t have a good idea of the details of what the actual system looks like or what the impact of a transition is. It’s more an abstract concept. So, we’re trying to quantify it and to pin down what one possible system might look like. This work can help fill that void and give countries guidance.”

The roadmaps call for the electrification of all energy sectors, for increased energy efficiency leading to reduced energy use, and for the development of wind, water, and solar infrastructure that can supply 80% of all power by 2030 and 100% of all power by 2050. “All energy sectors” includes electricity; transportation; building heating and cooling; industry; agriculture, forestry, and fishing; and the military. The researchers’ modeling suggests that the efficiency of electric and hydrogen fuel cell vehicles over fossil fuel vehicles, of electrified industry over fossil industry, and of electric heat pumps over fossil heating and cooling, along with the elimination of energy needed for mining, transporting, and refining fossil fuels, could substantially decrease overall energy use.

The transition to wind, water, and solar would require an initial investment of $73 trillion worldwide, but this would pay for itself over time by energy sales. In addition, clean, renewable energy is cheaper to generate over time than are fossil fuels, so the investment reduces annual energy costs significantly. In addition, it reduces air pollution and its health impacts, and only requires 0.17% of the 143 countries’ total land area for new infrastructure and 0.48% of their total land area for spacing purposes, such as between wind turbines.

“We find that by electrifying everything with clean, renewable energy, we reduce power demand by about 57%,” Jacobson says. “So even if the cost per unit of energy is similar, the cost that people pay in the aggregate for energy is 61% less. And that’s before we account for the social cost, which includes the costs we will save by mitigating health and climate damage. That’s why the Green New Deal is such a good deal. You’re reducing energy costs by 60% and social costs by 91%.”

In the U.S., this roadmap — which corresponds to the energy portion of the Green New Deal, which will eliminate the use of all fossil fuels for energy in the U.S. — requires an upfront investment of $7.8 trillion. It calls for the construction of 288,000 new large (5 megawatt) wind turbines and 16,000 large (100 megawatt) solar farms on just 1.08% of U.S. land, with over 85% of that land used for spacing between wind turbines. The spacing land can double, for instance, as farmland. The plan creates 3.1 million more U.S. jobs than the business-as-usual case, and saves 63,000 lives from air pollution per year. It reduces energy, health, and climate costs 1.3, 0.7, and 3.1 trillion dollars per year, respectively, compared with the current fossil fuel energy infrastructure.
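As a quick back-of-the-envelope check (our arithmetic, not a figure from the paper), those counts translate into nameplate capacity as follows.

```python
# Back-of-the-envelope capacity arithmetic for the quoted U.S. build-out figures.
wind_turbines, mw_per_turbine = 288_000, 5
solar_farms, mw_per_farm = 16_000, 100

wind_gw = wind_turbines * mw_per_turbine / 1_000     # -> 1,440 GW of new wind
solar_gw = solar_farms * mw_per_farm / 1_000         # -> 1,600 GW of utility solar
print(f"new wind capacity:  {wind_gw:,.0f} GW")
print(f"new solar capacity: {solar_gw:,.0f} GW")
```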

And the transition is already underway. “We have 11 states, in addition to the District of Columbia, Puerto Rico, and a number of major U.S. cities that have committed to 100% or effectively 100% renewable electric,” Jacobson says. “That means that every time they need new electricity because a coal plant or gas plant retires, they will only select among renewable sources to replace them.”

He believes that individuals, businesses, and lawmakers all have an important role to play in achieving this transition. “If I just wrote this paper and published it and it didn’t have a support network of people who wanted to use this information,” he says, “it would just get lost in the dusty literature. If you want a law passed, you really need the public to be supportive.”

Like any model, this one comes with uncertainties. There are inconsistencies between datasets on energy supply and demand, and the findings depend on the ability to model future energy consumption. The model also assumes the perfect transmission of energy from where it’s plentiful to where it’s needed, with no bottlenecking and no loss of energy along power lines. While this is never the case, many of the assessments were done on countries with small enough grids that the difference is negligible, and Jacobson argues that larger countries like the U.S. can be broken down into smaller grids to make perfect transmission less of a concern. The researchers addressed additional uncertainties by modeling scenarios with high, mean, and low costs of energy, air pollution damage, and climate damage.

The work deliberately focuses only on wind, water, and solar power and excludes nuclear power, “clean coal,” and biofuels. Nuclear power is excluded because it requires 10-19 years between planning and operation and has high costs and acknowledged meltdown, weapons proliferation, mining, and waste risks. “Clean coal” and biofuels are not included because they both cause heavy air pollution and still emit over 50 times more carbon per unit of energy than wind, water, or solar power.

One concern often discussed with wind and solar power is that they may not be able to reliably match energy supplies to the demands of the grid, as they are dependent on weather conditions and time of year. This issue is addressed squarely in the present study in 24 world regions. The study finds that demand can be met by intermittent supply and storage throughout the world. Jacobson and his team found that electrifying all energy sectors actually creates more flexible demand for energy. Flexible demand is demand that does not need to be met immediately. For example, an electric car battery can be charged any time of day or night or an electric heat pump water heater can heat water any time of day or night. Because electrification of all energy sectors creates more flexible demand, matching demand with supply and storage becomes easier in a clean, renewable energy world.

Jacobson also notes that the roadmaps this study offers are not the only possible ones and points to work done by 11 other groups that also found feasible paths to 100% clean, renewable energy. “We’re just trying to lay out one scenario for 143 countries to give people in these and other countries the confidence that yes, this is possible. But there are many solutions and many scenarios that could work. You’re probably not going to predict exactly what’s going to happen, but it’s not like you need to find the needle in the haystack. There are lots of needles in this haystack.”

Bilingual children are strong, creative storytellers, study shows



Bilingual children use as many words as monolingual children when telling a story, and demonstrate high levels of cognitive flexibility, according to new research by University of Alberta scientists.

“We found that the number of words that bilingual children use in their stories is highly correlated with their cognitive flexibility — the ability to switch between thinking about different concepts,” said Elena Nicoladis, lead author and professor in the Department of Psychology in the Faculty of Science. “This suggests that bilinguals are adept at using the medium of storytelling.”

Vocabulary is a strong predictor of school achievement, and so is storytelling. “These results suggest that parents of bilingual children do not need to be concerned about long-term school achievement,” said Nicoladis. “In a storytelling context, bilingual kids are able to use this flexibility to convey stories in creative ways.”


The research examined a group of French-English bilingual children who have been taught two languages since birth, rather than learning a second language later in life. Results show that bilingual children used just as many words to tell a story in English as monolingual children. Participants also used just as many words in French as they did in English when telling a story.

Previous research has shown that bilingual children score lower than monolingual children on traditional vocabulary tests, meaning these results are changing our understanding of multiple languages and cognition in children.

“The past research is not surprising,” added Nicoladis. “Learning a word is related to how much time you spend in each language. For bilingual children, time is split between languages. So, unsurprisingly, they tend to have lower vocabularies in each of their languages. However, this research shows that as a function of storytelling, bilingual children are equally strong as monolingual children.”

This research used a new, highly sensitive measure for examining cognitive flexibility, examining a participant’s ability to switch between games with different rules, while maintaining accuracy and reaction time. This study builds on previous research examining vocabulary in bilingual children who have learned English as a second language.

385-million-year-old forest discovered



While sifting through fossil soils in the Catskill region near Cairo, New York, researchers uncovered the extensive root system of 386-million-year-old primitive trees. The fossils, located about 25 miles from the site previously believed to have the world’s oldest forests, are evidence that the transition toward forests as we know them today began earlier in the Devonian Period than typically believed.

“The Devonian Period represents a time in which the first forest appeared on planet Earth,” says first author William Stein, an emeritus professor of biological science at Binghamton University, New York. “The effects were of first order magnitude, in terms of changes in ecosystems, what happens on the Earth’s surface and oceans, in global atmosphere, CO2 concentration in the atmosphere, and global climate. So many dramatic changes occurred at that time as a result of those original forests that basically, the world has never been the same since.”


Stein, along with collaborators including Christopher Berry and Jennifer Morris of Cardiff University and Jonathan Leake of the University of Sheffield, has been working in the Catskill region in New York, where in 2012 the team uncovered “footprint evidence” of a different fossil forest at Gilboa, which for many years has been termed the Earth’s oldest forest. The discovery at Cairo, about a 40-minute drive from the original site, now reveals an even older forest with dramatically different composition.

The Cairo site presents three unique root systems, leading Stein and his team to hypothesize that much like today, the forests of the Devonian Period were composed of different trees occupying different places depending on local conditions.

First, Stein and his team identified a rooting system that they believe belonged to a palm tree-like plant called Eospermatopteris. This tree, which was first identified at the Gilboa site, had relatively rudimentary roots. Like a weed, Eospermatopteris likely occupied many environments, explaining its presence at both sites. But its roots had relatively limited range and probably lived only a year or two before dying and being replaced by other roots that would occupy the same space. The researchers also found evidence of a tree called Archaeopteris, which shares a number of characteristics with modern seed plants.

“Archaeopteris seems to reveal the beginning of the future of what forests will ultimately become,” says Stein. “Based on what we know from the body fossil evidence of Archaeopteris prior to this, and now from the rooting evidence that we’ve added at Cairo, these plants are very modern compared to other Devonian plants. Although still dramatically different from modern trees, Archaeopteris nevertheless seems to point the way toward the future of forests.”

Stein and his team were also surprised to find a third root system in the fossilized soil at Cairo belonging to a tree thought to only exist during the Carboniferous Period and beyond: “scale trees” belonging to the class Lycopsida.

“What we have at Cairo is a rooting structure that appears identical to great trees of the Carboniferous coal swamps with fascinating elongate roots. But no one has yet found body fossil evidence of this group this early in the Devonian,” Stein says. “Our findings are perhaps suggestive that these plants were already in the forest, but perhaps in a different environment, earlier than generally believed. Yet we only have a footprint, and we await additional fossil evidence for confirmation.”

Moving forward, Stein and his team hope to continue investigating the Catskill region and compare their findings with fossil forests around the world.

“It seems to me, worldwide, many of these kinds of environments are preserved in fossil soils. And I’d like to know what happened historically, not just in the Catskills, but everywhere,” says Stein. “Understanding evolutionary and ecological history — that’s what I find most satisfying.”