The Endoc

Extinct vegetarian cave bear diet mystery unravelled


During the Late Pleistocene (between 125,000 and 12,000 years ago), two bear species roamed Europe: the omnivorous brown bear and the extinct, mostly vegetarian cave bear.

Until now, very little was known about the dietary evolution of the cave bear and how it became a vegetarian, because fossils of its direct ancestor, Deninger’s bear, are extremely scarce.

However, a new study sheds light on this. A research team from Germany and Spain found that Deninger’s bear likely had a diet similar to that of its descendant, the classic cave bear: the new analysis shows a distinct morphology of the cranium, mandible and teeth that has been linked to a dietary specialization in larger amounts of plant matter.

To understand the evolution of the cave bear lineage, the researchers micro-CT scanned the rare fossils and digitally removed the sediments so as not to risk damaging the fossils. Using sophisticated statistical methods, called geometric morphometrics, the researchers compared the three-dimensional shape of the mandibles and skull of Deninger’s bear with that of classic cave bears and modern bears.
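
Geometric morphometrics boils down to aligning homologous 3D landmarks across specimens (removing differences in position, scale and rotation) and then comparing the residual shape variation statistically. Purely as an illustration, and not the study's actual pipeline, a minimal Python sketch of that workflow with made-up landmark data might look like this:

# Minimal sketch (not the authors' code): comparing 3D landmark shapes with
# geometric morphometrics. The landmark coordinates are random placeholders;
# in practice they would be digitized from the micro-CT surface models.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
n_landmarks = 20                                   # hypothetical number of mandible landmarks
deninger = rng.normal(size=(n_landmarks, 3))       # placeholder specimen A
cave_bear = deninger + rng.normal(scale=0.05, size=(n_landmarks, 3))  # placeholder specimen B

# Procrustes superimposition removes position, scale and rotation, leaving
# only shape differences; `disparity` is the residual sum of squares.
std_a, std_b, disparity = procrustes(deninger, cave_bear)
print(f"shape disparity after alignment: {disparity:.4f}")

# A principal component analysis of many aligned specimens (rows = specimens,
# columns = flattened landmark coordinates) then reveals the main axes of
# shape variation along which species separate.
aligned = np.vstack([std_a.ravel(), std_b.ravel()])   # toy "sample" of 2 specimens
centered = aligned - aligned.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt.T                              # PC scores used to compare species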

“The analyses showed that Deninger’s bear had very similarly shaped mandibles and skull to the classic cave bear,” the researchers report. This implies that they were adapted to the same food types and were primarily vegetarian.

“There is an ongoing discussion on the extent to which the classic cave bear was a vegetarian. And this is especially why the new information on the diet of its direct ancestor is so important, because it teaches us that a differentiation between the diet of cave bears and brown bears was already established by 500,000 years ago and likely earlier,” said a co-author of the study, a doctoral candidate at the Universities of the Basque Country and Bordeaux.

Interestingly, the researchers also found shape differences between Deninger’s bears from the Iberian Peninsula and those from the rest of Europe, which are unlikely to be related to diet.

They have come up with three possibilities to explain these differences: 1) the Iberian bears are chronologically younger than the rest, 2) the Pyrenees, acting as a natural barrier, resulted in some genetic differentiation between the Iberian bears and those from the rest of Europe, or 3) there were multiple lineages, with either just one leading to the classic cave bear, or each lineage leading to a different group of cave bears.

 

Galaxy outskirts likely hunting grounds for dying massive stars and black holes


Findings from a Rochester Institute of Technology study provide further evidence that the outskirts of spiral galaxies host massive black holes. These overlooked regions are new places to observe gravitational waves created when the massive bodies collide.

The study winds back time on massive black holes by analyzing their visible precursors — supernovae with collapsing cores. The slow decay of these massive stars creates bright signatures in the electromagnetic spectrum before stellar evolution ends in black holes.

Using data from the Lick Observatory Supernova Search, a survey of nearby galaxies, the team compared the supernova rate in the outskirts of spiral galaxies with that of known hosts — dwarf/satellite galaxies — and found comparable numbers for typical spiral outskirts and typical dwarf galaxies, roughly two core-collapse supernovae per millennium.
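
The comparison is, at its core, a rate estimate with Poisson counting uncertainties. The counts and exposure below are placeholders chosen only to illustrate the bookkeeping; they are not values from the Lick survey:

# Illustrative sketch: comparing core-collapse supernova rates between two
# galaxy samples with Poisson counting errors. Counts and monitoring times
# are invented for demonstration; they are not the Lick Observatory
# Supernova Search measurements.
import math

def rate_per_millennium(n_events, galaxy_years):
    """Events per galaxy per 1000 years, with a ~sqrt(N) Poisson uncertainty."""
    rate = n_events / galaxy_years * 1000.0
    err = math.sqrt(n_events) / galaxy_years * 1000.0
    return rate, err

outskirts = rate_per_millennium(n_events=12, galaxy_years=6000)   # hypothetical
dwarfs    = rate_per_millennium(n_events=10, galaxy_years=5200)   # hypothetical

for name, (r, e) in [("spiral outskirts", outskirts), ("dwarf galaxies", dwarfs)]:
    print(f"{name}: {r:.1f} +/- {e:.1f} core-collapse SNe per millennium")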

Low levels of elements heavier than hydrogen and helium found in dwarf/satellite galaxies create favorable conditions for massive black holes to form and create binary pairs. A similar galactic environment in the outer disks of spiral galaxies also creates likely hunting grounds for massive black holes, said Sukanya Chakrabarti, lead author and assistant professor in the RIT School of Physics and Astronomy.

“If these core-collapse supernovae are the predecessors to the binary black holes detected by LIGO (Laser Interferometer Gravitational-wave Observatory), then what we’ve found is a reliable method of identifying the host galaxies of LIGO sources,” said Chakrabarti. “Because these black holes have an electromagnetic counterpart at an earlier stage in their life, we can pinpoint their location in the sky and watch for massive black holes.”

The study’s findings complement Chakrabarti’s 2017 study, which showed that the outer parts of spiral galaxies could contribute to LIGO detection rates. The regions form stars at a comparable rate to dwarf galaxies and are low in heavy element content, creating a conducive home for massive black holes. The current study isolates potential candidates within these favorable galactic environments.

“We see now that these are both important contributors,” the researchers said. “The next step is to do deeper surveys to see if we can improve the rate.”

“This work may help us determine in which galaxies to be on the lookout for electromagnetic counterparts of massive black holes.”

Thin gap on stellar family portrait


A thin gap has been discovered on the Hertzsprung-Russell Diagram (HRD), the most fundamental of all maps in stellar astronomy, a finding that provides new information about the interior structures of low mass stars in the Milky Way Galaxy.

Just as a graph can be made of people with different heights and weights, astronomers compare stars using their luminosities and temperatures. The HRD is a “family portrait” of the stars in the Galaxy, where stars such as the Sun, Altair, Alpha Centauri, Betelgeuse, the north star Polaris and Sirius can be compared. The newly discovered gap cuts diagonally across the HRD and indicates where a crucial internal change occurs in the structures of stars. The gap outlines where stars transition from being larger and mostly convective with a thin radiative layer to being smaller and fully convective.

Radiation and convection are two ways to transfer energy from inside a star to its surface. Radiation transfers energy through space, and convection is the transfer of energy from one place to another by the movement of fluid.

The researchers estimate that stars above the gap contain more than about one-third the mass of the Sun, and those below have less mass. Because different types of stars have different masses, this feature reveals where different types of interior structures are on the HRD. The gap occurs in the middle of the region of “red dwarf” stars, which are much smaller and cooler than the Sun, but compose three of every four stars in the solar neighborhood.

“We were pretty excited to see this result, and it provides us new insights to the structures and evolution of stars,” said the study’s first author, a staff astronomer in the Department of Physics and Astronomy at Georgia State.

In 2013, the European Space Agency (ESA) launched the Gaia spacecraft to make a census of the stars in the Milky Way Galaxy and to create a three-dimensional map. In April 2018, the ESA released results of this mission, revealing an unprecedented map of more than one billion stars in the Galaxy, a 10,000-fold increase in the number of stars with accurate distances. The research team led by Georgia State plotted nearly 250,000 of the closest stars in the Gaia data on the HRD to reveal the gap. Georgia State’s researchers have studied the distances to nearby stars for years, which enabled them to interpret the results and notice this thin gap.
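
For readers curious what “plotting stars on the HRD” involves, here is a minimal sketch using Gaia-style quantities (parallax, apparent magnitude, colour). The data are synthetic placeholders and this is not the Georgia State team's code; the point is only the standard conversion from parallax to absolute magnitude and the 2-D binning in which a gap would appear as an under-dense strip:

# Minimal sketch (not the team's pipeline): building a Hertzsprung-Russell
# diagram from Gaia-like columns. The data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 250_000
parallax_mas = rng.uniform(10, 100, n)   # nearby stars only (parallax in milliarcseconds)
g_mag = rng.uniform(5, 18, n)            # apparent G magnitude (placeholder)
bp_rp = rng.uniform(0.5, 4.0, n)         # BP-RP colour (placeholder)

# Absolute magnitude from the parallax: M_G = G + 5*log10(parallax[mas]) - 10
abs_g = g_mag + 5 * np.log10(parallax_mas) - 10

# A 2-D histogram of colour vs. absolute magnitude is the HRD; an under-dense
# diagonal strip in the red-dwarf region would show up as a dip in the counts.
hist, colour_edges, mag_edges = np.histogram2d(bp_rp, abs_g, bins=(200, 200))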

Results from a theoretical computer model that simulates activity inside the stars suggest the gap is caused by a slight shrinking in size when a star becomes convective all the way through.

X-ray technology reveals never-before-seen matter around black hole


In an international collaboration between Japan and Sweden, scientists clarified how gravity affects the shape of matter near the black hole in the binary system Cygnus X-1. The findings may help scientists further understand the physics of strong gravity and the evolution of black holes and galaxies.

Near the center of the constellation of Cygnus is a star orbiting the first black hole ever discovered. Together, they form a binary system known as Cygnus X-1. This black hole is also one of the brightest sources of X-rays in the sky. However, the geometry of the matter that gives rise to this light was uncertain. The research team revealed this information using a new technique called X-ray polarimetry.

Taking a picture of a black hole is not easy. For one thing, it is not yet possible to observe a black hole directly, because light cannot escape it. Instead of observing the black hole itself, scientists observe light coming from matter close to it. In the case of Cygnus X-1, this matter comes from the star that closely orbits the black hole.

Most light that we see, such as sunlight, vibrates in many directions. Polarization filters light so that it vibrates in only one direction. This is how snow goggles with polarized lenses let skiers see the slope more easily: the filter cuts out light reflecting off the snow.

“It’s the same situation with hard X-rays around a black hole,” Hiroshima University Assistant Professor and study coauthor Hiromitsu Takahashi said. “However, hard X-rays and gamma rays coming from near the black hole penetrate this filter. There are no such ‘goggles’ for these rays, so we need another special kind of treatment to direct and measure this scattering of light.”

The team needed to figure out where the light was coming from and where it scattered. In order to make both of these measurements, they launched an X-ray polarimeter on a balloon called PoGO+. From there, the team could piece together what fraction of hard X-rays reflected off the accretion disk and identify the matter shape.
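
In practice, polarimetry boils down to estimating a polarization fraction and angle, commonly written in terms of Stokes parameters. The sketch below shows only that textbook calculation with invented numbers; it is not the PoGO+ analysis code:

# Illustrative sketch: polarization fraction and angle from Stokes parameters.
# The I, Q, U values are placeholders, not PoGO+ measurements of Cygnus X-1.
import math

I, Q, U = 1.00, 0.05, 0.03   # hypothetical Stokes parameters (arbitrary units)

pol_fraction = math.hypot(Q, U) / I              # sqrt(Q^2 + U^2) / I
pol_angle_deg = 0.5 * math.degrees(math.atan2(U, Q))

print(f"polarization fraction: {pol_fraction:.2%}")
print(f"polarization angle:    {pol_angle_deg:.1f} degrees")
# A weaker reflected component off the accretion disk shows up as weaker
# polarization, the kind of signature used to discriminate between the two
# corona models described below.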

Two competing models describe how matter near a black hole can look in a binary system such as Cygnus X-1: the lamp-post and extended model. In the lamp-post model, the corona is compact and bound closely to the black hole. Photons bend toward the accretion disk, resulting in more reflected light. In the extended model, the corona is larger and spread around the vicinity of the black hole. In this case, the reflected light by the disk is weaker.

Since light did not bend that much under the strong gravity of the black hole, the team concluded that the black hole fit the extended corona model.

With this information, the researchers can uncover more characteristics of black holes. One example is spin. The effects of spin can modify the space-time surrounding a black hole, and spin could also provide clues to a black hole’s evolution: it could have been slowing down since the beginning of the universe, or it could be accumulating matter and spinning faster.

“The black hole in Cygnus is one of many,” Takahashi said. “We would like to study more black holes using X-ray polarimetry, like those closer to the centers of galaxies. Maybe then we can better understand black hole evolution, as well as galaxy evolution.”

Optical neural network demo


Researchers at the National Institute of Standards and Technology (NIST) have made a silicon chip that distributes optical signals precisely across a miniature brain-like grid, showcasing a potential new design for neural networks.

The human brain has billions of neurons (nerve cells), each with thousands of connections to other neurons. Many computing research projects aim to emulate the brain by creating circuits of artificial neural networks. But conventional electronics, including the electrical wiring of semiconductor circuits, often impedes the extremely complex routing required for useful neural networks.

The NIST team proposes to use light instead of electricity as a signaling medium. Neural networks already have demonstrated remarkable power in solving complex problems, including rapid pattern recognition and data analysis. The use of light would eliminate interference due to electrical charge and the signals would travel faster and farther.

“Light’s advantages could improve the performance of neural nets for scientific data analysis such as searches for Earth-like planets and quantum information science, and accelerate the development of highly intuitive control systems for autonomous vehicles.”

A conventional computer processes information through algorithms, or human-coded rules. By contrast, a neural network relies on a network of connections among processing elements, or neurons, which can be trained to recognize certain patterns of stimuli. A neural or neuromorphic computer would consist of a large, complex system of such neural networks.

Described in a new paper, the NIST chip overcomes a major challenge to the use of light signals by vertically stacking two layers of photonic waveguides — structures that confine light into narrow lines for routing optical signals, much as wires route electrical signals. This three-dimensional (3D) design enables complex routing schemes, which are necessary to mimic neural systems. Furthermore, this design can easily be extended to incorporate additional waveguiding layers when needed for more complex networks.

The stacked waveguides form a three-dimensional grid with 10 inputs or “upstream” neurons each connecting to 10 outputs or “downstream” neurons, for a total of 100 receivers. Fabricated on a silicon wafer, the waveguides are made of silicon nitride and are each 800 nanometers (nm) wide and 400 nm thick. Researchers created software to automatically generate signal routing, with adjustable levels of connectivity between the neurons.

Laser light was directed into the chip through an optical fiber. The goal was to route each input to every output group, following a selected distribution pattern for light intensity or power. Power levels represent the pattern and degree of connectivity in the circuit. The authors demonstrated two schemes for controlling output intensity: uniform (each output receives the same power) and a “bell curve” distribution (in which middle neurons receive the most power, while peripheral neurons receive less).
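
To make the two schemes concrete, here is a toy calculation of target output powers for one input fanned out to 10 downstream neurons. It sketches only the arithmetic behind “uniform” versus “bell curve” weighting; it is not NIST’s routing software, and the total power value is arbitrary:

# Toy sketch: target power distributions across 10 output "neurons" for one
# input, for a uniform scheme and a bell-curve (Gaussian) scheme. Values are
# illustrative only; this is not the NIST routing-generation software.
import numpy as np

n_outputs = 10
total_power = 1.0                      # arbitrary input power (e.g. 1 mW)

# Uniform scheme: every downstream neuron receives the same share.
uniform = np.full(n_outputs, total_power / n_outputs)

# Bell-curve scheme: central neurons receive more power than peripheral ones.
positions = np.arange(n_outputs)
weights = np.exp(-0.5 * ((positions - positions.mean()) / 2.0) ** 2)
bell = total_power * weights / weights.sum()   # normalize so power is conserved

print("uniform:", np.round(uniform, 3))
print("bell   :", np.round(bell, 3))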

To evaluate the results, researchers made images of the output signals. All signals were focused through a microscope lens onto a semiconductor sensor and processed into image frames. This method allows many devices to be analyzed at the same time with high precision. The output was highly uniform, with low error rates, confirming precise power distribution.

“We’ve really done two things here,” Chiles said. “We’ve begun to use the third dimension to enable more optical connectivity, and we’ve developed a new measurement technique to rapidly characterize many devices in a photonic system. Both advances are crucial as we begin to scale up to massive optoelectronic neural systems.”

The big picture: Mouse memory cells are about experience, not place


When it comes to memory, it’s more than just “location, location, location.” New research suggests that the brain doesn’t store all memories in ‘place cells’, the main type of neuron in the hippocampus, a structure crucial for navigation and memory. Instead, the memory of an experience appears to be held by a distinct population of ‘engram’ cells that encode the context as a whole rather than specific locations.

The hippocampus is well-known as the domain of place cells, whose discovery and function as mental maps of space was recognized with the 2014 Nobel Prize in Physiology or Medicine. On the other hand, as a hotspot for memory research, the hippocampus is proposed as the physical location for memories of experiences, stored in engram cells. “Neuroscience is still grappling with the engram memory concept,” says research group leader Thomas McHugh of the RIKEN Center for Brain Science in Japan. “We know what these cells do when they’re activated, but what do they represent and how do they function?”

The assumption is that memory engrams are just place cells, but McHugh’s group think they have an alternative explanation. In their experiments, mice spent time in one kind of cage to make a memory of that environment. The researchers used optogenetic methods to identify the cells that were active during that time and therefore contributed to the memory. These cells represented only a fraction of hippocampal place cells and had larger place fields — the corresponding real-world area that gets the cell excited when the mouse is exploring. Analysis of the activity across a large number of cells revealed that, while most place cells kept the same spatial map during both the initial and later visit to the cage, the engram cells had uncorrelated activity between the two time points. The only exception was very early during both visits when the cells’ activity was similar, which is what you would expect if they are involved in recall of the context.
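
The core of such an analysis is a spatial-map correlation: for each cell, correlate its firing-rate map from the first visit with its map from the later visit, so that stable place cells score high while remapped or uncorrelated cells score near zero. As a rough illustration with synthetic rate maps (not the RIKEN data or code):

# Rough illustration: correlating a cell's spatial firing-rate maps across two
# visits to the same cage. The rate maps are synthetic; real ones are built by
# binning spikes by the animal's position and smoothing.
import numpy as np

rng = np.random.default_rng(42)

def map_correlation(rate_map_visit1, rate_map_visit2):
    """Pearson correlation between two flattened 2-D firing-rate maps."""
    return np.corrcoef(rate_map_visit1.ravel(), rate_map_visit2.ravel())[0, 1]

# A "stable place cell": the second-visit map is the first plus a little noise.
place_v1 = rng.random((20, 20))
place_v2 = place_v1 + 0.1 * rng.random((20, 20))

# An "engram-like cell": activity during the two visits is unrelated.
engram_v1 = rng.random((20, 20))
engram_v2 = rng.random((20, 20))

print("place cell map correlation :", round(map_correlation(place_v1, place_v2), 2))
print("engram cell map correlation:", round(map_correlation(engram_v1, engram_v2), 2))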

When the mice were placed in a second, different cage, the engram cells remained inactive — they were already ‘occupied’ with the previous memory. In fact, the researchers were able to tell the first and second environments apart just by comparing the activity of these cells. The engram cells are active only for the memory of the context itself, not for specific locations, while place cells are active during exploration, creating and updating a spatial map. Recognizing a context or environment doesn’t require walking through or exploring it, though, so location cells appear to be distinct from memory cells.

The instability of the spatial information signaled by engram cells compared with the majority of place cells indicates that they deal with the ‘big picture’, the macro scale of a context and not a specific location therein. The researchers propose that engram cells may not store memories per se but rather act as an index that ties memory-relevant details together, wherever else in the brain those may be. “Their role is to track elements of a memory, whether those are from sound or vision or other senses, and then trigger their recall by activating other parts of the brain like the cortex,” McHugh hypothesizes. While the hippocampus clearly does underlie spatial memory, this newly revealed function as an index for contextual identity shows that this brain region is about more than just maps. “We long assumed memory is anchored to stable representations of locations,” McHugh says, “but it’s actually the opposite.”

Reversing cause and effect is no trouble for quantum computers


Watch a movie backwards and you’ll likely get confused — but a quantum computer wouldn’t. That’s the conclusion of researcher Mile Gu at the Centre for Quantum Technologies (CQT) at the National University of Singapore and Nanyang Technological University and collaborators.

The international team show that a quantum computer is less in thrall to the arrow of time than a classical computer. In some cases, it’s as if the quantum computer doesn’t need to distinguish between cause and effect at all.

The new work is inspired by an influential discovery made almost ten years ago by complexity scientists James Crutchfield and John Mahoney at the University of California, Davis. They showed that many statistical data sequences will have a built-in arrow of time. An observer who sees the data played from beginning to end, like the frames of a movie, can model what comes next using only a modest amount of memory about what occurred before. An observer who tries to model the system in reverse has a much harder task — potentially needing to track orders of magnitude more information.

This discovery came to be known as ‘causal asymmetry’. It seems intuitive. After all, modelling a system when time is running backwards is like trying to infer a cause from an effect. We are used to finding that more difficult than predicting an effect from a cause. In everyday life, understanding what will happen next is easier if you know what just happened, and what happened before that.
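
A classical toy example of why reversal costs extra information is a Markov chain: the forward dynamics are fully specified by the transition matrix alone, but writing down the time-reversed dynamics also requires the stationary distribution. The sketch below is only an analogy for that extra bookkeeping, not the statistical-complexity calculation used in the paper:

# Toy illustration: reversing a Markov chain. The forward transition matrix P
# alone is not enough; the reversed chain also needs the stationary
# distribution pi, via P_rev[i][j] = pi[j] * P[j][i] / pi[i].
import numpy as np

P = np.array([[0.7, 0.2, 0.1],     # hypothetical 3-state process with a
              [0.1, 0.7, 0.2],     # directional (cyclic) bias
              [0.2, 0.1, 0.7]])

# Stationary distribution: eigenvector of P.T with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# Time-reversed transition matrix; for this biased chain it differs from P.
P_rev = (pi[None, :] * P.T) / pi[:, None]

print("stationary distribution:", np.round(pi, 3))
print("reversed transitions:\n", np.round(P_rev, 3))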

However, researchers are always intrigued to discover asymmetries that are linked to time-ordering. This is because the fundamental laws of physics are ambivalent about whether time moves forwards or in reverse.

“When the physics does not impose any direction on time, where does causal asymmetry — the memory overhead needed to reverse cause and effect — come from?” asks Gu.

The first studies of causal asymmetry used models with classical physics to generate predictions. Crutchfield and Mahoney teamed up with Gu and collaborators Jayne Thompson, Andrew Garner and Vlatko Vedral at CQT to find out whether quantum mechanics changes the situation.

They found that it did. Models that use quantum physics, the team prove, can entirely mitigate the memory overhead. A quantum model forced to emulate the process in reverse-time will always outperform a classical model modelling the process in forward-time.

The work has some profound implications. “The most exciting thing for us is the possible connection with the arrow of time,” says the paper’s first author. “If causal asymmetry is only found in classical models, it suggests our perception of cause and effect, and thus time, can emerge from enforcing a classical explanation on events in a fundamentally quantum world.”

Next the team wants to understand how this connects to other ideas of time. “Every community has their own arrow of time, and everybody wants to explain where they come from,” as one of the researchers puts it. Crutchfield and Mahoney called causal asymmetry an example of time’s ‘barbed arrow’.

Most iconic is the ‘thermodynamic arrow’. It comes from the idea that disorder, or entropy, will always increase — a little here and there, in everything that happens, until the Universe ends as one big, hot mess. While causal asymmetry is not the same as the thermodynamic arrow, they could be interrelated. Classical models that track more information also generate more disorder. “This hints that causal asymmetry can have entropic consequence.”

The results may also have practical value. Doing away with the classical overhead for reversing cause and effect could help quantum simulation. “Like being played a movie in reverse time, sometimes we may be required to make sense of things that are presented in an order that is intrinsically difficult to model. In such cases, quantum methods could prove vastly more efficient than their classical counterparts.”

Alcohol-related cirrhosis deaths skyrocket in young adults


Deaths from cirrhosis rose in all but one state between 1999 and 2016, with increases seen most often among young adults.

Deaths linked to the end stages of liver damage jumped by 65 percent, with alcohol a major cause, adults aged 25-34 the hardest hit, and fatalities highest among whites, American Indians and Hispanics.

Liver specialist Elliot B. Tapper, M.D., says he’s witnessed the disturbing shift in demographics among the patients with liver failure he treats at Michigan Medicine. The new analysis confirms that, in communities across the country, more young people are drinking themselves to death.

The data shows adults age 25-34 experienced the highest average annual increase in cirrhosis deaths — about 10.5 percent each year. The rise was driven entirely by alcohol-related liver disease, the authors say.

“Each alcohol-related death means decades of lost life, broken families and lost economic productivity,” says Tapper, a member of the University of Michigan Division of Gastroenterology and Hepatology and health services researcher at the U-M Institute for Healthcare Policy and Innovation.

“In addition, medical care of those dying from cirrhosis costs billions of dollars.”

The rise in liver deaths is not where liver specialists expected to be after gains in fighting hepatitis C, a major liver threat seen often in Baby Boomers. Antiviral medications have set the course to one day eradicate hepatitis C.

Cirrhosis can be caused by a virus like hepatitis C, fatty liver disease or alcohol abuse. The increase in liver deaths highlights new challenges in preventing cirrhosis deaths beyond hepatitis.

“We thought we would see improvements, but these data make it clear: even after hepatitis C, we will still have our work cut out for us,” says Tapper.

That mortality due to cirrhosis began increasing in 2009 — around the time of the Great Recession when the economic downturn led to loss of people’s savings, homes and jobs — may offer a clue as to its cause.

“We suspect that there is a connection between increased alcohol use and unemployment associated with the global financial crisis. But more research is needed,” Tapper says.

Cirrhosis caused a total of 460,760 deaths during the seven-year study period; about one-third were attributed to hepatocellular carcinoma, a common type of liver cancer that is often caused by cirrhosis, researchers found.

In 2016 alone, 11,073 lives were lost to liver cancer, double the number of deaths in 1999.

Researchers studied the trends in liver deaths due to cirrhosis by examining death certificates compiled by the Centers for Disease Control and Prevention’s Wide-ranging Online Data for Epidemiologic Research project.

“The rapid rise in liver deaths underscores gaps in care and opportunities for prevention,” says Parikh, study co-author and liver specialist at Michigan Medicine.

The study’s goal was to determine trends in liver disease deaths and which groups have been impacted most across the country. The research showed:

  • Fewer Asians and Pacific Islanders died of liver cancer.
  • Cirrhosis hit some states especially hard, namely Kentucky, Alabama, Arkansas and New Mexico, where cirrhosis deaths were highest.
  • A state-by-state analysis showed cirrhosis mortality is improving only in Maryland.

Deaths due to alcohol-related liver disease are entirely preventable, say authors who suggest strategies such as taxes on alcohol, minimum prices for alcohol and reducing marketing and advertising to curb problem drinking. Higher alcohol costs have been linked with decreased alcohol-related deaths.

New tools to systematically build cooperation: Theory of repeated games


When what we want as individuals clashes with what is best for the group, we have a social dilemma. How can we overcome these dilemmas, and encourage people to cooperate, even if they have reason not to? In a paper released today in Nature, Christian Hilbe and Krishnendu Chatterjee of the Institute of Science and Technology Austria (IST Austria), together with Martin Nowak of Harvard and Stepan Simsa of Charles University, have shown that if the social dilemma that individuals face is dependent on whether or not they work together, cooperation can triumph. This finding was the result of a new type of framework that they introduced — one that extends the entire theory of repeated games. Moreover, as their work pinpoints the ideal conditions for fostering cooperation, they have provided tools to systematically build cooperation.

The tragedy of the commons: if we can (ab)use a public good without seeing negative consequences, we will — without consideration of others or the future. We see examples of this in our daily lives, from climate change and forest depletion down to the stack of dirty dishes in the office kitchen. In game theory, scientists have used repeated games — repeated interactions where individuals face the same social dilemma each time — to understand when individuals choose to cooperate, i.e. their strategies. However, these games have always kept the value of the public resource constant, no matter how players acted in the previous round — something that does not reflect the reality of the situation.

In their new framework, Hilbe, Simsa, Chatterjee, and Nowak consider repeated games in which cooperation does not only affect the players’ present payoffs, but also which game they face in the next round. “Repeated games have been studied intensely for over 40 years, and significant new developments are rare — especially such simple ones,” says Martin Nowak. “This addition actually extends the whole theory of repeated games, as a fixed environment is a special case of our new framework.”
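
A minimal way to picture the framework is a repeated prisoner’s dilemma with two states: mutual cooperation keeps the game valuable, while any defection degrades the game faced in the next round. The payoff numbers, transition rule and strategies below are illustrative choices, not the cases analyzed in the paper:

# Illustrative sketch of a state-dependent repeated game (not the paper's
# exact model). State 0 is a "valuable" prisoner's dilemma, state 1 a
# "degraded" one. Mutual cooperation leads back to state 0; any defection
# leads to state 1. Payoffs are hypothetical.
PAYOFFS = {
    0: {("C", "C"): (3, 3), ("C", "D"): (0, 4), ("D", "C"): (4, 0), ("D", "D"): (1, 1)},
    1: {("C", "C"): (2, 2), ("C", "D"): (0, 2.5), ("D", "C"): (2.5, 0), ("D", "D"): (0.5, 0.5)},
}

def next_state(actions):
    return 0 if actions == ("C", "C") else 1

def play(strategy1, strategy2, rounds=1000):
    """Average payoffs when two memory-one strategies play the two-state game."""
    state, last = 0, ("C", "C")          # start in the valuable game
    totals = [0.0, 0.0]
    for _ in range(rounds):
        actions = (strategy1(last[1]), strategy2(last[0]))  # react to opponent's last move
        p1, p2 = PAYOFFS[state][actions]
        totals[0] += p1
        totals[1] += p2
        state, last = next_state(actions), actions
    return totals[0] / rounds, totals[1] / rounds

tit_for_tat = lambda opponent_last: opponent_last    # copy the opponent's last move
always_defect = lambda opponent_last: "D"

print("TFT vs TFT :", play(tit_for_tat, tit_for_tat))     # cooperation keeps the good game
print("TFT vs AllD:", play(tit_for_tat, always_defect))   # defection locks in the degraded game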

When they explored the new model, the scientists found that this dependence on players’ actions could greatly increase the chance that players cooperate — provided the right conditions were in place. “Our framework shows which kinds of feedback are most likely to lead to cooperation,” says first author Christian Hilbe. These include, for instance, how quickly the resource degrades or how easy it is to return to a more valuable state. “Using this knowledge, you can design systems that maximize cooperation, or create an environment that encourages people to work together,” he adds. For example, these ideas could even be implemented by a business or corporation, to create a work community that encourages working together.

 

Plasma-spewing quasar shines light on universe’s youth, early galaxy formation


Carnegie’s Eduardo Bañados led a team that found a quasar with the brightest radio emission ever observed in the early universe, thanks to the jet of extremely fast-moving material it spews out.

Bañados’ discovery was followed up by Emmanuel Momjian of the National Radio Astronomy Observatory, whose observations allowed the team to see in unprecedented detail the jet shooting out of a quasar that formed within the universe’s first billion years of existence.

The findings will allow astronomers to better probe the universe’s youth during an important period of transition to its current state.

Quasars consist of enormous black holes accreting matter at the centers of massive galaxies. This newly discovered quasar, P352-15, is one of a rare breed that doesn’t just swallow matter into the black hole but also emits a jet of plasma traveling at speeds approaching that of light. This jet makes it extremely bright in the frequencies detected by radio telescopes. Although quasars were identified more than 50 years ago by their strong radio emissions, we now know that only about 10 percent of them are strong radio emitters.

What’s more, this newly discovered quasar’s light has been traveling nearly 13 billion of the universe’s 13.7 billion years to reach us here on Earth. P352-15 is the first quasar with clear evidence of radio jets seen within the first billion years of the universe’s history.
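
The “nearly 13 billion years” figure comes from converting the quasar’s redshift into a lookback time under a standard cosmological model. As a rough illustration (the redshift below is an assumed example value appropriate for a source seen in the universe’s first billion years, not a number quoted in this article):

# Rough illustration of the lookback-time calculation behind "nearly 13 of the
# universe's 13.7 billion years". The redshift here is an assumed example
# value, not taken from this article.
from astropy.cosmology import Planck15

z = 6.0                                   # illustrative redshift
lookback = Planck15.lookback_time(z)      # time the light has been traveling
age_now = Planck15.age(0)                 # current age of the universe
age_then = Planck15.age(z)                # age of the universe when the light left

print(f"light travel time: {lookback:.2f}")
print(f"universe age now : {age_now:.2f}")
print(f"universe age then: {age_then:.2f}")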

“There is a dearth of known strong radio emitters from the universe’s youth and this is the brightest radio quasar at that epoch by an order of magnitude.”

“This is the most-detailed image yet of such a bright galaxy at this great distance.”

The Big Bang started the universe as a hot soup of extremely energetic particles that were rapidly expanding. As it expanded, it cooled and coalesced into neutral hydrogen gas, which left the universe dark, without any luminous sources, until gravity condensed matter into the first stars and galaxies. About 800 million years after the Big Bang, the energy released by these first galaxies caused the neutral hydrogen that was scattered throughout the universe to get excited and lose an electron, or ionize, a state that the gas has remained in since that time.

It’s highly unusual to find radio jet-emitting quasars such as this one from the period just after the universe’s lights came back on.

“The jet from this quasar could serve as an important calibration tool to help future projects penetrate the dark ages and perhaps reveal how the earliest galaxies came into being.”