Imagine a car that can drive itself to your destination, navigating traffic and other obstacles along the way. Scientists say this could soon be a reality.
Researchers at Yale University and New York University have developed a supercomputer based on the human visual system that they say could allow cars to drive themselves.
Dubbed NeuFlow, the system takes its inspiration from the mammalian visual system, mimicking its neural network to quickly interpret the world around it, said lead researcher Eugenio Culurciello of Yale’s School of Engineering and Applied Science.
Culurciello, who presented the research at the High Performance Embedded Computing (HPEC) workshop in Boston, said the system uses complex vision algorithms developed by Yann LeCun at New York University to run large neural networks for synthetic vision applications.
According to the scientists, NeuFlow processes tens of megapixel images in real time to recognise the various objects encountered on the road, such as other cars, people, stoplights and sidewalks, not to mention the road itself.
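The article does not detail the algorithms, but LeCun's work centres on convolutional neural networks, whose core operation is a small filter slid across an image to produce feature maps. The Python sketch below is illustrative only; the image, filter, and sizes are assumptions for demonstration, not part of NeuFlow itself:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2-D convolution: slide a small filter over the image.
    Hardware like NeuFlow accelerates this kind of operation, repeated
    across many filters and layers of a neural network."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # Simple nonlinearity applied after each convolution stage.
    return np.maximum(x, 0)

# Illustrative inputs: a random "image" and a 3x3 horizontal-edge filter.
image = np.random.rand(64, 64)
edge_filter = np.array([[-1.0, -1.0, -1.0],
                        [ 0.0,  0.0,  0.0],
                        [ 1.0,  1.0,  1.0]])

feature_map = relu(conv2d(image, edge_filter))
print(feature_map.shape)  # (62, 62): one feature map from one filter
```

A real network stacks many such filters and layers, which is why a megapixel video stream demands the billions of operations per second quoted below.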
The system is also extremely efficient: it runs more than 100 billion operations per second while drawing only a few watts, less than a cell phone uses, to accomplish what takes bench-top computers with multiple graphics processors more than 300 watts to achieve.
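To put the efficiency claim in perspective, a rough calculation is possible from the article's figures. The article says only "a few watts", so the 5-watt value below is an assumption used for illustration:

```python
# Back-of-the-envelope efficiency from the article's figures:
# >100 billion operations per second on "a few watts" (assumed 5 W here)
# versus a multi-GPU bench-top system drawing more than 300 W.
neuflow_ops_per_sec = 100e9
neuflow_watts = 5.0   # assumption standing in for "a few watts"
gpu_watts = 300.0

print(f"NeuFlow: ~{neuflow_ops_per_sec / neuflow_watts / 1e9:.0f} GOPS per watt")
print(f"Power advantage: ~{gpu_watts / neuflow_watts:.0f}x less power for the same task")
```

On those assumptions, NeuFlow delivers roughly 20 billion operations per second per watt and uses about a sixtieth of the bench-top system's power.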
“One of our first prototypes of this system is already capable of outperforming graphic processors on vision tasks,” Culurciello said in a statement.
“The complete system is going to be no bigger than a wallet, so it could easily be embedded in cars and other places.”

Beyond autonomous car navigation, the scientists said, the system could be used to improve robot navigation in dangerous or difficult-to-reach locations.