
Monday, May 18, 2015

Silicon Chips That See Are Going to Make Your Car and Smartphone Brilliant

As reported by MIT Technology Review: Many of the devices around us may soon acquire powerful new abilities to understand images and video, thanks to hardware designed for the machine-learning technique called deep learning.

Companies like Google have made breakthroughs in image and face recognition through deep learning, using giant data sets and powerful computers (see “10 Breakthrough Technologies 2013: Deep Learning”). Now two leading chip companies and the Chinese search giant Baidu say hardware is coming that will bring the technique to phones, cars, and more.

Chip manufacturers don’t typically disclose their new features in advance. But at a conference on computer vision Tuesday, Synopsys, a company that licenses software and intellectual property to the biggest names in chip making, showed off a new image-processor core tailored for deep learning. It is expected to be added to chips that power smartphones, cameras, and cars. The core would occupy about one square millimeter of space on a chip made with one of the most commonly used manufacturing technologies.

Pierre Paulin, a director of R&D at Synopsys, told MIT Technology Review that the new processor design will be made available to his company’s customers this summer. Many have expressed strong interest in getting hold of hardware to help deploy deep learning, he said.

Synopsys showed a demo in which the new design recognized speed-limit signs in footage from a car. Paulin also presented results from using the chip to run a deep-learning network trained to recognize faces. It didn’t hit the accuracy levels of the best research results, which have been achieved on powerful computers, but it came pretty close, he said. “For applications like video surveillance it performs very well,” he said. The specialized core uses significantly less power than a conventional chip would need to do the same task.

The new core could add a degree of visual intelligence to many kinds of devices, from phones to cheap security cameras. It wouldn’t allow devices to recognize tens of thousands of objects on their own, but Paulin said they might be able to recognize dozens.


That might lead to novel kinds of camera or photo apps. Paulin said the technology could also enhance car, traffic, and surveillance cameras. For example, a home security camera could start sending data over the Internet only when a human entered the frame. “You can do fancier things like detecting if someone has fallen on the subway,” he said.
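To make that idea concrete, here is a minimal sketch of the kind of gating logic Paulin describes, written in Python; the camera, detector, and uploader objects (and their read_frame, detect_people, and upload_clip methods) are hypothetical stand-ins for real camera, on-device neural-network, and network code, not any vendor's actual API.

# Illustrative only: upload from a security camera only when on-device
# person detection fires. All of the helper objects here are hypothetical.

import time

UPLOAD_COOLDOWN_S = 30.0   # don't re-upload the same event over and over

def monitor(camera, detector, uploader):
    last_upload = 0.0
    while True:
        frame = camera.read_frame()              # grab the next video frame
        people = detector.detect_people(frame)   # run the on-device network
        if people and time.time() - last_upload > UPLOAD_COOLDOWN_S:
            uploader.upload_clip(frame, people)  # send data only when a human is in frame
            last_upload = time.time()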

Jeff Gehlhaar, vice president of technology at Qualcomm Research, spoke at the event about his company’s work on getting deep learning running on apps for existing phone hardware. He declined to discuss whether the company is planning to build support for deep learning into its chips. But speaking about the industry in general, he said that such chips are surely coming. Being able to use deep learning on mobile chips will be vital to helping robots navigate and interact with the world, he said, and to efforts to develop autonomous cars.

“I think you will see custom hardware emerge to solve these problems,” he said. “Our traditional approaches to silicon are going to run out of gas, and we’ll have to roll up our sleeves and do things differently.” Gehlhaar didn’t indicate how soon that might be. Qualcomm has said that its coming generation of mobile chips will include software designed to bring deep learning to camera and other apps (see “Smartphones Will Soon Learn to Recognize Faces and More”).


Ren Wu, a researcher at the Chinese search company Baidu, also said chips that support deep learning are needed to take the technique beyond powerful research computers and into daily use. “You need to deploy that intelligence everywhere, at any place or any time,” he said.

Being able to do things like analyze images on a device without connecting to the Internet can make apps faster and more energy-efficient because it isn’t necessary to send data to and fro, said Wu. He and Qualcomm’s Gehlhaar both said that making mobile devices more intelligent could temper the privacy implications of some apps by reducing the volume of personal data such as photos transmitted off a device.

“You want the intelligence to filter out the raw data and only send the important information, the metadata, to the cloud,” said Wu.

The First Self-Driving Vehicle You See May Have 18 Wheels

As reported by the NY Times:  Traveling about 55 miles per hour on a Nevada highway, the big rig's driver looked like The Thinker, with his elbow on the arm rest and his hand on his chin. No hands on the steering wheel, no feet on the pedals.

Mark Alvick was in "highway pilot" mode, the wheel moving this way and that as if a ghost were at the helm.

Daimler Trucks North America LLC says its "Inspiration" truck, the first self-driving semi-truck to be licensed to roll on public roads — in this case any highway or interstate in Nevada — is the future of trucking. It's a future that will still need drivers, but they might be called "logistics managers."

"The human brain is still the best computer money can buy," said Daimler Trucks North America LLC CEO Martin Daum on Wednesday.

Although much attention has been paid to autonomous vehicles being developed by Google and traditional car companies, Daimler believes that automated tractor-trailers will be rolling along highways before self-driving cars are cruising around the suburbs.

On freeways there are no intersections, no red lights, no pedestrians, making it a far less complex trip, said Wolfgang Bernhard, a management board member of Germany's Daimler AG, at an event in Las Vegas.

But it will be years before an autonomous truck hits the highway for anything more than tests and demonstrations, the company says.

The industry is watching the developments, said Ted Scott, director of engineering for American Trucking Associations, which represents trucking companies.

He questioned what the economic benefit would be, with companies paying a driver's salary on top of the new technology, even given the potential safety advantages including less-fatigued drivers.

"Being a tired driver is not as big of a problem as it's often made out to be," Scott said.

The group representing truck drivers — the Owner Operator Independent Drivers Association — isn't sure the technology would affect driving jobs, noting the abundance of job openings now and the industry's high turnover.

"We mainly have questions," said Norita Taylor, the group's director of public affairs, citing current laws regulating how long a driver can drive and prohibitions on texting while driving.

Al Pearson, Daimler Trucks' chief engineer of product validation, said all the same laws still apply: No texting, no napping while in motion.

"We need an attentive driver," he said, with the technology removing some of the stress.

Legal and philosophical questions stand in the way, as does perfecting the technology that links radar sensors and cameras to computers that can brake and accelerate the truck and handle any freeway situation.

Public perception of a self-driving car will also be a hurdle. Daum said society might forgive a number of deaths caused by tired truck drivers at the wheel, but it would never forgive a single fatal crash blamed on a fully automated big rig.

For now, four states, including Nevada, and the District of Columbia certify testing of autonomous vehicles on public roads as long as a human driver is behind the wheel, and a few others are keen on allowing the tests.

Bernhard said more states will need to allow testing of autonomous driving before fleets of self-driving semi-trucks can fill U.S. freeways and interstates, and that won't happen anytime soon.

The company is still far from taking customer orders for the trucks.

"We're just getting people inspired," he said.

The US Government Wants to Speed Up Deployment of Vehicle-To-Vehicle Communication

As reported by The Verge: Vehicle-to-vehicle (V2V) communication is one of the next big sea changes to hit the auto industry — within a few years, every new car on the road will be wirelessly talking with every other new car on the road, delivering position and speed information that can help prevent accidents. NHTSA had already committed to delivering a set of proposed rules for V2V by next year, but USDOT secretary Anthony Foxx doesn't think that's fast enough: he's asked the agency to "accelerate the timetable" in comments made this week. Additionally, he says that he's gearing up for "rapid testing" in concert with the FCC to make sure that there are no radio interference issues with V2V systems. (Various industry groups have been concerned that efforts to expand Wi-Fi spectrum in the US could cause issues with V2V.)
Even in its most rudimentary form, V2V can make a huge difference in safety by basically allowing drivers (and self-driving cars) to see things beyond their field of vision. I had a chance to test V2V-equipped cars at CES last year, and was immediately impressed: the system warns you of things like cars at intersections that may not be slowing down for a red light and emergency braking beyond the car ahead of you — scenarios that you'd have no way to detect otherwise before a crash was inevitable.
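To give a flavor of what the cars actually exchange, here is a simplified, illustrative version of the position-and-speed broadcast described above. The field names are invented for this sketch; production V2V systems use standardized messages (such as the SAE J2735 Basic Safety Message) over dedicated short-range radio.

# Simplified, illustrative V2V broadcast; not the standardized message format.

from dataclasses import dataclass, asdict
import json
import time

@dataclass
class SafetyMessage:
    vehicle_id: str
    latitude: float      # degrees
    longitude: float     # degrees
    speed_mps: float     # meters per second
    heading_deg: float   # 0-360, clockwise from north
    brake_applied: bool
    timestamp: float

def encode(msg: SafetyMessage) -> bytes:
    """Serialize a message for broadcast (JSON used here only for readability)."""
    return json.dumps(asdict(msg)).encode()

def hard_brake_ahead(msg: SafetyMessage, own_speed_mps: float) -> bool:
    """A receiving car could warn of emergency braking beyond the car ahead."""
    return msg.brake_applied and own_speed_mps > 10.0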

Sunday, May 17, 2015

Apple Bought a Company Focused on Super-Accurate GPS

As reported by Engadget: Apple has snapped up more than a few companies that know how to deal with your location data, but it now appears to be focused on improving the accuracy of that data from the get-go. MacRumors has discovered evidence that Apple recently acquired Coherent Navigation, a company specializing in very accurate GPS. It combined the usual GPS positioning with information from Iridium's low-orbit communication satellites to pinpoint your whereabouts within inches, rather than feet.

It's not clear just what the Coherent team is doing under Apple's wing. Its CEO and co-founders have taken positions in Maps and wireless technologies teams, but that's about as far as the revelations go.

We've reached out to Apple to confirm the deal, but it doesn't historically reveal what its plans are following buyouts. However, it could be for more than just ensuring that your Maps directions are on the mark. Apple is rumored to be developing an electric car with self-driving features that, by their nature, would depend on very accurate GPS info to get you around safely. There's no guarantee that Apple took on these new hires with autonomous vehicles in mind, but the move would at least make sense in that light.

Saturday, May 16, 2015

Russian Rocket Carrying Mexican Satellite Is Said to Crash in Siberia

As reported by the NY Times:  A Russian-made rocket ferrying a Mexican telecommunications satellite crashed in eastern Siberia minutes after its launching on Saturday, Russian news agencies reported, citing officials at the country’s space agency.

The Proton-M rocket was launched from the Baikonur Cosmodrome in Kazakhstan at 11:47 a.m. and crashed in the Chita region of Siberia about eight minutes later, the reports said.

The failure appeared to have occurred with the rocket’s third stage, which was intended to bring the satellite to an altitude of about 110 miles. At that point, it was supposed to be propelled by engines into geostationary orbit.
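For a back-of-the-envelope sense of the job left to those engines, the sketch below (not from the article) uses the standard Hohmann-transfer formulas to estimate the velocity change needed to climb from a roughly 110-mile parking orbit to geostationary altitude, ignoring the plane change also required when launching from Baikonur's latitude.

# Rough estimate only: delta-v for a Hohmann transfer from a ~110-mile
# circular parking orbit to geostationary orbit.

import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

r1 = R_EARTH + 110 * 1609.34   # parking-orbit radius (~110 miles up), m
r2 = 42_164_000.0              # geostationary orbit radius, m
a = (r1 + r2) / 2              # semi-major axis of the transfer ellipse

v_circ1 = math.sqrt(MU / r1)                 # speed in the parking orbit
v_peri  = math.sqrt(MU * (2 / r1 - 1 / a))   # transfer-orbit speed at perigee
v_apo   = math.sqrt(MU * (2 / r2 - 1 / a))   # transfer-orbit speed at apogee
v_circ2 = math.sqrt(MU / r2)                 # circular speed at GEO

dv1 = v_peri - v_circ1   # burn to enter the transfer orbit (~2.5 km/s)
dv2 = v_circ2 - v_apo    # burn to circularize at GEO (~1.5 km/s)
print(f"total delta-v of roughly {(dv1 + dv2) / 1000:.1f} km/s")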

Instead, there was a catastrophic failure. The stream of telemetry data sent back by the rocket failed about a minute before the satellite was to enter orbit, the news agencies reported.

The Interfax agency quoted an unidentified official at Roscosmos, the Russian space agency, as saying there had been an “emergency engine shutdown of the third stage.”

The Proton rocket is the mainstay transporter for International Launch Services, a joint Russian-American satellite carrier business. The satellite, called Centenario, was being sent into orbit on behalf of Mexico’s Ministry of Communications and Transportation and had been manufactured by Boeing Satellite Systems.

According to a statement issued by International Launch Services before the launching, it was intended to provide “mobile satellite services to support national security, civil and humanitarian efforts and will provide disaster relief, emergency services, telemedicine, rural education and government agency operations.”

The Proton-M is regarded as a workhorse but has encountered numerous problems in its decades of service. In 2013, a leadership shake-up at Roscosmos was prompted in part by the fourth failed launch of a Proton-M rocket within three years.

Officials said further launchings would be suspended until the cause of Saturday’s crash was determined.

The Mexican ministry said International Launch Services would create a commission to investigate the accident.

It said the satellite loss was “100 percent” covered by insurance, a point that seemed aimed at a domestic population often skeptical of the government’s spending on big projects.

The ministry said it still planned to launch another communications satellite from Cape Canaveral, Fla., aboard a Lockheed Martin rocket in October.

Gerardo Ruiz Esparza, the transportation and communications secretary, said that the lost satellite and its launching were valued at $390 million.

“I regret the mission was not a success,” Mr. Ruiz Esparza said. “If Mexico is joining in these high technologies, we are going to have to learn to live with the risks that are not uncommon in this industry. The benefit is not so much being in the space era so much as the service it could provide to Mexicans.”
The Mexsat 1 is enclosed inside the Proton rocket's payload fairing before Saturday's launch. 

Friday, May 15, 2015

GE 3D Prints a Working Jet Engine

As reported by Computer World: General Electric this week revealed that it has completed a multi-year project to print a working jet engine.

The engine, small enough to fit in a backpack, was built by a team of technicians, machinists and engineers at GE Aviation's Additive Development Center outside Cincinnati. The lab is working with additive manufacturing as a way to produce next-generation jet parts using a technique known as direct metal laser melting (DMLM).

The engine also required some post-printing machining and polishing of parts. The research team then rigged up a data acquisition system to measure exhaust temperature, speed and thrust.


The engine, which consisted of more than a dozen parts, was printed on an M270 industrial 3D printer from EOS. The machine can melt a variety of alloys, including cobalt chrome, nickel alloy, titanium and stainless steel.

The M270 3D printer allowed the GE engineers to use high-temperature alloys not typically available to the radio-controlled engine industry. The resulting engine from GE's 3D printing process endured numerous tests and the turbine achieved 33,000 rpm.

The GE research team had been working on the engine for several years. The researchers have already built working aircraft components using the DMLM 3D printing method, including an FAA-approved part for a GE90 jet engine: a metal housing for a sensor known as T25. Not yet able to build a full-sized working jet engine, the engineers took plans for a radio-controlled model airplane jet engine and tweaked them to improve performance.

The jet turbine engine is GE's first functioning prototype.


Earlier this year, a group of researchers at an Australian university, along with its spinoff company, used 3D printing to make two metal jet engines that, while only proof-of-concept designs, have all the working parts of a functioning gas turbine engine.

The two engines, created by Monash University and its spinoff Amaero Engineering, are garnering a lot of attention from leading aeronautics companies. Airbus, Boeing and defense contractor Raytheon are lining up at the Monash Centre for Additive Manufacturing in Melbourne to develop new components with 3D printing.

The proof-of-concept engines are replicas of an auxiliary power unit used in aircraft such as the Falcon 20 French business jet; the original unit was provided by Microturbo.



Thursday, May 14, 2015

Why are We Still Coordinating Disaster Relief Over Radios?

As reported by The Verge: Tuesday night, Philadelphia's emergency dispatch channels lit up. The city was the site of a catastrophic Amtrak derailment, resulting in seven deaths and dozens of injuries, and first responders were scrambling to cut through the chaos. If you listened in to the open radio channels, you could hear it — EMS drivers looking for a hospital with room for patients, or dispatchers directing resources. Even finding the right staging area was a challenge at points, given the flood of different agencies rushing to help, and conducting everything over regular non-digital radio channels only made it more of a challenge.
That's not to say the first responders were anything other than professional. Some chaos is inevitable, as responders rush in before the situation is fully understood. What's more remarkable is that despite the dawn of the internet age and the smartphone revolution, some of our most critical communication is still happening over primitive, walkie-talkie-style radio. The basic nature of the technology makes simple tasks like group formation and private messaging difficult, and imposes a constant stress over maintaining a clear channel. Aren’t there better tools for this job?
Some jurisdictions already have them. First responders in California can access information through the Next-Generation Incident Command System (also known as NICS), an MIT project sponsored by the Department of Homeland Security. Developed over the last three years, NICS is meant to move all that communication to the web, plotting information on a constantly updating map of the area. The developers describe it as "carefully designed for the responder under extreme stress," accessible through mobile browsers alongside desktops. It’s also built on open standards, allowing developers to build apps on top of it or allow third-party logins from services like Google.
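As a rough illustration of what moving that communication to the web looks like in practice, the sketch below posts a geotagged marker that would then appear on a shared incident map. The endpoint, payload fields, and token-based login are invented for the example and are not the actual NICS API.

# Hypothetical illustration of a web-based incident map update.

import json
import time
import urllib.request

INCIDENT_API = "https://incident.example.org/api/markers"   # placeholder URL

def post_marker(incident_id: str, lat: float, lon: float, label: str, token: str):
    """Post a geotagged marker so every responder's map picks it up."""
    payload = {
        "incident": incident_id,
        "lat": lat,
        "lon": lon,
        "label": label,              # e.g. "staging area" or "triage point"
        "reported_at": time.time(),
    }
    req = urllib.request.Request(
        INCIDENT_API,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},   # e.g. a third-party login
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)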
There are a lot of advantages to a system like NICS, starting with some of the basic questions responders faced on Tuesday night. If an ambulance driver is looking for the nearest hospital with room for new patients, a map will be more useful than a radio tool. But like any new product, adoption has been the biggest problem. NICS is available to anyone who wants it, and while it’s seen some adoption outside of California (particularly in pre-planned events like marathons), most departments have been slow to see the benefits. Even in California, the system is more of a supplement than a legitimate replacement for the standard radio system.
There’s also the issue of tracking victims in the wake of a disaster, a problem digital systems have begun to tackle as well. Wireless medical record systems have huge advantages over conventional paper methods in a crisis situation, and in 2009, the Agency for Healthcare Research and Quality made recommendations for a "National Mass Patient and Evacuee Movement, Regulating, and Tracking System." An ideal system would have the ability to locate and track patients, as well as regulate their flow during mass casualty events, the Agency said. RFID tags could be used to passively monitor location, providing continuous data without requiring patients to actively check in. The system could also include real-time resource availability data from hospitals, and send automatic notifications to first responders once it’s been decided where to send a given patient. Still, six years after the system was proposed, it remains just a wish list, and there is no comprehensive federal system for patient tracking.
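A minimal sketch of the kind of data model such a system implies might look like the following; the record fields and the bed-assignment helper are assumptions for illustration, not part of the AHRQ recommendation.

# Illustrative data model only: RFID check-ins plus a simple bed-assignment helper.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class CheckIn:
    location: str      # e.g. "triage tent 2", "ambulance 14", "ER bay 3"
    timestamp: float   # when the patient's passive RFID tag was read

@dataclass
class Patient:
    tag_id: str                       # RFID wristband identifier
    triage_level: str                 # e.g. "red", "yellow", "green"
    check_ins: List[CheckIn] = field(default_factory=list)

def assign_hospital(beds_available: Dict[str, int]) -> Optional[str]:
    """Pick the first hospital reporting a free bed; a real system would also notify responders."""
    for hospital, free in beds_available.items():
        if free > 0:
            beds_available[hospital] -= 1
            return hospital
    return None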
So why haven’t more first responders adopted these systems? Bureaucratic inertia is certainly part of the answer, but there are also real practical concerns about how more complex systems will function in situations where the slightest technical failure can have life-or-death consequences.
In some cases, the problem is simply maintaining a signal. Thick smoke from a fire or explosion can interfere with signals coming from Bluetooth headsets and phones, which would cause real problems for any setup relying on a continuous connection. It’s a problem for voice radio signals too, but the nature of a responder channel makes it easy to drop off for a few minutes without disrupting the larger group. In other settings, the volume of traffic is a problem. Verizon, AT&T, Sprint, and T-Mobile networks were all overloaded in the aftermath of the Boston Marathon bombing in 2013, leaving many unable to contact loved ones. As a result, many are wary of moving responders to conventional data channels without more comprehensive connectivity solutions in place. Fortunately, first responders have call priority during emergencies, and could potentially get more spectrum help from the FCC if needed, but it still raises a world of problems that radio channels have already solved.
Even more practically, conventional electronics doesn’t play well with a lot of first responder gear. Firefighters’ gloves make it hard to use tiny electronic devices with touchscreens, which is part of the reason why you’re more likely to see a rescue team with a gigantic radio unit in hand than the latest smartphone. There are solutions — different gloves, different devices — but they would have to be deployed at tremendous scale. There are about 1,140,000 firefighters, 239,000 EMTs, and 590,000 local police officers in the US, and they’re all trained on the proper use of radio channels. Training them on the latest tablet system is an immense task, particularly when we aren’t entirely sure what we want from the system itself.
Seen in that light, the conventional radio channels look pretty good. It’s not the most powerful communication channel, but everyone responding knows how to use it, and there’s rarely any technical difficulty in signing on. As long as you can find the staging area, you can join the effort, with no passwords or downloads required, and that’s an important feature. First responders are working in situations of extreme urgency and chaos, and as a result, they end up using the most reliable and universal systems they can find. If that means forgoing the latest tech, it might be worth the tradeoff.