Monday, September 7, 2015

Researcher Hacks LIDAR Self-driving Car Sensors

As reported by IEEE Spectrum: The multi-thousand-dollar laser ranging (lidar) systems that most self-driving cars rely on to sense obstacles can be hacked by a setup costing just $60, according to a security researcher.
“I can take echoes of a fake car and put them at any location I want,” says Jonathan Petit, Principal Scientist at Security Innovation, a software security company. “And I can do the same with a pedestrian or a wall.”
Using such a system, attackers could trick a self-driving car into thinking something is directly ahead of it, thus forcing it to slow down. Or they could overwhelm it with so many spurious signals that the car would not move at all for fear of hitting phantom obstacles.
In a paper written while he was a research fellow in the University of Cork’s Computer Security Group and due to be presented at the Black Hat Europe security conference in November, Petit describes a simple setup he designed using a low-power laser and a pulse generator. “It’s kind of a laser pointer, really. And you don’t need the pulse generator when you do the attack,” he says. “You can easily do it with a Raspberry Pi or an Arduino. It’s really off the shelf.”
Petit set out to explore the vulnerabilities of autonomous vehicles, and quickly settled on sensors as the most susceptible technologies. “This is a key point, where the input starts,” he says. “If a self-driving car has poor inputs, it will make poor driving decisions.”
Other researchers had previously hacked or spoofed vehicles' GPS devices and wireless tire-pressure sensors.
While the short-range radars used by many self-driving cars for navigation operate in a frequency band requiring licensing, lidar systems use easily mimicked pulses of laser light to build up a 3-D picture of the car's surroundings, making them ripe for attack.
Petit began by simply recording pulses from a commercial IBEO Lux lidar unit. The pulses were not encoded or encrypted, which allowed him to simply replay them at a later point. “The only tricky part was to be synchronized, to fire the signal back at the lidar at the right time,” he says. “Then the lidar thought that there was clearly an object there.”
Petit was able to create the illusion of a fake car, wall, or pedestrian anywhere from 20 to 350 meters from the lidar unit, make multiple copies of the simulated obstacles, and even make them move. “I can spoof thousands of objects and basically carry out a denial of service attack on the tracking system so it’s not able to track real objects,” he says. Petit’s attack worked at distances of up to 100 meters, in front of, to the side of, or even behind the lidar being attacked, and did not require him to target the lidar precisely with a narrow beam.
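The timing Petit describes follows from basic lidar ranging: the unit converts each echo's round-trip time into a distance, so replaying a recorded pulse after a chosen delay places a phantom object at a chosen range. A minimal sketch of that arithmetic (illustrative only, not Petit's actual tooling):

```python
# Lidar ranges by round-trip time: distance = c * t / 2.
# To spoof an object at a chosen distance, an attacker fires a
# recorded pulse back after the matching delay. Illustrative sketch.

C = 299_792_458.0  # speed of light, m/s

def spoof_delay(fake_distance_m: float) -> float:
    """Delay (s) after the lidar's outgoing pulse at which to fire
    a replayed pulse so the unit sees an echo at fake_distance_m."""
    return 2.0 * fake_distance_m / C

def perceived_distance(echo_delay_s: float) -> float:
    """Distance the lidar infers from an echo arriving after echo_delay_s."""
    return C * echo_delay_s / 2.0

# A phantom car 50 m ahead needs an echo roughly 334 ns after the real pulse:
delay = spoof_delay(50.0)
assert abs(perceived_distance(delay) - 50.0) < 1e-9
```

The sub-microsecond delays involved explain why Petit calls synchronization "the only tricky part": the hardware is trivial, but the replayed pulse has to land inside the lidar's listening window.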
Petit acknowledges that his attacks are currently limited to one specific unit but says, “The point of my work is not to say that IBEO has a poor product. I don’t think any of the lidar manufacturers have thought about this or tried this.” 
Sensor attacks are not limited to just robotic drivers, of course. The same laser pointer that Petit used could carry out an equally devastating denial of service attack on a human motorist by simply dazzling her, and without the need for sophisticated laser pulse recording, generation, or synchronization equipment.
But the fact that a lidar attack could be carried out without alerting a self-driving car’s passengers is worrying. Karl Iagnemma directs the Robotic Mobility Group at MIT and is CEO of nuTonomy, a start-up focused on the development of software for self-driving cars. He says: “Everyone knows security is an issue and will at some point become an important issue. But the biggest threat to an occupant of a self-driving car today isn’t any hack, it’s the bug in someone’s software because we don’t have systems that we’re 100-percent sure are safe.”
Petit argues that it is never too early to start thinking about security. “There are ways to solve it,” he says. “A strong system that does misbehavior detection could cross-check with other data and filter out those that aren’t plausible. But I don’t think carmakers have done it yet. This might be a good wake-up call for them.”
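The misbehavior detection Petit describes amounts to cross-checking one sensor against another and discarding detections that only one of them reports. A toy sketch of that idea, assuming a second independent sensor such as radar; the function names, 1-D ranges, and tolerance are invented for illustration:

```python
# Toy plausibility filter: keep a lidar detection only if a second,
# independent sensor (e.g. radar) reports an object within a tolerance.
# Names, 1-D ranges, and thresholds are invented for illustration.

def filter_implausible(lidar_objs, radar_objs, tol_m=2.0):
    """Return lidar detections corroborated by at least one radar
    detection within tol_m meters (1-D ranges, for simplicity)."""
    return [d for d in lidar_objs
            if any(abs(d - r) <= tol_m for r in radar_objs)]

lidar = [12.0, 48.7, 300.0]   # the 300 m "object" is a spoofed echo
radar = [12.4, 49.1]          # radar sees only the two real objects
print(filter_implausible(lidar, radar))  # [12.0, 48.7]
```

A real implementation would track objects over time and fuse full 3-D positions, but the principle is the same: a spoofed lidar echo has no counterpart in the other sensors' data.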

Friday, September 4, 2015

SpaceX Falcon Heavy set to Launch in Spring of 2016

As reported by Engadget: We've heard quite a bit about the SpaceX Falcon Heavy rocket since it was first announced. What we haven't seen is a launch. However, one is now planned for next spring. 

Earlier this week, SpaceX vice president of mission and launch operations Lee Rosen said that the company is aiming for a "late April early May time-frame" for that first launch. Rosen also explained that the crew is finishing renovations to the Falcon Heavy's launch pad for the initial test flight. 

That's Pad 39A, which is designed to handle launches of both the Falcon Heavy and Falcon 9. The rocket was first announced back in 2011 with a launch planned for 2013 that didn't pan out. And this summer's Falcon 9 disaster pushed things back even further. 

After the first test launch, the Falcon Heavy is scheduled to carry a load of satellites for the Air Force in September 2016 under the Space Test Program (STP-2): an Integrated Payload Stack (IPS) consisting of two co-prime space vehicles (SVs), with up to six auxiliary payloads (APLs), and up to eight separate Poly-PicoSatellite Orbital Deployers (P-PODs) and cubesats.

As a refresher, the rocket generates 4.5 million pounds of thrust at liftoff and is capable of carrying a 53,000 kg (116,845 lb) payload into low Earth orbit. 

NYC Teacher Arrested for Flying Drone at the U.S. Open

As reported by the Wall Street Journal: A New York City public school teacher was arrested after he flew a drone into the stands at the U.S. Open on Friday, police said.

Daniel Varley, 26 years old, was charged with reckless endangerment after he crash-landed the device in the upper stands at Louis Armstrong Stadium of the USTA National Tennis Center in Flushing just before 9 p.m., police said.

He was also charged with reckless operation of a drone and operating a drone in a New York City park outside of a prescribed area for doing so.

Mr. Varley had flown the device from a nearby park, where he was found a short time later, police said.

The drone incident happened during the second-to-last match of the night between Flavia Pennetta and Monica Niculescu.
“A little bit scary, I have to say,” Ms. Pennetta told the Associated Press, saying she initially thought it was a bomb.

Play briefly stopped while the device was inspected by police and fire personnel.

“The chair umpire just wanted to wait for an OK from the police to be able to continue, even if, truthfully, I don’t think even they knew what it was,” Ms. Pennetta told the AP.

Mr. Varley was released and issued a desk appearance ticket, police said. He could not be reached for comment.

No one was injured in the incident, officials said.

Thursday, September 3, 2015

Google/Waze Sued for Allegedly Stealing Map Data

As reported by Slashgear: Waze, the navigation app that Google bought and cops hate, is facing a lawsuit for allegedly copying a proprietary navigation database. The lawsuit is being brought by San Francisco law firm Kronenberger Rosenfeld, LLP, and was filed on behalf of PhantomALERT, Inc., which is described as a “GPS navigation technology company.” According to the lawsuit, before Waze was acquired by Google, it copied PhantomALERT’s database without authorization. This data was then reportedly used in Waze’s own Android and iPhone apps.
According to the lawsuit, PhantomALERT has spent more than seven years curating its database, using a “systematic process” to identify nearby law enforcement officers, road conditions (hazards and traffic), and points of interest. Waze’s CEO reportedly approached PhantomALERT in 2010 regarding a deal in which the two companies would share database information.
PhantomALERT states it declined the offer, partly because Waze hadn’t yet developed “a substantial database” of its own. This, according to the lawsuit, led to Waze “repeatedly [copying] its Points of Interest database”; the copied information is said to still be in use by Google.
Map makers of old would determine if their maps had been stolen by inserting small fictitious cities and roads. PhantomALERT is said to have done something similar, identifying Waze’s alleged copying by locating fake points of interest that had been inserted into its own database, the same database reportedly being used in Google’s navigation app.
Said PhantomALERT’s CEO Joseph Scott Seyoum in a statement:
The financial and reputational damages we have incurred from having our unique and carefully built database stolen are staggering. While we cannot undo the past, we can ensure that those who took our intellectual property no longer profit from it at our expense … I started PhantomALERT seven years ago as an entrepreneur with a dream, and now that dream has been crushed by companies that are profiting from the years of blood, sweat and tears our team put into our product.
Google acquired Waze for $1 billion.

Fuel Ships Take 4,000-Mile Africa Detour as Oil Prices Plunge

As reported by Bloomberg: Slumping oil prices are spurring 4,000-mile (6,400-kilometer) diversions of tankers filled with diesel and jet fuel as the price of ship fuel plunges, opening up trading opportunities.
At least five tankers will deliver refined products to European ports in August and September, sailing around South Africa rather than using the normal shortcut through Egypt’s Suez Canal, ship tracking data show. The falling cost of fuel oil, used to power ships, has made longer voyages viable at a time when there are advantages for traders to keep cargoes at sea. Long-distance shipments between continents have increased this year, according to Torm A/S, the world’s second-biggest publicly traded product-tanker owner.
Plunging oil opens up new trades as product tankers take the long route to Europe
Brent crude futures have plunged about 50 percent since August last year as OPEC nations kept pumping more than the market needs. Across oil markets, the rout triggered what traders call contango, a price pattern that lessens the need for speedy oil deliveries because future fuel prices are higher than immediate ones.
“There’s massive demand to move oil products over very long distances,” Erik Nikolai Stavseth, a shipping analyst at Arctic Securities ASA in Oslo, said by phone Aug. 27. “These shipments tell me that there are very good times ahead for product-tanker owners,” he said, referring to ships that carry refined fuels like gasoline and diesel.
Scorpio Tankers Inc. and Torm are the biggest and second-biggest publicly traded owners of product tankers worldwide, according to data from Clarkson Plc, the world’s largest shipbroker.

Rates Surge

Rates for hauling these fuels are surging. The sort of long-range tankers being used to sail around Africa will earn $28,375 a day this year, according to a survey of shipping specialists compiled by Bloomberg. That’s the most since at least 2010 and 19 percent more than anticipated at the end of last year.
The Baltic Clean Tanker Index, an overall measure of the cost of moving gasoline, diesel and other fuels, averaged 695 points since the start of January, the highest in four years, data from the Baltic Exchange in London show.
The journey around Africa only works occasionally, when fuel prices are low and trading conditions are right. The normal route remains through the Suez Canal, which is what most vessels are doing now, according to lists of charters and tracking compiled by Bloomberg. 
Each long-range tanker is designed to deliver about 80,000 metric tons of cargo.

Extended Journeys

Extended journeys help owners because they keep ships employed for longer, effectively cutting fleet supply. As well as lower fuel prices, traders also like the option of selling cargoes en route, to West Africa, for example, where demand is rising, and to other regions including Latin America, according to Erik Broekhuizen, the head of tanker research and consulting at Poten & Partners Inc. in New York.
The trend to sail around Africa could intensify if the price slides to new lows, reducing fuel costs further and as more export refineries come on stream, in particular in the Middle East and countries like India, Broekhuizen said. That would increase demand for long-range product carriers.
A widening of the contango -- where a near-term oversupply makes gasoline and other products cheaper today than in future months -- could also spur demand.
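The contango trade the article describes reduces to simple arithmetic: the longer voyage pays only if the forward price exceeds today's price by more than the extra time at sea costs. A toy calculation using the article's cargo size and charter rate, with the prices and extra sailing time invented purely for illustration:

```python
# Does the Cape detour pay under contango? The cargo size and daily
# rate come from the article; prices and extra days are invented.

cargo_tonnes   = 80_000        # typical long-range tanker cargo (article)
spot_per_t     = 450.0         # $/tonne if sold today via Suez (assumed)
forward_per_t  = 465.0         # $/tonne a month out, after the Cape (assumed)
rate_per_day   = 28_375.0      # long-range tanker charter rate (article)
extra_days     = 18            # assumed extra sailing time around the Cape

gain = (forward_per_t - spot_per_t) * cargo_tonnes   # value of waiting
cost = rate_per_day * extra_days                     # cost of the detour
print(f"net: ${gain - cost:,.0f}")                   # positive => detour pays
```

With these numbers the trader clears roughly $690,000 on the cargo; if the contango narrows or charter rates rise, the same arithmetic flips negative and the Suez route wins again.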
"Low fuel prices make the longer route around the Cape increasingly competitive relative to the Suez Canal," Broekhuizen said. “Another advantage of taking the longer route is that it gives traders more options of where to sell their cargoes.”

Wednesday, September 2, 2015

Simple Self-Driving Golf Carts Point the Way for Autonomous Cars

Autonomous golf carts have been deployed in public gardens in Singapore and can be summoned with mobile devices. While the sensors cost around $30,000, prices are falling and capabilities are increasing.
As reported by Computer World: One of the obstacles to deploying autonomous vehicles is the high cost of some components, but researchers are experimenting with self-driving golf carts that use minimal and relatively cheap gear.

The scientists from MIT and Singaporean universities deployed two modified Yamaha electric golf carts in Singapore. They envision the self-driving vehicles being used in a shared transportation system, as rental bicycles are used in many cities. 


As seen in a YouTube video, the carts transported 500 people along winding paths in public gardens while autonomously navigating and watching for obstacles such as pedestrians and animals. 

The carts picked up people at 10 stations in the gardens. They traveled at a maximum speed of only 24 kilometers per hour (15 mph), so the computers had time to process all the obstacles.

The only hiccup came when a slow-moving monitor lizard crossed a cart's path, causing it to stop and wait, according to MIT. Nearly all the passengers said they would ride in the golf carts again.

The researchers, part of the Singapore-MIT Alliance for Research and Technology (SMART) collaboration, focused on using less gear than that used in self-driving vehicles while relying on computation-efficient algorithms.


An algorithm known as the Dynamic Virtual Bumper handles the navigation and obstacle avoidance, setting the cart's path. It defines a computational "tube" whose center line is that path, according to a paper on the research that will be presented at the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems in Hamburg later this month.

The size of the virtual tube is a function of the cart's speed and position, and when an obstacle is detected, the tube is redrawn to exclude it. 
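A crude reading of that idea can be sketched as a safety tube whose width grows with speed, with the cart slowing whenever an obstacle intrudes into the tube. The linear speed-to-width relation and all constants below are invented for illustration; the paper's actual formulation will differ:

```python
# Crude sketch of a speed-dependent "virtual bumper": a safety tube
# around the planned path whose width grows with speed; the cart
# slows if any obstacle falls inside it. Constants are invented.

def tube_half_width(speed_mps, base=0.5, k=0.15):
    """Half-width (m) of the virtual tube; wider at higher speed."""
    return base + k * speed_mps

def obstacle_in_tube(lateral_offset_m, speed_mps):
    """True if an obstacle at this lateral offset from the path's
    center line intrudes into the tube at the current speed."""
    return abs(lateral_offset_m) <= tube_half_width(speed_mps)

# At the carts' 24 km/h top speed (~6.7 m/s) the tube extends
# ~1.5 m to each side of the center line:
print(obstacle_in_tube(1.2, 24 / 3.6))   # True  -> slow down / stop
print(obstacle_in_tube(2.0, 24 / 3.6))   # False -> keep going
```

Redrawing the tube to exclude a detected obstacle, as the paper describes, then becomes a path-planning step: shift the center line until no obstacle satisfies the intrusion test.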



In addition to a webcam, each cart is equipped with four single-beam LIDAR (light detection and ranging) sensors from German maker Sick that have a field of view of about 270 degrees.

"We do not use the 3D scanners that have a very high price point and produce a panoramic image," Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory and a coauthor of the paper, said via email.

Two of the sensors were mounted in the cart's front and used for determining its position and obstacle detection. The other two were cheaper, shorter-range sensors and were mounted on the back corners of the cart to scan for obstacles behind and on either side of it.

The cost of the sensors was still high -- on the order of $30,000 -- but that's less than solutions used in more sophisticated robotic vehicles. Google has used $80,000 Velodyne LIDARs on its earlier self-driving cars.

"Prices for sensors keep coming down, and capabilities are increasing," Emilio Frazzoli, a professor of aeronautics and astronautics at MIT who also coauthored the paper, said via email.

The researchers plan to improve the booking system for the carts, and develop a method that would let the vehicles communicate their intentions to nearby pedestrians.


Google’s Driverless Cars Run Into Problem: Cars With Drivers

A Google self-driving car in Mountain View, Calif. Google cars regularly take the most cautious approach, but that can put them out of step with the other vehicles on the road.
As reported by the New York Times: Google, a leader in efforts to create driverless cars, has run into an odd safety conundrum: humans.

Last month, as one of Google’s self-driving cars approached a crosswalk, it did what it was supposed to do when it slowed to allow a pedestrian to cross, prompting its “safety driver” to apply the brakes. The pedestrian was fine, but not so much Google’s car, which was hit from behind by a human-driven sedan.

Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.

It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book. “The real problem is that the car is too safe,” said Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles.

“They have to learn to be aggressive in the right amount, and the right amount depends on the culture.”

Traffic wrecks and deaths could well plummet in a world without any drivers, as some researchers predict. But wide use of self-driving cars is still many years away, and testers are still sorting out hypothetical risks — like hackers — and real world challenges, like what happens when an autonomous car breaks down on the highway.

For now, there is the nearer-term problem of blending robots and humans. Already, cars from several automakers have technology that can warn or even take over for a driver, whether through advanced cruise control or brakes that apply themselves. Uber is working on self-driving car technology, and Google expanded its tests in July to Austin, Tex.

Google cars regularly take quick evasive maneuvers or exercise caution in ways that are at once the safest approach and out of step with the other vehicles on the road.

“It’s always going to follow the rules, I mean, almost to a point where human drivers who get in the car and are like ‘Why is the car doing that?’” said Tom Supple, a Google safety driver during a recent test drive on the streets near Google’s Silicon Valley headquarters.

Since 2009, Google cars have been in 16 crashes, mostly fender-benders, and in every single case, the company says, a human was at fault. That includes the Aug. 20 rear-end crash, reported Tuesday by Google: the Google car slowed for a pedestrian, then the Google employee manually applied the brakes. The car was hit from behind, sending the employee to the emergency room with mild whiplash.

Google’s report on the incident adds another twist: While the safety driver did the right thing by applying the brakes, if the autonomous car had been left alone, it might have braked less hard and traveled closer to the crosswalk, giving the car behind a little more room to stop.

Would that have prevented the collision? Google says it’s impossible to say.

There has been a single case in which Google says the company was responsible for a crash. It happened in August 2011, when one of its cars collided with another moving vehicle. But, remarkably, the Google car was being piloted at the time by an employee. Another human at fault.

Humans and machines, it seems, are an imperfect mix. Take lane departure technology, which uses a beep or steering-wheel vibration to warn a driver if the car drifts into another lane. A 2012 insurance industry study that surprised researchers found that cars with these systems experienced a slightly higher crash rate than cars without them.

Bill Windsor, a safety expert with Nationwide Insurance, said that drivers who grew irritated by the beep might turn the system off. That highlights a clash between how humans actually behave and how the cars interpret that behavior: the car beeps when the driver drifts into another lane, but the driver intended to change lanes and simply hadn't signaled. Irked by the beep, the driver turns the technology off.

Mr. Windsor recently experienced firsthand one of the challenges as sophisticated car technology clashes with actual human behavior. He was on a road trip in his new Volvo, which comes equipped with “adaptive cruise control.” The technology causes the car to automatically adapt its speeds when traffic conditions warrant.

But the technology, like Google’s car, drives by the book. It leaves what is considered the safe distance between itself and the car ahead. This also happens to be enough space for a car in an adjoining lane to squeeze into, and, Mr. Windsor said, they often tried.

Dmitri Dolgov, head of software for Google’s Self-Driving Car Project, said that one thing he had learned from the project was that human drivers needed to be “less idiotic.”

On a recent outing with New York Times journalists, the Google driverless car took two evasive maneuvers that simultaneously displayed how the car errs on the cautious side, but also how jarring that experience can be. In one maneuver, it swerved sharply in a residential neighborhood to avoid a car that was poorly parked, so much so that the Google sensors couldn’t tell if it might pull into traffic.

More jarring for human passengers was a maneuver the Google car took as it approached a red light in moderate traffic. The laser system mounted on top of the driverless car sensed that a vehicle coming the other direction was approaching the red light at higher-than-safe speeds. The Google car immediately jerked to the right in case it had to avoid a collision. In the end, the oncoming car was just doing what human drivers so often do: not approaching a red light cautiously enough, though the driver did stop well in time.

Courtney Hohne, a spokeswoman for the Google project, said current testing was devoted to “smoothing out” the relationship between the car’s software and humans. For instance, at four-way stops, the program lets the car inch forward, as the rest of us might, asserting its turn while looking for signs that it is being allowed to go.

The way humans often deal with these situations is that “they make eye contact. On the fly, they make agreements about who has the right of way,” said John Lee, a professor of industrial and systems engineering and expert in driver safety and automation at the University of Wisconsin.  “Where are the eyes in an autonomous vehicle?” he added.

But Mr. Norman, from the design center in San Diego, after years of urging caution on driverless cars, now welcomes quick adoption because he says other motorists are increasingly distracted by cellphones and other in-car technology.

Witness the experience of Sena Zorlu, a co-founder of a Sunnyvale, Calif., analytics company, who recently saw one of Google’s self-driving cars at a red light in Mountain View. She could not resist the temptation to grab her phone and take a picture.

“I don’t usually play with my phone while I’m driving. But it was right next to me so I had to seize that opportunity,” said Ms. Zorlu, who posted the picture to her Instagram feed.