Thursday, September 1, 2016

Centimeter-Level GPS Positioning for Cars

As reported by IEEE Spectrum: Super-accurate GPS may soon solve three robocar bugbears: bad weather, blurred lane markings, and over-the-horizon blind spots. These are things that cameras and LIDAR can’t always see through, and radar can’t always see around.

A group led by Todd Humphreys, an aerospace engineer at the University of Texas at Austin, has just tested a software-based system that can run on the processors in today’s cars, using data from scattered ground stations, to locate a car to within 10 centimeters (4 inches). That’s good enough to keep you smack in the middle of your lane all the time, even in a blizzard.  
“When there’s a standard deviation of 10 cm, the probability of slipping into next lane is low enough—meaning 1 part in a million,” he said. Today’s unaided GPS gives meter-plus accuracy, which raises the chances that the car will veer outside of its lane to maybe 1 part in 10, or even more, he adds.
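To see roughly where those odds come from, here is a quick back-of-the-envelope check. It is my own sketch, not Humphreys’s model, and it assumes a zero-mean Gaussian lateral error with about half a meter of slack between the car’s nominal position and the edge of its lane:

import math

# Probability of drifting into the next lane, assuming zero-mean Gaussian
# lateral error and ~0.5 m of margin to the lane edge (both assumptions mine).
def lane_departure_probability(sigma_m, margin_m=0.5):
    # Two-sided tail of a normal distribution: P(|error| > margin).
    return math.erfc(margin_m / (sigma_m * math.sqrt(2)))

print(lane_departure_probability(0.10))  # 10-cm std dev: ~6e-7, about 1 in a million
print(lane_departure_probability(1.0))   # meter-class GPS: ~0.6 per position fix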
Those aren’t great odds, particularly if you’re driving a semi. Lane-keeping discipline is non-negotiable for a robocar.
The Texas team, which was backed by Samsung, began with the idea of giving smartphones super-GPS-positioning power. But though that idea worked, it was limited by handsets’ inadequate antennas, which neither Samsung nor any other vendor is likely to improve unless some killer app should come along to justify the extra cost.
“We pivoted then, to cars,” Humphreys says.
Humphreys works on many aspects of GPS; just last month, he wrote for IEEE Spectrum on how to protect the system from malicious attack. He continues to do basic research, but he also serves as the scientific adviser to 'Radiosense', a firm his students recently founded. Ford has recently contacted them, as has Amazon, which may be interested in using the positioning service in its planned fleet of cargo-carrying drones. Radiosense is already working with its own drones—“dinner-plate-size quadcopters,” Humphreys says.
Augmented GPS has been around since the 1980s, when it finally gave civilians the kind of accuracy that the military had jealously reserved to itself. Now the military uses it too, for instance to land drones on aircraft carriers. It works by using not just satellites’ data signals, but also the carrier signals on which the data are encoded. And, to estimate distances to satellites without being misled by the multiple pathways a signal may take, these systems use a range of sightings—say, taken while the satellite moves in the sky. They then use algorithms to locate the receiver on a map.
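The reason carrier signals help is that the GPS L1 carrier has a wavelength of only about 19 centimeters: a receiver can measure the fractional phase of that wave to millimeters, but it must still resolve the unknown whole number of cycles between itself and the satellite. A minimal sketch of the range model (illustrative only; real systems also solve for clock, atmospheric, and multipath errors):

# Carrier-phase range model (an illustrative sketch, not a real RTK solver).
C = 299_792_458.0           # speed of light, m/s
L1_HZ = 1575.42e6           # GPS L1 carrier frequency
WAVELENGTH = C / L1_HZ      # ~0.19 m

def carrier_phase_range(n_cycles, fractional_phase):
    # Range = (integer cycle count + measured fractional phase) * wavelength.
    # n_cycles is the "integer ambiguity" the algorithm must converge on,
    # e.g. by combining sightings taken as the satellite moves across the sky.
    return (n_cycles + fractional_phase) * WAVELENGTH

# Getting the ambiguity wrong by a single cycle shifts the range by ~19 cm:
print(carrier_phase_range(105_000_001, 0.25) - carrier_phase_range(105_000_000, 0.25))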
But until now, it only worked if you had elaborate antennas, powerful processing, and quite a bit of time. It could take one to five minutes for the algorithm to “converge,” as the jargon has it, onto an estimate.
“That’s not good, I think,” Humphreys says. “My vision of the modern driver is one who’s impatient, who wants to snap into 10-cm-or-better accuracy and push the ‘autonomy’ button. Now that does require that the receiver be up and running. But once it’s on, when you exit a tunnel, boom, you’re back in.” And in your own lane.
Another drawback of existing systems is cost. “I spoke with Google,” says Humphreys. “They gave me a ride in Mountain View, Calif., in November, and I asked them at what price point this would be worth it to them. They originally had this Trimble thing—$60,000 a car—but they shed it, thinking that that was exorbitant. They [said they] want a $10,000 [total] sensor package.”
The Texas student team keeps the materials cost of the in-car receiver system to just $35, running their software-defined system entirely on a $5 Raspberry Pi processor. Of course, the software could piggyback, almost unnoticed, on the powerful robocar processors that are coming down the pike from companies like Nvidia and NXP.
Just as important as the receivers is the ground network of base stations, which the Texas team has shown must be spaced within 20 kilometers (12 miles) for full accuracy. And, because the students’ solar-powered, cellphone-network-connected base stations cost only about $1000 to build, it wouldn’t be too hard to pepper an entire region with them.
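For a sense of scale, stations spaced on a 20-kilometer grid work out to one per 400 square kilometers. The arithmetic below is mine, not the team’s deployment plan:

GRID_KM = 20               # spacing demonstrated by the Texas team
STATION_COST = 1_000       # dollars per station, per the article
TEXAS_AREA_KM2 = 696_000   # approximate area of Texas (my figure)

stations = TEXAS_AREA_KM2 / GRID_KM**2
print(stations)                 # ~1,740 stations to blanket the whole state
print(stations * STATION_COST)  # ~$1.7 million in base-station hardware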
You’d need more stations per unit of territory where satellite signals get bounced around or obscured, as is the case in heavily built-up parts of most cities. It’s tough, Humphreys admits, in the urban canyons of Manhattan. Conveniently, though, it is in just such boxed-in places that the robocar’s cameras, radar, and lidar work best, thanks to the many easily recognized buildings there that can serve as landmarks.
“Uber’s engineers hate bridges, because there are not a lot of visual features,” Humphreys notes. “They would do well to have 10-cm precise positioning; it can turn any roadway into a virtual railway.”
So, what’s next, after cars? Humphreys is still looking for the killer app to justify super-accurate GPS in handheld systems.
“We’re looking into outdoor virtual reality,” Humphreys says. “You could put on a visor and go on your favorite running trail, and it would represent it to you in a centimeter-accurate way, but artistically enhanced—maybe you’d always have a blue sky. You could craft the world to your own liking.” While staying on the path, of course.

SpaceX Falcon 9 Rocket Explodes on Launch Pad in Cape Canaveral

As reported by Engadget: Based on several posts on Twitter this morning, a SpaceX Falcon 9 rocket has exploded in Cape Canaveral, Florida. The rocket was reportedly sitting on a launchpad ahead of a scheduled launch this Saturday to take a communications satellite into orbit. Several people are reporting that the explosion shook office buildings some distance away. SpaceX was reportedly static firing the engines ahead of this weekend's launch at Space Launch Complex 40 (SLC-40). During that process, the Air Force museum at the Kennedy Space Center closes; however, SpaceX does have a personnel building close to the launchpad.
The Verge's Loren Grush reports that there are no known casualties and no threat to the general public at this time.
Developing...

Monday, August 29, 2016

NFL Reportedly Using Ball Tracking Chip Sensors in 2016 Pre-Season Games

As reported by Engadget: The NFL is using sensors inside footballs during the pre-season to track quarterback throwing speeds, running back acceleration, ball position and other stats, according to Recode. The chips are reportedly made by Zebra, a company that already tracks player statistics for the league using shoulder pad-mounted chips. The NFL used the same ball tracking tech before at the Pro Bowl last year, but the experiment is a first for the pre-season. Officials haven't decided if they'll continue it once the regular season starts.

Zebra teamed up with Wilson to install the RFID-like chips under the football's laces. Sensors located around the stadium can ping the chips and give stats like velocity, acceleration and ball location (within six inches) to Zebra employees within a half second. (The sensors can't track a ball's air pressure, though, so they won't prevent another Deflategate.) The NFL is also tracking kicking balls to see if the goalposts should be moved closer together, but that appears to be a separate experiment.
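
Recode's report doesn't describe Zebra's location solver, but the underlying idea—fixing a tag's position from range estimates to several fixed receivers—can be sketched as a generic least-squares trilateration. This is an illustration of the technique, not Zebra's actual algorithm:

import numpy as np

def locate(receivers, ranges):
    # Solve for a 2-D tag position given >= 3 receiver positions and ranges.
    # Subtracting the first circle equation from the rest linearizes the system.
    p0, r0 = receivers[0], ranges[0]
    A = 2 * (receivers[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(receivers[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

receivers = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 50.0], [100.0, 50.0]])
ball = np.array([40.0, 20.0])
ranges = np.linalg.norm(receivers - ball, axis=1)  # perfect ranges for the demo
print(locate(receivers, ranges))                   # recovers ~[40, 20]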

Zebra's shoulder pad trackers, now used by all 32 teams, collect data that can be used to evaluate personnel, scout players and improve safety. The trackers could also provide interesting data to broadcasters, though there's no indication the league has allowed that yet. Last year, the NFL released the shoulder pad data at the end of the season, but in 2016, it will reportedly give it to teams just hours after games end. If Zebra's ball tracking tech is adopted the same way, the devices should soon arrive in regular-season games, giving teams (and hopefully fans) more stats to geek out on.

SpaceX's Rival ULA Ramps Up Plans With 'Space Trucks'

As reported by Christian Science Monitor: Ever since NASA began contracting with commercial companies to get cargo to the International Space Station (ISS), SpaceX has been the face of commercial space shipping, despite several prominent competitors. Now, the company may have a serious rival.
Tory Bruno, chief executive officer of the United Launch Alliance (ULA), a partnership between aerospace engineering companies Lockheed Martin and Boeing, discussed the company’s plans to create a cargo carrier that he nicknamed the “space truck” with news outlet Quartz this week.
Does this signal a shift towards greater specialization in the space engineering market?
SpaceX is not the only company sending shipments of food, experiments, and other supplies to the International Space Station, but it is the best known – and offers a significantly lower cost. The company’s successful quest to create a reusable rocket has prompted rivals to aggressively seek ways to lower their costs.
The United Launch Alliance is one of those rivals. The company’s launch contract with NASA is due to expire in 2019, meaning that ULA needs to innovate to remain relevant.
Mr. Bruno’s vision for ULA’s future is expansive, and includes plans for space infrastructure that can support lunar colonization by 2020. To that end, the company’s current project aims to make it cheaper and easier to get into space.
Like SpaceX, ULA hopes to achieve its aims by developing a reusable rocket. According to Bruno, unlike rival SpaceX, ULA’s rocket will have a reusable second stage (the portion of the rocket that finishes the journey) as well as a reusable first stage (the stage that propels the rocket from Earth into orbit).
And unlike SpaceX, which has developed the technology to bring reusable rockets back to Earth, ULA plans to leave the reusable second stages in space.
“We realized that you don’t have to bring it back in order for it to be reusable,” Bruno told Quartz. “That’s the big paradigm change in the way that you look at the problem – if you have an upper stage that stays on orbit and is reusable.”
ULA’s second stage design looks like a fuel tank, and it can be refueled and reloaded while still in orbit, where it would wait for cargo loads sent up from Earth. With these relief rockets (ULA’s “space trucks”) waiting in orbit, a cargo load could be extremely heavy and still make it to its final destination, whether that be a lunar colony or the ISS.
Once you have these second stage fuel capsules in space, Bruno says, “It starts becoming practical to construct large-scale infrastructure and support economic activities in space, a transportation system between here and the moon, practical microgravity manufacturing, commercial habitats, prospecting in the asteroids.”
As of last year, ULA also had plans to develop a reusable first stage Vulcan rocket that could be recovered in mid-air. As with its most recently announced second-stage plan, the company placed cost-effectiveness at a premium, with senior staff engineer Mohamed Ragab telling SpaceNews: 
“If you work the math, you see that you’re carrying a lot of fuel to be able to bring the booster back and it takes much longer to realize any savings in terms of the number of missions that you have to fly – and they need to be all successful.”
While there are several other companies with commercial space ambitions and Silicon Valley interest, they are lesser known entities such as Moon Express and Planetary Resources.
Much of today’s space innovation is occurring at the commercial level, making it ever more valuable for companies such as ULA and SpaceX to specialize. 
On Thursday, ULA announced that it has been chosen by NASA to launch the next Mars rover exploration project in 2020. Even the ISS could soon be commercially run.
"Ultimately, our desire is to hand the space station over to either a commercial entity or some other commercial capability,” said NASA’s deputy associate administrator for exploration systems development Bill Hill last Sunday, “so that research can continue in low-earth orbit, we figure that will be in the mid-20s."

Tesla Autopilot Crash Exposes Industry Divide

As reported by IEEE Spectrum: The first death of a driver in a Tesla Model S with its Autopilot system engaged has exposed a fault line running through the self-driving car industry. In one camp, Tesla and many other carmakers believe the best route to a truly driverless car is a step-by-step approach where the vehicle gradually extends control over more functions and in more settings. Tesla’s limited Autopilot system is currently in what it calls “a public beta phase,” with new features arriving in over-the-air software updates.

Google and most self-driving car startups take an opposite view, aiming to deliver vehicles that are fully autonomous from the start, requiring passengers to do little more than tap in their destinations and relax.
The U.S. National Highway Traffic Safety Administration (NHTSA) classifies automation systems from Level 1, sporting basic lane-keeping or anti-lock brakes, through to Level 4, where humans need never touch the wheel (if there is one).
A Level 2 system like Tesla’s Autopilot can take over in certain circumstances, such as highways, but requires human oversight to cope with situations that the car cannot handle—such as detecting pedestrians, cyclists, or, tragically, a white tractor-trailer crossing its path in bright sunlight.
Proponents of Level 4 technologies say that such an incremental approach to automation can, counter-intuitively, be more difficult than leap-frogging straight to a driverless vehicle. “From a software perspective, Level 2 technology may be simpler to develop than Level 4 technology,” says Karl Iagnemma, CEO of autonomous vehicle startup nuTonomy. “But when you include the driver, understanding, modeling and predicting behavior of that entire system is in fact pretty hard.”
Anthony Levandowski, who built Google’s first self-driving car and now runs autonomous trucking startup Otto, goes even further. “I would expect that there would be plenty of crashes in a system that requires you to pay attention while you’re driving,” he says. “It’s a very advanced cruise control system. And if people use it, some will abuse it.”
Even if drivers are following Tesla’s rules—keeping their hands on the wheel and trying to pay attention to the road—many studies have shown that human motorists with little to do are easily distracted.
At this point, of course, Tesla is extremely unlikely to remotely deactivate the Autopilot system until it reaches Level 4. So what are its options? Experts think that in one respect, at least, Tesla is on the right track. “Putting the self-driving hardware on all your vehicles then activating it with a software update later seems like a great idea,” says Levandowski. “It’s shocking that nobody else did that.”
“There are very few examples of software of this scale and complexity that are shipped in perfect form and require no updating,” agrees Iagnemma. “Developers will certainly need to push out updates, for either improved performance or increased safety or both.”
One sensor noticeably absent from Tesla’s Model S and X is lidar—the laser ranging system favored by the majority of autonomous car makers. It can build up a 360-degree image of a vehicle’s surroundings in the blink of an eye. “The introduction of an additional sensor would help improve system performance and robustness,” says Iagnemma. “What Tesla was thinking, I believe, is that maybe a lidar sensor wasn’t necessary because you have the human operator in the loop, acting as a fail-safe input.”
Mark Halverson is CEO of transportation automation company Precision Autonomy and a member of the IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems. He thinks that roads with a mix of connected human drivers and self-driving cars would benefit from a cloud-based traffic management system like the one NASA is developing for drones.
“In this accident, the truck driver and the Tesla driver both knew where they were going,” he says. “They had likely plugged their destinations into GPS systems. If they had been able to share that, it would not have been that difficult to calculate that they would have been at the same position at the same time.”
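That check is simple enough to sketch. The snippet below is my own illustration of the idea, not Precision Autonomy’s system; it assumes each vehicle shares its planned route as timestamped positions on a common clock:

def conflict(route_a, route_b, min_gap_m=30.0):
    # route_*: lists of (t_seconds, x_m, y_m) samples.
    # Returns the first time the vehicles are within min_gap_m, else None.
    b_by_t = {t: (x, y) for t, x, y in route_b}
    for t, xa, ya in route_a:
        if t in b_by_t:
            xb, yb = b_by_t[t]
            if ((xa - xb)**2 + (ya - yb)**2) ** 0.5 < min_gap_m:
                return t
    return None

truck = [(t, 0.0, 20.0 * t) for t in range(60)]           # heading north at 20 m/s
tesla = [(t, 25.0 * (t - 30), 600.0) for t in range(60)]  # crossing the truck's path
print(conflict(truck, tesla))  # 30 -- both vehicles near (0, 600) at t = 30 s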
Halverson thinks that a crowdsourced system would also avoid the complexities and bureaucratic wrangling that have dogged the nearly decade-long effort to roll out vehicle-to-vehicle (V2V) technologies. “A crowdsourcing model, similar to the Waze app, could be very attractive because you can start to introduce information from other sensors about road conditions, pot holes, and the weather,” he says.
Toyota, another car company that favors rolling out safety technologies before they reach Level 4, has been struggling with the same issues as Tesla. Last year, the world’s largest carmaker announced the formation of a US $1-billion AI research effort, the Toyota Research Institute, to develop new technologies around the theme of transportation. The vision of its CEO, Gill Pratt, is of “guardian angel” systems that allow humans to drive but leap in at the last second if an accident seems likely. His aspiration is for vehicles to cause fatal accidents at most once every trillion miles.
Such technologies might not activate in the entire lifetime of a typical driver, requiring all the expensive hardware and software of a driverless Level 4 vehicle but offering none of its conveniences. “It’s important to articulate these challenges, even if they’re really hard,” says John Leonard, the MIT engineering professor in charge of automated driving at TRI. “A trillion miles is a lot of miles. If I thought it would be easy, I wouldn’t be doing it.”
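The arithmetic behind “a trillion miles is a lot of miles” is stark. Using my own rough figures—a US driver averaging some 13,500 miles a year over perhaps 60 driving years:

MILES_PER_YEAR = 13_500   # rough US average (my assumption)
DRIVING_YEARS = 60        # generous driving lifetime (my assumption)
lifetime_miles = MILES_PER_YEAR * DRIVING_YEARS  # ~810,000 miles

TARGET_MILES = 1_000_000_000_000  # one fatal accident per trillion miles
print(TARGET_MILES / lifetime_miles)  # ~1.2 million driver-lifetimes per fatality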
Elon Musk will surely be hoping that the next Autopilot accident, when it inevitably comes, will be nearly as many miles off.

Wednesday, August 24, 2016

NVIDIA's Made-For-Autonomous-Cars CPU is Freaking Powerful

As reported by Engadget: NVIDIA debuted its Drive PX2 in-car supercomputer at CES in January, and now the company is showing off the Parker system on a chip powering it. The 256-core processor boasts up to 1.5 teraflops of juice for "deep learning-based self-driving AI cockpit systems," according to a post on NVIDIA's blog. That's in addition to the 24 trillion deep learning operations per second it can churn out. For a perhaps more familiar touchpoint, NVIDIA says that Parker can also decode and encode 4K video streams running at 60FPS -- no easy feat on its own.

However, Parker is significantly less beefy than NVIDIA's other deep learning initiative, the DGX-1 for Elon Musk's OpenAI, which can hit 170 teraflops of performance. This platform still sounds more than capable of running high-end digital dashboards and keeping your future autonomous car shiny side up without a problem, regardless.
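
Some quick arithmetic on those numbers (mine, not NVIDIA's):

parker_tflops, dgx1_tflops = 1.5, 170.0
print(dgx1_tflops / parker_tflops)  # DGX-1 has ~113x Parker's floating-point grunt

# And the 4K60 claim is roughly half a billion pixels per second:
print(3840 * 2160 * 60)             # 497,664,000 pixels/s to encode or decode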

On that front, NVIDIA says that in addition to the previously announced partnership with Volvo (which puts Drive PX2 into the XC90), there are currently "80 carmakers, tier 1 suppliers and university research centers" using Drive PX2.

Thursday, August 18, 2016

Uber’s First Self-Driving Fleet Arrives in Pittsburgh This Month

As reported by Bloomberg: Near the end of 2014, Uber co-founder and Chief Executive Officer Travis Kalanick flew to Pittsburgh on a mission: to hire dozens of the world’s experts in autonomous vehicles. The city is home to Carnegie Mellon University’s robotics department, which has produced many of the biggest names in the newly hot field. Sebastian Thrun, the creator of Google’s self-driving car project, spent seven years researching autonomous robots at CMU, and the project’s former director, Chris Urmson, was a CMU grad student.

“Travis had an idea that he wanted to do self-driving,” says John Bares, who had run CMU’s National Robotics Engineering Center for 13 years before founding Carnegie Robotics, a Pittsburgh-based company that makes components for self-driving industrial robots used in mining, farming, and the military. “I turned him down three times. But the case was pretty compelling.” Bares joined Uber in January 2015 and by early 2016 had recruited hundreds of engineers, robotics experts, and even a few car mechanics to join the venture. The goal: to replace Uber’s more than 1 million human drivers with robot drivers—as quickly as possible.

The plan seemed audacious, even reckless. And according to most analysts, true self-driving cars are years or decades away. Kalanick begs to differ. “We are going commercial,” he says in an interview with Bloomberg Businessweek. “This can’t just be about science.”

Starting later this month, Uber will allow customers in downtown Pittsburgh to summon self-driving cars from their phones, crossing an important milestone that no automotive or technology company has yet achieved. Google, widely regarded as the leader in the field, has been testing its fleet for several years, and Tesla Motors offers Autopilot, essentially a souped-up cruise control that drives the car on the highway. Earlier this week, Ford announced plans for an autonomous ride-sharing service. But none of these companies has yet brought a self-driving car-sharing service to market.

Uber’s Pittsburgh fleet, which will be supervised by humans in the driver’s seat for the time being, consists of specially modified Volvo XC90 sport-utility vehicles outfitted with dozens of sensors that use cameras, lasers, radar, and GPS receivers. Volvo Cars has so far delivered a handful of vehicles out of a total of 100 due by the end of the year. The two companies signed a pact earlier this year to spend $300 million to develop a fully autonomous car that will be ready for the road by 2021.

The Volvo deal isn’t exclusive; Uber plans to partner with other automakers as it races to recruit more engineers. In July the company reached an agreement to buy Otto, a 91-employee driverless truck startup that was founded earlier this year and includes engineers from a number of high-profile tech companies attempting to bring driverless cars to market, including Google, Apple, and Tesla. Uber declined to disclose the terms of the arrangement, but a person familiar with the deal says that if targets are met, it would be worth 1 percent of Uber’s most recent valuation. That would imply a price of about $680 million. Otto’s current employees will also collectively receive 20 percent of any profits Uber earns from building an autonomous trucking business.

Otto has developed a kit that allows big-rig trucks to steer themselves on highways, in theory freeing up the driver to nap in the back of the cabin. The system is being tested on highways around San Francisco. Aspects of the technology will be incorporated into Uber’s robot livery cabs and will be used to start an Uber-like service for long-haul trucking in the U.S., building on the intracity delivery services, like Uber Eats, that the company already offers.

The Otto deal is a coup for Uber in its simmering battle with Google, which has been plotting its own ride-sharing service using self-driving cars. Otto’s founders were key members of Google’s operation who decamped in January, because, according to Otto co-founder Anthony Levandowski, “We were really excited about building something that could be launched early.” Levandowski, one of the original engineers on the self-driving team at Google, started Otto with Lior Ron, who served as the head of product for Google Maps for five years; Claire Delaunay, a Google robotics lead; and Don Burnette, another veteran Google engineer. Google suffered another departure earlier this month when Urmson announced that he, too, was leaving.

“The minute it was clear to us that our friends in Mountain View were going to be getting in the ride-sharing space, we needed to make sure there is an alternative [self-driving car],” says Kalanick. “Because if there is not, we’re not going to have any business.” Developing an autonomous vehicle, he adds, “is basically existential for us.” (Google also invests in Uber through Alphabet’s venture capital division, GV.)

Unlike Google and Tesla, Uber has no intention of manufacturing its own cars, Kalanick says. Instead, the company will strike deals with auto manufacturers, starting with Volvo Cars, and will develop kits for other models. The Otto deal will help; the company makes its own laser detection, or lidar, system, used in many self-driving cars. Kalanick believes that Uber can use the data collected from its app, where human drivers and riders are logging roughly 100 million miles per day, to quickly improve its self-driving mapping and navigation systems. “Nobody has set up software that can reliably drive a car safely without a human,” Kalanick says. “We are focusing on that.”

In Pittsburgh, customers will request cars the normal way, via Uber’s app, and will be paired with a driverless car at random. Trips will be free for the time being, rather than the standard local rate of $1.05 per mile. In the long run, Kalanick says, prices will fall so low that the per-mile cost of travel, even for long trips in rural areas, will be cheaper in a driverless Uber than in a private car. “That could be seen as a threat,” says Volvo Cars CEO Hakan Samuelsson. “We see it as an opportunity.”

Although Kalanick and other self-driving car advocates say the vehicles will ultimately save lives, they face harsh scrutiny for now. In July a driver using Tesla’s Autopilot service died after colliding with a tractor-trailer, apparently because both the driver and the car’s computers didn’t see it. (The crash is currently being investigated by the National Highway Traffic Safety Administration.) Google has seen a handful of accidents, but they’ve been less severe, in part because it limits its prototype cars to 25 miles per hour. Uber’s cars haven’t had any fender benders since they began road-testing in Pittsburgh in May, but at some point something will go wrong, according to Raffi Krikorian, the company’s engineering director. “We’re interacting with reality every day,” he says. “It’s coming.”

For now, Uber’s test cars travel with safety drivers, as common sense and the law dictate. These professionally trained engineers sit with their fingertips on the wheel, ready to take control if the car encounters an unexpected obstacle. A co-pilot, in the front passenger seat, takes notes on a laptop, and everything that happens is recorded by cameras inside and outside the car so that any glitches can be ironed out. Each car is also equipped with a tablet computer in the back seat, designed to tell riders that they’re in an autonomous car and to explain what’s happening. “The goal is to wean us off of having drivers in the car, so we don’t want the public talking to our safety drivers,” Krikorian says.

On a recent weekday test drive, the safety drivers were still an essential part of the experience, as Uber’s autonomous car briefly turned un-autonomous, while crossing the Allegheny River. A chime sounded, a signal to the driver to take the wheel. A second ding a few seconds later indicated that the car was back under computer control. “Bridges are really hard,” Krikorian says. “And there are like 500 bridges in Pittsburgh.”

Bridges are hard in part because of the way that Uber’s system works. Over the past year and a half, the company has been creating extremely detailed maps that include not just roads and lane markings, but also buildings, potholes, parked cars, fire hydrants, traffic lights, trees, and anything else on Pittsburgh's streets. As the car moves, it collects data, and then using a large, liquid-cooled computer in the trunk, it compares what it sees with the preexisting maps to identify (and avoid) pedestrians, cyclists, stray dogs, and anything else. Bridges, unlike normal streets, offer few environmental cues—there are no buildings, for instance—making it hard for the car to figure out exactly where it is. Uber cars have Global Positioning System sensors, but those are only accurate within about 10 feet; Uber’s systems strive for accuracy down to the inch.
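
The underlying idea—use the prior map to correct a coarse GPS fix—can be sketched in a few lines. This is only an illustration of map matching; Uber’s production localizer is certainly far more sophisticated:

import numpy as np

def refine_fix(gps_fix, observed, map_landmarks, search_m=3.0, step_m=0.1):
    # observed: landmark positions relative to the car (from lidar/cameras).
    # Try candidate positions near the GPS fix and keep the one that best
    # aligns the observed landmarks with the known map landmarks.
    best, best_err = gps_fix, float("inf")
    for dx in np.arange(-search_m, search_m, step_m):
        for dy in np.arange(-search_m, search_m, step_m):
            cand = gps_fix + np.array([dx, dy])
            err = sum(np.min(np.linalg.norm(map_landmarks - (obs + cand), axis=1))
                      for obs in observed)
            if err < best_err:
                best, best_err = cand, err
    return best

map_landmarks = np.array([[10.0, 0.0], [0.0, 15.0], [-8.0, -5.0]])  # hydrants, poles...
true_pos = np.array([2.0, 1.0])
observed = map_landmarks - true_pos           # what the car's sensors report
gps_fix = true_pos + np.array([1.7, -2.2])    # GPS is off by a couple of meters
print(refine_fix(gps_fix, observed, map_landmarks))  # ~[2.0, 1.0]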

When the Otto acquisition closes, likely this month, Otto co-founder Levandowski will assume leadership of Uber’s driverless car operation, while continuing to oversee his company's robotic trucking business. The plan is to open two additional Uber R&D centers: one in the Otto office, a cavernous garage in San Francisco’s Soma neighborhood, and a second in Palo Alto. “I feel like we’re brothers from another mother,” Kalanick says of Levandowski.

The two men first met at the TED conference in 2012, when Levandowski was showing off an early version of Google’s self-driving car. Kalanick offered to buy 20 of the prototypes on the spot—“It seemed like the obvious next step,” he says with a laugh—before Levandowski broke the bad news to him. The cars were running on a loop in a closed course with no pedestrians; they wouldn't be safe outside the TED parking lot. “It was like a roller coaster with no track,” Levandowski explains. “If you were to step in front of the vehicle, it would have just run you over.”

Kalanick began courting Levandowski this spring, broaching the possibility of an acquisition during a series of 10-mile night walks from the Soma neighborhood where Uber is also headquartered to the Golden Gate Bridge. The two men would leave their offices separately—to avoid being seen by employees, the press, or competitors. They’d grab takeout food, then rendezvous near the city’s Ferry Building. Levandowski says he saw a union as a way to bring the company’s trucks to market faster. 

For his part, Kalanick sees it as a way to further corner the market for autonomous driving engineers. “If Uber wants to catch up to Google and be the leader in autonomy, we have to have the best minds,” he says, and then clarifies: “We have to have all the great minds.”